| Column | Type | Length / range |
|---|---|---|
| doi | string | 10 |
| chunk-id | int64 | 0–936 |
| chunk | string | 401–2.02k |
| id | string | 12–14 |
| title | string | 8–162 |
| summary | string | 228–1.92k |
| source | string | 31 |
| authors | string | 7–6.97k |
| categories | string | 5–107 |
| comment | string (nullable ⌀) | 4–398 |
| journal_ref | string (nullable ⌀) | 8–194 |
| primary_category | string | 5–17 |
| published | string | 8 |
| updated | string | 8 |
| references | list | — |
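As a minimal sketch of how rows with this schema could be inspected, assuming the table has been exported to a Parquet file (the file name below is hypothetical; the source does not name one):

```python
import pandas as pd

# Hypothetical export of the table described above.
df = pd.read_parquet("arxiv_chunks.parquet")

# Each paper (doi) is split into several text chunks; title, summary, authors,
# and the other paper-level fields repeat on every row of that paper.
paper = df[df["doi"] == "1701.07274"].sort_values("chunk-id")
print(paper.iloc[0]["title"], "-", len(paper), "chunks")

# Reassemble the chunk texts in order.
full_text = "\n".join(paper["chunk"])
```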
doi: 1701.07274
chunk-id: 252
chunk:
Han, S., Mao, H., and Dally, W. J. (2016). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In the International Conference on Learning Representations (ICLR).
Harrison, B., Ehsan, U., and Riedl, M. O. (2017). Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations. ArXiv e-prints.
Harutyunyan, A., Vrancx, P., Bacon, P.-L., Precup, D., and Nowe, A. (2018). Learning with options that terminate off-policy. In the AAAI Conference on Artificial Intelligence (AAAI).
Hassabis, D., Kumaran, D., Summerfield, C., and Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95:245–258.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer.
Hausknecht, M. and Stone, P. (2015). Deep recurrent Q-learning for partially observable MDPs. In the AAAI Conference on Artificial Intelligence (AAAI).
id: 1701.07274#252
title: Deep Reinforcement Learning: An Overview
summary: We give an overview of recent exciting achievements of deep reinforcement
learning (RL). We discuss six core elements, six important mechanisms, and
twelve applications. We start with background of machine learning, deep
learning and reinforcement learning. Next we discuss core RL elements,
including value function, in particular, Deep Q-Network (DQN), policy, reward,
model, planning, and exploration. After that, we discuss important mechanisms
for RL, including attention and memory, unsupervised learning, transfer
learning, multi-agent RL, hierarchical RL, and learning to learn. Then we
discuss various applications of RL, including games, in particular, AlphaGo,
robotics, natural language processing, including dialogue systems, machine
translation, and text generation, computer vision, neural architecture design,
business management, finance, healthcare, Industry 4.0, smart grid, intelligent
transportation systems, and computer systems. We mention topics not reviewed
yet, and list a collection of RL resources. After presenting a brief summary,
we close with discussions.
Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update.
source: http://arxiv.org/pdf/1701.07274
authors: Yuxi Li
categories: cs.LG
comment: Please see Deep Reinforcement Learning, arXiv:1810.06339, for a significant update
journal_ref: null
primary_category: cs.LG
published: 20170125
updated: 20181126
references: []
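The summary mentions value-function methods, in particular DQN. As a rough illustration only, here is a sketch of the standard one-step Q-learning target such methods regress toward; this is not code from the surveyed paper, and `target_net` is an assumed callable mapping a state to per-action value estimates:

```python
import numpy as np

def q_learning_target(reward, next_state, done, target_net, gamma=0.99):
    # r + gamma * max_a' Q(s', a'), with bootstrapping disabled at terminal states.
    return reward + gamma * (1.0 - float(done)) * float(np.max(target_net(next_state)))
```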
Hausknecht, M. and Stone, P. (2016). Deep reinforcement learning in parameterized action space. In the International Conference on Learning Representations (ICLR).
Haykin, S. (2005). Cognitive radio: brain-empowered wireless communications. IEEE Journal on Selected Areas in Communications, 23(2):201–220.
Haykin, S. (2008). Neural Networks and Learning Machines (third edition). Prentice Hall.
He, D., Xia, Y., Qin, T., Wang, L., Yu, N., Liu, T.-Y., and Ma, W.-Y. (2016a). Dual learning for machine translation. In the Annual Conference on Neural Information Processing Systems (NIPS).
He, F. S., Liu, Y., Schwing, A. G., and Peng, J. (2017). Learning to play in a day: Faster deep reinforcement learning by optimality tightening. In the International Conference on Learning Representations (ICLR).
He, J., Chen, J., He, X., Gao, J., Li, L., Deng, L., and Ostendorf, M. (2016b). Deep reinforcement learning with a natural language action space. In the Association for Computational Linguistics annual meeting (ACL).
He, J., Ostendorf, M., He, X., Chen, J., Gao, J., Li, L., and Deng, L. (2016c). Deep reinforcement learning with a combinatorial action space for predicting popular reddit threads. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask R-CNN. In the IEEE International Conference on Computer Vision (ICCV).
He, K., Zhang, X., Ren, S., and Sun, J. (2016d). Deep residual learning for image recognition. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
He, L., Lee, K., Lewis, M., and Zettlemoyer, L. (2017). Deep semantic role labeling: What works and what's next. In the Association for Computational Linguistics annual meeting (ACL).
He, X. and Deng, L. (2013). Speech-centric information processing: An optimization-oriented approach. Proceedings of the IEEE, 101(5):1116–1135.
Heaton, J. B., Polson, N. G., and Witte, J. H. (2016). Deep learning for finance: deep portfolios. Applied Stochastic Models in Business and Industry.
Heess, N., TB, D., Sriram, S., Lemmon, J., Merel, J., Wayne, G., Tassa, Y., Erez, T., Wang, Z., Eslami, A., Riedmiller, M., and Silver, D. (2017). Emergence of Locomotion Behaviours in Rich Environments. ArXiv e-prints.
Hein, D., Depeweg, S., Tokic, M., Udluft, S., Hentschel, A., Runkler, T. A., and Sterzing, V. (2017). A benchmark environment motivated by industrial control problems. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (IEEE ADPRL'17).
Heinrich, J. and Silver, D. (2016). Deep reinforcement learning from self-play in imperfect-information games. In NIPS 2016 Deep Reinforcement Learning Workshop.
Held, D., Geng, X., Florensa, C., and Abbeel, P. (2017). Automatic Goal Generation for Reinforcement Learning Agents. ArXiv e-prints.
Henaff, M., Whitney, W. F., and LeCun, Y. (2017). Model-Based Planning in Discrete Action Spaces. ArXiv e-prints.
Hessel, M., Modayil, J., van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D. (2018). Rainbow: Combining Improvements in Deep Reinforcement Learning. In the AAAI Conference on Artificial Intelligence (AAAI).
Hester, T. and Stone, P. (2017). Intrinsically motivated model learning for developing curious robots. Artificial Intelligence, 247:170–86.
Hester, T., Vecerik, M., Pietquin, O., Lanctot, M., Schaul, T., Piot, B., Horgan, D., Quan, J., Sendonaris, A., Dulac-Arnold, G., Osband, I., Agapiou, J., Leibo, J. Z., and Gruslys, A. (2018). Deep Q-learning from demonstrations. In the AAAI Conference on Artificial Intelligence (AAAI).
Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. (2017). β-VAE: Learning basic visual concepts with a constrained variational framework. In the International Conference on Learning Representations (ICLR).
Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., and Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 82.
Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507.
Hirschberg, J. and Manning, C. D. (2015). Advances in natural language processing. Science, 349(6245):261–266.
Ho, J. and Ermon, S. (2016). Generative adversarial imitation learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Ho, J., Gupta, J. K., and Ermon, S. (2016). Model-free imitation learning with policy optimization. In the International Conference on Machine Learning (ICML).
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9:1735–1780.
Hoshen, Y. (2017). Vain: Attentional multi-agent predictive modeling. In the Annual Conference on Neural Information Processing Systems (NIPS).
Houthooft, R., Chen, X., Duan, Y., Schulman, J., Turck, F. D., and Abbeel, P. (2016). Vime: Variational information maximizing exploration. In the Annual Conference on Neural Information Processing Systems (NIPS).
Hu, Z., Yang, Z., Salakhutdinov, R., and Xing, E. P. (2017). On Unifying Deep Generative Models. ArXiv e-prints.
Huang, G., Liu, Z., Weinberger, K. Q., and van der Maaten, L. (2017). Densely connected convolutional networks. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Huang, S., Papernot, N., Goodfellow, I., Duan, Y., and Abbeel, P. (2017). Adversarial Attacks on Neural Network Policies. ArXiv e-prints.
Huk Park, D., Hendricks, L. A., Akata, Z., Schiele, B., Darrell, T., and Rohrbach, M. (2016). Attentive Explanations: Justifying Decisions and Pointing to the Evidence. ArXiv e-prints.
Hull, J. C. (2014). Options, Futures and Other Derivatives (9th edition). Prentice Hall.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In the International Conference on Learning Representations (ICLR).
Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In the International Conference on Machine Learning (ICML).
Islam, R., Henderson, P., Gomrokchi, M., and Precup, D. (2017). Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. In ICML 2017 Reproducibility in Machine Learning Workshop.
Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W. M., Donahue, J., Razavi, A., Vinyals, O., Green, T., Dunning, I., Simonyan, K., Fernando, C., and Kavukcuoglu, K. (2017). Population Based Training of Neural Networks. ArXiv e-prints.
Jaderberg, M., Mnih, V., Czarnecki, W., Schaul, T., Leibo, J. Z., Silver, D., and Kavukcuoglu, K. (2017). Reinforcement learning with unsupervised auxiliary tasks. In the International Confer- ence on Learning Representations (ICLR).
Jaderberg, M., Simonyan, K., Zisserman, A., and Kavukcuoglu, K. (2015). Spatial transformer networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
James, G., Witten, D., Hastie, T., and Tibshirani, R. (2013). An Introduction to Statistical Learning with Applications in R. Springer.
Jaques, N., Gu, S., Turner, R. E., and Eck, D. (2017). Tuning recurrent neural networks with reinforcement learning. Submitted to Int'l Conference on Learning Representations.
Jiang, N., Krishnamurthy, A., Agarwal, A., Langford, J., and Schapire, R. E. (2016). Contextual Decision Processes with Low Bellman Rank are PAC-Learnable. ArXiv e-prints.
Jie, Z., Liang, X., Feng, J., Jin, X., Lu, W. F., and Yan, S. (2016). Tree-structured reinforcement learning for sequential object localization. In the Annual Conference on Neural Information Processing Systems (NIPS).
Johansson, F. D., Shalit, U., and Sontag, D. (2016). Learning representations for counterfactual inference. In the International Conference on Machine Learning (ICML).
Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G., Hughes, M., and Dean, J. (2016). Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. ArXiv e-prints.
Jordan, M. I. and Mitchell, T. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245):255–260.
Joulin, A., Grave, E., Bojanowski, P., and Mikolov, T. (2017). Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Jurafsky, D. and Martin, J. H. (2017). Speech and Language Processing (3rd ed. draft). Prentice Hall.
Justesen, N., Bontrager, P., Togelius, J., and Risi, S. (2017). Deep Learning for Video Game Playing. ArXiv e-prints.
Justesen, N. and Risi, S. (2017). Learning macromanagement in starcraft from replays using deep learning. In IEEE Conference on Computational Intelligence and Games (CIG).
Kadlec, R., Schmid, M., Bajgar, O., and Kleindienst, J. (2016). Text Understanding with the Attention Sum Reader Network. ArXiv e-prints.
Kaelbling, L. P., Littman, M. L., and Moore, A. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285.
Kaiser, L. and Bengio, S. (2016). Can active memory replace attention? In the Annual Conference on Neural Information Processing Systems (NIPS).
Kaiser, L., Gomez, A. N., Shazeer, N., Vaswani, A., Parmar, N., Jones, L., and Uszkoreit, J. (2017a). One Model To Learn Them All. ArXiv e-prints.
Kaiser, Ł., Nachum, O., Roy, A., and Bengio, S. (2017b). Learning to Remember Rare Events. In the International Conference on Learning Representations (ICLR).
Kakade, S. (2002). A natural policy gradient. In the Annual Conference on Neural Information Processing Systems (NIPS).
Kalchbrenner, N. and Blunsom, P. (2013). Recurrent continuous translation models. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Kandasamy, K., Bachrach, Y., Tomioka, R., Tarlow, D., and Carter, D. (2017). Batch policy gradient methods for improving neural conversation models. In the International Conference on Learning Representations (ICLR).
Kansky, K., Silver, T., Mély, D. A., Eldawy, M., Lázaro-Gredilla, M., Lou, X., Dorfman, N., Sidor, S., Phoenix, S., and George, D. (2017). Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. In the International Conference on Machine Learning (ICML).
Karpathy, A., Johnson, J., and Fei-Fei, L. (2016). Visualizing and understanding recurrent networks. In ICLR 2016 Workshop.
Kavosh and Littman, M. L. (2017). A new softmax operator for reinforcement learning. In the International Conference on Machine Learning (ICML).
Kawaguchi, K., Pack Kaelbling, L., and Bengio, Y. (2017). Generalization in Deep Learning. ArXiv e-prints.
Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and Jaśkowski, W. (2016). ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games.
Khandani, A. E., Kim, A. J., and Lo, A. W. (2010). Consumer credit-risk models via machine-learning algorithms. Journal of Banking & Finance, 34:2767–2787.
Killian, T., Daulton, S., Konidaris, G., and Doshi-Velez, F. (2017). Robust and efficient transfer learning with hidden-parameter markov decision processes. In the Annual Conference on Neural Information Processing Systems (NIPS).
Kim, B., massoud Farahmand, A., Pineau, J., and Precup, D. (2014). Learning from limited demonstrations. In the Annual Conference on Neural Information Processing Systems (NIPS).
Kingma, D. P., Rezende, D. J., Mohamed, S., and Welling, M. (2014). Semi-supervised learning with deep generative models. In the Annual Conference on Neural Information Processing Systems (NIPS).
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. PNAS, 114(13):3521–3526.
Klambauer, G., Unterthiner, T., Mayr, A., and Hochreiter, S. (2017). Self-Normalizing Neural Networks. ArXiv e-prints.
Klein, G., Kim, Y., Deng, Y., Senellart, J., and Rush, A. M. (2017). OpenNMT: Open-Source Toolkit for Neural Machine Translation. ArXiv e-prints.
Kober, J., Bagnell, J. A., and Peters, J. (2013). Reinforcement learning in robotics: A survey. International Journal of Robotics Research, 32(11):1238–1278.
Koch, G., Zemel, R., and Salakhutdinov, R. (2015). Siamese neural networks for one-shot image recognition. In the International Conference on Machine Learning (ICML).
Koh, P. W. and Liang, P. (2017). Understanding black-box predictions via influence functions. In the International Conference on Machine Learning (ICML).
Kompella, V. R., Stollenga, M., Luciw, M., and Schmidhuber, J. (2017). Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artificial Intelligence, 247:313–335.
Kong, X., Xin, B., Wang, Y., and Hua, G. (2017). Collaborative deep reinforcement learning for joint object search. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Kosorok, M. R. and Moodie, E. E. M. (2015). Adaptive Treatment Strategies in Practice: Planning Trials and Analyzing Data for Personalized Medicine. ASA-SIAM Series on Statistics and Applied Probability.
Kottur, S., Moura, J. M., Lee, S., and Batra, D. (2017). Natural language does not emerge "naturally" in multi-agent dialog. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Krakovsky, M. (2016). Reinforcement renaissance. Communications of the ACM, 59(8):12–14.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Krull, A., Brachmann, E., Nowozin, S., Michel, F., Shotton, J., and Rother, C. (2017). Poseagent: Budget-constrained 6d object pose estimation via reinforcement learning. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Kuhn, M. and Johnson, K. (2013). Applied Predictive Modeling. Springer.
Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., and Tenenbaum, J. B. (2016). Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In the Annual Conference on Neural Information Processing Systems (NIPS).
Kulkarni, T. D., Whitney, W., Kohli, P., and Tenenbaum, J. B. (2015). Deep convolutional inverse graphics network. In the Annual Conference on Neural Information Processing Systems (NIPS).
Lagoudakis, M. G. and Parr, R. (2003). Least-squares policy iteration. The Journal of Machine Learning Research, 4:1107–1149.
Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338.
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. (2016). Building machines that learn and think like people. Behavioral and Brain Sciences, 24:1–101.
Lamb, A., Goyal, A., Zhang, Y., Zhang, S., Courville, A., and Bengio, Y. (2016). Professor forcing: A new algorithm for training recurrent networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Lample, G. and Chaplot, D. S. (2017). Playing FPS games with deep reinforcement learning. In the AAAI Conference on Artificial Intelligence (AAAI).
Lanctot, M., Zambaldi, V., Gruslys, A., Lazaridou, A., Tuyls, K., Perolat, J., Silver, D., and Graepel, T. (2017). A unified game-theoretic approach to multiagent reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Le, Q. V., Ranzato, M., Monga, R., Devin, M., Chen, K., Corrado, G. S., Dean, J., and Ng, A. Y. (2012). Building high-level features using large scale unsupervised learning. In the International Conference on Machine Learning (ICML).
LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521:436–444.
Lee, A. X., Levine, S., and Abbeel, P. (2017). Learning visual servoing with deep features and trust region fitted Q-iteration. In the International Conference on Learning Representations (ICLR).
Lehman, J., Chen, J., Clune, J., and Stanley, K. O. (2017). Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients. ArXiv e-prints.
Lei, T., Barzilay, R., and Jaakkola, T. (2016). Rationalizing neural predictions. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Leibo, J. Z., de Masson d'Autume, C., Zoran, D., Amos, D., Beattie, C., Anderson, K., García Castañeda, A., Sanchez, M., Green, S., Gruslys, A., Legg, S., Hassabis, D., and Botvinick, M. M. (2018). Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents. ArXiv e-prints.
Leibo, J. Z., Zambaldi, V., Lanctot, M., Marecki, J., and Graepel, T. (2017). Multi-agent reinforcement learning in sequential social dilemmas. In the International Conference on Autonomous Agents & Multiagent Systems (AAMAS).
Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2016a). End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17:1–40.
Levine, S., Pastor, P., Krizhevsky, A., and Quillen, D. (2016b). Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. ArXiv e-prints.
Lewis, M., Yarats, D., Dauphin, Y. N., Parikh, D., and Batra, D. (2017). Deal or no deal? end-to-end learning for negotiation dialogues. In FAIR.
Leyton-Brown, K. and Shoham, Y. (2008). Essentials of Game Theory: A Concise, Multidisciplinary Introduction. Morgan & Claypool Publishers.
Li, J., Miller, A. H., Chopra, S., Ranzato, M., and Weston, J. (2017a). Dialogue learning with human-in-the-loop. In the International Conference on Learning Representations (ICLR).
Li, J., Miller, A. H., Chopra, S., Ranzato, M., and Weston, J. (2017b). Learning through dialogue interactions by asking questions. In the International Conference on Learning Representations (ICLR).
Li, J., Monroe, W., and Jurafsky, D. (2016a). A Simple, Fast Diverse Decoding Algorithm for Neural Generation. ArXiv e-prints.
1701.07274 | 273 | Li, J., Monroe, W., and Jurafsky, D. (2016a). A Simple, Fast Diverse Decoding Algorithm for Neural Generation. ArXiv e-prints.
Li, J., Monroe, W., and Jurafsky, D. (2016b). Understanding Neural Networks through Representation Erasure. ArXiv e-prints.
Li, J., Monroe, W., and Jurafsky, D. (2017a). Learning to Decode for Future Success. ArXiv e-prints.
Li, J., Monroe, W., Ritter, A., Galley, M., Gao, J., and Jurafsky, D. (2016c). Deep reinforcement learning for dialogue generation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Li, K. and Malik, J. (2017). Learning to optimize. In the International Conference on Learning Representations (ICLR).
Li, K. and Malik, J. (2017). Learning to Optimize Neural Nets. ArXiv e-prints.
Li, L., Chu, W., Langford, J., and Schapire, R. E. (2010). A contextual-bandit approach to personalized news article recommendation. In the International World Wide Web Conference (WWW).
Li, X., Chen, Y.-N., Li, L., and Gao, J. (2017b). End-to-End Task-Completion Neural Dialogue Systems. ArXiv e-prints.
Li, X., Li, L., Gao, J., He, X., Chen, J., Deng, L., and He, J. (2015). Recurrent Reinforcement Learning: A Hybrid Approach. ArXiv e-prints.
Li, X., Lipton, Z. C., Dhingra, B., Li, L., Gao, J., and Chen, Y.-N. (2016d). A User Simulator for Task-Completion Dialogues. ArXiv e-prints.
Li, Y., Song, J., and Ermon, S. (2017). Infogail: Interpretable imitation learning from visual demonstrations. In the Annual Conference on Neural Information Processing Systems (NIPS).
Li, Y., Szepesvári, C., and Schuurmans, D. (2009). Learning exercise policies for American options. In International Conference on Artificial Intelligence and Statistics (AISTATS09).
Liang, C., Berant, J., Le, Q., Forbus, K. D., and Lao, N. (2017a). Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In the Association for Computational Linguistics annual meeting (ACL).
Liang, C., Berant, J., Le, Q., Forbus, K. D., and Lao, N. (2017b). Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In the Association for Computational Linguistics annual meeting (ACL).
Liang, E., Liaw, R., Nishihara, R., Moritz, P., Fox, R., Gonzalez, J., Goldberg, K., and Stoica, I. (2017c). Ray RLlib: A composable and scalable reinforcement learning library. In NIPS 2017 Deep Reinforcement Learning Symposium.
Liang, X., Lee, L., and Xing, E. P. (2017d). Deep variation-structured reinforcement learning for visual relationship and attribute detection. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Liang, Y., Machado, M. C., Talvitie, E., and Bowling, M. (2016). State of the art control of Atari games using shallow reinforcement learning. In the International Conference on Autonomous Agents & Multiagent Systems (AAMAS).
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2016). Continuous control with deep reinforcement learning. In the International Conference on Learning Representations (ICLR).
Lin, L.-J. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3):293â321.
Lin, Z., Gehring, J., Khalidov, V., and Synnaeve, G. (2017). STARDATA: A StarCraft AI research dataset. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE).
Ling, Y., Hasan, S. A., Datla, V., Qadir, A., Lee, K., Liu, J., and Farri, O. (2017). Diagnostic inferencing via improving clinical concept extraction with deep reinforcement learning: A preliminary study. In Machine Learning for Healthcare.
1701.07274 | 277 | Lipton, Z. C. (2016). The Mythos of Model Interpretability. ArXiv e-prints.
Lipton, Z. C., Gao, J., Li, L., Li, X., Ahmed, F., and Deng, L. (2016). Efficient Exploration for Dialogue Policy Learning with BBQ Networks & Replay Buffer Spiking. ArXiv e-prints.
Littman, M. L. (2015). Reinforcement learning improves behaviour from evaluative feedback. Nature, 521:445–451.
Liu, B. (2012). Sentiment Analysis and Opinion Mining. Morgan & Claypool Publishers.
Liu, C. and Tomizuka, M. (2016). Algorithmic safety measures for intelligent industrial co-robots. In IEEE International Conference on Robotics and Automation (ICRA).
Liu, C. and Tomizuka, M. (2017). Designing the robot behavior for safe human robot interactions, in Trends in Control and Decision-Making for Human-Robot Collaboration Systems (Y. Wang and F. Zhang (Eds.)). Springer. | 1701.07274#277 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 278 | Liu, C., Zoph, B., Shlens, J., Hua, W., Li, L.-J., Fei-Fei, L., Yuille, A., Huang, J., and Murphy, K. (2017). Progressive Neural Architecture Search. ArXiv e-prints.
Liu, F., Li, S., Zhang, L., Zhou, C., Ye, R., Wang, Y., and Lu, J. (2017). 3DCNN-DQN-RNN: A deep reinforcement learning framework for semantic parsing of large-scale 3d point clouds. In the IEEE International Conference on Computer Vision (ICCV).
Liu, H., Simonyan, K., Vinyals, O., Fernando, C., and Kavukcuoglu, K. (2017). Hierarchical Rep- resentations for Efï¬cient Architecture Search. ArXiv e-prints.
Liu, N., Li, Z., Xu, Z., Xu, J., Lin, S., Qiu, Q., Tang, J., and Wang, Y. (2017). A hierarchical framework of cloud resource allocation and power management using deep reinforcement learning. In 37th IEEE International Conference on Distributed Computing (ICDCS 2017).
Liu, S., Zhu, Z., Ye, N., Guadarrama, S., and Murphy, K. (2016). Improved Image Captioning via Policy Gradient optimization of SPIDEr. ArXiv e-prints.
Liu, Y., Chen, J., and Deng, L. (2017). Unsupervised Sequence Classiï¬cation using Sequential Output Statistics. ArXiv e-prints.
Liu, Y.-E., Mandel, T., Brunskill, E., and Popovi´c, Z. (2014). Trading off scientiï¬c knowledge and user learning with multi-armed bandits. In Educational Data Mining (EDM).
Lo, A. W. (2004). The Adaptive Markets Hypothesis: Market efï¬ciency from an evolutionary perspective. Journal of Portfolio Management, 30:15â29.
Long, M., Cao, Y., Wang, J., and Jordan, M. I. (2015). Learning transferable features with deep adaptation networks. In the International Conference on Machine Learning (ICML).
Long, M., Cao, Z., Wang, J., and Yu, P. S. (2017). Learning multiple tasks with multilinear relationship networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
1701.07274 | 280 | Long, M., Zhu, H., Wang, J., and Jordan, M. I. (2016). Unsupervised domain adaptation with residual transfer networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Longstaff, F. A. and Schwartz, E. S. (2001). Valuing American options by simulation: a simple least-squares approach. The Review of Financial Studies, 14(1):113â147.
Loos, S., Irving, G., Szegedy, C., and Kaliszyk, C. (2017). Deep Network Guided Proof Search. ArXiv e-prints.
Lopez-Paz, D. and Ranzato, M. (2017). Gradient Episodic Memory for Continuum Learning. ArXiv e-prints.
Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mordatch, I. (2017). Multi-agent actor-critic for mixed cooperative-competitive environments. In the Annual Conference on Neural Informa- tion Processing Systems (NIPS).
Lu, J., Xiong, C., Parikh, D., and Socher, R. (2016). Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning. ArXiv e-prints.
Luenberger, D. G. (1997). Investment Science. Oxford University Press. | 1701.07274#280 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 281 | Luenberger, D. G. (1997). Investment Science. Oxford University Press.
Luo, Y., Chiu, C.-C., Jaitly, N., and Sutskever, I. (2016). Learning Online Alignments with Contin- uous Rewards Policy Gradient. ArXiv e-prints.
Machado, M. C., Bellemare, M. G., and Bowling, M. (2017). A Laplacian framework for option dis- covery in reinforcement learning. In the International Conference on Machine Learning (ICML).
Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M., and Bowling, M. (2017). Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. ArXiv e-prints.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2017). Towards Deep Learning Models Resistant to Adversarial Attacks. ArXiv e-prints. | 1701.07274#281 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 282 | Mahler, J., Liang, J., Niyaz, S., Laskey, M., Doan, R., Liu, X., Aparicio Ojea, J., and Goldberg, K. (2017). Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. In Robotics: Science and Systems (RSS).
Mahmood, A. R., van Hasselt, H., and Sutton, R. S. (2014). Weighted importance sampling for off-policy learning with linear function approximation. In the Annual Conference on Neural Information Processing Systems (NIPS).
Mandel, T., Liu, Y. E., Levine, S., Brunskill, E., and Popović, Z. (2014). Offline policy evaluation across representations with applications to educational games. In the International Conference on Autonomous Agents & Multiagent Systems (AAMAS).
Manning, C. D., Raghavan, P., and Schütze, H. (2008). Introduction to Information Retrieval. Cambridge University Press.
Manning, C. D., Raghavan, P., and Sch¨utze, H. (2008). Introduction to Information Retrieval. Cam- bridge University Press.
Mannion, P., Duggan, J., and Howley, E. (2016). An experimental review of reinforcement learning algorithms for adaptive traffic signal control. Autonomic Road Transport Support Systems, edited by McCluskey, T., Kotsialos, A., Müller, J., Klügl, F., Rana, O., and Schumann, R., Springer International Publishing, Cham, pages 47–66.
Mao, H., Alizadeh, M., Menache, I., and Kandula, S. (2016). Resource management with deep reinforcement learning. In ACM Workshop on Hot Topics in Networks (HotNets).
Mao, X., Li, Q., Xie, H., Lau, R. Y. K., and Wang, Z. (2016). Least Squares Generative Adversarial Networks. ArXiv e-prints.
Mathe, S., Pirinen, A., and Sminchisescu, C. (2016). Reinforcement learning for visual object detection. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). | 1701.07274#283 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 284 | Matiisen, T., Oliver, A., Cohen, T., and Schulman, J. (2017). Teacher-Student Curriculum Learning. ArXiv e-prints.
Maurer, A., Pontil, M., and Romera-Paredes, B. (2016). The beneï¬t of multitask representation learning. The Journal of Machine Learning Research, 17(81):1â32.
McAllister, R. and Rasmussen, C. E. (2017). Data-efï¬cient reinforcement learning in continuous- state POMDPs. In the Annual Conference on Neural Information Processing Systems (NIPS).
McCann, B., Bradbury, J., Xiong, C., and Socher, R. (2017). Learned in Translation: Contextualized Word Vectors. ArXiv e-prints.
Melis, G., Dyer, C., and Blunsom, P. (2017). On the State of the Art of Evaluation in Neural Language Models. ArXiv e-prints.
Merel, J., Tassa, Y., TB, D., Srinivasan, S., Lemmon, J., Wang, Z., Wayne, G., and Heess, N. (2017). Learning human behaviors from motion capture by adversarial imitation. ArXiv e-prints. | 1701.07274#284 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
Mesnil, G., Dauphin, Y., Yao, K., Bengio, Y., Deng, L., He, X., Heck, L., Tur, G., Hakkani-Tür, D., Yu, D., and Zweig, G. (2015). Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530–539.
Mestres, A., Rodriguez-Natal, A., Carner, J., Barlet-Ros, P., Alarc´on, E., Sol´e, M., Munt´es, V., Meyer, D., Barkai, S., Hibbett, M. J., Estrada, G., Ma`ruf, K., Coras, F., Ermagan, V., Latapie, H., Cassar, C., Evans, J., Maino, F., Walrand, J., and Cabellos, A. (2016). Knowledge-Deï¬ned Networking. ArXiv e-prints.
Mhamdi, E. M. E., Guerraoui, R., Hendrikx, H., and Maurer, A. (2017). Dynamic safe interruptibility for decentralized multi-agent reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. In the International Conference on Learning Representations (ICLR).
Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C., and Joulin, A. (2017). Advances in Pre-Training Distributed Word Representations. ArXiv e-prints.
Miller, T. (2017). Explanation in Artiï¬cial Intelligence: Insights from the Social Sciences. ArXiv e-prints.
Miotto, R., Wang, F., Wang, S., Jiang, X., and Dudley, J. T. (2017). Deep learning for healthcare: review, opportunities and challenges. Brieï¬ngs in Bioinformatics, pages 1â11.
Mirhoseini, A., Pham, H., Le, Q. V., Steiner, B., Larsen, R., Zhou, Y., Kumar, N., Norouzi, M., Bengio, S., and Dean, J. (2017). Device placement optimization with reinforcement learning. In the International Conference on Machine Learning (ICML).
Mirowski, P., Pascanu, R., Viola, F., Soyer, H., Ballard, A., Banino, A., Denil, M., Goroshin, R., Sifre, L., Kavukcuoglu, K., Kumaran, D., and Hadsell, R. (2017). Learning to navigate in complex environments. In the International Conference on Learning Representations (ICLR).
Mitra, B. and Craswell, N. (2017). Neural Models for Information Retrieval. ArXiv e-prints.
Mnih, V., Badia, A. P., Mirza, M., Graves, A., Harley, T., Lillicrap, T. P., Silver, D., and Kavukcuoglu, K. (2016). Asynchronous methods for deep reinforcement learning. In the International Conference on Machine Learning (ICML).
Mnih, V., Heess, N., Graves, A., and Kavukcuoglu, K. (2014). Recurrent models of visual attention. In the Annual Conference on Neural Information Processing Systems (NIPS). | 1701.07274#287 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 288 | Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529â533.
Mo, K., Li, S., Zhang, Y., Li, J., and Yang, Q. (2016). Personalizing a Dialogue System with Transfer Learning. ArXiv e-prints.
Monroe, D. (2017). Deep learning takes on translation. Communications of the ACM, 60(6):12â14.
Moody, J. and Saffell, M. (2001). Learning to trade via direct reinforcement. IEEE Transactions on Neural Networks, 12(4):875â889. | 1701.07274#288 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 289 | Moody, J. and Saffell, M. (2001). Learning to trade via direct reinforcement. IEEE Transactions on Neural Networks, 12(4):875â889.
Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., and Bowling, M. (2017). DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science.
Müller, M. (2002). Computer Go. Artificial Intelligence, 134(1-2):145–179.
Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. G. (2016). Safe and efï¬cient off- policy reinforcement learning. In the Annual Conference on Neural Information Processing Sys- tems (NIPS).
Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. The MIT Press.
Nachum, O., Norouzi, M., and Schuurmans, D. (2017). Improving policy gradient by exploring under-appreciated rewards. In the International Conference on Learning Representations (ICLR).
1701.07274 | 290 | Improving policy gradient by exploring under-appreciated rewards. In the International Conference on Learning Representations (ICLR).
Nachum, O., Norouzi, M., Xu, K., and Schuurmans, D. (2017). Bridging the gap between value and policy based reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Nair, A., Srinivasan, P., Blackwell, S., Alcicek, C., Fearon, R., De Maria, A., Panneershelvam, V., Suleyman, M., Beattie, C., Petersen, S., Legg, S., Mnih, V., Kavukcuoglu, K., and Silver, D. (2015). Massively parallel methods for deep reinforcement learning. In ICML 2015 Deep Learning Workshop.
Narasimhan, K., Kulkarni, T., and Barzilay, R. (2015). Language understanding for text-based games using deep reinforcement learning. In Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP).
Narasimhan, K., Yala, A., and Barzilay, R. (2016). Improving information extraction by acquiring external evidence with reinforcement learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP). | 1701.07274#290 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
Nedić, A. and Bertsekas, D. P. (2003). Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems: Theory and Applications, 13:79–110.
Neuneier, R. (1997). Enhancing q-learning for optimal asset allocation. In the Annual Conference on Neural Information Processing Systems (NIPS).
Neyshabur, B., Tomioka, R., Salakhutdinov, R., and Srebro, N. (2017). Geometry of Optimization and Implicit Regularization in Deep Learning. ArXiv e-prints.
Ng, A. and Russell, S. (2000). Algorithms for inverse reinforcement learning. In the International Conference on Machine Learning (ICML).
Nogueira, R. and Cho, K. (2016). End-to-End Goal-Driven Web Navigation. ArXiv e-prints.
Nogueira, R. and Cho, K. (2017). Task-Oriented Query Reformulation with Reinforcement Learn- ing. ArXiv e-prints.
OâDonoghue, B., Munos, R., Kavukcuoglu, K., and Mnih, V. (2017). PGQ: Combining policy gradient and Q-learning. In the International Conference on Learning Representations (ICLR). | 1701.07274#291 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 292 | OâDonovan, P., Leahy, K., Bruton, K., and OâSullivan, D. T. J. (2015). Big data in manufacturing: a systematic mapping study. Journal of Big Data, 2(20).
Oh, J., Chockalingam, V., Singh, S., and Lee, H. (2016). Control of memory, active perception, and action in minecraft. In the International Conference on Machine Learning (ICML).
Oh, J., Guo, X., Lee, H., Lewis, R., and Singh, S. (2015). Action-conditional video prediction using deep networks in atari games. In the Annual Conference on Neural Information Processing Systems (NIPS).
Oh, J., Singh, S., and Lee, H. (2017). Value prediction network. In the Annual Conference on Neural Information Processing Systems (NIPS).
Omidshafiei, S., Pazis, J., Amato, C., How, J. P., and Vian, J. (2017). Deep decentralized multi-task multi-agent reinforcement learning under partial observability. In the International Conference on Machine Learning (ICML).
Ontañón, S., Synnaeve, G., Uriarte, A., Richoux, F., Churchill, D., and Preuss, M. (2013). A survey of real-time strategy game AI research and competition in StarCraft. IEEE Transactions on Computational Intelligence and AI in Games, 5(4):293–311.
Oquab, M., Bottou, L., Laptev, I., and Sivic, J. (2015). Is object localization for free? â weakly- supervised learning with convolutional neural networks. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Osband, I., Blundell, C., Pritzel, A., and Roy, B. V. (2016). Deep exploration via bootstrapped DQN. In the Annual Conference on Neural Information Processing Systems (NIPS).
Ostrovski, G., Bellemare, M. G., van den Oord, A., and Munos, R. (2017). Count-Based Exploration with Neural Density Models. ArXiv e-prints.
Pan, S. J. and Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345 â 1359. | 1701.07274#293 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 294 | Pan, S. J. and Yang, Q. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345 â 1359.
Papernot, N., Abadi, M., Erlingsson, ´U., Goodfellow, I., and Talwar, K. (2017). Semi-supervised knowledge transfer for deep learning from private training data. In the International Conference on Learning Representations (ICLR).
Papernot, N., Goodfellow, I., Sheatsley, R., Feinman, R., and McDaniel, P. (2016). cleverhans v1.0.0: an adversarial machine learning library. ArXiv e-prints.
Parisotto, E., Ba, J. L., and Salakhutdinov, R. (2016). Actor-mimic: Deep multitask and transfer reinforcement learning. In the International Conference on Learning Representations (ICLR).
Parisotto, E., rahman Mohamed, A., Singh, R., Li, L., Zhou, D., and Kohli, P. (2017). Neuro-symbolic program synthesis. In the International Conference on Learning Representations (ICLR).
Pasunuru, R. and Bansal, M. (2017). Reinforced video captioning with entailment rewards. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Paulus, R., Xiong, C., and Socher, R. (2017). A Deep Reinforced Model for Abstractive Summa- rization. ArXiv e-prints.
Pearl, J. (2018). Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution. ArXiv e-prints.
Pei, K., Cao, Y., Yang, J., and Jana, S. (2017). DeepXplore: Automated Whitebox Testing of Deep Learning Systems. ArXiv e-prints.
Peng, B., Li, X., Li, L., Gao, J., Celikyilmaz, A., Lee, S., and Wong, K.-F. (2017a). Composite task-completion dialogue system via hierarchical deep reinforcement learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Peng, P., Yuan, Q., Wen, Y., Yang, Y., Tang, Z., Long, H., and Wang, J. (2017b). Multiagent Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat Games. ArXiv e-prints. | 1701.07274#295 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
Pérez-D'Arpino, C. and Shah, J. A. (2017). C-learn: Learning geometric constraints from demonstrations for multi-step manipulation in shared autonomy. In IEEE International Conference on Robotics and Automation (ICRA).
Perolat, J., Leibo, J. Z., Zambaldi, V., Beattie, C., Tuyls, K., and Graepel, T. (2017). A multi-agent reinforcement learning model of common-pool resource appropriation. In the Annual Conference on Neural Information Processing Systems (NIPS).
Peters, J. and Neumann, G. (2015). Policy search: Methods and applications. ICML 2015 Tutorial.
Petroski Such, F., Madhavan, V., Conti, E., Lehman, J., Stanley, K. O., and Clune, J. (2017). Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning. ArXiv e-prints.
Pfau, D. and Vinyals, O. (2016). Connecting Generative Adversarial Networks and Actor-Critic Methods. ArXiv e-prints.
Phua, C., Lee, V., Smith, K., and Gayler, R. (2010). A Comprehensive Survey of Data Mining-based Fraud Detection Research. ArXiv e-prints. | 1701.07274#296 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 297 | Popov, I., Heess, N., Lillicrap, T., Hafner, R., Barth-Maron, G., Vecerik, M., Lampe, T., Tassa, Y., Erez, T., and Riedmiller, M. (2017). Data-efï¬cient Deep Reinforcement Learning for Dexterous Manipulation. ArXiv e-prints.
Powell, W. B. (2011). Approximate Dynamic Programming: Solving the curses of dimensionality (2nd Edition). John Wiley and Sons.
Prashanth, L., Jie, C., Fu, M., Marcus, S., and Szepesvári, C. (2016). Cumulative prospect theory meets reinforcement learning: Prediction and control. In the International Conference on Machine Learning (ICML).
Preuveneers, D. and Ilie-Zudor, E. (2017). The intelligent industry of the future: A survey on emerg- ing trends, research challenges and opportunities in industry 4.0. Journal of Ambient Intelligence and Smart Environments, 9(3):287â298. | 1701.07274#297 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 298 | Pritzel, A., Uria, B., Srinivasan, S., Puigdom`enech, A., Vinyals, O., Hassabis, D., Wierstra, D., and Blundell, C. (2017). Neural Episodic Control. ArXiv e-prints.
Provost, F. and Fawcett, T. (2013). Data Science for Business. OâReilly Media.
Puterman, M. L. (2005). Markov decision processes : discrete stochastic dynamic programming. Wiley-Interscience.
Radford, A., Jozefowicz, R., and Sutskever, I. (2017). Learning to Generate Reviews and Discover- ing Sentiment. ArXiv e-prints.
Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., and Sohl-Dickstein, J. (2016). Survey of Expres- sivity in Deep Neural Networks. ArXiv e-prints.
Rahimi, A. and Recht, B. (2007). Random features for large-scale kernel machines. In the Annual Conference on Neural Information Processing Systems (NIPS). | 1701.07274#298 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
Rahimi, A. and Recht, B. (2007). Random features for large-scale kernel machines. In the Annual Conference on Neural Information Processing Systems (NIPS).
Rajendran, J., Lakshminarayanan, A., Khapra, M. M., P, P., and Ravindran, B. (2017). Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain. the International Conference on Learning Representations (ICLR).
Ranzato, M., Chopra, S., Auli, M., and Zaremba, W. (2016). Sequence level training with recurrent neural networks. In the International Conference on Learning Representations (ICLR).
Rao, Y., Lu, J., and Zhou, J. (2017). Attention-aware deep reinforcement learning for video face recognition. In the IEEE International Conference on Computer Vision (ICCV).
Ravi, S. and Larochelle, H. (2017). Optimization as a model for few-shot learning. In the Interna- tional Conference on Learning Representations (ICLR).
Reed, S. and de Freitas, N. (2016). Neural programmer-interpreters. In the International Conference on Learning Representations (ICLR). | 1701.07274#299 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
1701.07274 | 300 | Reed, S. and de Freitas, N. (2016). Neural programmer-interpreters. In the International Conference on Learning Representations (ICLR).
Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Ren, Z., Wang, X., Zhang, N., Lv, X., and Li, L.-J. (2017). Deep reinforcement learning-based image captioning with embedding reward. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Rennie, S. J., Marcheret, E., Mroueh, Y., Ross, J., and Goel, V. (2017). Self-critical sequence training for image captioning. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Rhinehart, N. and Kitani, K. M. (2017). First-person activity forecasting with online inverse reinforcement learning. In the IEEE International Conference on Computer Vision (ICCV).
1701.07274 | 301 | Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD).
Riedmiller, M. (2005). Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning (ECML).
Rocktäschel, T. and Riedel, S. (2017). End-to-end Differentiable Proving. ArXiv e-prints.
Roijers, D. M., Vamplew, P., Whiteson, S., and Dazeley, R. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48:67–113.
Ruder, S. (2017). An Overview of Multi-Task Learning in Deep Neural Networks. ArXiv e-prints.
1701.07274 | 302 | Ruder, S. (2017). An Overview of Multi-Task Learning in Deep Neural Networks. ArXiv e-prints.
Ruelens, F., Claessens, B. J., Vandael, S., Schutter, B. D., Babuška, R., and Belmans, R. (2016). Residential demand response of thermostatically controlled loads using batch reinforcement learning. IEEE Transactions on Smart Grid, PP(99):1–11.
Russell, S. and Norvig, P. (2009). Artificial Intelligence: A Modern Approach (3rd edition). Pearson.
Sabour, S., Frosst, N., and Hinton, G. E. (2017). Dynamic routing between capsules. In the Annual Conference on Neural Information Processing Systems (NIPS).
Salakhutdinov, R. (2016). Foundations of unsupervised deep learning, a talk at Deep Learning School, https://www.bayareadlschool.org. https://www.youtube.com/watch?v=rK6bchqeaN8.
Salimans, T., Ho, J., Chen, X., and Sutskever, I. (2017). Evolution Strategies as a Scalable Alternative to Reinforcement Learning. ArXiv e-prints.
1701.07274 | 303 |
Santoro, A., Raposo, D., Barrett, D. G. T., Malinowski, M., Pascanu, R., Battaglia, P., and Lillicrap, T. (2017). A simple neural network module for relational reasoning. ArXiv e-prints.
Saon, G., Sercu, T., Rennie, S., and Kuo, H.-K. J. (2016). The IBM 2016 English Conversational Telephone Speech Recognition System. In Annual Meeting of the International Speech Communication Association (INTERSPEECH).
Saria, S. (2014). A $3 trillion challenge to computational scientists: Transforming healthcare delivery. IEEE Intelligent Systems, 29(4):82–87.
Schaul, T., Horgan, D., Gregor, K., and Silver, D. (2015). Universal value function approximators. In the International Conference on Machine Learning (ICML).
Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2016). Prioritized experience replay. In the International Conference on Learning Representations (ICLR).
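In standard notation (a minimal sketch, not the exact symbols of the cited text), the prioritized sampling scheme in the entry above draws transition i with probability P(i) = p_i^alpha / sum_k p_k^alpha, where p_i is the transition's priority and alpha controls how strongly prioritization is applied; the induced bias is corrected with importance-sampling weights w_i = (N * P(i))^(-beta), normalized by the maximum weight.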
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61:85–117.
1701.07274 | 304 | Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61:85–117.
Schulman, J., Abbeel, P., and Chen, X. (2017). Equivalence Between Policy Gradients and Soft Q-Learning. ArXiv e-prints.
Schulman, J., Levine, S., Moritz, P., Jordan, M. I., and Abbeel, P. (2015). Trust region policy optimization. In the International Conference on Machine Learning (ICML).
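In standard notation (a minimal sketch), the trust-region step referred to in the entry above maximizes E[ (pi_theta(a|s) / pi_theta_old(a|s)) * A_theta_old(s, a) ] over theta, subject to E[ KL(pi_theta_old(.|s) || pi_theta(.|s)) ] <= delta for a small bound delta.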
Schuurmans, D. and Zinkevich, M. (2016). Deep learning games. In the Annual Conference on Neural Information Processing Systems (NIPS).
Segler, M. H. S., Preuss, M., and Waller, M. P. (2017). Learning to Plan Chemical Syntheses. ArXiv e-prints.
Serban, I. V., Lowe, R., Charlin, L., and Pineau, J. (2015). A survey of available corpora for building data-driven dialogue systems. arXiv e-prints, abs/1512.05742.
1701.07274 | 305 | Serban, I. V., Sankar, C., Germain, M., Zhang, S., Lin, Z., Subramanian, S., Kim, T., Pieper, M., Chandar, S., Ke, N. R., Mudumba, S., de Brebisson, A., Sotelo, J. M. R., Suhubdy, D., Michalski, V., Nguyen, A., Pineau, J., and Bengio, Y. (2017). A Deep Reinforcement Learning Chatbot. ArXiv e-prints.
Shah, P., Hakkani-Tür, D., and Heck, L. (2016). Interactive reinforcement learning for task-oriented dialogue management. In NIPS 2016 Deep Learning for Action and Interaction Workshop.
Shalev-Shwartz, S., Shamir, O., and Shammah, S. (2017). Failures of gradient-based deep learning. In the International Conference on Machine Learning (ICML).
Sharma, S., Lakshminarayanan, A. S., and Ravindran, B. (2017). Learning to repeat: Fine grained action repetition for deep reinforcement learning. In the International Conference on Learning Representations (ICLR).
1701.07274 | 306 | She, L. and Chai, J. (2017). Interactive learning for acquisition of grounded verb semantics towards human-robot communication. In the Association for Computational Linguistics annual meeting (ACL).
Shen, Y., Huang, P.-S., Gao, J., and Chen, W. (2017). Reasonet: Learning to stop reading in machine comprehension. In ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD).
Shoham, Y. and Leyton-Brown, K. (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press.
Shoham, Y., Powers, R., and Grenager, T. (2007). If multi-agent learning is the answer, what is the question? Artificial Intelligence, 171:365–377.
Shortreed, S. M., Laber, E., Lizotte, D. J., Stroup, T. S., Pineau, J., and Murphy, S. A. (2011). Informing sequential clinical decision-making through reinforcement learning: an empirical study. Machine Learning, 84:109–136.
1701.07274 | 307 | Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., and Webb, R. (2017). Learning from simulated and unsupervised images through adversarial training. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Shwartz-Ziv, R. and Tishby, N. (2017). Opening the Black Box of Deep Neural Networks via Information. ArXiv e-prints.
Silver, D. (2016). Deep reinforcement learning, a tutorial at ICML 2016. http://icml.cc/2016/tutorials/deep_rl_tutorial.pdf.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016a). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.
1701.07274 | 308 | Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., and Hassabis, D. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. ArXiv e-prints.
Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. (2014). Deterministic policy gradient algorithms. In the International Conference on Machine Learning (ICML).
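In standard notation (a minimal sketch), the deterministic policy gradient of the entry above is grad_theta J(mu_theta) = E_{s ~ rho^mu}[ grad_theta mu_theta(s) * grad_a Q^mu(s, a) evaluated at a = mu_theta(s) ], i.e., the chain rule applied through a deterministic policy mu_theta and the action-value function.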
Silver, D., Newnham, L., Barker, D., Weller, S., and McFall, J. (2013). Concurrent reinforcement learning from customer interactions. In the International Conference on Machine Learning (ICML).
1701.07274 | 309 | Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., and Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550:354–359.
Silver, D., van Hasselt, H., Hessel, M., Schaul, T., Guez, A., Harley, T., Dulac-Arnold, G., Reichert, D., Rabinowitz, N., Barreto, A., and Degris, T. (2016b). The predictron: End-to-end learning and planning. In NIPS 2016 Deep Reinforcement Learning Workshop.
Simeone, O. (2017). A Brief Introduction to Machine Learning for Engineers. ArXiv e-prints.
Smith, L. N. (2017). Best Practices for Applying Deep Learning to Novel Applications. ArXiv e-prints.
1701.07274 | 310 | Smith, L. N. (2017). Best Practices for Applying Deep Learning to Novel Applications. ArXiv e-prints.
Smith, V., Chiang, C.-K., Sanjabi, M., and Talwalkar, A. (2017). Federated multi-task learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Snell, J., Swersky, K., and Zemel, R. S. (2017). Prototypical Networks for Few-shot Learning. ArXiv e-prints.
Socher, R., Pennington, J., Huang, E. H., Ng, A. Y., and Manning, C. D. (2011). Semi-supervised recursive autoencoders for predicting sentiment distributions. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C., Ng, A., and Potts, C. (2013). Recursive deep models for semantic compositionality over a sentiment treebank. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Song, Y. and Roth, D. (2017). Machine Learning with World Knowledge: The Position and Survey. ArXiv e-prints.
1701.07274 | 311 | Song, Y. and Roth, D. (2017). Machine Learning with World Knowledge: The Position and Survey. ArXiv e-prints.
Spring, R. and Shrivastava, A. (2017). Scalable and sustainable deep learning via randomized hashing. In ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD).
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15:1929–1958.
Stadie, B. C., Abbeel, P., and Sutskever, I. (2017). Third person imitation learning. In the International Conference on Learning Representations (ICLR).
1701.07274 | 312 | Stoica, I., Song, D., Popa, R. A., Patterson, D. A., Mahoney, M. W., Katz, R. H., Joseph, A. D., Jordan, M., Hellerstein, J. M., Gonzalez, J., Goldberg, K., Ghodsi, A., Culler, D. E., and Abbeel, P. (2017). A Berkeley view of systems challenges for AI. Technical Report No. UCB/EECS-2017-159.
Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., Leyton-Brown, K., Parkes, D., Press, W., Saxenian, A., Shah, J., Tambe, M., and Teller, A. (2016). Artificial Intelligence and Life in 2030 - One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford University, Stanford, CA.
Stone, P. and Veloso, M. (2000). Multiagent systems: A survey from a machine learning perspective. Autonomous Robots, 8(3):345–383.
1701.07274 | 313 | Stone, P. and Veloso, M. (2000). Multiagent systems: A survey from a machine learning perspective. Autonomous Robots, 8(3):345–383.
Strub, F., de Vries, H., Mary, J., Piot, B., Courville, A., and Pietquin, O. (2017). End-to-end optimization of goal-driven and visually grounded dialogue systems. ArXiv e-prints.
Su, P.-H., Gasic, M., Mrksic, N., Rojas-Barahona, L., Ultes, S., Vandyke, D., Wen, T.-H., and Young, S. (2016a). Continuously Learning Neural Dialogue Management. ArXiv e-prints.
Su, P.-H., Gašić, M., Mrkšić, N., Rojas-Barahona, L., Ultes, S., Vandyke, D., Wen, T.-H., and Young, S. (2016b). On-line active reward learning for policy optimisation in spoken dialogue systems. In the Association for Computational Linguistics annual meeting (ACL).
Sukhbaatar, S., Szlam, A., and Fergus, R. (2016). Learning multiagent communication with backpropagation. In the Annual Conference on Neural Information Processing Systems (NIPS).
1701.07274 | 314 | Sukhbaatar, S., Weston, J., and Fergus, R. (2015). End-to-end memory networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Supančič, III, J. and Ramanan, D. (2017). Tracking as online decision-making: Learning a policy from streaming videos with reinforcement learning. In the IEEE International Conference on Computer Vision (ICCV).
Surana, A., Sarkar, S., and Reddy, K. K. (2016). Guided deep reinforcement learning for additive manufacturing control application. In NIPS 2016 Deep Reinforcement Learning Workshop.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Sutton, R. (2016). Reinforcement learning for artificial intelligence, course slides. http://www.incompleteideas.net/sutton/609%20dropbox/.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44.
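In standard notation (a minimal sketch), the TD(0) rule of the entry above updates V(S_t) <- V(S_t) + alpha * [ R_{t+1} + gamma * V(S_{t+1}) - V(S_t) ], with step size alpha and discount factor gamma.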
Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In the International Conference on Machine Learning (ICML).
1701.07274 | 315 | Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In the International Conference on Machine Learning (ICML).
Sutton, R. S. (1992). Adapting bias by gradient descent: An incremental version of delta-bar-delta. In the AAAI Conference on Artiï¬cial Intelligence (AAAI).
Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
Sutton, R. S. and Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd Edition, in preparation). MIT Press.
Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, C., and Wiewiora, E. (2009a). Fast gradient-descent methods for temporal-difference learning with linear function approximation. In the International Conference on Machine Learning (ICML).
Sutton, R. S., Mahmood, A. R., and White, M. (2016). An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research, 17:1–29.
1701.07274 | 316 | Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In the Annual Conference on Neural Information Processing Systems (NIPS).
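In standard notation (a minimal sketch), the policy gradient theorem of the entry above gives grad_theta J(theta) proportional to E_{s ~ d^pi, a ~ pi_theta}[ grad_theta log pi_theta(a|s) * Q^pi(s, a) ], where d^pi is the on-policy state distribution.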
Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In Proc. of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Sutton, R. S., Precup, D., and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211.
Sutton, R. S., Szepesvári, C., and Maei, H. R. (2009b). A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation. In the Annual Conference on Neural Information Processing Systems (NIPS).
1701.07274 | 317 | Synnaeve, G., Nardelli, N., Auvolat, A., Chintala, S., Lacroix, T., Lin, Z., Richoux, F., and Usunier, N. (2016). TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games. ArXiv e-prints.
Sze, V., Chen, Y.-H., Yang, T.-J., and Emer, J. (2017). Efficient Processing of Deep Neural Networks: A Tutorial and Survey. ArXiv e-prints.
Szepesvári, C. (2010). Algorithms for Reinforcement Learning. Morgan & Claypool.
Tamar, A., Wu, Y., Thomas, G., Levine, S., and Abbeel, P. (2016). Value iteration networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Tang, H., Houthooft, R., Foote, D., Stooke, A., Chen, X., Duan, Y., Schulman, J., Turck, F. D., and Abbeel, P. (2017). Exploration: A study of count-based exploration for deep reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
1701.07274 | 318 | Tanner, B. and White, A. (2009). RL-Glue: Language-independent software for reinforcement-learning experiments. Journal of Machine Learning Research, 10:2133–2136.
Tassa, Y., Doron, Y., Muldal, A., Erez, T., Li, Y., de Las Casas, D., Budden, D., Abdolmaleki, A., Merel, J., Lefrancq, A., Lillicrap, T., and Riedmiller, M. (2018). DeepMind Control Suite. ArXiv e-prints.
Taylor, M. E. and Stone, P. (2009). Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10:1633–1685.
Tesauro, G. (1994). TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219.
Tessler, C., Givony, S., Zahavy, T., Mankowitz, D. J., and Mannor, S. (2017). A deep hierarchical approach to lifelong learning in Minecraft. In the AAAI Conference on Artificial Intelligence (AAAI).
1701.07274 | 319 | Theocharous, G., Thomas, P. S., and Ghavamzadeh, M. (2015). Personalized ad recommendation systems for life-time value optimization with guarantees. In the International Joint Conference on Artiï¬cial Intelligence (IJCAI).
Tian, Y., Gong, Q., Shang, W., Wu, Y., and Zitnick, L. (2017). ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games. ArXiv e-prints.
Tramèr, F., Kurakin, A., Papernot, N., Boneh, D., and McDaniel, P. (2017). Ensemble Adversarial Training: Attacks and Defenses. ArXiv e-prints.
Tran, D., Hoffman, M. D., Saurous, R. A., Brevdo, E., Murphy, K., and Blei, D. M. (2017). Deep probabilistic programming. In the International Conference on Learning Representations (ICLR).
Trischler, A., Ye, Z., Yuan, X., and Suleman, K. (2016). Natural language comprehension with the EpiReader. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
1701.07274 | 320 | Tsitsiklis, J. N. and Van Roy, B. (1997). An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690.
Tsitsiklis, J. N. and Van Roy, B. (2001). Regression methods for pricing complex American-style options. IEEE Transactions on Neural Networks, 12(4):694–703.
Usunier, N., Synnaeve, G., Lin, Z., and Chintala, S. (2017). Episodic exploration for deep deterministic policies: An application to StarCraft micromanagement tasks. In the International Conference on Learning Representations (ICLR).
van der Pol, E. and Oliehoek, F. A. (2017). Coordinated deep reinforcement learners for traffic light control. In NIPS'16 Workshop on Learning, Inference and Control of Multi-Agent Systems.
van Hasselt, H., Guez, A., and Silver, D. (2016a). Deep reinforcement learning with double Q-learning. In the AAAI Conference on Artificial Intelligence (AAAI).
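In standard notation (a minimal sketch), the double Q-learning target of the entry above decouples action selection from evaluation: Y_t = R_{t+1} + gamma * Q(S_{t+1}, argmax_a Q(S_{t+1}, a; theta_t); theta_t^-), with online parameters theta_t and target parameters theta_t^-.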
1701.07274 | 321 | van Hasselt, H., Guez, A., Hessel, M., Mnih, V., and Silver, D. (2016b). Learning values across many orders of magnitude. In the Annual Conference on Neural Information Processing Systems (NIPS).
van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., and Tsang, J. (2017). Hybrid reward architecture for reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. In the Annual Conference on Neural Information Processing Systems (NIPS).
Venkatraman, A., Rhinehart, N., Sun, W., Pinto, L., Hebert, M., Boots, B., Kitani, K. M., and Bagnell, J. A. (2017). Predictive-state decoders: Encoding the future into recurrent networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
1701.07274 | 322 | Večerík, M., Hester, T., Scholz, J., Wang, F., Pietquin, O., Piot, B., Heess, N., Rothörl, T., Lampe, T., and Riedmiller, M. (2017). Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. In the Annual Conference on Neural Information Processing Systems (NIPS).
Vezhnevets, A. S., Mnih, V., Agapiou, J., Osindero, S., Graves, A., Vinyals, O., and Kavukcuoglu, K. (2016). Strategic attentive writer for learning macro-actions. In the Annual Conference on Neural Information Processing Systems (NIPS).
Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. (2017). Feudal networks for hierarchical reinforcement learning. In the International Confer- ence on Machine Learning (ICML).
Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., and Wierstra, D. (2016). Matching networks for one shot learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
1701.07274 | 323 | Vinyals, O., Fortunato, M., and Jaitly, N. (2015). Pointer networks. In the Annual Conference on Neural Information Processing Systems (NIPS).
Wang, H. and Raj, B. (2017). On the Origin of Deep Learning. ArXiv e-prints.
Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M. (2016). Learning to reinforcement learn. ArXiv e-prints.
Wang, S. I., Liang, P., and Manning, C. D. (2016a). Learning language games through interaction. In the Association for Computational Linguistics annual meeting (ACL).
Wang, W., Yang, N., Wei, F., Chang, B., and Zhou, M. (2017a). Gated self-matching networks for reading comprehension and question answering. In the Association for Computational Linguistics annual meeting (ACL).
1701.07274 | 324 | Wang, Z., Bapst, V., Heess, N., Mnih, V., Munos, R., Kavukcuoglu, K., and de Freitas, N. (2017b). Sample efficient actor-critic with experience replay. In the International Conference on Learning Representations (ICLR).
Wang, Z., Merel, J., Reed, S., Wayne, G., de Freitas, N., and Heess, N. (2017). Robust Imitation of Diverse Behaviors. ArXiv e-prints.
Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., and de Freitas, N. (2016b). Dueling network architectures for deep reinforcement learning. In the International Conference on Machine Learning (ICML).
Watkins, C. J. C. H. and Dayan, P. (1992). Q-learning. Machine Learning, 8:279–292.
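As a minimal sketch in Python (array layout, function name, and default step sizes are illustrative assumptions, not taken from the cited text), the tabular Q-learning update of the entry above can be written as:

import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Q is a (num_states, num_actions) array of action-value estimates.
    # Move Q[s, a] toward the bootstrapped target r + gamma * max_a' Q[s_next, a'].
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q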
Watter, M., Springenberg, J. T., Boedecker, J., and Riedmiller, M. (2015). Embed to control: A locally linear latent dynamics model for control from raw images. In the Annual Conference on Neural Information Processing Systems (NIPS).
Watters, N., Tacchetti, A., Weber, T., Pascanu, R., Battaglia, P., and Zoran, D. (2017). Visual interaction networks: Learning a physics simulator from video. In the Annual Conference on Neural Information Processing Systems (NIPS).

Weber, T., Racanière, S., Reichert, D. P., Buesing, L., Guez, A., Jimenez Rezende, D., Puigdomènech Badia, A., Vinyals, O., Heess, N., Li, Y., Pascanu, R., Battaglia, P., Silver, D., and Wierstra, D. (2017). Imagination-augmented agents for deep reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Weiss, K., Khoshgoftaar, T. M., and Wang, D. (2016). A survey of transfer learning. Journal of Big Data, 3(9).
Weiss, R. J., Chorowski, J., Jaitly, N., Wu, Y., and Chen, Z. (2017). Sequence-to-Sequence Models Can Directly Transcribe Foreign Speech. ArXiv e-prints.
Welleck, S., Mao, J., Cho, K., and Zhang, Z. (2017). Saliency-based sequential image attention with multiset prediction. In the Annual Conference on Neural Information Processing Systems (NIPS).

Wen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., and Young, S. (2015a). Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Conference on Empirical Methods in Natural Language Processing (EMNLP).

Wen, T.-H., Vandyke, D., Mrksic, N., Gasic, M., Rojas-Barahona, L. M., Su, P.-H., Ultes, S., and Young, S. (2017). A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL).

Wen, Z., O'Neill, D., and Maei, H. (2015b). Optimal demand response using device-based reinforcement learning. IEEE Transactions on Smart Grid, 6(5):2312–2324.
Weston, J., Chopra, S., and Bordes, A. (2015). Memory networks. In the International Conference on Learning Representations (ICLR).
White, A. and White, M. (2016). Investigating practical linear temporal difference learning. In the International Conference on Autonomous Agents & Multiagent Systems (AAMAS).
Whye Teh, Y., Bapst, V., Czarnecki, W. M., Quan, J., Kirkpatrick, J., Hadsell, R., Heess, N., and Pascanu, R. (2017). Distral: Robust multitask reinforcement learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Wiering, M. and van Otterlo, M. (2012). Reinforcement Learning: State-of-the-Art (edited book). Springer.
Williams, J. D., Asadi, K., and Zweig, G. (2017). Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In the Association for Computational Linguistics annual meeting (ACL).

Williams, J. D. and Zweig, G. (2016). End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. ArXiv e-prints.
Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256.
Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. (2017). The Marginal Value of Adaptive Gradient Methods in Machine Learning. ArXiv e-prints.
Wu, J., Lu, E., Kohli, P., Freeman, B., and Tenenbaum, J. (2017a). Learning to see physics via visual de-animation. In the Annual Conference on Neural Information Processing Systems (NIPS).
Wu, J., Tenenbaum, J. B., and Kohli, P. (2017b). Neural scene de-rendering. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Wu, J., Yildirim, I., Lim, J. J., Freeman, B., and Tenenbaum, J. (2015). Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In the Annual Conference on Neural Information Processing Systems (NIPS).
Wu, L., Xia, Y., Zhao, L., Tian, F., Qin, T., Lai, J., and Liu, T.-Y. (2017c). Adversarial Neural Machine Translation. ArXiv e-prints.
Wu, Y., Mansimov, E., Liao, S., Grosse, R., and Ba, J. (2017). Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. In the Annual Conference on Neural Information Processing Systems (NIPS).

Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Kaiser, L., Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and Dean, J. (2016). Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv e-prints.

Wu, Y. and Tian, Y. (2017). Training agent for first-person shooter game with actor-critic curriculum learning. In the International Conference on Learning Representations (ICLR).
Xiong, C., Zhong, V., and Socher, R. (2017a). Dynamic coattention networks for question answering. In the International Conference on Learning Representations (ICLR).

Xiong, W., Droppo, J., Huang, X., Seide, F., Seltzer, M., Stolcke, A., Yu, D., and Zweig, G. (2017b). The Microsoft 2016 conversational speech recognition system. In the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

Xiong, W., Hoang, T., and Wang, W. Y. (2017c). DeepPath: A reinforcement learning method for knowledge graph reasoning. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Xiong, W., Wu, L., Alleva, F., Droppo, J., Huang, X., and Stolcke, A. (2017). The Microsoft 2017 Conversational Speech Recognition System. ArXiv e-prints.
Xu, D., Nair, S., Zhu, Y., Gao, J., Garg, A., Fei-Fei, L., and Savarese, S. (2017). Neural Task Programming: Learning to Generalize Across Hierarchical Tasks. ArXiv e-prints.
Xu, K., Ba, J. L., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R. S., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In the International Conference on Machine Learning (ICML).

Xu, L. D., He, W., and Li, S. (2014). Internet of things in industries: A survey. IEEE Transactions on Industrial Informatics, 10(4):2233–2243.
Yahya, A., Li, A., Kalakrishnan, M., Chebotar, Y., and Levine, S. (2016). Collective robot reinforcement learning with distributed asynchronous guided policy search. ArXiv e-prints.

Yang, B. and Mitchell, T. (2017). Leveraging knowledge bases in LSTMs for improving machine reading. In the Association for Computational Linguistics annual meeting (ACL).

Yang, X., Chen, Y.-N., Hakkani-Tur, D., Crook, P., Li, X., Gao, J., and Deng, L. (2016). End-to-End Joint Learning of Natural Language Understanding and Dialogue Manager. ArXiv e-prints.
Yang, Z., He, X., Gao, J., Deng, L., and Smola, A. (2015). Stacked Attention Networks for Image Question Answering. ArXiv e-prints.

Yang, Z., Hu, J., Salakhutdinov, R., and Cohen, W. W. (2017). Semi-supervised QA with generative domain-adaptive nets. In the Association for Computational Linguistics annual meeting (ACL).

Yannakakis, G. N. and Togelius, J. (2018). Artificial Intelligence and Games. Springer.
Yao, H., Szepesvari, C., Sutton, R. S., Modayil, J., and Bhatnagar, S. (2014). Universal option models. In the Annual Conference on Neural Information Processing Systems (NIPS).
Yi, Z., Zhang, H., Tan, P., and Gong, M. (2017). Dualgan: Unsupervised dual learning for image- to-image translation. In the IEEE International Conference on Computer Vision (ICCV).
Yogatama, D., Blunsom, P., Dyer, C., Grefenstette, E., and Ling, W. (2017). Learning to compose words into sentences with reinforcement learning. In the International Conference on Learning Representations (ICLR).
Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014). How transferable are features in deep neural networks? In the Annual Conference on Neural Information Processing Systems (NIPS).

Young, S., Gašić, M., Thomson, B., and Williams, J. D. (2013). POMDP-based statistical spoken dialogue systems: a review. Proceedings of the IEEE, 101(5):1160–1179.
Young, T., Hazarika, D., Poria, S., and Cambria, E. (2017). Recent Trends in Deep Learning Based Natural Language Processing. ArXiv e-prints.
Yu, L., Zhang, W., Wang, J., and Yu, Y. (2017). Seqgan: Sequence generative adversarial nets with policy gradient. In the AAAI Conference on Artiï¬cial Intelligence (AAAI).
Yu, Y.-L., Li, Y., Szepesvári, C., and Schuurmans, D. (2009). A general projection property for distribution families. In the Annual Conference on Neural Information Processing Systems (NIPS).
Yun, S., Choi, J., Yoo, Y., Yun, K., and Young Choi, J. (2017). Action-decision networks for visual tracking with deep reinforcement learning. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Zagoruyko, S. and Komodakis, N. (2017). Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In the International Conference on Learning Representations (ICLR).
Zaremba, W. and Sutskever, I. (2015). Reinforcement Learning Neural Turing Machines - Revised. ArXiv e-prints.
Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2017). Understanding deep learning requires rethinking generalization. In the International Conference on Learning Representations (ICLR).
Zhang, H., Yu, H., and Xu, W. (2017a). Listen, Interact and Talk: Learning to Speak via Interaction. ArXiv e-prints.
Zhang, J., Ding, Y., Shen, S., Cheng, Y., Sun, M., Luan, H., and Liu, Y. (2017b). THUMT: An Open Source Toolkit for Neural Machine Translation. ArXiv e-prints.
Zhang, L., Wang, S., and Liu, B. (2018). Deep Learning for Sentiment Analysis: A Survey. ArXiv e-prints.
Zhang, Q. and Zhu, S.-C. (2018). Visual interpretability for deep learning: a survey. Frontiers of Information Technology & Electronic Engineering, 19(1):27–39.

Zhang, X. and Lapata, M. (2017). Sentence simplification with deep reinforcement learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP).

Zhang, Y., Mustafizur Rahman, M., Braylan, A., Dang, B., Chang, H.-L., Kim, H., McNamara, Q., Angert, A., Banner, E., Khetan, V., McDonnell, T., Thanh Nguyen, A., Xu, D., Wallace, B. C., and Lease, M. (2016). Neural Information Retrieval: A Literature Review. ArXiv e-prints.

Zhang, Y., Pezeshki, M., Brakel, P., Zhang, S., Yoshua Bengio, C. L., and Courville, A. (2017c). Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks. ArXiv e-prints.
Zhao, T. and Eskenazi, M. (2016). Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In the Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL).
Zhong, Z., Yan, J., and Liu, C.-L. (2017). Practical Network Blocks Design with Q-Learning. ArXiv e-prints.
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. (2015). Object detectors emerge in deep scene CNNs. In the International Conference on Learning Representations (ICLR).
Zhou, H., Huang, M., Zhang, T., Zhu, X., and Liu, B. (2017). Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory. ArXiv e-prints.
Zhou, Y. and Tuzel, O. (2017). VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. ArXiv e-prints.
Zhou, Z.-H. (2016). Machine Learning (in Chinese). Tsinghua University Press, Beijing, China.
Zhou, Z.-H. and Feng, J. (2017). Deep forest: Towards an alternative to deep neural networks. In the International Joint Conference on Artificial Intelligence (IJCAI).
Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. (2017a). Unpaired image-to-image translation using cycle-consistent adversarial networks. In the IEEE International Conference on Computer Vision (ICCV).
Zhu, X. and Goldberg, A. B. (2009). Introduction to semi-supervised learning. Morgan & Claypool.
Zhu, Y., Mottaghi, R., Kolve, E., Lim, J. J., Gupta, A., Li, F.-F., and Farhadi, A. (2017b). Target-driven visual navigation in indoor scenes using deep reinforcement learning. In IEEE International Conference on Robotics and Automation (ICRA).

Zinkevich, M. (2017). Rules of Machine Learning: Best Practices for ML Engineering. http://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf.
arXiv:1701.06538v1 [cs.LG] 23 Jan 2017
Under review as a conference paper at ICLR 2017
# OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER
Noam Shazeer1, Azalia Mirhoseini∗†1, Krzysztof Maziarz∗2, Andy Davis1, Quoc Le1, Geoffrey Hinton1 and Jeff Dean1
1Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com 2Jagiellonian University, Cracow, [email protected]
# ABSTRACT
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
# INTRODUCTION AND RELATED WORK
1.1 CONDITIONAL COMPUTATION
Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.
Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.
∗Equally major contributors. †Work done as a member of the Google Brain Residency program (g.co/brainresidency).
Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:
• Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
⢠Large batch sizes are critical for performance, as they amortize the costs of parameter trans- fers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
⢠Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be com- putationally efï¬cient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional com- putation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
⢠Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing. | 1701.06538#4 | Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | The capacity of a neural network to absorb information is limited by its
• Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters.
In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.
1.2 OUR APPROACH: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER
Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E Table 9). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
1.3 RELATED WORK ON MIXTURES OF EXPERTS
Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations, such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.
The works above concern top-level mixtures of experts. The mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.
Our work builds on this use of MoEs as a general purpose neural network component. While Eigen et al. (2013) uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
# 2 THE STRUCTURE OF THE MIXTURE-OF-EXPERTS LAYER
1701.06538 | 9 | # 2 THE STRUCTURE OF THE MIXTURE-OF-EXPERTS LAYER
The Mixture-of-Experts (MoE) layer consists of a set of n “expert networks” E_1, ..., E_n, and a “gating network” G whose output is a sparse n-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same-sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.
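As a concrete reading of this interface, the sketch below is an illustrative Python/NumPy toy (sizes and helper names such as `make_expert` are made up, not the paper's implementation): it builds n experts with identical two-layer feed-forward architectures but separately initialized parameters, all mapping same-sized inputs to same-sized outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, d_hidden = 4, 8, 16

def make_expert():
    # Identical 2-layer ReLU architecture; each expert draws its own parameters.
    W1 = rng.normal(0.0, 0.1, (d_model, d_hidden))
    W2 = rng.normal(0.0, 0.1, (d_hidden, d_model))
    return lambda x: np.maximum(x @ W1, 0.0) @ W2

experts = [make_expert() for _ in range(n_experts)]

x = rng.normal(size=d_model)
outputs = [E(x) for E in experts]              # each E_i(x) has the same shape
assert all(o.shape == (d_model,) for o in outputs)
```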
Let us denote by G(x) and E_i(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module can be written as follows:
y = Σ_{i=1}^{n} G(x)_i E_i(x)   (1)
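For example (illustrative numbers, not from the paper), with n = 4 experts and gate vector G(x) = (0.7, 0.3, 0, 0), Equation (1) gives y = 0.7·E_1(x) + 0.3·E_2(x); the two zero-gated experts contribute nothing to the output.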
We save computation based on the sparsity of the output of G(x). Wherever G(x)_i = 0, we need not compute E_i(x). In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of “experts”, each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.
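A minimal sketch of this conditional evaluation (illustrative Python/NumPy with stand-in experts; not the authors' implementation): only experts whose gate value is nonzero are ever called, so per-example cost scales with the number of selected experts rather than with n.

```python
import numpy as np

n_experts = 1000
# Stand-in experts: expert i simply scales its input by (i + 1).
experts = [(lambda i: (lambda x: (i + 1) * x))(i) for i in range(n_experts)]

x = np.ones(8)

# A sparse gate vector G(x): only two entries are nonzero.
gates = np.zeros(n_experts)
gates[[3, 17]] = [0.6, 0.4]

# Equation (1), skipping every expert whose gate is exactly zero.
y = np.zeros_like(x)
evaluated = 0
for i in np.flatnonzero(gates):
    y += gates[i] * experts[i](x)
    evaluated += 1

print(evaluated, "of", n_experts, "experts evaluated")   # -> 2 of 1000 experts evaluated
```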
Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014). A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers.
2.1 GATING NETWORK
Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix W_g and then apply the Softmax function.
G_σ(x) = Softmax(x · W_g)   (2)
Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to −∞ (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix W_noise.
G(x) = Softmax(KeepTopK(H(x), k))   (3)
H(x)_i = (x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i)   (4)
KeepTopK(v, k)_i = v_i if v_i is in the top k elements of v, and −∞ otherwise.   (5)
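Reading Equations (2)–(5) together, noisy top-k gating for a single example x might be sketched as follows (illustrative Python/NumPy; sizes, the fixed seed, and helper names such as `noisy_top_k_gate` are assumptions, not the authors' code).

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, k = 8, 16, 2

Wg = rng.normal(0.0, 0.1, (d_model, n_experts))       # gating weights W_g
Wnoise = rng.normal(0.0, 0.1, (d_model, n_experts))   # noise-scale weights W_noise

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def softplus(z):
    return np.log1p(np.exp(z))

def keep_top_k(v, k):
    # Eq. (5): keep the top-k entries of v, set the rest to -inf.
    out = np.full_like(v, -np.inf)
    top = np.argsort(v)[-k:]
    out[top] = v[top]
    return out

def noisy_top_k_gate(x):
    clean = x @ Wg                                             # (x · W_g)_i
    noise_stddev = softplus(x @ Wnoise)                        # Softplus((x · W_noise)_i)
    h = clean + rng.standard_normal(n_experts) * noise_stddev  # Eq. (4)
    return softmax(keep_top_k(h, k))                           # Eq. (3); zeros outside the top k

x = rng.normal(size=d_model)
g = noisy_top_k_gate(x)
print(np.count_nonzero(g))   # -> 2 nonzero gate values
```

With k = 2, exactly two gate values are nonzero, which is the sparsity pattern that Equation (1) exploits to skip the remaining experts.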
Training the Gating Network We train the gating network by simple back-propagation, along with the rest of the model. If we choose k > 1, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015) who use boolean gates and a REINFORCE-style approach to train the gating network.
3 ADDRESSING PERFORMANCE CHALLENGES
3.1 THE SHRINKING BATCH PROBLEM
On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n ≪ b examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forward and backward passes. We propose the following techniques for increasing the batch size:
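As a rough illustration of how quickly the per-expert batch shrinks (illustrative numbers, not taken from the paper):

```python
# Shrinking batch arithmetic: each example is routed to k of n experts,
# so each expert sees roughly k*b/n examples out of a batch of b.
b, k = 1024, 4
for n in (16, 256, 4096):
    per_expert = k * b / n
    print(f"n={n:5d} experts -> ~{per_expert:.0f} examples per expert")
# n=   16 experts -> ~256 examples per expert
# n=  256 experts -> ~16 examples per expert
# n= 4096 experts -> ~1 examples per expert
```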