arXiv:1511.09249 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models
Juergen Schmidhuber | cs.AI (primary), cs.LG, cs.NE | 36 pages, 1 figure | published 2015-11-30 | source: http://arxiv.org/pdf/1511.09249 | arXiv admin note: substantial text overlap with arXiv:1404.7828

Summary: This paper addresses the general problem of reinforcement learning (RL) in partially observable environments. In 2013, our large RL recurrent neural networks (RNNs) learned from scratch to drive simulated cars from high-dimensional video input. However, real brains are more powerful in many ways. In particular, they learn a predictive model of their initially unknown environment, and somehow use it for abstract (e.g., hierarchical) planning and reasoning. Guided by algorithmic information theory, we describe RNN-based AIs (RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending sequences of tasks, some of them provided by the user, others invented by the RNNAI itself in a curious, playful fashion, to improve its RNN-based world model. Unlike our previous model-building RNN-based RL machines dating back to 1990, the RNNAI learns to actively query its model for abstract reasoning and planning and decision making, essentially "learning to think." The basic ideas of this report can be applied to many other cases where one RNN-like system exploits the algorithmic information content of another. They are taken from a grant proposal submitted in Fall 2014, and also explain concepts such as "mirror neurons." Experimental results will be described in separate papers.

References [85]–[192]:

[85] M. Grüttner, F. Sehnke, T. Schaul, and J. Schmidhuber. Multi-Dimensional Deep Memory Atari-Go Players for Parameter Exploring Policy Gradients. In Proceedings of the International Conference on Artificial Neural Networks (ICANN), pages 114–123. Springer, 2010.
[86] X. Guo, S. Singh, H. Lee, R. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems 27 (NIPS). 2014.
[87] I. Guyon, V. Vapnik, B. Boser, L. Bottou, and S. A. Solla. Structural risk minimization for character recognition. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems (NIPS) 4, pages 471–479. Morgan Kaufmann, 1992.
[88] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, and A. Y. Ng. DeepSpeech: Scaling up end-to-end speech recognition. Preprint arXiv:1412.5567, 2014.
[89] N. Hansen, S. D. Müller, and P. Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1):1–18, 2003.
[90] N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159–195, 2001.
[91] S. J. Hanson and L. Y. Pratt. Comparing biases for minimal network construction with back-propagation. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems (NIPS) 1, pages 177–185. San Mateo, CA: Morgan Kaufmann, 1989.
[92] B. Hassibi and D. G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 5, pages 164–171. Morgan Kaufmann, 1993.
[93] N. Heess, D. Silver, and Y. W. Teh. Actor-critic reinforcement learning with energy-based policies. In Proc. European Workshop on Reinforcement Learning, pages 43–57, 2012.
[94] V. Heidrich-Meisner and C. Igel. Neuroevolution strategies for episodic reinforcement learning. Journal of Algorithms, 64(4):152–168, 2009.
[95] J. Hertz, A. Krogh, and R. Palmer. Introduction to the Theory of Neural Computation. Addison-Wesley, Redwood City, 1991.
[96] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[97] G. E. Hinton and D. van Camp. Keeping neural networks simple. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, pages 11–18. Springer, 1993.
[98] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991. Advisor: J. Schmidhuber.
[99] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
[100] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
[101] S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997. Based on TR FKI-207-95, TUM (1995).
[102] S. Hochreiter and J. Schmidhuber. Feature extraction through LOCOCODE. Neural Computation, 11(3):679–714, 1999.
[103] S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In Lecture Notes on Comp. Sci. 2130, Proc. Intl. Conf. on Artificial Neural Networks (ICANN-2001), pages 87–94. Springer: Berlin, Heidelberg, 2001.
[104] S. B. Holden. On the Theory of Generalization and Self-Structuring in Linearly Weighted Connectionist Networks. PhD thesis, Cambridge University, Engineering Department, 1994.
[105] J. H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, 1975.
[106] V. Honavar and L. Uhr. Generative learning structures and processes for generalized connectionist networks. Information Sciences, 70(1):75–108, 1993.
[107] V. Honavar and L. M. Uhr. A network of neuron-like units that learns to perceive by generation as well as reweighting of its links. In D. Touretzky, G. E. Hinton, and T. Sejnowski, editors, Proc. of the 1988 Connectionist Models Summer School, pages 472–484, San Mateo, 1988. Morgan Kaufman.
[108] D. A. Huffman. A method for construction of minimum-redundancy codes. Proceedings IRE, 40:1098–1101, 1952.
[109] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005. (On J. Schmidhuber's SNF grant 20-61847).
[110] C. Igel. Neuroevolution for reinforcement learning using evolution strategies. In R. Reynolds, H. Abbass, K. C. Tan, B. Mckay, D. Essam, and T. Gedeon, editors, Congress on Evolutionary Computation (CEC 2003), volume 4, pages 2588–2595. IEEE, 2003.
[111] A. G. Ivakhnenko. The group method of data handling – a rival of the method of stochastic approximation. Soviet Automatic Control, 13(3):43–55, 1968.
[112] A. G. Ivakhnenko. Polynomial theory of complex systems. IEEE Transactions on Systems, Man and Cybernetics, (4):364–378, 1971.
[113] A. G. Ivakhnenko and V. G. Lapa. Cybernetic Predicting Devices. CCM Information Corporation, 1965.
[114] T. Jaakkola, S. P. Singh, and M. I. Jordan. Reinforcement learning algorithm for partially observable Markov decision problems. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems (NIPS) 7, pages 345–352. MIT Press, 1995.
[115] C. Jacob, A. Lindenmayer, and G. Rozenberg. Genetic L-System Programming. In Parallel Problem Solving from Nature III, Lecture Notes in Computer Science. Springer-Verlag, 1994.
[116] H. Jaeger. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304:78–80, 2004.
[117] J. Jameson. Delayed reinforcement learning with multiple time scale hierarchical backpropagated adaptive critics. In Neural Networks for Control. 1991.
[118] S. R. Jodogne and J. H. Piater. Closed-loop learning of visual control policies. J. Artificial Intelligence Research, 28:349–391, 2007.
[119] M. I. Jordan. Supervised learning and systems with excess degrees of freedom. Technical Report COINS TR 88-27, Massachusetts Institute of Technology, 1988.
[120] M. I. Jordan and D. E. Rumelhart. Supervised learning with a distal teacher. Technical Report Occasional Paper #40, Center for Cog. Sci., Massachusetts Institute of Technology, 1990.
[121] C.-F. Juang. A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 34(2):997–1006, 2004.
[122] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Technical report, Brown University, Providence RI, 1995.
[123] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: a survey. Journal of AI research, 4:237–285, 1996.
[124] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[125] H. J. Kelley. Gradient theory of optimal flight paths. ARS Journal, 30(10):947–954, 1960.
[126] H. Kimura, K. Miyazaki, and S. Kobayashi. Reinforcement learning in POMDPs with function approximation. In ICML, volume 97, pages 152–160, 1997.
[127] H. Kitano. Designing neural networks using genetic algorithms with graph generation system. Complex Systems, 4:461–476, 1990.
[128] N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on, volume 3, pages 2619–2624. IEEE, 2004.
[129] E. Kohler, C. Keysers, M. A. Umilta, L. Fogassi, V. Gallese, and G. Rizzolatti. Hearing sounds, understanding actions: action representation in mirror neurons. Science, 297(5582):846–848, 2002.
[130] A. N. Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information Transmission, 1:1–11, 1965.
[131] V. R. Kompella, M. D. Luciw, and J. Schmidhuber. Incremental slow feature analysis: Adaptive low-complexity slow feature updating from high-dimensional input streams. Neural Computation, 24(11):2994–3024, 2012.
[132] J. Koutník, G. Cuccu, J. Schmidhuber, and F. Gomez. Evolving large-scale neural networks for vision-based reinforcement learning. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pages 1061–1068, Amsterdam, July 2013. ACM.
[133] J. Koutník, K. Greff, F. Gomez, and J. Schmidhuber. A Clockwork RNN. In Proceedings of the 31st International Conference on Machine Learning (ICML), volume 32, pages 1845–1853, 2014. arXiv:1402.3511 [cs.NE].
[134] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS 2012), page 4, 2012.
[135] A. Krogh and J. A. Hertz. A simple weight decay can improve generalization. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 4, pages 950–957. Morgan Kaufmann, 1992.
[136] S. Kullback and R. A. Leibler. On information and sufficiency. The Annals of Mathematical Statistics, pages 79–86, 1951.
[137] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. JMLR, 4:1107–1149, 12 2003.
[138] S. Lange and M. Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Neural Networks (IJCNN), The 2010 International Joint Conference on, pages 1–8, July 2010.
[139] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015. Critique by JS under http://www.idsia.ch/~juergen/deep-learning-conspiracy.html.
[140] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[141] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 598–605. Morgan Kaufmann, 1990.
[142] R. Legenstein, N. Wilbert, and L. Wiskott. Reinforcement learning on slow features of high-dimensional input streams. PLoS Computational Biology, 6(8), 2010.
[143] A. U. Levin, T. K. Leen, and J. E. Moody. Fast pruning using principal components. Advances in Neural Information Processing Systems 6, page 35. Morgan Kaufmann, 1994.
[144] A. U. Levin and K. S. Narendra. Control of nonlinear dynamical systems using neural networks. II. Observability, identification, and control. IEEE Transactions on Neural Networks, 7(1):30–42, 1995.
[145] L. A. Levin. On the notion of a random sequence. Soviet Math. Dokl., 14(5):1413–1416, 1973.
[146] L. A. Levin. Universal sequential search problems. Problems of Information Transmission, 9(3):265–266, 1973.
[147] M. Li and P. M. B. Vitányi. An Introduction to Kolmogorov Complexity and its Applications (2nd edition). Springer, 1997.
[148] L. Lin. Reinforcement Learning for Robots Using Neural Networks. PhD thesis, Carnegie Mellon University, Pittsburgh, January 1993.
[149] L.-J. Lin. Programming robots using reinforcement learning and teaching. In Proceedings of the Ninth National Conference on Artificial Intelligence - Volume 2, AAAI'91, pages 781–786. AAAI Press, 1991.
[150] L.-J. Lin and T. M. Mitchell. Memory approaches to reinforcement learning in non-Markovian domains. Technical Report CMU-CS-92-138, School of Computer Science, Carnegie Mellon University, 1992.
[151] S. Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's thesis, Univ. Helsinki, 1970.
[152] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: Scaling up. In A. Prieditis and S. Russell, editors, Machine Learning: Proceedings of the Twelfth International Conference, pages 362–370. Morgan Kaufmann Publishers, San Francisco, CA, 1995.
[153] L. Ljung. System identification. Springer, 1998.
[154] M. Luciw, V. R. Kompella, S. Kazerounian, and J. Schmidhuber. An intrinsic value system for developing multiple invariant representations with incremental slowness learning. Frontiers in Neurorobotics, 7(9), 2013.
[155] D. J. C. MacKay. A practical Bayesian framework for backprop networks. Neural Computation, 4:448–472, 1992.
[156] H. R. Maei and R. S. Sutton. GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In Proceedings of the Third Conference on Artificial General Intelligence, volume 1, pages 91–96, 2010.
[157] S. Mahadevan. Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine Learning, 22:159, 1996.
[158] J. Martens and I. Sutskever. Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML), pages 1033–1040, 2011.
[159] H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. Advanced Robotics, 22(13-14):1521–1537, 2008.
[160] R. A. McCallum. Learning to use selective attention and short-term memory in sequential tasks. In P. Maes, M. Mataric, J.-A. Meyer, J. Pollack, and S. W. Wilson, editors, From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior, Cambridge, MA, pages 315–324. MIT Press, Bradford Books, 1996.
[161] U. Meier, D. C. Ciresan, L. M. Gambardella, and J. Schmidhuber. Better digit recognition with a committee of simple neural nets. In 11th International Conference on Document Analysis and Recognition (ICDAR), pages 1135–1139, 2011.
[162] I. Menache, S. Mannor, and N. Shimkin. Q-cut – dynamic discovery of sub-goals in reinforcement learning. In Proc. ECML'02, pages 295–306, 2002.
[163] N. Meuleau, L. Peshkin, K. E. Kim, and L. P. Kaelbling. Learning finite state controllers for partially observable environments. In 15th International Conference of Uncertainty in AI, pages 427–436, 1999.
[164] O. Miglino, H. Lund, and S. Nolfi. Evolving mobile robots in simulated and real environments. Artificial Life, 2(4):417–434, 1995.
[165] G. Miller, P. Todd, and S. Hedge. Designing neural networks using genetic algorithms. In Proceedings of the 3rd International Conference on Genetic Algorithms, pages 379–384. Morgan Kauffman, 1989.
[166] W. T. Miller, P. J. Werbos, and R. S. Sutton. Neural networks for control. MIT Press, 1995.
[167] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. Based on TR arXiv:1312.5602 (2013); critique by JS under http://www.idsia.ch/~juergen/naturedeepmind.html.
[168] J. E. Moody. Fast learning in multi-resolution hierarchies. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems (NIPS) 1, pages 29–39. Morgan Kaufmann, 1989.
[169] J. E. Moody. The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems (NIPS) 4, pages 847–854. Morgan Kaufmann, 1992.
[170] J. E. Moody and J. Utans. Architecture selection strategies for neural networks: Application to corporate bond rating prediction. In A. N. Refenes, editor, Neural Networks in the Capital Markets. John Wiley & Sons, 1994.
[171] A. Moore and C. Atkeson. The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces. Machine Learning, 21(3):199–233, 1995.
[172] A. Moore and C. G. Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine Learning, 13:103–130, 1993.
[173] D. E. Moriarty. Symbiotic Evolution of Neural Networks in Sequential Decision Tasks. PhD thesis, Department of Computer Sciences, The University of Texas at Austin, 1997.
[174] J. Morimoto and K. Doya. Robust reinforcement learning. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems (NIPS) 13, pages 1061–1067. MIT Press, 2000.
[175] M. C. Mozer and S. Das. A connectionist symbol manipulator that discovers the structure of context-free languages. Advances in Neural Information Processing Systems (NIPS), pages 863–863, 1993.
[176] M. C. Mozer and P. Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems (NIPS) 1, pages 107–115. Morgan Kaufmann, 1989.
[177] P. W. Munro. A dual back-propagation scheme for scalar reinforcement learning. Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, WA, pages 165–176, 1987.
[178] K. S. Narendra and K. Parthasarathy. Identification and control of dynamical systems using neural networks. Neural Networks, IEEE Transactions on, 1(1):4–27, 1990.
[179] N. Nguyen and B. Widrow. The truck backer-upper: An example of self learning in neural networks. In Proceedings of the International Joint Conference on Neural Networks, pages 357–363. IEEE Press, 1989.
[180] S. Nolfi, D. Floreano, O. Miglino, and F. Mondada. How to evolve autonomous robots: Different approaches in evolutionary robotics. In R. A. Brooks and P. Maes, editors, Fourth International Workshop on the Synthesis and Simulation of Living Systems (Artificial Life IV), pages 190–197. MIT, 1994.
[181] K.-S. Oh and K. Jung. GPU implementation of neural networks. Pattern Recognition, 37(6):1311–1314, 2004.
[182] J. R. Olsson. Inductive functional programming using incremental program transformation. Artificial Intelligence, 74(1):55–83, 1995.
[183] M. Otsuka, J. Yoshimoto, and K. Doya. Free-energy-based reinforcement learning in a partially observable environment. In Proc. ESANN, 2010.
[184] P.-Y. Oudeyer, A. Baranes, and F. Kaplan. Intrinsically motivated learning of real world sensorimotor skills with developmental constraints. In G. Baldassarre and M. Mirolli, editors, Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2013.
[185] R. Parekh, J. Yang, and V. Honavar. Constructive neural network learning algorithms for multi-category pattern classification. IEEE Transactions on Neural Networks, 11(2):436–451, 2000.
[186] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML'13: JMLR: W&CP volume 28, 2013.
[187] F. Pasemann, U. Steinmetz, and U. Dieckman. Evolving structure and function of neuro-controllers. In P. J. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, and A. Zalzala, editors, Proceedings of the Congress on Evolutionary Computation, volume 3, pages 1973–1978, Mayflower Hotel, Washington D.C., USA, 6-9 1999. IEEE Press.
[188] J. Peng and R. J. Williams. Incremental multi-step Q-learning. Machine Learning, 22:283–290, 1996.
[189] J. A. Pérez-Ortiz, F. A. Gers, D. Eck, and J. Schmidhuber. Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets. Neural Networks, 16:241–250, 2003.
[190] J. Peters. Policy gradient methods. Scholarpedia, 5(11):3698, 2010.
[191] J. Peters and S. Schaal. Natural actor-critic. Neurocomputing, 71:1180–1190, March 2008.
[192] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
[193] J. B. Pollack. Implications of recursive distributed representations. In Proc. NIPS, pages 527–536, 1988.
[194] E. L. Post. Finite combinatory processes-formulation 1. The Journal of Symbolic Logic, 1(3):103–105, 1936.
[195] D. Precup, R. S. Sutton, and S. Singh. Multi-time models for temporally abstract planning. In Advances in Neural Information Processing Systems (NIPS), pages 1050–1056. Morgan Kaufmann, 1998.
[196] B. A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks. In J. Kolen and S. Kremer, editors, A field guide to dynamical recurrent networks, pages 23–78. IEEE Press, 2001.
[197] D. Prokhorov and D. Wunsch. Adaptive critic design. IEEE Transactions on Neural Networks, 8(5):997–1007, 1997.
[198] R. Raina, A. Madhavan, and A. Ng. Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML), pages 873–880. ACM, 2009.
[199] M. A. Ranzato, F. Huang, Y. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. Computer Vision and Pattern Recognition Conference (CVPR'07), pages 1–8. IEEE Press, 2007.
[200] I. Rechenberg. Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Dissertation, 1971. Published 1973 by Fromman-Holzboog.
[201] M. Riedmiller. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In Proc. ECML-2005, pages 317–328. Springer-Verlag Berlin Heidelberg, 2005.
[202] M. Riedmiller, S. Lange, and A. Voigtlaender. Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks (IJCNN), pages 1–8, Brisbane, Australia, 2012.
[203] M. Ring, T. Schaul, and J. Schmidhuber. The two-dimensional organization of behavior. In Proceedings of the First Joint Conference on Development and Learning and on Epigenetic Robotics ICDL-EPIROB, Frankfurt, August 2011.
[204] M. B. Ring. Incremental development of complex behaviors through automatic construction of sensory-motor hierarchies. In L. Birnbaum and G. Collins, editors, Machine Learning: Proceedings of the Eighth International Workshop, pages 343–347. Morgan Kaufmann, 1991.
[205] M. B. Ring. Learning sequential tasks by incrementally adding higher orders. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 115–122. Morgan Kaufmann, 1993.
[206] M. B. Ring. Continual Learning in Reinforcement Environments. PhD thesis, University of Texas at Austin, Austin, Texas 78712, August 1994.
[207] J. Rissanen. Stochastic complexity and modeling. The Annals of Statistics, 14(3):1080–1100, 1986.
[208] A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.
[209] T. Robinson and F. Fallside. Dynamic reinforcement driven error propagation networks with application to game playing. In Proceedings of the 11th Conference of the Cognitive Science Society, Ann Arbor, pages 836–843, 1989.
[210] T. Rückstieß, M. Felder, and J. Schmidhuber. State-Dependent Exploration for policy gradient methods. In W. D. et al., editor, European Conference on Machine Learning (ECML) and Principles and Practice of Knowledge Discovery in Databases 2008, Part II, LNAI 5212, pages 234–249, 2008.
[211] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, volume 1, pages 318–362. MIT Press, 1986.
[212] G. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG-TR 166, Cambridge University, UK, 1994.
[213] H. Sak, A. Senior, and F. Beaufays. Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling. In Proc. Interspeech, 2014.
[214] H. Sak, A. Senior, K. Rao, F. Beaufays, and J. Schalkwyk. Google Voice search: faster and more accurate. In Google Research Blog, http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html, 2015.
[215] K. Samejima, K. Doya, and M. Kawato. Inter-module credit assignment in modular reinforcement learning. Neural Networks, 16(7):985–994, 2003.
[216] J. C. Santamaría, R. S. Sutton, and A. Ram. Experiments with reinforcement learning in problems with continuous state and action spaces. Adaptive Behavior, 6(2):163–217, 1997.
[217] A. M. Schäfer, S. Udluft, and H.-G. Zimmermann. Learning long term dependencies with recurrent neural networks. In S. D. Kollias, A. Stafylopatis, W. Duch, and E. Oja, editors, ICANN (1), volume 4131 of Lecture Notes in Computer Science, pages 71–80. Springer, 2006.
[218] R. E. Schapire. The strength of weak learnability. Machine Learning, 5:197–227, 1990.
[219] T. Schaul and J. Schmidhuber. Metalearning. Scholarpedia, 6(5):4650, 2010.
[220] D. Scherer, A. Müller, and S. Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. In Proc. International Conference on Artificial Neural Networks (ICANN), pages 92–101, 2010.
[221] J. Schmidhuber. A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403–412, 1989.
[222] J. Schmidhuber. Learning algorithms for networks with internal and external feedback. In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton, editors, Proc. of the 1990 Connectionist Models Summer School, pages 52–61. Morgan Kaufmann, 1990.
[223] J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Proc. IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 2, pages 253–258, 1990.
[224] J. Schmidhuber. Curious model-building control systems. In Proceedings of the International Joint Conference on Neural Networks, Singapore, volume 2, pages 1458–1463. IEEE press, 1991.
[225] J. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967–972. Elsevier Science Publishers B.V., North-Holland, 1991.
[226] J. Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In J. A. Meyer and S. W. Wilson, editors, Proc. of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pages 222–227. MIT Press/Bradford Books, 1991.
[227] J. Schmidhuber. Reinforcement learning in Markovian and non-Markovian environments. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3 (NIPS 3), pages 500–506. Morgan Kaufmann, 1991.
[228] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992. (Based on TR FKI-148-91, TUM, 1991).
[229] J. Schmidhuber. Learning to control fast-weight memories: An alternative to recurrent nets. Neural Computation, 4(1):131–139, 1992.
[230] J. Schmidhuber. Netzwerkarchitekturen, Zielfunktionen und Kettenregel. (Network architectures, objective functions, and chain rule.) Habilitation Thesis, Inst. f. Inf., Tech. Univ. Munich, 1993.
[231] J. Schmidhuber. On decreasing the ratio between learning complexity and number of time-varying variables in fully recurrent nets. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, pages 460–463. Springer, 1993.
[232] J. Schmidhuber. A self-referential weight matrix. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, pages 446–451. Springer, 1993.
[233] J. Schmidhuber. On learning how to learn learning strategies. Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München, 1994. See [252, 251].
[234] J. Schmidhuber. Discovering solutions with low Kolmogorov complexity and high generalization capability. In A. Prieditis and S. Russell, editors, Machine Learning: Proceedings of the Twelfth International Conference, pages 488–496. Morgan Kaufmann Publishers, San Francisco, CA, 1995.
[235] J. Schmidhuber. Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks, 10(5):857–873, 1997.
[236] J. Schmidhuber. Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science, 13(4):587–612, 2002.
[237] J. Schmidhuber. The Speed Prior: a new simplicity measure yielding near-optimal computable predictions. In J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Lecture Notes in Artificial Intelligence, pages 216–228. Springer, Sydney, Australia, 2002.
[238] J. Schmidhuber. Optimal ordered problem solver. Machine Learning, 54:211–254, 2004.
[239] J. Schmidhuber. Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 18(2):173–187, 2006.
[240] J. Schmidhuber. Simple algorithmic theory of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, art, science, music, jokes. SICE Journal of the Society of Instrument and Control Engineers, 48(1):21–32, 2009.
[241] J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.
[242] J. Schmidhuber. Self-delimiting neural networks. Technical Report IDSIA-08-12, arXiv:1210.0118v1 [cs.NE], The Swiss AI Lab IDSIA, 2012.
[243] J. Schmidhuber. POWERPLAY: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem. Frontiers in Psychology, 2013.
[244] J. Schmidhuber. Deep Learning. Scholarpedia, 10(11):32832, 2015.
[245] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015. Published online 2014; 888 references; based on TR arXiv:1404.7828 [cs.NE].
[246] J. Schmidhuber and B. Bakker. NIPS 2003 RNNaissance workshop on recurrent neural networks, Whistler, CA, 2003. http://www.idsia.ch/~juergen/rnnaissance.html.
[247] J. Schmidhuber, D. Ciresan, U. Meier, J. Masci, and A. Graves. On fast deep nets for AGI vision. In Proc. Fourth Conference on Artificial General Intelligence (AGI), Google, Mountain View, CA, pages 243–246, 2011.
[248] J. Schmidhuber and S. Heil. Sequential neural text compression. IEEE Transactions on Neural Networks, 7(1):142–146, 1996.
[249] J. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(1 & 2):135–141, 1991.
[250] J. Schmidhuber, D. Wierstra, M. Gagliolo, and F. J. Gomez. Training recurrent networks by EVOLINO. Neural Computation, 19(3):757–779, 2007.
[251] J. Schmidhuber, J. Zhao, and N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, editors, Learning to learn, pages 293–309. Kluwer, 1997.
[252] J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105–130, 1997.
[253] B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors. Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA, 1998.
[254] A. Schwartz. A reinforcement learning method for maximizing undiscounted rewards. In Proc. ICML, pages 298–305, 1993.
[255] H. P. Schwefel. Numerische Optimierung von Computer-Modellen. Dissertation, 1974. Published 1977 by Birkhäuser, Basel.
[256] F. Sehnke, C. Osendorfer, T. Rückstieß, A. Graves, J. Peters, and J. Schmidhuber. Parameter-exploring policy gradients. Neural Networks, 23(4):551–559, 2010.
[257] C. E. Shannon. A mathematical theory of communication (parts I and II). Bell System Technical Journal, XXVII:379–423, 1948.
[258] H. T. Siegelmann and E. D. Sontag. Turing computability with neural nets. Applied Mathematics Letters, 4(6):77–80, 1991.
[259] K. Sims. Evolving virtual creatures. In A. Glassner, editor, Proceedings of SIGGRAPH '94 (Orlando, Florida, July 1994), Computer Graphics Proceedings, Annual Conference, pages 15–22. ACM SIGGRAPH, ACM Press, July 1994. ISBN 0-89791-667-0.
[260] Ö. Simsek and A. G. Barto. Skill characterization based on betweenness. In NIPS'08, pages 1497–1504, 2008.
[261] S. Singh, A. G. Barto, and N. Chentanez. Intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems 17 (NIPS). MIT Press, Cambridge, MA, 2005.
[262] S. P. Singh. Reinforcement learning algorithms for average-payoff Markovian decision processes. In National Conference on Artificial Intelligence, pages 700–705, 1994.
[263] R. J. Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7:1–22, 1964.
[264] R. J. Solomonoff. Complexity-based induction systems. IEEE Transactions on Information Theory, IT-24(5):422–432, 1978.
[265] B. Speelpenning. Compiling Fast Partial Derivatives of Functions Given by Algorithms. PhD thesis, Department of Computer Science, University of Illinois, Urbana-Champaign, Jan. 1980.
[266] R. K. Srivastava, J. Masci, S. Kazerounian, F. Gomez, and J. Schmidhuber. Compete to compute. In Advances in Neural Information Processing Systems (NIPS), pages 2310–2318, 2013.
[267] R. K. Srivastava, B. R. Steunebrink, and J. Schmidhuber. First experiments with PowerPlay. Neural Networks, 41(0):130–136, 2013. Special Issue on Autonomous Learning.
[268] K. O. Stanley, D. B. D'Ambrosio, and J. Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185–212, 2009.
[269] Y. Sun, F. Gomez, T. Schaul, and J. Schmidhuber. A Linear Time Natural Evolution Strategy for Non-Separable Functions. In Proceedings of the Genetic and Evolutionary Computation Conference, page 61, Amsterdam, NL, July 2013. ACM.
[270] Y. Sun, D. Wierstra, T. Schaul, and J. Schmidhuber. Efficient natural evolution strategies. In Proc. 11th Genetic and Evolutionary Computation Conference (GECCO), pages 539–546, 2009.
[271] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. Technical Report arXiv:1409.3215 [cs.CL], Google, 2014. NIPS'2014.
[272] R. Sutton and A. Barto. Reinforcement learning: An introduction. Cambridge, MA, MIT Press, 1998.
[273] R. S. Sutton. Integrated architectures for learning, planning and reacting based on dynamic programming. In Machine Learning: Proceedings of the Seventh International Workshop, 1990.
[274] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems (NIPS) 12, pages 1057–1063, 1999.
[275] R. S. Sutton, D. Precup, and S. P. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artif. Intell., 112(1-2):181–211, 1999.
[276] R. S. Sutton, C. Szepesvári, and H. R. Maei. A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation. In Advances in Neural Information Processing Systems (NIPS'08), volume 21, pages 1609–1616, 2008.
[277] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. Technical Report arXiv:1409.4842 [cs.CV], Google, 2014.
[278] A. Teller. The evolution of mental models. In J. Kenneth E. Kinnear, editor, Advances in Genetic Programming, pages 199–219. MIT Press, 1994.
[279] J. Tenenberg, J. Karlsson, and S. Whitehead. Learning via task decomposition. In J. A. Meyer, H. Roitblat, and S. Wilson, editors, From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior, pages 337–343. MIT Press, 1993.
[280] G. Tesauro. TD-gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219, 1994.
[281] J. N. Tsitsiklis and B. van Roy. Feature-based methods for large scale dynamic programming. Machine Learning, 22(1-3):59–94, 1996.
[283] P. E. Utgoff and D. J. Stracuzzi. Many-layered learning. Neural Computation, 14(10):2497â 2529, 2002.
[284] H. van Hasselt. Reinforcement learning in continuous state and action spaces. In M. Wiering and M. van Otterlo, editors, Reinforcement Learning, pages 207â251. Springer, 2012.
[285] N. van Hoorn, J. Togelius, and J. Schmidhuber. Hierarchical controller learning in a ï¬rst-person shooter. In Proceedings of the IEEE Symposium on Computational Intelligence and Games, 2009.
[286] V. Vapnik. Principles of risk minimization for learning theory. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems (NIPS) 4, pages 831â838. Morgan Kaufmann, 1992.
[286] V. Vapnik. Principles of risk minimization for learning theory. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems (NIPS) 4, pages 831–838. Morgan Kaufmann, 1992.
[287] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
[288] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. Preprint arXiv:1411.4555, 2014.
[289] C. S. Wallace and D. M. Boulton. An information theoretic measure for classification. Computer Journal, 11(2):185–194, 1968.
[290] C. Wang, S. S. Venkatesh, and J. S. Judd. Optimal stopping and effective machine complexity in learning. In Advances in Neural Information Processing Systems (NIPS'6), pages 303–310. Morgan Kaufmann, 1994.
[291] O. Watanabe. Kolmogorov complexity and computational complexity. EATCS Monographs on Theoretical Computer Science, Springer, 1992.
[292] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Oxford, 1989.
[293] C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
[294] A. S. Weigend, D. E. Rumelhart, and B. A. Huberman. Generalization by weight-elimination with application to forecasting. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems (NIPS) 3, pages 875â882. San Mateo, CA: Morgan Kaufmann, 1991.
[295] G. Weiss. Hierarchical chunking in classiï¬er systems. In Proceedings of the 12th National Conference on Artiï¬cial Intelligence, volume 2, pages 1335â1340. AAAI Press/The MIT Press, 1994.
[296] J. Weng, N. Ahuja, and T. S. Huang. Cresceptron: a self-organizing neural network which grows adaptively. In International Joint Conference on Neural Networks (IJCNN), volume 1, pages 576â581. IEEE, 1992. | 1511.09249#127 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 128 | [297] P. J. Werbos. Applications of advances in nonlinear sensitivity analysis. In Proceedings of the 10th IFIP Conference, 31.8 - 4.9, NYC, pages 762â770, 1981.
[298] P. J. Werbos. Building and understanding adaptive systems: A statistical/numerical approach to factory automation and brain research. IEEE Transactions on Systems, Man, and Cybernetics, 17, 1987.
[299] P. J. Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1, 1988.
In IEEE/INNS International Joint Conference on Neural Networks, Washington, D.C., volume 1, pages 209â 216, 1989.
[301] P. J. Werbos. Neural networks for control and system identiï¬cation. In Proceedings of IEEE/CDC Tampa, Florida, 1989.
[302] P. J. Werbos. Neural networks, system identiï¬cation, and control in the chemical industries. In D. A. S. D. A. White, editor, Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches, pages 283â356. Thomson Learning, 1992. | 1511.09249#128 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 129 | [303] J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
[304] H. White. Learning in artiï¬cial neural networks: A statistical perspective. Neural Computation, 1(4):425â464, 1989.
[305] S. Whiteson. Evolutionary computation for reinforcement learning. In M. Wiering and M. van Otterlo, editors, Reinforcement Learning, pages 325â355. Springer, Berlin, Germany, 2012.
[306] S. Whiteson, N. Kohl, R. Miikkulainen, and P. Stone. Evolving keepaway soccer players through task decomposition. Machine Learning, 59(1):5â30, May 2005.
[307] A. P. Wieland. Evolving neural network controllers for unstable systems. In International Joint Conference on Neural Networks (IJCNN), volume 2, pages 667â673. IEEE, 1991.
34
[308] M. Wiering and J. Schmidhuber. Solving POMDPs with Levin search and EIRA. In L. Saitta, editor, Machine Learning: Proceedings of the Thirteenth International Conference, pages 534â 542. Morgan Kaufmann Publishers, San Francisco, CA, 1996. | 1511.09249#129 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 130 | [309] M. Wiering and J. Schmidhuber. HQ-learning. Adaptive Behavior, 6(2):219â246, 1998.
[310] M. Wiering and M. van Otterlo. Reinforcement Learning. Springer, 2012.
[311] M. A. Wiering and J. Schmidhuber. Fast online Q(λ). Machine Learning, 33(1):105â116, 1998.
[312] D. Wierstra, A. Foerster, J. Peters, and J. Schmidhuber. Recurrent policy gradients. Logic Journal of IGPL, 18(2):620â634, 2010.
[313] D. Wierstra, T. Schaul, J. Peters, and J. Schmidhuber. Natural evolution strategies. In Congress of Evolutionary Computation (CEC 2008), 2008.
[314] R. J. Williams. Reinforcement-learning in connectionist networks: A mathematical analysis. Technical Report 8605, Institute for Cognitive Science, University of California, San Diego, 1986.
[315] R. J. Williams. Toward a theory of reinforcement-learning connectionist systems. Technical Report NU-CCS-88-3, College of Comp. Sci., Northeastern University, Boston, MA, 1988. | 1511.09249#130 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 131 | [316] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229â256, 1992.
[317] R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum, 1994.
[318] L. Wiskott and T. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715â770, 2002.
In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Sys- tems (NIPS) 6, pages 200â207. Morgan Kaufmann, 1994.
[320] B. M. Yamauchi and R. D. Beer. Sequential behavior and learning in evolved dynamical neural networks. Adaptive Behavior, 2(3):219â246, 1994.
[321] X. Yao. A review of evolutionary artiï¬cial neural networks. International Journal of Intelligent Systems, 4:203â222, 1993. | 1511.09249#131 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 132 | [321] X. Yao. A review of evolutionary artiï¬cial neural networks. International Journal of Intelligent Systems, 4:203â222, 1993.
[322] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. Technical Report arXiv:1311.2901 [cs.CV], NYU, 2013.
[323] H.-G. Zimmermann, C. Tietz, and R. Grothmann. Forecasting with recurrent neural networks: 12 tricks. In G. Montavon, G. B. Orr, and K.-R. M¨uller, editors, Neural Networks: Tricks of the Trade (2nd ed.), volume 7700 of Lecture Notes in Computer Science, pages 687â707. Springer, 2012.
35
Compress history by Câs intrinsic reward for Mâs predictive compression improvements coding Store Lifelong history of actions/inputs/rewards
# Reward
# Input Actions | 1511.09249#132 | On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models | This paper addresses the general problem of reinforcement learning (RL) in
partially observable environments. In 2013, our large RL recurrent neural
networks (RNNs) learned from scratch to drive simulated cars from
high-dimensional video input. However, real brains are more powerful in many
ways. In particular, they learn a predictive model of their initially unknown
environment, and somehow use it for abstract (e.g., hierarchical) planning and
reasoning. Guided by algorithmic information theory, we describe RNN-based AIs
(RNNAIs) designed to do the same. Such an RNNAI can be trained on never-ending
sequences of tasks, some of them provided by the user, others invented by the
RNNAI itself in a curious, playful fashion, to improve its RNN-based world
model. Unlike our previous model-building RNN-based RL machines dating back to
1990, the RNNAI learns to actively query its model for abstract reasoning and
planning and decision making, essentially "learning to think." The basic ideas
of this report can be applied to many other cases where one RNN-like system
exploits the algorithmic information content of another. They are taken from a
grant proposal submitted in Fall 2014, and also explain concepts such as
"mirror neurons." Experimental results will be described in separate papers. | http://arxiv.org/pdf/1511.09249 | Juergen Schmidhuber | cs.AI, cs.LG, cs.NE | 36 pages, 1 figure. arXiv admin note: substantial text overlap with
arXiv:1404.7828 | null | cs.AI | 20151130 | 20151130 | [] |
1511.09249 | 133 | 35
Compress history by Câs intrinsic reward for Mâs predictive compression improvements coding Store Lifelong history of actions/inputs/rewards
# Reward
# Input Actions
Figure 1: In a series of trials, an RNN controller C steers an agent interacting with an initially unknown, partially observable environment. The entire lifelong interaction history is stored and used to train an RNN world model M, which learns to predict new inputs from histories of previous inputs and actions, using predictive coding to compress the history (Sec. 4). Given an RL problem, C may speed up its search for rewarding behavior by learning programs that address/query/exploit M's program-encoded knowledge about predictable regularities, e.g., through extra connections from and to (a copy of) M; see Sec. 5.3. This may be much cheaper than learning reward-generating programs from scratch. C may also get intrinsic reward for creating experiments causing data with yet unknown regularities that improve M (Sec. 6). Not shown are deep FNNs as preprocessors (Sec. 4.3) for high-dimensional data (video etc.) observed by C and M.
# A C-LSTM Neural Network for Text Classification
Chunting Zhou¹, Chonglin Sun², Zhiyuan Liu³, Francis C.M. Lau¹
¹Department of Computer Science, The University of Hong Kong
²School of Innovation Experiment, Dalian University of Technology
³Department of Computer Science and Technology, Tsinghua University, Beijing
# Abstract
Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes CNN to extract a sequence of higher-level phrase representations, which are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases and global and temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks.
# 1 Introduction
As one of the core steps in NLP, sentence modeling aims at representing sentences as meaningful features for tasks such as sentiment classification. Traditional sentence modeling uses the bag-of-words model, which often suffers from the curse of dimensionality; others use composition-based methods instead, e.g., an algebraic operation over semantic word vectors to produce the semantic sentence vector. However, such methods may not perform well due to the loss of word order information. More recent models for distributed sentence representation fall into two categories according to the form of the input sentence: sequence-based models and tree-structured models. Sequence-based models construct sentence representations from word sequences, taking into account the relationship between successive words (Johnson and Zhang, 2015). Tree-structured models treat each word token as a node in a syntactic parse tree and learn sentence representations from leaves to the root in a recursive manner (Socher et al., 2013b).

Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have emerged as two mainstream architectures and are often combined with either sequence-based or tree-structured models (Tai et al., 2015; Lei et al., 2015; Kim, 2014; Kalchbrenner et al., 2014; Mou et al., 2015).
Owing to the capability of capturing local correlations of spatial or temporal structures, CNNs have achieved top performance in computer vision, speech recognition and NLP. For sentence modeling, CNNs perform excellently in extracting n-gram features at different positions of a sentence through convolutional filters, and can learn short- and long-range relations through pooling operations. CNNs have been successfully combined with both the sequence-based model (Denil et al., 2014; Kalchbrenner et al., 2014) and the tree-structured model (Mou et al., 2015) in sentence modeling.

The other popular neural network architecture, RNN, is able to handle sequences of any length and capture long-term dependencies. To avoid the problem of exploding or vanishing gradients in the standard RNN, Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and other variants (Cho et al., 2014) were designed for better memory storage and access. Along with the sequence-based (Tang et al., 2015) or the tree-structured (Tai et al., 2015) models, RNNs have achieved remarkable results in sentence and document modeling.
To conclude, CNN is able to learn local responses from temporal or spatial data but lacks the ability to learn sequential correlations; on the other hand, RNN is specialized for sequential modeling but unable to extract features in a parallel way. It has been shown that higher-level modeling of x_t can help to disentangle underlying factors of variation within the input, which should then make it easier to learn the temporal structure between successive time steps (Pascanu et al., 2014). For example, Sainath et al. (2015) have obtained respectable improvements in WER by learning a deep LSTM from multi-scale inputs. We explore training the LSTM model directly on sequences of higher-level representations while preserving the sequence order of these representations. In this paper, we introduce a new architecture, called C-LSTM, which combines CNN and LSTM to model sentences. To benefit from the advantages of both CNN and RNN, we design a simple, end-to-end, unified architecture by feeding the output of a one-layer CNN into LSTM. The CNN is constructed on top of pre-trained word vectors from massive unlabeled text data to learn higher-level representations of n-grams. Then, to learn sequential correlations from these higher-level sequence representations, the feature maps of the CNN are organized as sequential window features to serve as the input of the LSTM. In this way, instead of constructing the LSTM directly from the input sentence, we first transform each sentence into successive window (n-gram) features to help disentangle the factors of variation within sentences. We choose sequence-based input rather than relying on syntactic parse trees before feeding the neural network, so our model does not rely on any external language knowledge or complicated pre-processing.
In our experiments, we evaluate the semantic sentence representations learned by C-LSTM on two tasks: sentiment classification and 6-way question classification. Our evaluations show that the C-LSTM model can achieve excellent results on several benchmarks as compared with a wide range of baseline models. We also show that the combination of CNN and LSTM outperforms individual multi-layer CNN models and RNN models, which indicates that LSTM can learn long-term dependencies from sequences of higher-level representations better than the other models.

# 2 Related Work

Deep neural network models have achieved great success in many NLP tasks, including distributed word representation (Mikolov et al., 2013b; Le and Mikolov, 2014), parsing (Socher et al., 2013a), statistical machine translation (Devlin et al., 2014), sentiment classification (Kim, 2014), etc. Learning distributed sentence representations through neural network models requires little external domain knowledge and can reach satisfactory results in related tasks such as sentiment classification and text categorization.

In many recent sentence representation learning works, neural network models are constructed upon either the input word sequences or the transformed syntactic parse tree. Among them, convolutional neural network (CNN) and recurrent neural network (RNN) are two popular ones.
The capability of capturing local correlations, along with extracting higher-level correlations through pooling, empowers CNN to model sentences naturally from consecutive context windows. In (Collobert et al., 2011), Collobert et al. applied convolutional filters to successive windows of a given sequence to extract global features by max-pooling. As a slight variant, Kim (2014) proposed a CNN architecture with multiple filters (with varying window sizes) and two "channels" of word vectors. To capture word relations of varying sizes, Kalchbrenner et al. (2014) proposed a dynamic k-max pooling mechanism. In a more recent work (Lei et al., 2015), Tao et al. apply tensor-based operations between words to replace the linear operations on concatenated word vectors in the standard convolutional layer, and explore the non-linear interactions between non-consecutive n-grams. Mou et al. (2015) also explore convolutional models on tree-structured sentences.
As a sequence model, RNN is able to deal with variable-length input sequences and discover long-term dependencies. Various variants of RNN have been proposed to better store and access memories (Hochreiter and Schmidhuber, 1997; Cho et al., 2014). With the ability to explicitly model time-series data, RNNs are being increasingly applied to sentence modeling. For example, Tai et al. (2015) adjusted the standard LSTM to tree-structured topologies and obtained superior results over a sequential LSTM on related tasks.
In this paper, we stack CNN and LSTM in a unified architecture for semantic sentence modeling. The combination of CNN and LSTM can be seen in some computer vision tasks such as image captioning (Xu et al., 2015) and speech recognition (Sainath et al., 2015). Most of these models use multi-layer CNNs, train the CNNs and RNNs separately, or throw the output of a fully connected layer of the CNN into the RNN as input. Our approach is different: we apply CNN to text data and feed consecutive window features directly to LSTM, which enables LSTM to learn long-range dependencies from higher-order sequential features. In (Li et al., 2015), the authors suggest that sequence-based models are sufficient to capture the compositional semantics for many NLP tasks; thus, in this work the CNN is built directly upon word sequences rather than the syntactic parse tree. Our experiments on sentiment classification and 6-way question classification tasks clearly demonstrate the superiority of our model over single CNN or LSTM models and other related sequence-based models.

# 3 C-LSTM Model
The architecture of the C-LSTM model is shown in Figure 1, which consists of two main components: convolutional neural network (CNN) and long short-term memory network (LSTM). The following two subsections describe how we apply CNN to extract higher-level sequences of word features and LSTM to capture long-term dependencies over window feature sequences, respectively.
Figure 1: The architecture of C-LSTM for sentence modeling. Blocks of the same color in the feature map layer and the window feature sequence layer correspond to features for the same window. The dashed lines connect the feature of a window with the source feature map. The final output of the entire model is the last hidden unit of LSTM.
# 3.1 N-gram Feature Extraction through Convolution
The one-dimensional convolution involves a filter vector sliding over a sequence and detecting features at different positions. Let x_i ∈ R^d be the d-dimensional word vector for the i-th word in a sentence, and let x ∈ R^{L×d} denote the input sentence, where L is the length of the sentence. Let k be the length of the filter, and let the vector m ∈ R^{k×d} be a filter for the convolution operation. For each position j in the sentence, we have a window vector w_j with k consecutive word vectors, denoted as:
w_j = [x_j, x_{j+1}, ..., x_{j+k-1}]   (1)
Here, the commas represent row vector concatenation. A filter m convolves with the window vectors (k-grams) at each position in a valid way to generate a feature map c ∈ R^{L-k+1}; each element c_j of the feature map for window vector w_j is produced as follows:
c_j = f(w_j ∘ m + b),   (2)
where ∘ denotes element-wise multiplication, b ∈ R is a bias term, and f is a nonlinear transformation function that can be sigmoid, hyperbolic tangent, etc. In our case, we choose ReLU (Nair and Hinton, 2010) as the nonlinear function. The C-LSTM model uses multiple filters to generate multiple feature maps. For n filters of the same length, the n generated feature maps can be rearranged as feature representations for each window w_j:

W = [c_1; c_2; ...; c_n]   (3)
Here, semicolons represent column vector concatenation and c_i is the feature map generated with the i-th filter. Each row W_j of W ∈ R^{(L-k+1)×n} is the new feature representation generated from the n filters for the window vector at position j. These successive higher-order window representations are then fed into the LSTM described below.
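To make this concrete, the following is a minimal NumPy sketch of the convolution layer of Eqs. (1)-(3); all names (conv_window_features, etc.) are illustrative assumptions rather than the authors' implementation, and the element-wise product w_j ∘ m is summed to a scalar before adding the bias and applying the nonlinearity, as in a standard convolution.

```python
import numpy as np

def relu(z):
    # f in Eq. (2): ReLU nonlinearity
    return np.maximum(0.0, z)

def conv_window_features(x, filters, biases):
    """x: (L, d) sentence matrix; filters: (n, k, d); biases: (n,).
    Returns W: (L - k + 1, n), one row per window, as in Eq. (3)."""
    L, d = x.shape
    n, k, _ = filters.shape
    W = np.empty((L - k + 1, n))
    for j in range(L - k + 1):
        window = x[j:j + k]                # w_j = [x_j, ..., x_{j+k-1}], Eq. (1)
        for i in range(n):
            # c_j = f(w_j ∘ m + b): element-wise product, summed, plus bias
            W[j, i] = relu(np.sum(window * filters[i]) + biases[i])
    return W

# Toy usage: a 7-word sentence with 300-d embeddings and 100 filters of length 3.
x = np.random.randn(7, 300)
filters = 0.01 * np.random.randn(100, 3, 300)
W = conv_window_features(x, filters, np.zeros(100))
print(W.shape)  # (5, 100): five window feature vectors, kept in sentence order
```

Note that the rows of W are kept in their original order, since they serve as the time steps of the LSTM.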
A max-over-time pooling or dynamic k-max pooling operation is often applied to the feature maps after the convolution to select the most important or the k most important features. However, LSTM is specified for sequence input, and pooling would break such a sequence organization because the selected features are discontinuous. Since we stack an LSTM neural network on top of the CNN, we do not apply pooling after the convolution operation.
# 3.2 Long Short-Term Memory Networks
Recurrent neural networks (RNNs) are able to propagate historical information via a chain-like neural network architecture. While processing sequential data, an RNN looks at the current input x_t as well as the previous output of the hidden state h_{t-1} at each time step. However, standard RNNs become unable to learn long-term dependencies as the gap between two time steps becomes large. To address this issue, LSTM was first introduced in (Hochreiter and Schmidhuber, 1997) and re-emerged as a successful architecture after Sutskever et al. (2014) obtained remarkable performance in statistical machine translation. Although many variants of LSTM have been proposed, we adopt the standard architecture (Hochreiter and Schmidhuber, 1997) in this work.
The LSTM architecture has a range of repeated modules for each time step, as in a standard RNN. At each time step, the output of the module is controlled by a set of gates in R^d as a function of the old hidden state h_{t-1} and the input at the current time step x_t: the forget gate f_t, the input gate i_t, and the output gate o_t. These gates collectively decide how to update the current memory cell c_t and the current hidden state h_t. We use d to denote the memory dimension in the LSTM, and all vectors in this architecture share the same dimension. The LSTM transition functions are defined as follows:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
q_t = tanh(W_q · [h_{t-1}, x_t] + b_q)
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ q_t
h_t = o_t ⊙ tanh(c_t)   (4)
Here, σ is the logistic sigmoid function with output in [0, 1], tanh denotes the hyperbolic tangent function with output in [-1, 1], and ⊙ denotes element-wise multiplication. To understand the mechanism behind the architecture, we can view f_t as the function controlling to what extent the information from the old memory cell is thrown away, i_t as controlling how much new information is stored in the current memory cell, and o_t as controlling what to output based on the memory cell c_t. LSTM is explicitly designed for time-series data and for learning long-term dependencies, and therefore we place LSTM on top of the convolution layer to learn such dependencies in the sequence of higher-level features.
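As a minimal sketch, the transition functions of Eq. (4) can be written in NumPy as below; the parameter names and shapes are illustrative assumptions. Iterating over the rows of the window feature sequence W and keeping the last hidden state yields the sentence representation used by C-LSTM.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of Eq. (4). x_t: (n,) window feature; h_prev, c_prev: (d,).
    p: dict of weights W_* with shape (d, d + n) acting on [h_{t-1}, x_t],
    and biases b_* with shape (d,)."""
    z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    i_t = sigmoid(p["W_i"] @ z + p["b_i"])     # input gate
    f_t = sigmoid(p["W_f"] @ z + p["b_f"])     # forget gate
    q_t = np.tanh(p["W_q"] @ z + p["b_q"])     # candidate cell input
    o_t = sigmoid(p["W_o"] @ z + p["b_o"])     # output gate
    c_t = f_t * c_prev + i_t * q_t             # ⊙ is element-wise product
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

def sentence_representation(W, p, d):
    """Run the LSTM over the window feature sequence W (one row per time step)
    and return the last hidden state as the sentence representation."""
    h = np.zeros(d)
    c = np.zeros(d)
    for w_j in W:
        h, c = lstm_step(w_j, h, c, p)
    return h
```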
# 4 Learning C-LSTM for Text Classification
For text classification, we regard the output of the hidden state at the last time step of LSTM as the document representation, and we add a softmax layer on top. We train the entire model by minimizing the cross-entropy error. Given a training sample x^(i) with true label y^(i) ∈ {1, 2, ..., k}, where k is the number of possible labels, and estimated probabilities ỹ_j^(i) ∈ [0, 1] for each label j ∈ {1, 2, ..., k}, the error is defined as:
L(x^(i), y^(i)) = − Σ_{j=1}^{k} 1{y^(i) = j} log(ỹ_j^(i))    (5)
where 1{·} is the indicator function, such that 1{condition is true} = 1 and 1{condition is false} = 0 otherwise. We employ stochastic gradient descent (SGD) to learn the model parameters, with the RMSprop optimizer (Tieleman and Hinton, 2012).
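A minimal NumPy sketch of this objective for a single sample is given below. The helper names are ours, and we use the standard negative-log-likelihood form of the cross-entropy; the logits are assumed to come from the softmax layer on top of the last LSTM hidden state.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, label):
    """Eq. (5) for one sample: the indicator 1{y = j} selects the
    log-probability of the true class, which we negate to minimize."""
    probs = softmax(logits)
    return -np.log(probs[label])
```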
# 4.1 Padding and Word Vector Initialization
First, we use maxlen to denote the maximum sentence length in the training set. As the convolution layer in our model requires fixed-length input, we pad each sentence shorter than maxlen with special symbols at the end that indicate unknown words. For a sentence in the test dataset, we pad sentences shorter than maxlen in the same way, but for sentences longer than maxlen we simply cut extra words at the end to reach maxlen.
We initialize word vectors with the publicly available word2vec vectors (http://code.google.com/p/word2vec/) that are pre-trained on about 100B words from the Google News dataset. The dimensionality of the word vectors is 300. We initialize the word vectors for unknown words from the uniform distribution [-0.25, 0.25]. We then fine-tune the word vectors along with the other model parameters during training.
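A hedged sketch of both steps follows; the padding symbol, helper names, and the assumption that `word2vec` is an in-memory word-to-vector mapping are ours.

```python
import numpy as np

PAD = "<unk>"  # special symbol standing in for unknown/padding words

def pad_or_truncate(tokens, maxlen):
    """Pad short sentences at the end; cut longer ones back to maxlen."""
    if len(tokens) < maxlen:
        return tokens + [PAD] * (maxlen - len(tokens))
    return tokens[:maxlen]

def init_embeddings(vocab, word2vec, dim=300, seed=0):
    """Pre-trained vectors where available, U(-0.25, 0.25) otherwise."""
    rng = np.random.default_rng(seed)
    E = np.empty((len(vocab), dim))
    for idx, word in enumerate(vocab):
        if word in word2vec:
            E[idx] = word2vec[word]
        else:
            E[idx] = rng.uniform(-0.25, 0.25, dim)
    return E
```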
# 4.2 Regularization
For regularization, we employ two commonly used techniques: dropout (Hinton et al., 2012) and L2 weight regularization. We apply dropout to prevent co-adaptation. In our model, we apply dropout either to the word vectors before feeding the sequence of words into the convolutional layer, or to the output of LSTM before the softmax layer. The L2 regularization is applied to the weights of the softmax layer.
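A minimal sketch of the two techniques is shown below. The inverted-dropout rescaling is our assumption (the text does not specify how activations are scaled), and the L2 term is simply added to the training loss.

```python
import numpy as np

def dropout(x, rate, rng, train=True):
    """Zero each unit with probability `rate`; rescale survivors so the
    expected activation is unchanged (inverted dropout)."""
    if not train or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def l2_penalty(W_softmax, factor=0.001):
    """L2 regularization on the softmax weights, added to the loss."""
    return factor * np.sum(W_softmax ** 2)
```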
# 5 Experiments
We evaluate the C-LSTM model on two tasks: (1) sentiment classification, and (2) question type classification. In this section, we introduce the datasets and the experimental settings.
# 5.1 Datasets
Sentiment Classification: Our task here is to predict the sentiment polarity of movie reviews. We use the Stanford Sentiment Treebank (SST) benchmark (Socher et al., 2013b). This dataset consists of 11855 movie reviews and is split into train (8544), dev (1101), and test (2210). Sentences in this corpus are parsed, and all phrases along with the sentences are fully annotated with
5 labels: very positive, positive, neutral, negative, very negative. We consider two classification tasks on this dataset: fine-grained classification with 5 labels, and binary classification obtained by removing the neutral labels. The binary dataset has a split of train (6920) / dev (872) / test (1821). Since the data is provided at the level of sub-sentences, we train the model on both phrases and sentences but only test on the sentences, as in several previous works (Socher et al., 2013b; Kalchbrenner et al., 2014).

Question type classification: Question classification is an important step in a question answering system that classifies a question into a specific type; e.g., "what is the highest waterfall in the United States?" is a question that belongs to "location". For this task, we use the TREC benchmark (Li and Roth, 2002). TREC divides all questions into 6 categories: location, human, entity, abbreviation, description and numeric. The training dataset contains 5452 labelled questions, while the testing dataset contains 500 questions.
# 5.2 Experimental Settings
We implement our model based on Theano (Bastien et al., 2012), a Python library that supports efficient symbolic differentiation and transparent use of a GPU. To benefit from the efficiency of parallel tensor computation, we train the model on a GPU. For text preprocessing, we only convert all characters in the dataset to lower case.
For SST, we conduct hyperparameter tuning (number of filters and filter length in CNN; memory dimension in LSTM; dropout rate and which layer to apply it to, etc.) on the validation data in the standard split. For TREC, we hold out 1000 samples from the training dataset for hyperparameter search and train the model using the remaining data.
In our final settings, we only use one convolutional layer and one LSTM layer for both tasks. For the filter size, we investigated filter lengths of 2, 3 and 4 in two cases: a) a single convolutional layer with one filter length, and b) multiple convolutional layers with different filter lengths in parallel. Here we denote the number of filters of length i by n_i for ease of clarification. For the first case, each n-gram window is transformed into n_i convoluted
Model                  | Fine-grained (%) | Binary (%) | Reported in
SVM                    | 40.7 | 79.4 | (Socher et al., 2013b)
NBoW                   | 42.4 | 80.5 | (Kalchbrenner et al., 2014)
Paragraph Vector       | 48.7 | 87.8 | (Le and Mikolov, 2014)

RAE                    | 43.2 | 82.4 | (Socher, Pennington, et al., 2011)
MV-RNN                 | 44.4 | 82.9 | (Socher et al., 2012)
RNTN                   | 45.7 | 85.4 | (Socher et al., 2013b)
DRNN                   | 49.8 | 86.6 | (Irsoy and Cardie, 2014)

CNN-non-static         | 48.0 | 87.2 | (Kim, 2014)
CNN-multichannel       | 47.4 | 88.1 | (Kim, 2014)
DCNN                   | 48.5 | 86.8 | (Kalchbrenner et al., 2014)
Molding-CNN            | 51.2 | 88.6 | (Lei et al., 2015)

Dependency Tree-LSTM   | 48.4 | 85.7 | (Tai et al., 2015)
Constituency Tree-LSTM | 51.0 | 88.0 | (Tai et al., 2015)
LSTM                   | 46.6 | 86.6 | our implementation
Bi-LSTM                | 47.8 | 87.9 | our implementation

C-LSTM                 | 49.2 | 87.8 | our implementation
Table 1: Comparisons with baseline models on the Stanford Sentiment Treebank. Fine-grained is a 5-class classification task; Binary is a 2-class classification task. The second block contains the recursive models. The third block contains methods related to convolutional neural networks. The fourth block contains methods using LSTM (the first two methods in this block also use syntactic parse trees). The first block contains other baseline methods. The last block is our model.
features after convolution, and the sequence of window representations is fed into LSTM. For the latter case, since the number of windows generated from each convolutional layer varies with the filter length (see L − k + 1 below Equation (3)), we cut the window sequence at the end based on the maximum filter length, which gives the shortest number of windows. Each window is then represented as the concatenation of outputs from the different convolutional layers. We also exploit different combinations of filter lengths. We present an experimental analysis of this exploration of filter size later. According to the experiments, we choose a single convolutional layer with filter length 3.
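The window counting and alignment just described can be sketched as follows; the function names and the list-of-lists feature layout are illustrative assumptions, not the authors' code.

```python
def n_windows(L, k):
    # A filter of length k sliding over a length-L sentence yields
    # L - k + 1 windows, as below Equation (3).
    return L - k + 1

def align_parallel_features(feature_maps, filter_lengths, L):
    """Cut each layer's window sequence to the shortest one (set by the
    largest filter length), then concatenate features per window."""
    n = min(n_windows(L, k) for k in filter_lengths)
    aligned = []
    for t in range(n):
        concat = []
        for fm in feature_maps:  # fm[t]: feature vector of window t
            concat.extend(fm[t])
        aligned.append(concat)   # one concatenated vector per window
    return aligned
```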
For SST, the number of filters of length 3 is set to 150, and the memory dimension of LSTM is set to 150 as well. The word vector layer and the LSTM layer are dropped out with a probability of 0.5. For TREC, the number of filters is set to 300 and the memory dimension is set to 300; the word vector layer and the LSTM layer are again dropped out with a probability of 0.5. We also add L2 regularization with a factor of 0.001 to the weights in the softmax layer for both tasks.
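Collected in one place, the final settings stated above look as follows (the dictionary layout is ours; the values are the ones reported in the text):

```python
# Final hyperparameters per task, as reported in the text.
CONFIG = {
    "SST":  {"filter_length": 3, "n_filters": 150, "lstm_dim": 150,
             "dropout": 0.5, "l2_softmax": 0.001},
    "TREC": {"filter_length": 3, "n_filters": 300, "lstm_dim": 300,
             "dropout": 0.5, "l2_softmax": 0.001},
}
```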
# 6 Results and Model Analysis
In this section, we show our evaluation results on the sentiment classification and question type classification tasks. Moreover, we provide some analysis of the filter size configuration.
# 6.1 Sentiment Classification
The results are shown in Table 1. We compare our model with a large set of well-performing models on the Stanford Sentiment Treebank.
Generally, the baseline models consist of recursive models, convolutional neural network models, LSTM-related models, and others. The recursive models employ a syntactic parse tree as the sentence structure, and the sentence representation is computed recursively in a bottom-up manner along the parse tree. Under this category, we choose the recursive autoencoder (RAE), matrix-vector (MV-RNN), tensor-based composition (RNTN) and multi-layer stacked (DRNN) recursive neural networks as baselines. Among CNNs, we compare with Kim's (2014) CNN model with fine-tuned word vectors (CNN-non-static) and multi-channels (CNN-multichannel), DCNN with dynamic k-max pooling, and Tao's CNN (Molding-CNN) with low-rank tensor-based non-linear and non-consecutive convolutions.

Model            | Acc  | Reported in
SVM              | 95.0 | Silva et al. (2011)
Paragraph Vector | 91.8 | Zhao et al. (2015)
Ada-CNN          | 92.4 | Zhao et al. (2015)
CNN-non-static   | 93.6 | Kim (2014)
CNN-multichannel | 92.2 | Kim (2014)
DCNN             | 93.0 | Kalchbrenner et al. (2014)
LSTM             | 93.2 | our implementation
Bi-LSTM          | 93.0 | our implementation
C-LSTM           | 94.6 | our implementation
Table 2: The 6-way question type classification accuracy on TREC.
Among LSTM-related models, we first compare with two tree-structured LSTM models (Dependency Tree-LSTM and Constituency Tree-LSTM) that adapt LSTM to tree-structured network topologies. We then implement a one-layer LSTM and a Bi-LSTM ourselves. Since we could not tune the Bi-LSTM result to be as good as that reported in (Tai et al., 2015), even when following their untied weight configuration, we report our own results. For the remaining baseline methods, we compare against an SVM with unigram and bigram features, NBoW with averaged word vector features, and paragraph vector, which infers a paragraph vector for unseen documents.
To the best of our knowledge, we achieve the fourth best published result for the 5-class classification task on this dataset. For the binary classification task, we achieve results comparable to the state of the art. From Table 1, we have the following observations: (1) Although we did not beat the state-of-the-art models, as an end-to-end model our result is still promising and comparable with those models that rely heavily on linguistic annotations and knowledge, especially syntactic parse trees. This indicates that C-LSTM is feasible for a wider range of scenarios. (2) Comparing our results against the single CNN and LSTM models shows that LSTM does learn long-term dependencies across sequences of higher-level representations better. In the future we could explore how to learn more compact higher-level representations by replacing standard convolution with other nonlinear feature mapping functions, or by appealing to tree-structured topologies before the convolutional layer.
# 6.2 Question Type Classification
The prediction accuracy on TREC question classification is reported in Table 2. We compare our model with a variety of models. The SVM classifier uses unigrams, bigrams, wh-word, head word, POS tags, parser output, hypernyms, and WordNet synsets as engineered features, plus 60 hand-coded rules. Ada-CNN is a self-adaptive hierarchical sentence model with gating networks. The other baseline models were introduced in the previous task. From Table 2, we have the following observations: (1) Our result consistently outperforms all published neural baseline models, which indicates that C-LSTM captures the intentions of TREC questions well. (2) Our result is close to that of the state-of-the-art SVM, which depends on highly engineered features. Such engineered features not only demand human labor but also propagate errors from existing NLP tools, and thus do not generalize well to other datasets and tasks. With the ability to automatically learn semantic sentence representations, C-LSTM does not require any human-designed features and has better scalability.
# 6.3 Model Analysis
Here we investigate the impact of different filter configurations in the convolutional layer on the model performance.
In the convolutional layer of our model, filters are used to capture local n-gram features. Intuitively, multiple convolutional layers in parallel with
Figure 2: Prediction accuracies on TREC questions with different filter size strategies. For the horizontal axis, S means a single convolutional layer with one filter length, and M means multiple convolutional layers in parallel with different filter lengths.
different filter sizes should perform better than a single convolutional layer with one filter length, in that different filter sizes can exploit features of different n-grams. However, we found in our experiments that a single convolutional layer with filter length 3 always outperforms the other cases.
We show in Figure 2 the prediction accuracies on the 6-way question classification task using different filter configurations. Note that we observe a similar phenomenon in the sentiment classification task. For each filter configuration, we report in Figure 2 the best result under an extensive grid search over hyperparameters. It is shown that a single convolutional layer with filter length 3 performs best among all filter configurations. For the case of multiple convolutional layers in parallel, filter configurations that include filters of length 3 perform better than those without tri-gram filters, which further confirms that tri-gram features play a significant role in capturing local features in our tasks. We conjecture that LSTM learns better semantic sentence representations from sequences of tri-gram features.
# 7 Conclusion and Future Work
We have described a novel, unified model called C-LSTM that combines a convolutional neural network with a long short-term memory network (LSTM). C-LSTM is able to learn phrase-level features through
a convolutional layer; sequences of such higher-level representations are then fed into the LSTM to learn long-term dependencies. We evaluated the learned semantic sentence representations on sentiment classification and question type classification tasks with very satisfactory results.
In the future we could explore ways to replace the standard convolution with tensor-based operations or tree-structured convolutions. We believe LSTM will benefit from more structured higher-level representations.
# References
[Bastien et al.2012] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. 2012. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop.
[Cho et al.2014] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
[Collobert et al.2011] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537.
[Denil et al.2014] Misha Denil, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, and Nando de Freitas. 2014. Modelling, visualising and summarising documents with a single convolutional neural network. arXiv preprint arXiv:1406.3830.
[Devlin et al.2014] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1370–1380.
[Hinton et al.2012] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. The Computing Research Repository (CoRR).
[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
[Irsoy and Cardie2014] Ozan Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Advances in Neural Information Processing Systems, pages 2096–2104.
[Johnson and Zhang2015] Rie Johnson and Tong Zhang. 2015. Effective use of word order for text categorization with convolutional neural networks. Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 103–112.
[Kalchbrenner et al.2014] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. Association for Computational Linguistics (ACL).
[Kim2014] Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of Empirical Methods on Natural Language Processing.
[Le and Mikolov2014] Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188–1196.
[Lei et al.2015] Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding CNNs for text: non-linear, non-consecutive convolutions. In Proceedings of Empirical Methods on Natural Language Processing.
[Li and Roth2002] Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, pages 1–7. Association for Computational Linguistics.
[Li et al.2015] Jiwei Li, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of Empirical Methods on Natural Language Processing.
[Mikolov et al.2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
[Mou et al.2015] Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Discriminative neural sentence modeling by tree-based convolution. Unpublished manuscript: http://arxiv.org/abs/1504.01106v5.
[Nair and Hinton2010] Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814.
[Pascanu et al.2014] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to
construct deep recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR).
[Sainath et al.2015] Tara N Sainath, Oriol Vinyals, Andrew Senior, and Hasim Sak. 2015. Convolutional, long short-term memory, fully connected deep neural networks. IEEE International Conference on Acoustics, Speech and Signal Processing.
[Silva et al.2011] Joao Silva, Luísa Coheur, Ana Cristina Mendes, and Andreas Wichert. 2011. From symbolic to sub-symbolic information in question classification. Artificial Intelligence Review, 35(2):137–154.
[Socher et al.2012] Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of Empirical Methods on Natural Language Processing, pages 1201–1211.
[Socher et al.2013a] Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with compositional vector grammars. In Proceedings of the ACL conference.
[Socher et al.2013b] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of Empirical Methods on Natural Language Processing, volume 1631, page 1642.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
[Tai et al.2015] Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. Association for Computational Linguistics (ACL).
[Tang et al.2015] Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of Empirical Methods on Natural Language Processing.
[Tieleman and Hinton2012] T. Tieleman and G. Hinton. 2012. Lecture 6.5 - RMSprop. Coursera: Neural Networks for Machine Learning.
[Xu et al.2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 2015 International Conference on Machine Learning.
[Zhao et al.2015] Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-adaptive hierarchical sentence
model. In Proceedings of International Joint Conferences on Artificial Intelligence.
# Strategic Dialogue Management via Deep Reinforcement Learning
Heriberto Cuayáhuitl Interaction Lab Department of Computer Science Heriot-Watt University Edinburgh [email protected]
Simon Keizer Interaction Lab Department of Computer Science Heriot-Watt University Edinburgh [email protected]
Oliver Lemon Interaction Lab Department of Computer Science Heriot-Watt University Edinburgh [email protected]
# Abstract
Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan, where players can offer resources in exchange for others and can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players ('bots'), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27% versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.
# Neural GPUs Learn Algorithms

Learning an algorithm from examples is a fundamental problem that has been widely studied. It has been addressed using neural networks too, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal, NTMs have a weakness that is caused by their sequential nature: they are not parallel and are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel, which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with up to 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.
| 1511.08228#1 | Neural GPUs Learn Algorithms | Learning an algorithm from examples is a fundamental problem that has been
widely studied. Recently it has been addressed using neural networks, in
particular by Neural Turing Machines (NTMs). These are fully differentiable
computers that use backpropagation to learn their own programming. Despite
their appeal NTMs have a weakness that is caused by their sequential nature:
they are not parallel and are hard to train due to their large depth when
unfolded.
We present a neural network architecture to address this problem: the Neural
GPU. It is based on a type of convolutional gated recurrent unit and, like the
NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly
parallel which makes it easier to train and efficient to run.
An essential property of algorithms is their ability to handle inputs of
arbitrary size. We show that the Neural GPU can be trained on short instances
of an algorithmic task and successfully generalize to long instances. We
verified it on a number of tasks including long addition and long
multiplication of numbers represented in binary. We train the Neural GPU on
numbers with up to 20 bits and observe no errors whatsoever while testing it,
even on much longer numbers.
To achieve these results we introduce a technique for training deep recurrent
networks: parameter sharing relaxation. We also found a small amount of dropout
and gradient noise to have a large positive effect on learning and
generalization. | http://arxiv.org/pdf/1511.08228 | Łukasz Kaiser, Ilya Sutskever | cs.LG, cs.NE | null | null | cs.LG | 20151125 | 20160315 | [] |
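
The row above describes training on binary numbers of up to 20 bits and testing on far longer ones. The paper gives no code for this setup, so the following Python sketch is only a plausible illustration of how such train-short, test-long example pairs could be generated; every name in it is our own.

```python
import random

def sample_pair(max_bits):
    """Sample two non-negative integers of at most max_bits bits each."""
    bits = random.randint(1, max_bits)
    return random.getrandbits(bits), random.getrandbits(bits)

def to_binary(n, width):
    """Little-endian binary digits of n, zero-padded to a fixed width."""
    return [(n >> i) & 1 for i in range(width)]

def make_example(op, max_bits):
    """Build one (input digits, target digits) pair for '+' or '*'."""
    a, b = sample_pair(max_bits)
    result = a + b if op == "+" else a * b
    width = 2 * max_bits  # wide enough to hold a product of two max_bits numbers
    return to_binary(a, width) + to_binary(b, width), to_binary(result, width)

# Train on short instances, evaluate on much longer ones.
train_set = [make_example("*", 20) for _ in range(10000)]   # up to 20-bit factors
eval_set  = [make_example("*", 2000) for _ in range(100)]   # up to 2000-bit factors
```

The point of such a split is that the evaluation lengths are never seen during training: to succeed, the model must have learned the algorithm itself rather than a pattern tied to the training lengths.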
1511.08099 | 2 | # Introduction
Artificially intelligent agents can require strategic conversational skills to negotiate during their interactions with other natural or artificial agents, e.g. `A: I will give/tell you X if you give/tell me Y?, B: Okay'. While typical conversations of artificial agents assume cooperative behaviour from partner conversants, strategic conversation does not assume full cooperation during the interaction between agents [2]. Throughout this paper, we will use a strategic card-trading board game to illustrate our approach. Board games with trading aspects aim not only at entertaining people, but also at training them with trading skills. Popular board games of this kind include Last Will, Settlers of Catan, and Power Grid, among others [20]. While these games can be played between humans, they can also be played between computers and humans. The trading behaviours of AI agents in computer games are usually based on carefully tuned rules [33], search algorithms such
| 1511.08099#2 | Strategic Dialogue Management via Deep Reinforcement Learning | Artificially intelligent agents equipped with strategic skills that can
negotiate during their interactions with other natural or artificial agents are
still underdeveloped. This paper describes a successful application of Deep
Reinforcement Learning (DRL) for training intelligent agents with strategic
conversational skills, in a situated dialogue setting. Previous studies have
modelled the behaviour of strategic agents using supervised learning and
traditional reinforcement learning techniques, the latter using tabular
representations or learning with linear function approximation. In this study,
we apply DRL with a high-dimensional state space to the strategic board game of
Settlers of Catan---where players can offer resources in exchange for others
and they can also reply to offers made by other players. Our experimental
results report that the DRL-based learnt policies significantly outperformed
several baselines including random, rule-based, and supervised-based
behaviours. The DRL-based policy has a 53% win rate versus 3 automated players
(`bots'), whereas a supervised player trained on a dialogue corpus in this
setting achieved only 27%, versus the same 3 bots. This result supports the
claim that DRL is a promising framework for training dialogue systems, and
strategic agents with negotiation abilities. | http://arxiv.org/pdf/1511.08099 | Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon | cs.AI, cs.LG | NIPS'15 Workshop on Deep Reinforcement Learning | null | cs.AI | 20151125 | 20151125 | [] |
1511.08228 | 2 | # 1 INTRODUCTION
Deep neural networks have recently proven successful at various tasks, such as computer vision (Krizhevsky et al., 2012), speech recognition (Dahl et al., 2012), and in other domains. Recurrent neural networks based on long short-term memory (LSTM) cells (Hochreiter & Schmidhuber, 1997) have been successfully applied to a number of natural language processing tasks. Sequence-to-sequence recurrent neural networks with such cells can learn very complex tasks in an end-to-end manner, such as translation (Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014), parsing (Vinyals & Kaiser et al., 2015), speech recognition (Chan et al., 2016) or image caption generation (Vinyals et al., 2014). Since so many tasks can be solved with essentially one model, a natural question arises: is this model the best we can hope for in supervised learning? | 1511.08228#2 | Neural GPUs Learn Algorithms | Learning an algorithm from examples is a fundamental problem that has been
widely studied. Recently it has been addressed using neural networks, in
particular by Neural Turing Machines (NTMs). These are fully differentiable
computers that use backpropagation to learn their own programming. Despite
their appeal NTMs have a weakness that is caused by their sequential nature:
they are not parallel and are hard to train due to their large depth when
unfolded.
We present a neural network architecture to address this problem: the Neural
GPU. It is based on a type of convolutional gated recurrent unit and, like the
NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly
parallel which makes it easier to train and efficient to run.
An essential property of algorithms is their ability to handle inputs of
arbitrary size. We show that the Neural GPU can be trained on short instances
of an algorithmic task and successfully generalize to long instances. We
verified it on a number of tasks including long addition and long
multiplication of numbers represented in binary. We train the Neural GPU on
numbers with up to 20 bits and observe no errors whatsoever while testing it,
even on much longer numbers.
To achieve these results we introduce a technique for training deep recurrent
networks: parameter sharing relaxation. We also found a small amount of dropout
and gradient noise to have a large positive effect on learning and
generalization. | http://arxiv.org/pdf/1511.08228 | Łukasz Kaiser, Ilya Sutskever | cs.LG, cs.NE | null | null | cs.LG | 20151125 | 20160315 | [] |
1511.08099 | 3 | as Monte-Carlo tree search [31, 9], and reinforcement learning with tabular representations [12, 11] or linear function approximation [26, 25]. However, the application of reinforcement learning is not trivial due to the complexity of the problem, e.g. the large state-action spaces exhibited in strategic conversations. On the one hand, unique situations in the interaction can be described by a large number of variables (e.g. game board and resources available), so that enumerating them would result in very large state spaces. On the other hand, the action space can also be large due to the wide range of unique negotiations (e.g. givable and receivable resources). While one can aim to optimise the interaction via compression of the search space, it is usually not clear what features to incorporate in the state representation. This is a strong motivation for applying deep reinforcement learning to dialogue management, as first proposed by (anon citation), so that the agent can simultaneously learn its feature representation and policy. In this paper, we present an application of deep reinforcement learning to learning trading dialogue for the game of Settlers of Catan. (A minimal Q-network sketch in this spirit follows this row.) | 1511.08099#3 | Strategic Dialogue Management via Deep Reinforcement Learning | Artificially intelligent agents equipped with strategic skills that can
negotiate during their interactions with other natural or artificial agents are
still underdeveloped. This paper describes a successful application of Deep
Reinforcement Learning (DRL) for training intelligent agents with strategic
conversational skills, in a situated dialogue setting. Previous studies have
modelled the behaviour of strategic agents using supervised learning and
traditional reinforcement learning techniques, the latter using tabular
representations or learning with linear function approximation. In this study,
we apply DRL with a high-dimensional state space to the strategic board game of
Settlers of Catan---where players can offer resources in exchange for others
and they can also reply to offers made by other players. Our experimental
results report that the DRL-based learnt policies significantly outperformed
several baselines including random, rule-based, and supervised-based
behaviours. The DRL-based policy has a 53% win rate versus 3 automated players
(`bots'), whereas a supervised player trained on a dialogue corpus in this
setting achieved only 27%, versus the same 3 bots. This result supports the
claim that DRL is a promising framework for training dialogue systems, and
strategic agents with negotiation abilities. | http://arxiv.org/pdf/1511.08099 | Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon | cs.AI, cs.LG | NIPS'15 Workshop on Deep Reinforcement Learning | null | cs.AI | 20151125 | 20151125 | [] |
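
The chunk above motivates DRL as a way to learn a feature representation and a trading policy jointly instead of enumerating states by hand. The sketch below is a minimal Q-network in that spirit; the feature size, action inventory, layer sizes, and all names are illustrative assumptions, not the paper's actual architecture (a real agent would also need experience replay and a training loop).

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 160   # assumed size of the game-state feature vector (board, resources, ...)
N_ACTIONS = 70     # assumed number of trading dialogue actions (offers, accept, reject, ...)
HIDDEN = 64

# Two-layer Q-network: raw state features in, one Q-value per dialogue action out.
W1 = rng.normal(0, 0.1, (N_FEATURES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Map a raw state-feature vector to Q-values; the hidden layer plays the
    role of the learned feature representation."""
    h = np.tanh(state @ W1 + b1)
    return h @ W2 + b2

def select_action(state, epsilon=0.1):
    """Epsilon-greedy selection over trading dialogue actions."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

state = rng.random(N_FEATURES)  # stand-in for an encoded game/dialogue state
print(select_action(state))
```

Because the hidden layer is learned from data, no hand-crafted compression of the state space is required, which is exactly the motivation the chunk gives for moving from tabular or linear methods to DRL.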
1511.08228 | 3 | Despite its recent success, the sequence-to-sequence model has limitations. In its basic form, the entire input is encoded into a single fixed-size vector, so the model cannot generalize to inputs much longer than this fixed capacity. One way to resolve this problem is by using an attention mechanism (Bahdanau et al., 2014). This allows the network to inspect arbitrary parts of the input in every decoding step, so the basic limitation is removed. But other problems remain, and Joulin & Mikolov (2015) show a number of basic algorithmic tasks on which sequence-to-sequence LSTM networks fail to generalize. They propose a stack-augmented recurrent network, and it works on some problems, but is limited in other ways.
In the best case one would desire a neural network model able to learn arbitrarily complex algorithms given enough resources. Neural Turing Machines (Graves et al., 2014) have this theoretical property. However, they are not computationally efficient because they use soft attention and because they tend to be of considerable depth. Their depth makes the training objective difficult to optimize and impossible to parallelize because they are learning a sequential program. Their use of soft attention requires accessing the entire memory in order to simulate 1 step of computation, which introduces substantial overhead (sketched in code after this row). These two factors make learning complex algorithms using Neural Turing Ma- | 1511.08228#3 | Neural GPUs Learn Algorithms | Learning an algorithm from examples is a fundamental problem that has been
widely studied. Recently it has been addressed using neural networks, in
particular by Neural Turing Machines (NTMs). These are fully differentiable
computers that use backpropagation to learn their own programming. Despite
their appeal NTMs have a weakness that is caused by their sequential nature:
they are not parallel and are hard to train due to their large depth when
unfolded.
We present a neural network architecture to address this problem: the Neural
GPU. It is based on a type of convolutional gated recurrent unit and, like the
NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly
parallel which makes it easier to train and efficient to run.
An essential property of algorithms is their ability to handle inputs of
arbitrary size. We show that the Neural GPU can be trained on short instances
of an algorithmic task and successfully generalize to long instances. We
verified it on a number of tasks including long addition and long
multiplication of numbers represented in binary. We train the Neural GPU on
numbers with up to 20 bits and observe no errors whatsoever while testing it,
even on much longer numbers.
To achieve these results we introduce a technique for training deep recurrent
networks: parameter sharing relaxation. We also found a small amount of dropout
and gradient noise to have a large positive effect on learning and
generalization. | http://arxiv.org/pdf/1511.08228 | Łukasz Kaiser, Ilya Sutskever | cs.LG, cs.NE | null | null | cs.LG | 20151125 | 20160315 | [] |
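
The chunk above notes that an NTM's soft attention must touch the entire memory to simulate a single step of computation. The sketch below shows a simplified content-based soft read (the full NTM addressing additionally uses cosine similarity, interpolation, and shift operations); the cost of one read is visibly linear in the number of memory rows N.

```python
import numpy as np

def soft_read(memory, key, sharpness=1.0):
    """Differentiable content-based read: a softmax over similarities with
    every row of memory, so the whole memory is accessed for a single step."""
    scores = sharpness * memory @ key        # one similarity per memory row: O(N * d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # attention distribution over all N rows
    return weights @ memory                  # weighted sum over the entire memory

memory = np.random.randn(128, 32)  # N=128 cells of width 32 (illustrative sizes)
key = np.random.randn(32)
r = soft_read(memory, key)         # every one of the 128 rows contributed to this read
```

This is what makes soft attention differentiable and trainable by backpropagation, but also what makes each simulated step expensive compared to a hard, single-cell memory access.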
1511.08099 | 4 | Our scenario for strategic conversation is the game of Settlers of Catan, where players take the role of settlers on the fictitious island of Catan---see Figure 1 (left). The board game consists of 19 hexes randomly connected: 3 hills, 3 mountains, 4 forests, 4 pastures, 4 fields and 1 desert. On this island, hills produce clay, mountains produce ore, pastures produce sheep, fields produce wheat, forests produce wood, and the desert produces nothing. In our setting, four players attempt to settle on the island by building settlements and cities connected by roads. To build, players need specific resource cards, for example: a road requires clay and wood; a settlement requires clay, sheep, wheat and wood; a city requires three ore cards and two wheat cards; and a development card requires ore, sheep and wheat (these costs are sketched in code after this row). Each player gets points, for example, by building a settlement (1 point) or a city (2 points), or by obtaining victory point cards (1 point each). A game consists of a sequence of turns, and each game turn starts with the roll of a die that can make the players obtain resources (depending | 1511.08099#4 | Strategic Dialogue Management via Deep Reinforcement Learning | Artificially intelligent agents equipped with strategic skills that can
negotiate during their interactions with other natural or artificial agents are
still underdeveloped. This paper describes a successful application of Deep
Reinforcement Learning (DRL) for training intelligent agents with strategic
conversational skills, in a situated dialogue setting. Previous studies have
modelled the behaviour of strategic agents using supervised learning and
traditional reinforcement learning techniques, the latter using tabular
representations or learning with linear function approximation. In this study,
we apply DRL with a high-dimensional state space to the strategic board game of
Settlers of Catan---where players can offer resources in exchange for others
and they can also reply to offers made by other players. Our experimental
results report that the DRL-based learnt policies significantly outperformed
several baselines including random, rule-based, and supervised-based
behaviours. The DRL-based policy has a 53% win rate versus 3 automated players
(`bots'), whereas a supervised player trained on a dialogue corpus in this
setting achieved only 27%, versus the same 3 bots. This result supports the
claim that DRL is a promising framework for training dialogue systems, and
strategic agents with negotiation abilities. | http://arxiv.org/pdf/1511.08099 | Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon | cs.AI, cs.LG | NIPS'15 Workshop on Deep Reinforcement Learning | null | cs.AI | 20151125 | 20151125 | [] |
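
The building costs listed in the chunk above amount to a small lookup table. As a sketch (names and structure are ours, not from the paper's game engine, and the costs follow the standard Settlers of Catan rules), one can encode them and test whether a hand of resource cards affords a given item:

```python
from collections import Counter

# Build costs as described in the chunk above (standard Settlers of Catan rules).
BUILD_COSTS = {
    "road":             Counter({"clay": 1, "wood": 1}),
    "settlement":       Counter({"clay": 1, "sheep": 1, "wheat": 1, "wood": 1}),
    "city":             Counter({"ore": 3, "wheat": 2}),
    "development_card": Counter({"ore": 1, "sheep": 1, "wheat": 1}),
}

def can_afford(hand, item):
    """True if the hand of resource cards covers the cost of the item."""
    cost = BUILD_COSTS[item]
    return all(hand.get(res, 0) >= n for res, n in cost.items())

hand = Counter({"clay": 1, "wood": 2, "sheep": 1, "wheat": 1})
print(can_afford(hand, "road"))        # True
print(can_afford(hand, "settlement"))  # True
print(can_afford(hand, "city"))        # False: needs 3 ore and 2 wheat
```

Tables like this define which trades are worth pursuing: an offer is attractive exactly when it moves the hand closer to affording a planned build.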
1511.08228 | 4 | chines difficult. These issues are not limited to Neural Turing Machines; they apply to other architectures too, such as stack-RNNs (Joulin & Mikolov, 2015) or (De)Queue-RNNs (Grefenstette et al., 2015). One can try to alleviate these problems using hard attention and reinforcement learning, but such non-differentiable models do not learn well at present (Zaremba & Sutskever, 2015b).
In this work we present a neural network model, the Neural GPU, that addresses the above issues. It is a Turing-complete model capable of learning arbitrary algorithms in principle, like a Neural Turing Machine. But, in contrast to Neural Turing Machines, it is designed to be as parallel and as shallow as possible. It is more similar to a GPU than to a Turing machine since it uses a smaller number of parallel computational steps. We show that the Neural GPU works in multiple experiments:
⢠A Neural GPU can learn long binary multiplication from examples. It is the ï¬rst neural network able to learn an algorithm whose run-time is superlinear in the size of its input. Trained on up-to 20-bit numbers, we see no single error on any inputs we tested, and we tested on numbers up-to 2000 bits long. | 1511.08228#4 | Neural GPUs Learn Algorithms | Learning an algorithm from examples is a fundamental problem that has been
widely studied. Recently it has been addressed using neural networks, in
particular by Neural Turing Machines (NTMs). These are fully differentiable
computers that use backpropagation to learn their own programming. Despite
their appeal NTMs have a weakness that is caused by their sequential nature:
they are not parallel and are hard to train due to their large depth when
unfolded.
We present a neural network architecture to address this problem: the Neural
GPU. It is based on a type of convolutional gated recurrent unit and, like the
NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly
parallel which makes it easier to train and efficient to run.
An essential property of algorithms is their ability to handle inputs of
arbitrary size. We show that the Neural GPU can be trained on short instances
of an algorithmic task and successfully generalize to long instances. We
verified it on a number of tasks including long addition and long
multiplication of numbers represented in binary. We train the Neural GPU on
numbers with up to 20 bits and observe no errors whatsoever while testing it,
even on much longer numbers.
To achieve these results we introduce a technique for training deep recurrent
networks: parameter sharing relaxation. We also found a small amount of dropout
and gradient noise to have a large positive effect on learning and
generalization. | http://arxiv.org/pdf/1511.08228 | Łukasz Kaiser, Ilya Sutskever | cs.LG, cs.NE | null | null | cs.LG | 20151125 | 20160315 | [] |
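
The rows above describe the Neural GPU as built from a convolutional gated recurrent unit (CGRU): a GRU whose linear maps are replaced by convolutions applied to the whole state in parallel, following the paper's definition CGRU(s) = u ⊙ s + (1 − u) ⊙ tanh(U ∗ (r ⊙ s) + B), with update gate u and reset gate r computed by their own convolutions. The numpy sketch below implements one such step, with a 1-D convolution standing in for the paper's 2-D kernels; sizes and names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(state, kernel, bias):
    """'Same'-padded 1-D convolution over the whole state.
    state: (width, channels); kernel: (k, channels, channels); bias: (channels,)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(state, ((pad, pad), (0, 0)))
    out = np.empty_like(state)
    for i in range(state.shape[0]):
        window = padded[i:i + k]                            # (k, channels)
        out[i] = np.einsum("kc,kcd->d", window, kernel) + bias
    return out

def cgru_step(s, p):
    """One CGRU step: s' = u * s + (1 - u) * tanh(U * (r * s) + B)."""
    u = sigmoid(conv1d(s, p["Wu"], p["bu"]))                # update gate
    r = sigmoid(conv1d(s, p["Wr"], p["br"]))                # reset gate
    candidate = np.tanh(conv1d(r * s, p["W"], p["b"]))
    return u * s + (1.0 - u) * candidate

rng = np.random.default_rng(0)
width, channels, k = 16, 8, 3                               # illustrative sizes
params = {name: rng.normal(0, 0.1, (k, channels, channels)) for name in ("Wu", "Wr", "W")}
params.update({name: np.zeros(channels) for name in ("bu", "br", "b")})

s = rng.normal(size=(width, channels))                      # input written into the state
for _ in range(width):                                      # run as many steps as the input is long
    s = cgru_step(s, params)
```

Because every position is updated by the same convolution in parallel, the unrolled depth grows only with the number of steps rather than with a sequential program trace, which is the parallelism advantage the abstract claims over NTMs.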