| doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable) | journal_ref (string, len 8–194, nullable) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1606.02006 | 43 | Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural Computation, pages 2451–2471. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1631–1640.
Çaglar Gülçehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 140–149.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, pages 1735–1780. | 1606.02006#43 | Incorporating Discrete Translation Lexicons into Neural Machine Translation | Neural machine translation (NMT) often makes mistakes in translating
low-frequency content words that are essential to understanding the meaning of
the sentence. We propose a method to alleviate this problem by augmenting NMT
systems with discrete translation lexicons that efficiently encode translations
of these low-frequency words. We describe a method to calculate the lexicon
probability of the next word in the translation candidate by using the
attention vector of the NMT model to select which source word lexical
probabilities the model should focus on. We test two methods to combine this
probability with the standard NMT probability: (1) using it as a bias, and (2)
linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3
BLEU and 0.13-0.44 NIST score, and faster convergence time. | http://arxiv.org/pdf/1606.02006 | Philip Arthur, Graham Neubig, Satoshi Nakamura | cs.CL | Accepted at EMNLP 2016 | null | cs.CL | 20160607 | 20161005 | [
{
"id": "1606.02006"
}
] |
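The abstract above describes weighting per-source-word lexical probabilities by the NMT attention vector and combining the result with the standard NMT distribution either as a bias or by linear interpolation. A minimal NumPy sketch of those two combination rules; the lexicon matrix `lex`, the bias strength `epsilon`, and the interpolation weight `lam` are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def lexicon_probability(attention, lex):
    """Attention-weighted lexicon probability of the next target word.

    attention: (src_len,) attention weights over source words
    lex:       (src_len, vocab), row j is p_lex(target word | source word j)
    """
    return attention @ lex

def combine_bias(p_nmt, p_lex, epsilon=1e-3):
    # Bias method: add the scaled lexicon probability inside the log, so
    # low-frequency words with lexicon support keep non-negligible mass.
    logits = np.log(p_nmt + epsilon * p_lex)
    return np.exp(logits - np.logaddexp.reduce(logits))   # renormalize

def combine_interpolate(p_nmt, p_lex, lam=0.5):
    # Linear interpolation of the lexicon and NMT distributions.
    return lam * p_lex + (1.0 - lam) * p_nmt

attention = np.array([0.7, 0.2, 0.1])          # 3 source words
lex = np.array([[0.9, 0.1, 0.0, 0.0],          # vocabulary of 4 target words
                [0.0, 0.8, 0.2, 0.0],
                [0.0, 0.0, 0.5, 0.5]])
p_nmt = np.full(4, 0.25)
p_lex = lexicon_probability(attention, lex)
print(combine_bias(p_nmt, p_lex), combine_interpolate(p_nmt, p_lex))
```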
1606.02006 | 44 | Sébastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL) and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1–10. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1700–1709. Gen-ichiro Kikui, Eiichiro Sumita, Toshiyuki Takezawa, and Seiichi Yamamoto. 2003. Creating corpora for speech-to-speech translation. In 8th European Conference on Speech Communication and Technology, EUROSPEECH 2003 - INTERSPEECH 2003, Geneva, Switzerland, September 1-4, 2003, pages 381–384. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. CoRR. | 1606.02006#44 | Incorporating Discrete Translation Lexicons into Neural Machine Translation |
1606.02006 | 45 |
Dietrich Klakow. 1998. Log-linear interpolation of language models. In Proceedings of the 5th International Conference on Speech and Language Processing (ICSLP).
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 48–54.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 177–180. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP). | 1606.02006#45 | Incorporating Discrete Translation Lexicons into Neural Machine Translation |
1606.02006 | 46 | Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the 2006 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 104–111. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015. Character-based neural machine translation. CoRR.
Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1054–1063. | 1606.02006#46 | Incorporating Discrete Translation Lexicons into Neural Machine Translation |
1606.02006 | 47 | Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421. Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL) and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 11–19.
Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, pages 313–330.
Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary manipulation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 124–129. | 1606.02006#47 | Incorporating Discrete Translation Lexicons into Neural Machine Translation |
1606.02006 | 48 | Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. ASPEC: Asian scientific paper excerpt corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016), pages 2204–2208.
Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), pages 529–533.
Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt.
Graham Neubig. 2013. Travatar: A forest-to-string machine translation engine based on tree transducers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), pages 91–96.
Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, pages 19–51. | 1606.02006#48 | Incorporating Discrete Translation Lexicons into Neural Machine Translation |
1606.02006 | 49 |
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 86–96.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, pages 1929–1958.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS), pages 3104–3112. | 1606.02006#49 | Incorporating Discrete Translation Lexicons into Neural Machine Translation |
1606.01885 | 0 | arXiv:1606.01885v1 [cs.LG] 6 Jun 2016
# Learning to Optimize
# Ke Li, Jitendra Malik
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, United States. {ke.li,malik}@eecs.berkeley.edu
# Abstract
Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.
# Introduction | 1606.01885#0 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 1 | # Loïc Barrault LIUM, University of Le Mans, France [email protected]
# Abstract
The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state-of-the-art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with the depth: using up to 29 convolutional layers, we report improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.
interest in the research community and they are systematically applied to all NLP tasks. However, while the use of (deep) neural networks in NLP has shown very good results for many tasks, it seems that they have not yet reached the level to outperform the state-of-the-art by a large margin, as it was observed in computer vision and speech recognition. | 1606.01781#1 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
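The abstract describes stacking many small character-level convolutions and pooling operations. As a rough sketch only (layer widths, vocabulary size, depth, and pooling schedule are invented for illustration and are not the paper's VDCNN configuration), a PyTorch-style block could look like:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two width-3 temporal convolutions with batch norm and ReLU,
    the basic unit stacked to build a very deep character-level CNN."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class TinyCharCNN(nn.Module):
    def __init__(self, vocab=70, dim=16, channels=64, depth=4, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.stem = nn.Conv1d(dim, channels, kernel_size=3, padding=1)
        blocks = []
        for _ in range(depth):
            blocks += [ConvBlock(channels), nn.MaxPool1d(2)]  # halve length
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Linear(channels, classes)
    def forward(self, chars):                  # chars: (batch, seq_len) int64
        x = self.embed(chars).transpose(1, 2)  # -> (batch, dim, seq_len)
        x = self.blocks(self.stem(x))
        return self.head(x.mean(dim=2))        # pool over time, classify

logits = TinyCharCNN()(torch.randint(0, 70, (8, 256)))
```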
1606.01885 | 1 | # Introduction
The current approach to designing algorithms is a laborious process. First, the designer must study the problem and devise an algorithm guided by a mixture of intuition, theoretical and/or empirical insight and general design paradigms. She then needs to analyze the algorithm's performance on prototypical examples and compare it to that of existing algorithms. If the algorithm falls short, she must uncover the underlying cause and find clever ways to overcome the discovered shortcomings. She iterates on this process until she arrives at an algorithm that is superior to existing algorithms. Given the often protracted nature of this process, a natural question to ask is: can we automate it?
In this paper, we focus on automating the design of unconstrained continuous optimization algorithms, which are some of the most powerful and ubiquitous tools used in all areas of science and engineering. Extensive work over the past several decades has yielded many popular methods, like gradient descent, momentum, conjugate gradient and L-BFGS. These algorithms share one commonality: they are all hand-engineered, that is, the steps of these algorithms are carefully designed by human experts. Just as deep learning has achieved tremendous success by automating feature engineering, automating algorithm design could open the way to similar performance gains. | 1606.01885#1 | Learning to Optimize |
1606.01781 | 2 | Convolutional neural networks, in short ConvNets, are very successful in computer vision. In early approaches to computer vision, handcrafted features were used, for instance "scale-invariant feature transform (SIFT)" (Lowe, 2004), followed by some classifier. The fundamental idea of ConvNets (LeCun et al., 1998) is to consider feature extraction and classification as one jointly trained task. This idea has been improved over the years, in particular by using many layers of convolutions and pooling to sequentially extract a hierarchical representation (Zeiler and Fergus, 2014) of the input. The best networks are using more than 150 layers as in (He et al., 2016a; He et al., 2016b).
# 1 Introduction | 1606.01781#2 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 2 | We learn a better optimization algorithm by observing its execution. To this end, we formulate the problem as a reinforcement learning problem. Under this framework, any particular optimization algorithm simply corresponds to a policy. We reward optimization algorithms that converge quickly and penalize those that do not. Learning an optimization algorithm then reduces to finding an optimal policy, which can be solved using any reinforcement learning method. To differentiate the algorithm that performs learning from the algorithm that is learned, we will henceforth refer to the former as the "learning algorithm" or "learner" and the latter as the "autonomous algorithm" or "policy". We use an off-the-shelf reinforcement learning algorithm known as guided policy search [17], which has demonstrated success in a variety of robotic control settings [18, 10, 19, 12]. We show empirically that the autonomous optimization algorithm we learn converges faster and/or finds better optima than existing hand-engineered optimization algorithms.
# 2 Related Work | 1606.01885#2 | Learning to Optimize |
1606.01781 | 3 | # 1 Introduction
The goal of natural language processing (NLP) is to process text with computers in order to analyze it, to extract information and eventually to represent the same information differently. We may want to associate categories to parts of the text (e.g. POS tagging or sentiment analysis), structure text differently (e.g. parsing), or convert it to some other form which preserves all or part of the content (e.g. machine translation, summarization). The level of granularity of this processing can range from individual characters to subword units (Sennrich et al., 2016) or words up to whole sentences or even paragraphs. | 1606.01781#3 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 3 | # 2 Related Work
Early work has explored the general theme of speeding up learning with accumulation of learning experience. This line of work, known as "learning to learn" or "meta-learning" [1, 27, 5, 26], considers the problem of devising methods that can take advantage of knowledge learned on other related tasks to train faster, a problem that is today better known as multi-task learning and transfer learning. In contrast, the proposed method can learn to accelerate the training procedure itself, without necessarily requiring any training on related auxiliary tasks. | 1606.01885#3 | Learning to Optimize |
1606.01781 | 4 | After a couple of pioneer works (Bengio et al. (2001), Collobert and Weston (2008), Collobert et al. (2011) among others), the use of neural networks for NLP applications is attracting huge interest. Many NLP approaches consider words as basic units. An important step was the introduction of continuous representations of words (Bengio et al., 2003). These word embeddings are now the state-of-the-art in NLP. However, it is less clear how we should best represent a sequence of words, e.g. a whole sentence, which has complicated syntactic and semantic relations. In general, in the same sentence, we may be faced with local and long-range dependencies. Currently, the mainstream approach is to consider a sentence as a sequence of tokens (characters or words) and to process them with a recurrent neural network (RNN). Tokens are usually processed in sequential order, from left to right, and the RNN is expected to "memorize" the whole sequence in its internal states. The most popular and successful RNN variant are certainly LSTMs (Hochreiter and Schmid- | 1606.01781#4 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 4 | A different line of work, known as "programming by demonstration" [7], considers the problem of learning programs from examples of input and output. Several different approaches have been proposed: Liang et al. [20] represents programs explicitly using a formal language, constructs a hierarchical Bayesian prior over programs and performs inference using an MCMC sampling procedure, and Graves et al. [11] represents programs implicitly as sequences of memory access operations and trains a recurrent neural net to learn the underlying patterns in the memory access operations. Subsequent work proposes variants of this model that use different primitive memory access operations [14], more expressive operations [16, 28] or other non-differentiable operations [30, 29]. Others consider building models that permit parallel execution [15] or training models with stronger supervision in the form of execution traces [23]. The aim of this line of work is to replicate the behaviour of simple existing algorithms from examples, rather than to learn a new algorithm that is better than existing algorithms. | 1606.01885#4 | Learning to Optimize |
1606.01781 | 5 | internal states. The most popular and successful RNN variant are certainly LSTMs (Hochreiter and Schmid-
Dataset | Sample
Yelp P. | Been going to Dr. Goldberg for over 10 years. I think I was one of his 1st patients when he started at MHMG. He's been great over the years and is really all about the big picture. [...]
| I love this show, however, there are 14 episodes in the first season and this DVD only shows the first eight. [...]. I hope the BBC will release another DVD that contains all the episodes, but for now this one is still somewhat enjoyable.
| ju4 xi1n hua2 she4 5 yue4 3 ri4, be3i ji1ng 2008 a4o yu4n hui4 huo3 ju4 jie1 li4 ji1ng guo4 shi4 jie4 wu3 da4 zho1u 21 ge4 che2ng shi4
| "What should I look for when buying a laptop? What is the best brand and what's reliable?", "Weight and dimensions are important if you're planning to travel with the laptop. Get something with at least 512 mb of RAM. [..] is a good brand, and has an easy | 1606.01781#5 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 5 | There is a rich body of work on hyperparameter optimization, which studies the optimization of hyperparameters used to train a model, such as the learning rate, the momentum decay factor and regularization parameters. Most methods [13, 4, 24, 25, 9] rely on sequential model-based Bayesian optimization [22, 6], while others adopt a random search approach [3] or use gradient-based optimization [2, 8, 21]. Because each hyperparameter setting corresponds to a particular instantiation of an optimization algorithm, these methods can be viewed as a way to search over different instantiations of the same optimization algorithm. The proposed method, on the other hand, can search over the space of all possible optimization algorithms. In addition, when presented with a new objective function, hyperparameter optimization needs to conduct multiple trials with different hyperparameter settings to find the optimal hyperparameters. In contrast, once training is complete, the autonomous algorithm knows how to choose hyperparameters on-the-fly without needing to try different hyperparameter settings, even when presented with an objective function that it has not seen during training.
To the best of our knowledge, the proposed method represents the first attempt to learn a better algorithm automatically.
# 3 Method
# 3.1 Preliminaries | 1606.01885#5 | Learning to Optimize |
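To make the contrast with hyperparameter optimization concrete, here is a minimal random-search sketch over the step size and momentum decay factor of a fixed optimizer; the toy objective, search ranges, and trial budget are illustrative assumptions. Every trial instantiates the same algorithm with different hyperparameters, which is exactly the restricted search space described above:

```python
import random

def f(x):                      # toy objective: minimum at x = 3
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

def run_momentum(gamma, alpha, steps=50):
    x, v = 0.0, 0.0
    for _ in range(steps):
        v = alpha * v + grad(x)        # accumulate the momentum term
        x = x - gamma * v
    return f(x)

# Each trial is the *same* optimizer with a different (gamma, alpha);
# the learned-optimizer approach instead searches over pi itself.
trials = [run_momentum(random.uniform(0.01, 0.2), random.uniform(0.0, 0.8))
          for _ in range(20)]
print("best final objective over 20 trials:", min(trials))
```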
1606.01885 | 6 | To the best of our knowledge, the proposed method represents the first attempt to learn a better algorithm automatically.
# 3 Method
# 3.1 Preliminaries
In the reinforcement learning setting, the learner is given a choice of actions to take in each time step, which changes the state of the environment in an unknown fashion, and receives feedback based on the consequence of the action. The feedback is typically given in the form of a reward or cost, and the objective of the learner is to choose a sequence of actions based on observations of the current environment that maximizes cumulative reward or minimizes cumulative cost over all time steps. | 1606.01885#6 | Learning to Optimize |
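A minimal sketch of this interaction loop, with a hypothetical `env` exposing `reset`/`step` and a `policy` mapping states to actions; the interface is an illustrative assumption, not an API defined in the paper:

```python
def rollout(env, policy, horizon):
    """One episode of the generic RL loop: act, observe, accumulate cost."""
    state = env.reset()                 # draw the initial state
    total_cost = 0.0
    for _ in range(horizon):
        action = policy(state)          # learner chooses an action
        state, cost = env.step(action)  # environment transitions, emits cost
        total_cost += cost
    return total_cost
```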
1606.01781 | 7 | Table 1: Examples of text samples and their labels.
huber, 1997) – there are many works which have shown the ability of LSTMs to model long-range dependencies in NLP applications, e.g. (Sundermeyer et al., 2012; Sutskever et al., 2014) to name just a few. However, we argue that LSTMs are generic learning machines for sequence processing which are lacking task-specific structure.
several sentence classification tasks, initially proposed by (Zhang et al., 2015). These tasks and our experimental results are detailed in section 4. The proposed deep convolutional network shows significantly better results than previous ConvNet approaches. The paper concludes with a discussion of future research directions for very deep approaches in NLP. | 1606.01781#7 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 7 | A reinforcement learning problem is typically formally represented as a Markov decision process (MDP). We consider a finite-horizon MDP with continuous state and action spaces defined by the tuple (S, A, p0, p, c, γ), where S is the set of states, A is the set of actions, p0 : S → R+ is the probability density over initial states, p : S × A × S → R+ is the transition probability density, that is, the conditional probability density over successor states given the current state and action, c : S → R is a function that maps state to cost and γ ∈ (0, 1] is the discount factor. The objective is to learn a stochastic policy π* : S × A → R+, which is a conditional probability density over actions given the current state, such that the expected cumulative cost is minimized. That is,
π* = argmin_π E_{s0, a0, s1, ..., sT} [ Σ_{t=0}^{T} γ^t c(s_t) ],
where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density
q(s0, a0, s1, ..., sT) = p0(s0) Π_{t=0}^{T−1} π(a_t | s_t) p(s_{t+1} | s_t, a_t). | 1606.01885#7 | Learning to Optimize |
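The expected cumulative cost can be estimated by Monte Carlo: sample trajectories from the density above and average their discounted costs. A small sketch under assumed callables `p0_sample`, `policy_sample`, and `transition_sample`, which stand in for p0, π, and p:

```python
def sample_trajectory_cost(p0_sample, policy_sample, transition_sample,
                           cost, T, gamma=1.0):
    """Draw one trajectory s_0, a_0, ..., s_T from q and return its
    discounted cumulative cost sum_t gamma^t c(s_t)."""
    s = p0_sample()                      # s_0 ~ p0
    total = cost(s)                      # t = 0 term
    for t in range(T):
        a = policy_sample(s)             # a_t ~ pi(. | s_t)
        s = transition_sample(s, a)      # s_{t+1} ~ p(. | s_t, a_t)
        total += gamma ** (t + 1) * cost(s)
    return total

def expected_cost(p0_sample, policy_sample, transition_sample, cost, T,
                  gamma=1.0, n_samples=1000):
    # Monte Carlo estimate of the objective the policy search minimizes.
    runs = (sample_trajectory_cost(p0_sample, policy_sample,
                                   transition_sample, cost, T, gamma)
            for _ in range(n_samples))
    return sum(runs) / n_samples
```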
1606.01781 | 8 | It is well known that a fully connected one hidden layer neural network can in principle learn any real-valued function, but much better results can be obtained with a deep problem-specific architecture which develops hierarchical representations. By these means, the search space is heavily constrained and efficient solutions can be learned with gradient descent. ConvNets are notably adapted for computer vision because of the compositional structure of an image. Texts have similar properties: characters combine to form n-grams, stems, words, phrases, sentences, etc.
We believe that a challenge in NLP is to develop deep architectures which are able to learn hierarchical representations of whole sentences, jointly with the task. In this paper, we propose to use deep architectures of many convolutional layers to approach this goal, using up to 29 layers. The design of our architecture is inspired by recent progress in computer vision, in particular (Simonyan and Zisserman, 2015; He et al., 2016a).
This paper is structured as follows. There have been previous attempts to use ConvNets for text processing. We summarize the previous works in the next section and discuss the relations and differences. Our architecture is described in detail in section 3. We have evaluated our approach on
# 2 Related work | 1606.01781#8 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 8 | q(s0, a0, s1, ..., sT) = p0(s0) Π_{t=0}^{T−1} π(a_t | s_t) p(s_{t+1} | s_t, a_t).
This problem of finding the cost-minimizing policy is known as the policy search problem. To enable generalization to unseen states, the policy is typically parameterized and minimization is performed over representable policies. Solving this problem exactly is intractable in all but selected special cases. Therefore, policy search methods generally tackle this problem by solving it approximately.
In many practical settings, p, which characterizes the dynamics, is unknown and must therefore be estimated. Additionally, because it is often equally important to minimize cost at earlier and later time steps, we will henceforth focus on the undiscounted setting, i.e. the setting where γ = 1. | 1606.01885#8 | Learning to Optimize |
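Guided policy search, discussed next, optimizes over time-varying linear-Gaussian policies as its restricted, tractable policy class. A minimal sketch of sampling an action from one such parameterized policy; the parameter shapes and isotropic noise are illustrative assumptions:

```python
import numpy as np

def linear_gaussian_action(K, k, sigma, state, rng=np.random.default_rng()):
    """Sample a_t ~ N(K @ s_t + k, sigma^2 I). A time-indexed set of such
    (K, k, sigma) triples forms a time-varying linear-Gaussian policy."""
    mean = K @ state + k
    return mean + sigma * rng.standard_normal(mean.shape)
```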
1606.01781 | 9 | # 2 Related work
There is a large body of research on sentiment analysis, or more generally on sentence classification tasks. Initial approaches followed the classical two stage scheme of extraction of (handcrafted) features, followed by a classification stage. Typical features include bag-of-words or n-grams, and their TF-IDF. These techniques have been compared with ConvNets by (Zhang et al., 2015; Zhang and LeCun, 2015). We use the same corpora for our experiments. More recently, words or characters have been projected into a low-dimensional space, and these embeddings are combined to obtain a fixed size representation of the input sentence, which then serves as input for the classifier. The simplest combination is the element-wise mean. This usually performs badly since all notion of token order is disregarded. | 1606.01781#9 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 9 | Guided policy search [17] is a method for performing policy search in continuous state and action spaces under possibly unknown dynamics. It works by alternating between computing a target distribution over trajectories that is encouraged to minimize cost and agree with the current policy, and learning parameters of the policy in a standard supervised fashion so that sample trajectories from executing the policy are close to sample trajectories drawn from the target distribution. The target trajectory distribution is computed by iteratively fitting local time-varying linear and quadratic approximations to the (estimated) dynamics and cost respectively and optimizing over a restricted class of linear-Gaussian policies subject to a trust region constraint, which can be solved efficiently in closed form using a dynamic programming algorithm known as linear-quadratic-Gaussian (LQG). We refer interested readers to [17] for details.
# 3.2 Formulation
Consider the general structure of an algorithm for unconstrained continuous optimization, which is outlined in Algorithm 1. Starting from a random location in the domain of the objective function, the algorithm iteratively updates the current location by a step vector computed from some functional π of the objective function, the current location and past locations.
# Algorithm 1 General structure of optimization algorithms
# Require: Objective function f | 1606.01885#9 | Learning to Optimize |
1606.01781 | 10 | Another class of approaches are recursive neural networks. The main idea is to use an external tool, namely a parser, which specifies the order in which the word embeddings are combined. At each node, the left and right context are combined using weights which are shared for all nodes (Socher et al., 2011). The state of the top node is fed to the classifier. A recurrent neural network (RNN) could be considered as a special case of a recursive NN: the combination is performed sequentially, usually from left to right. The last state of the RNN is used as fixed-sized representation of the sentence, or eventually a combination of all the hidden states.
First works using convolutional neural networks for NLP appeared in (Collobert and Weston, 2008; Collobert et al., 2011). They have been subsequently applied to sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Zhang et al., 2015). We will discuss these techniques in more detail below. If not otherwise stated, all approaches operate on words which are projected into a high-dimensional space. | 1606.01781#10 | Very Deep Convolutional Networks for Text Classification |
1606.01885 | 10 | # Algorithm 1 General structure of optimization algorithms
# Require: Objective function f
x(0) ← random point in the domain of f
for i = 1, 2, . . . do
    Δx ← π(f, {x(0), . . . , x(i−1)})
    if stopping condition is met then
        return x(i−1)
    end if
    x(i) ← x(i−1) + Δx
end for
This framework subsumes all existing optimization algorithms. Different optimization algorithms differ in the choice of π. First-order methods use a π that depends only on the gradient of the objective function, whereas second-order methods use a π that depends on both the gradient and the Hessian of the objective function. In particular, the following choice of π yields the gradient descent method:
π(f, {x(0), . . . , x(i−1)}) = −γ∇f(x(i−1)),
where γ denotes the step size or learning rate. Similarly, the following choice of π yields the gradient descent method with momentum:
π(f, {x(0), . . . , x(i−1)}) = −γ [ Σ_{j=0}^{i−1} α^{i−1−j} ∇f(x(j)) ],
where γ again denotes the step size and α denotes the momentum decay factor. | 1606.01885#10 | Learning to Optimize |
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
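To make the template of Algorithm 1 concrete, here is a minimal Python sketch (not from the paper; the quadratic objective and all hyperparameter values are illustrative) of the generic loop with gradient descent and momentum expressed as two choices of the policy π:

```python
import numpy as np

def optimize(grad_f, x0, pi, n_iters=100, tol=1e-8):
    """Generic optimizer loop: repeatedly apply the update policy pi."""
    history = [np.asarray(x0, dtype=float)]
    for _ in range(n_iters):
        dx = pi(grad_f, history)
        if np.linalg.norm(dx) < tol:   # stopping condition
            break
        history.append(history[-1] + dx)
    return history[-1]

def gradient_descent(gamma):
    # pi depends only on the gradient at the most recent location
    return lambda grad_f, hist: -gamma * grad_f(hist[-1])

def momentum(gamma, alpha):
    # pi is a gamma-scaled, alpha-decayed sum of all past gradients
    def pi(grad_f, hist):
        return -gamma * sum(alpha ** (len(hist) - 1 - j) * grad_f(x)
                            for j, x in enumerate(hist))
    return pi

grad_f = lambda x: 2.0 * (x - 1.0)     # gradient of f(x) = ||x - 1||^2
print(optimize(grad_f, np.zeros(3), gradient_descent(0.1)))
print(optimize(grad_f, np.zeros(3), momentum(0.05, 0.5)))
```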
1606.01781 | 11 | A rather shallow neural net was proposed in (Kim, 2014): one convolutional layer (using multiple widths and filters) followed by a max pooling layer over time. The final classifier uses one fully connected layer with drop-out. Results are reported on six data sets, in particular Stanford Sentiment Treebank (SST). A similar system was proposed in (Kalchbrenner et al., 2014), but using five convolutional layers. An important difference is also the introduction of multiple temporal k-max pooling layers. This allows the network to detect the k most important features in a sentence, independent of their specific position, preserving their relative order. The value of k depends on the length of the sentence and the position of this layer in the network. (Zhang et al., 2015) were the first to perform sentiment analysis entirely at the character level. Their systems use up to six convolutional layers, followed by three fully connected classification layers. Convolutional kernels of size 3 and 7 are used, as well as simple max-pooling layers. Another interesting aspect of this paper is the introduction of several | 1606.01781#11 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 11 | where γ again denotes the step size and α denotes the momentum decay factor.
Therefore, if we can learn π, we will be able to learn an optimization algorithm. Since it is difficult to model general functionals, in practice, we restrict the dependence of π on the objective function f to objective values and gradients evaluated at current and past locations. Hence, π can be simply modelled as a function from the objective values and gradients along the trajectory taken by the optimizer so far to the next step vector.
We observe that the execution of an optimization algorithm can be viewed as the execution of a fixed policy in an MDP: the state consists of the current location and the objective values and gradients evaluated at the current and past locations, the action is the step vector that is used to update the current location, and the transition probability is partially characterized by the location update formula, x^(i) ← x^(i−1) + Δx. The policy that is executed corresponds precisely to the choice of π used by the optimization algorithm. For this reason, we will also use π to denote the policy at hand. Under this formulation, searching over policies corresponds to searching over all possible first-order optimization algorithms.
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
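Under this MDP reading, one optimizer step can be sketched as follows (a minimal sketch with illustrative names; a plain gradient step stands in for an arbitrary policy π):

```python
import numpy as np

def mdp_step(f, grad_f, x, history, pi):
    """One MDP transition of the optimizer-as-policy view.

    state  : current location x plus past objective values and gradients
    action : the step vector dx chosen by the policy pi
    The location update x + dx is the deterministic part of the transition.
    """
    state = (x, list(history))
    dx = pi(state)                                # action
    x_next = x + dx
    history = history + [(f(x), grad_f(x))]       # record the visited point
    return x_next, history

f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x
pi = lambda state: -0.1 * grad_f(state[0])        # placeholder policy
x, hist = np.ones(2), []
for _ in range(3):
    x, hist = mdp_step(f, grad_f, x, hist, pi)
print(x)
```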
1606.01781 | 12 | kernels of size 3 and 7 are used, as well as simple max-pooling layers. Another interesting aspect of this paper is the introduction of several large-scale data sets for text classification. We use the same experimental setting (see section 4.1). The use of character level information was also proposed by (Dos Santos and Gatti, 2014): all the character embeddings of one word are combined by a max operation and they are then jointly used with the word embedding information in a shallow architecture. In parallel to our work, (Yang et al., 2016) proposed a hierarchical attention network for document classification that performs attention first on the sentences in the document, and then on the words in the sentence. Their architecture performs very well on datasets whose samples contain multiple sentences. | 1606.01781#12 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 12 | We can use reinforcement learning to learn the policy π. To do so, we need to define the cost function, which should penalize policies that exhibit undesirable behaviours during their execution. Since the performance metric of interest for optimization algorithms is the speed of convergence, the cost function should penalize policies that converge slowly. To this end, assuming the goal is to minimize the objective function, we define the cost at a state to be the objective value at the current location. This encourages the policy to reach the minimum of the objective function as quickly as possible.
Since the policy π may be stochastic in general, we model each dimension of the action conditional on the state as an independent Gaussian whose mean is given by a regression model and whose variance is some learned constant. We choose to parameterize the mean of π using a neural net, due to its appealing properties as a universal function approximator and strong empirical performance in a variety of applications. We use guided policy search to learn the parameters of the policy.
We use a training set consisting of different randomly generated objective functions. We evaluate the resulting autonomous algorithm on different objective functions drawn from the same distribution.
# 3.3 Discussion | 1606.01885#12 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
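A minimal sketch of the action model just described, assuming a generic regression model for the mean (all names are illustrative):

```python
import numpy as np

def sample_action(features, mean_net, log_var, rng):
    """Sample a step vector from the stochastic policy.

    Each action dimension is an independent Gaussian: the mean comes from
    a regression model (mean_net), the variance is a learned constant.
    """
    mean = mean_net(features)
    std = np.exp(0.5 * log_var)          # per-dimension standard deviation
    return mean + std * rng.standard_normal(mean.shape)

rng = np.random.default_rng(0)
mean_net = lambda s: -0.1 * s            # placeholder regression model
print(sample_action(np.ones(3), mean_net, np.full(3, -4.0), rng))
```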
1606.01781 | 13 | In the computer vision community, the combination of recurrent and convolutional networks in one architecture has also been investigated, with the goal to "get the best of both worlds", e.g. (Pinheiro and Collobert, 2014). The same idea was recently applied to sentence classification (Xiao and Cho, 2016). A convolutional network with up to five layers is used to learn high-level features which serve as input for an LSTM. The initial motivation of the authors was to obtain the same performance as (Zhang et al., 2015) with networks which have significantly fewer parameters. They report results very close to those of (Zhang et al., 2015) or even outperform ConvNets for some data sets. | 1606.01781#13 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 13 | We use a training set consisting of different randomly generated objective functions. We evaluate the resulting autonomous algorithm on different objective functions drawn from the same distribution.
# 3.3 Discussion
An autonomous optimization algorithm offers several advantages over hand-engineered algorithms. First, an autonomous optimizer is trained on real algorithm execution data, whereas hand-engineered optimizers are typically derived by analyzing objective functions with properties that may or may not be satisfied by objective functions that arise in practice. Hence, an autonomous optimizer minimizes the amount of a priori assumptions made about objective functions and can instead take full advantage of the information about the actual objective functions of interest. Second, an autonomous optimizer has no hyperparameters that need to be tuned by the user. Instead of just computing a step direction which must then be combined with a user-specified step size, an autonomous optimizer predicts the step direction and size jointly. This allows the autonomous optimizer to dynamically adjust the step size based on the information it has acquired about the objective function while performing the optimization. Finally, when an autonomous optimizer is trained on a particular class of objective functions, it may be able to discover hidden structure in the geometry of the class of objective functions. At test time, it can then exploit this knowledge to perform optimization faster.
# Implementation Details | 1606.01885#13 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 14 | In summary, we are not aware of any work that uses VGG-like or ResNet-like architectures to go deeper than six convolutional layers (Zhang et al., 2015) for sentence classification. Deeper networks were not tried, or they were reported to not improve performance. This is in sharp contrast to the current trend in computer vision, where significant improvements have been reported using much deeper networks (Krizhevsky et al., 2012), namely 19 layers (Simonyan and Zisserman, 2015), or even up to 152 layers (He et al., 2016a). In the remainder of this paper, we describe our very deep convolutional architecture and report results on the same corpora as (Zhang et al., 2015). We were able to show that performance improves with increased depth, using up to 29 convolutional layers.
# 3 VDCNN Architecture
The overall architecture of our network is shown in Figure 1. Our model begins with a look-up table that generates a 2D tensor of size (f0, s) that contains the embeddings of the s characters. s is fixed to 1024, and f0 can be seen as the "RGB" dimension of the input text.
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
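To illustrate the input layer, a minimal sketch of the character look-up table (the alphabet and indexing here are hypothetical; f0 = 16 and s = 1024 as in Figure 1):

```python
import numpy as np

def embed_text(text, embeddings, char_to_id, s=1024):
    """Map a string to an (f0, s) tensor of character embeddings.

    embeddings : (vocab_size, f0) matrix, one learned vector per character.
    Unknown characters map to index 0; text is padded/truncated to length s.
    """
    ids = [char_to_id.get(ch, 0) for ch in text[:s].lower()]
    ids += [0] * (s - len(ids))                  # pad to fixed size s
    return embeddings[ids].T                     # shape (f0, s)

alphabet = "abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'\"()-"
char_to_id = {ch: i + 1 for i, ch in enumerate(alphabet)}
embeddings = 0.1 * np.random.randn(len(alphabet) + 1, 16)   # f0 = 16
x = embed_text("very deep convnets for text", embeddings, char_to_id)
print(x.shape)  # (16, 1024)
```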
1606.01885 | 14 | # Implementation Details
We store the current location, previous gradients and improvements in the objective value from previous iterations in the state. We keep track of only the information pertaining to the previous H time steps and use H = 25 in our experiments. More specifically, the dimensions of the state space encode the following information:
• Current location in the domain
• Change in the objective value at the current location relative to the objective value at the ith most recent location, for all i ∈ {2, . . . , H + 1}
• Gradient of the objective function evaluated at the ith most recent location, for all i ∈ {2, . . . , H + 1}
Initially, we set the dimensions corresponding to historical information to zero. The current location is only used to compute the cost; because the policy should not depend on the absolute coordinates of the current location, we exclude it from the input that is fed into the neural net. | 1606.01885#14 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
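A sketch of how such an observation vector could be assembled (a hypothetical helper; H = 25 as in the paper). Note the current location itself is excluded, since the policy should not depend on absolute coordinates:

```python
import numpy as np

H = 25  # number of past time steps kept in the state

def build_observation(obj_values, gradients, dim):
    """Assemble the policy input from the recorded history.

    obj_values : objective values f(x^(0)), ..., f(x^(current)) so far
    gradients  : gradients evaluated at the same locations
    Unavailable history in early iterations stays zero, as in the paper.
    """
    deltas = np.zeros(H)
    grads = np.zeros((H, dim))
    for k in range(1, H + 1):            # the (k+1)-th most recent location
        if k < len(obj_values):
            deltas[k - 1] = obj_values[-1] - obj_values[-1 - k]
            grads[k - 1] = gradients[-1 - k]
    return np.concatenate([deltas, grads.ravel()])

obs = build_observation([5.0, 3.0, 2.5],
                        [np.ones(2), np.ones(2), np.ones(2)], dim=2)
print(obs.shape)   # (H + H * dim,) = (75,)
```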
1606.01781 | 15 | We first apply one layer of 64 convolutions of size 3, followed by a stack of temporal "convolutional blocks". Inspired by the philosophy of VGG and ResNets, we apply these two design rules: (i) for the same output temporal resolution, the layers have the same number of feature maps; (ii) when the temporal resolution is halved, the number of feature maps is doubled. This helps reduce the memory footprint of the network. The network contains 3 pooling operations (halving the tempo-
[Figure 1 residue: classifier layers fc(2048, nClasses) and fc(2048, 2048) with ReLU, from the VDCNN architecture diagram] | 1606.01781#15 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 15 | We use a small neural net to model the policy. Its architecture consists of a single hidden layer with 50 hidden units. Softplus activation units are used in the hidden layer and linear activation units are used in the output layer. The training objective imposed by guided policy search takes the form of the squared Mahalanobis distance between mean predicted and target actions along with other terms dependent on the variance of the policy. We also regularize the entropy of the policy to encourage deterministic actions conditioned on the state. The coefficient on the regularizer increases gradually in later iterations of guided policy search. We initialize the weights of the neural net randomly and do not regularize the magnitude of weights.
Initially, we set the target trajectory distribution so that the mean action given state at each time step matches the step vector used by the gradient descent method with momentum. We choose the best settings of the step size and momentum decay factor for each objective function in the training set by performing a grid search over hyperparameters and running noiseless gradient descent with momentum for each hyperparameter setting.
For training, we sample 20 trajectories with a length of 40 time steps for each objective function in the training set. After each iteration of guided policy search, we sample new trajectories from the new distribution and discard the trajectories from the preceding iteration.
# 4 Experiments | 1606.01885#15 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
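A numpy sketch of the policy mean network described above — one hidden layer of 50 softplus units and a linear output layer; the weights would be fitted by guided policy search, which is not shown:

```python
import numpy as np

def init_policy(input_dim, output_dim, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    return {
        "W1": 0.1 * rng.standard_normal((n_hidden, input_dim)),
        "b1": np.zeros(n_hidden),
        "W2": 0.1 * rng.standard_normal((output_dim, n_hidden)),
        "b2": np.zeros(output_dim),
    }

def policy_mean(params, obs):
    """Hidden layer of 50 softplus units, linear output layer."""
    h = np.logaddexp(0.0, params["W1"] @ obs + params["b1"])   # softplus
    return params["W2"] @ h + params["b2"]

params = init_policy(input_dim=75, output_dim=2)
print(policy_mean(params, np.zeros(75)))
```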
1606.01781 | 16 | [Figure 1 residue: the VDCNN architecture diagram, read bottom-up — input (1 × s), lookup table (16 features), 3-Temp-Conv-64, convolutional blocks of 64, 128, 256 and 512 feature maps separated by pool/2 stages with optional shortcuts, k-max pooling (k = 8), then fc(4096, 2048), fc(2048, 2048) with ReLU and fc(2048, nClasses)]
Figure 1: VDCNN architecture. | 1606.01781#16 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 16 | # 4 Experiments
We learn autonomous optimization algorithms for various convex and non-convex classes of objective functions that correspond to loss functions for different machine learning models. We first learn an autonomous optimizer for logistic regression, which induces a convex loss function. We then learn an autonomous optimizer for robust linear regression using the Geman-McClure M-estimator, whose loss function is non-convex. Finally, we learn an autonomous optimizer for a two-layer neural net classifier with ReLU activation units, whose error surface has even more complex geometry.
# 4.1 Logistic Regression
We consider a logistic regression model with an ℓ2 regularizer on the weight vector. Training the model requires optimizing the following objective:
min_{w,b} −(1/n) Σ_{i=1}^{n} [ y_i log σ(wᵀx_i + b) + (1 − y_i) log(1 − σ(wᵀx_i + b)) ] + (λ/2)‖w‖₂² | 1606.01885#16 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
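For reference, a numpy sketch of this objective and its gradient (λ = 0.0005 and d = 3 as in the paper; the log-sigmoid terms are rewritten as log(1 + e^z) − yz for numerical stability):

```python
import numpy as np

def logistic_loss(w, b, X, y, lam=0.0005):
    """L2-regularized logistic loss and its gradient in (w, b)."""
    z = X @ w + b
    # -y*log(sigma(z)) - (1-y)*log(1-sigma(z))  ==  log(1 + e^z) - y*z
    loss = np.mean(np.logaddexp(0.0, z) - y * z) + 0.5 * lam * w @ w
    p = 1.0 / (1.0 + np.exp(-z))               # sigma(z)
    grad_w = X.T @ (p - y) / len(y) + lam * w
    grad_b = float(np.mean(p - y))
    return loss, grad_w, grad_b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = (rng.random(100) < 0.5).astype(float)
print(logistic_loss(np.zeros(3), 0.0, X, y)[0])   # approximately log(2)
```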
1606.01781 | 17 | Figure 1: VDCNN architecture.
ral resolution each time by 2), resulting in 3 levels of 128, 256 and 512 feature maps (see Figure 1). The output of these convolutional blocks is a tensor of size 512 × s_d, where s_d = s/2^p with p = 3 the number of down-sampling operations. At this level of the convolutional network, the resulting tensor can be seen as a high-level representation of the input text. Since we deal with padded input text of fixed size, s_d is constant. However, in the case of variable size input, the convolutional encoder provides a representation of the input text that depends on its initial length s. Representations of a text as a set of vectors of variable size can be valuable namely for neural machine translation, in particular when combined with an attention model. In Figure 1, temporal convolutions with kernel size 3 and X feature maps are denoted "3, Temp Conv, X", fully connected layers which are linear projections (matrix of size I × O) are denoted "fc(I, O)" and "3-max pooling, stride 2" means temporal max-pooling with kernel size 3 and stride 2. | 1606.01781#17 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
where w ∈ R^d and b ∈ R denote the weight vector and bias respectively, x_i ∈ R^d and y_i ∈ {0, 1} denote the feature vector and label of the ith instance, λ denotes the coefficient on the regularizer and σ(z) := 1/(1 + e^(−z)). For our experiments, we choose λ = 0.0005 and d = 3. This objective is convex in w and b.
We train an autonomous algorithm that learns to optimize objectives of this form. The training set consists of examples of such objective functions whose free variables, which in this case are x_i and y_i, are all assigned concrete values. Hence, each objective function in the training set corresponds to a logistic regression problem on a different dataset.
To construct the training set, we randomly generate a dataset of 100 instances for each function in the training set. The instances are drawn randomly from two multivariate Gaussians with random means and covariances, with half drawn from each. Instances from the same Gaussian are assigned the same label and instances from different Gaussians are assigned different labels. | 1606.01885#17 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 18 | Most of the previous applications of ConvNets to NLP use an architecture which is rather shallow (up to 6 convolutional layers) and combines convolutions of different sizes, e.g. spanning 3, 5 and 7 tokens. This was motivated by the fact that convolutions extract n-gram features over tokens and that different n-gram lengths are needed to model short- and long-span relations. In this work, we propose to create instead an architecture which uses many layers of small convolutions (size 3). Stacking 4 layers of such convolutions results in a span of 9 tokens, but the network can learn by itself how to best combine these different "3-gram features" in a deep hierarchical manner. Our architecture can in fact be seen as a temporal adaptation of the VGG network (Simonyan and Zisserman, 2015). We have also investigated the same kind of "ResNet shortcut" connections as in (He et al., 2016a), namely identity and 1 × 1 convolutions (see Figure 1). | 1606.01781#18 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 18 | We train the autonomous algorithm on a set of 90 objective functions. We evaluate it on a test set of 100 random objective functions generated using the same procedure and compare to popular hand-engineered algorithms, such as gradient descent, momentum, conjugate gradient and L-BFGS. All baselines are run with the best hyperparameter settings tuned on the training set.
For each algorithm and objective function in the test set, we compute the difference between the objective value achieved by a given algorithm and that achieved by the best of the competing
Figure 1: (a) Mean margin of victory of each algorithm for optimizing the logistic regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour. | 1606.01885#18 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
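A small sketch of the margin-of-victory metric described above (illustrative; per-algorithm objective-value traces are assumed to be recorded):

```python
import numpy as np

def margin_of_victory(obj_values, algo):
    """Per-iteration margin of victory of `algo`.

    obj_values maps algorithm name -> array of objective values per
    iteration (lower is better). The margin is the best competing value
    minus the algorithm's own value: positive iff `algo` beats them all.
    """
    ours = np.asarray(obj_values[algo])
    others = np.stack([np.asarray(v) for k, v in obj_values.items()
                       if k != algo])
    return others.min(axis=0) - ours

traces = {"autonomous": [3.0, 1.0, 0.5],
          "gd": [3.0, 2.0, 1.5],
          "momentum": [3.0, 1.8, 0.9]}
print(margin_of_victory(traces, "autonomous"))   # [0.  0.8  0.4]
```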
1606.01781 | 19 | For the classification tasks in this work, the temporal resolution of the output of the convolution blocks is first down-sampled to a fixed dimension using k-max pooling. By these means, the network extracts the k most important features, independently of the position they appear in the sentence. The 512 × k resulting features are transformed into a single vector which is the input to a three layer fully connected classifier with ReLU hidden units and softmax outputs. The number of
[Figure 2 residue: two stacked layers, each (3, Temp Conv, 256) followed by Temporal BatchNorm and ReLU]
Figure 2: Convolutional block.
output neurons depends on the classification task, the number of hidden units is set to 2048, and k to 8 in all experiments. We do not use drop-out with the fully connected layers, but only temporal batch normalization after convolutional layers to regularize our network.
# Convolutional Block | 1606.01781#19 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
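A numpy sketch of temporal k-max pooling as used above (k = 8 in all experiments): it keeps the k largest activations per feature map while preserving their original temporal order:

```python
import numpy as np

def k_max_pooling(x, k=8):
    """x: (n_features, time) -> (n_features, k).

    For each feature map, keep the k highest activations, in the order
    in which they occur in the sequence (not sorted by value).
    """
    # indices of the k largest values along time, then re-sorted by position
    idx = np.argpartition(x, -k, axis=1)[:, -k:]
    idx = np.sort(idx, axis=1)
    return np.take_along_axis(x, idx, axis=1)

x = np.random.randn(512, 128)      # 512 feature maps, 128 time steps
print(k_max_pooling(x).shape)      # (512, 8)
```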
1606.01885 | 19 | algorithms at every iteration, a quantity we will refer to as "the margin of victory". This quantity is positive when the current algorithm is better than all other algorithms and negative otherwise. In Figure 1a, we plot the mean margin of victory of each algorithm at each iteration averaged over all objective functions in the test set. We find that conjugate gradient and L-BFGS diverge or oscillate in rare cases (on 6% of the objective functions in the test set), even though the autonomous algorithm, gradient descent and momentum do not. To reflect performance of these baselines in the majority of cases, we exclude the offending objective functions when computing the mean margin of victory.
As shown, the autonomous algorithm outperforms gradient descent, momentum and conjugate gradient at almost every iteration. The margin of victory of the autonomous algorithm is quite high in early iterations, indicating that the autonomous algorithm converges much faster than other algorithms. It is interesting to note that despite having seen only trajectories of length 40 at training time, the autonomous algorithm is able to generalize to much longer time horizons at test time. L-BFGS converges to slightly better optima than the autonomous algorithm and the momentum method. This is not surprising, as the objective functions are convex and L-BFGS is known to be a very good optimizer for convex optimization problems. | 1606.01885#19 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 20 | # Convolutional Block
Each convolutional block (see Figure 2) is a sequence of two convolutional layers, each one followed by a temporal BatchNorm (Ioffe and Szegedy, 2015) layer and a ReLU activation. The kernel size of all the temporal convolutions is 3, with padding such that the temporal resolution is preserved (or halved in the case of the convolutional pooling with stride 2, see below). Steadily increasing the depth of the network by adding more convolutional layers is feasible thanks to the limited number of parameters of very small convolutional filters in all layers. Different depths of the overall architecture are obtained by varying the number of convolutional blocks in between the pooling layers (see Table 2). Temporal batch normalization applies the same kind of regularization as batch normalization, except that the activations in a mini-batch are jointly normalized over temporal (instead of spatial) locations. So, for a mini-batch of size m and feature maps of temporal size s, the sums and standard deviations related to the BatchNorm algorithm are taken over |B| = m · s terms.
We explore three types of down-sampling between blocks K_i and K_{i+1} (Figure 1): | 1606.01781#20 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
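A numpy sketch of temporal batch normalization as described above — statistics are taken jointly over the batch and temporal axes, i.e. over |B| = m · s values per feature map (learned scale/shift and running statistics omitted for brevity):

```python
import numpy as np

def temporal_batch_norm(x, eps=1e-5):
    """x: (m, n_features, s) -> normalized tensor of the same shape.

    Mean and variance are taken over the batch (m) and temporal (s)
    axes jointly, i.e. over m * s terms per feature map.
    """
    mean = x.mean(axis=(0, 2), keepdims=True)
    var = x.var(axis=(0, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(32, 256, 128)   # m = 32, 256 feature maps, s = 128
print(temporal_batch_norm(x).shape)
```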
1606.01885 | 20 | We show the performance of each algorithm on two objective functions from the test set in Figures 1b and 1c. In Figure 1b, the autonomous algorithm converges faster than all other algorithms. In Figure 1c, the autonomous algorithm initially converges faster than all other algorithms but is later overtaken by L-BFGS, while remaining faster than all other optimizers. However, it eventually achieves the same objective value as L-BFGS, while the objective values achieved by gradient descent and momentum remain much higher.
# 4.2 Robust Linear Regression
Next, we consider the problem of linear regression using a robust loss function. One way to ensure robustness is to use an M-estimator for parameter estimation. A popular choice is the Geman-McClure estimator, which induces the following objective:
min_{w,b} (1/n) Σ_{i=1}^{n} (y_i − wᵀx_i − b)² / (c² + (y_i − wᵀx_i − b)²) | 1606.01885#20 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
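A numpy sketch of this robust loss (c = 1 as used in the paper; names illustrative):

```python
import numpy as np

def geman_mcclure_loss(w, b, X, y, c=1.0):
    """Mean Geman-McClure loss of the linear predictor (w, b)."""
    r = y - X @ w - b                      # residuals
    return float(np.mean(r ** 2 / (c ** 2 + r ** 2)))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3
print(geman_mcclure_loss(np.zeros(3), 0.0, X, y))
```

Each term saturates at 1 as the residual grows, which gives the estimator its robustness to outliers but makes the objective non-convex.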
1606.01781 | 21 | We explore three types of down-sampling between blocks K_i and K_{i+1} (Figure 1):
(i) The first convolutional layer of K_{i+1} has stride 2 (ResNet-like).
(ii) K_i is followed by a k-max pooling layer where k is such that the resolution is halved (Kalchbrenner et al., 2014).
(iii) K_i is followed by max-pooling with kernel size 3 and stride 2 (VGG-like).
All these types of pooling reduce the temporal resolution by a factor 2. At the final convolutional layer, the resolution is thus s_d.
Depth:              9    17   29   49
conv block 512:     2    4    4    6
conv block 256:     2    4    4    10
conv block 128:     2    4    10   16
conv block 64:      2    4    10   16
First conv. layer:  1    1    1    1
#params [in M]:     2.2  4.3  4.6  7.8 | 1606.01781#21 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
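A numpy sketch of the three down-sampling options (the convolutions themselves are omitted, so (i) is only emulated by stride-2 subsampling; output lengths are only approximately halved at the boundaries):

```python
import numpy as np

def halve_by_stride(x):
    """(i) ResNet-like: emulate a stride-2 first convolution by keeping
    every second time step (the convolution itself is not shown)."""
    return x[:, ::2]

def halve_by_k_max(x):
    """(ii) k-max pooling with k = ceil(s / 2): halves the resolution
    while preserving the temporal order of the surviving activations."""
    k = (x.shape[1] + 1) // 2
    idx = np.sort(np.argpartition(x, -k, axis=1)[:, -k:], axis=1)
    return np.take_along_axis(x, idx, axis=1)

def halve_by_max_pool(x):
    """(iii) VGG-like: max-pooling with kernel size 3 and stride 2."""
    s = x.shape[1]
    starts = np.arange(0, s - 2, 2)    # windows [t, t+3) with stride 2
    return np.stack([x[:, t:t + 3].max(axis=1) for t in starts], axis=1)

x = np.random.randn(64, 128)
print(halve_by_stride(x).shape, halve_by_k_max(x).shape,
      halve_by_max_pool(x).shape)
```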
where w ∈ R^d and b ∈ R denote the weight vector and bias respectively, x_i ∈ R^d and y_i ∈ R denote the feature vector and label of the ith instance and c ∈ R is a constant that modulates the shape of the loss function. For our experiments, we use c = 1 and d = 3. This loss function is not convex in either w or b.
As with the preceding section, each objective function in the training set is a function of the above form with realized values for x_i and y_i. The dataset for each objective function is generated by drawing 25 random samples from each one of four multivariate Gaussians, each of which has a random mean and the identity covariance matrix. For all points drawn from the same Gaussian, their labels are generated by projecting them along the same random vector, adding the same randomly generated bias and perturbing them with i.i.d. Gaussian noise.
Figure 2: (a) Mean margin of victory of each algorithm for optimizing the robust linear regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour. | 1606.01885#21 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 22 | Table 2: Number of conv. layers per depth. In this work, we have explored four depths for our networks: 9, 17, 29 and 49, which we define as being the number of convolutional layers. The depth of a network is obtained by summing the number of blocks with 64, 128, 256 and 512 filters, with each block containing two convolutional layers. In Figure 1, the network has 2 blocks of each type, resulting in a depth of 2 × (2 + 2 + 2 + 2) = 16. Adding the very first convolutional layer, this sums to a depth of 17 convolutional layers. The depth can thus be increased or decreased by adding or removing convolutional blocks with a certain number of filters. The best configurations we observed for depths 9, 17, 29 and 49 are described in Table 2. We also give the number of parameters of all convolutional layers.
# 4 Experimental evaluation
# 4.1 Tasks and data | 1606.01781#22 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
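The depth bookkeeping is easy to verify in code; a small sketch using the per-level layer counts of Table 2:

```python
# Convolutional layers per feature-map size (512, 256, 128, 64), from
# Table 2; each convolutional block contributes two of these layers.
configs = {
    9:  (2, 2, 2, 2),
    17: (4, 4, 4, 4),
    29: (4, 4, 10, 10),
    49: (6, 10, 16, 16),
}

for depth, layers in configs.items():
    computed = 1 + sum(layers)        # +1 for the very first conv layer
    assert computed == depth, (depth, computed)
    print(f"depth {depth}: {layers} conv layers per level -> {computed}")
```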
1606.01885 | 22 | The autonomous algorithm is trained on a set of 120 objective functions. We evaluate it on 100 randomly generated objective functions using the same metric as above. As shown in Figure 2a, the autonomous algorithm outperforms all hand-engineered algorithms except at early iterations. While it dominates gradient descent, conjugate gradient and L-BFGS at all times, it does not make progress as quickly as the momentum method initially. However, after around 30 iterations, it is able to close the gap and surpass the momentum method. On this optimization problem, both conjugate gradient and L-BFGS diverge quickly. Interestingly, unlike in the previous experiment, L-BFGS no longer performs well, which could be caused by non-convexity of the objective functions.
Figures 2b and 2c show performance on objective functions from the test set. In Figure 2b, the autonomous optimizer not only converges the fastest, but also reaches a better optimum than all other algorithms. In Figure 2c, the autonomous algorithm converges the fastest and is able to avoid most of the oscillations that hamper gradient descent and momentum after reaching the optimum.
# 4.3 Neural Net Classifier | 1606.01885#22 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 23 | # 4 Experimental evaluation
# 4.1 Tasks and data
In the computer vision community, the availability of large data sets for object detection and image classification has fueled the development of new architectures. In particular, this made it possible to compare many different architectures and to show the benefit of very deep convolutional networks. We present our results on eight freely available large-scale data sets introduced by (Zhang et al., 2015) which cover several classification tasks such as sentiment analysis, topic classification or news categorization (see Table 3). The number of training examples varies from 120k up to 3.6M, and the number of classes is comprised between 2 and 14. This is considerably lower than in computer vision (e.g. 1 000 classes for ImageNet). | 1606.01781#23 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 23 | # 4.3 Neural Net Classifier
Finally, we train an autonomous algorithm to train a small neural net classifier. We consider a two-layer neural net with ReLU activation on the hidden units and softmax activation on the output units. We use the cross-entropy loss combined with ℓ2 regularization on the weights. To train the model, we need to optimize the following objective:
min_{W,U,b,c} −(1/n) Σ_{i=1}^{n} log[ exp((U max(Wx_i + b, 0) + c)_{y_i}) / Σ_j exp((U max(Wx_i + b, 0) + c)_j) ] + (λ/2)‖W‖_F² + (λ/2)‖U‖_F²
where W ∈ R^{h×d}, b ∈ R^h, U ∈ R^{p×h}, c ∈ R^p denote the first-layer and second-layer weights and biases, x_i ∈ R^d and y_i ∈ {1, . . . , p} denote the input and target class label of the ith instance, λ denotes the coefficient on the regularizers and (v)_j denotes the jth component of v. For our experiments, we use λ = 0.0005 and d = h = p = 2. The error surface is known to have complex geometry and multiple local optima, making this a challenging optimization problem. | 1606.01885#23 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
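A numpy sketch of this training objective (λ = 0.0005 and d = h = p = 2 as in the paper; labels are taken 0-indexed here, where the paper writes them as 1..p, and a log-sum-exp form is used for numerical stability):

```python
import numpy as np

def nn_loss(W, b, U, c, X, y, lam=0.0005):
    """Cross-entropy of a two-layer ReLU net plus L2 weight decay."""
    hidden = np.maximum(X @ W.T + b, 0.0)        # ReLU layer, shape (n, h)
    logits = hidden @ U.T + c                    # shape (n, p)
    m = logits.max(axis=1, keepdims=True)        # log-sum-exp for stability
    log_z = m[:, 0] + np.log(np.exp(logits - m).sum(axis=1))
    log_probs = logits[np.arange(len(y)), y] - log_z
    reg = 0.5 * lam * (np.sum(W ** 2) + np.sum(U ** 2))
    return float(-np.mean(log_probs) + reg)

d = h = p = 2
rng = np.random.default_rng(0)
W, b = 0.1 * rng.standard_normal((h, d)), np.zeros(h)
U, c = 0.1 * rng.standard_normal((p, h)), np.zeros(p)
X = rng.standard_normal((100, d))
y = rng.integers(0, p, size=100)
print(nn_loss(W, b, U, c, X, y))
```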
1606.01781 | 24 | Data set                 #Train    #Test   #Classes   Classification Task
AG's news                 120k      7.6k    4          English news categorization
Sogou news                450k      60k     5          Chinese news categorization
DBPedia                   560k      70k     14         Ontology classification
Yelp Review Polarity      560k      38k     2          Sentiment analysis
Yelp Review Full          650k      50k     5          Sentiment analysis
Yahoo! Answers            1 400k    60k     10         Topic classification
Amazon Review Full        3 000k    650k    5          Sentiment analysis
Amazon Review Polarity    3 600k    400k    2          Sentiment analysis
Table 3: Large-scale text classification data sets used in our experiments. See (Zhang et al., 2015) for a detailed description. | 1606.01781#24 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 24 | The training set consists of 80 objective functions, each of which corresponds to the objective for training a neural net on a different dataset. Each dataset is generated by drawing four multivariate Gaussians with random means and covariances and sampling 25 points from each. The points from the same Gaussian are assigned the same random label of either 0 or 1. We make sure that not all of the points in the dataset are assigned the same label.
We evaluate the autonomous algorithm in the same manner as above. As shown in Figure 3a, the autonomous algorithm significantly outperforms all other algorithms. In particular, as evidenced by the sizeable and sustained gap between the margin of victory of the autonomous optimizer and that of the momentum method, the autonomous optimizer is able to reach much better optima and is less prone to getting trapped in local optima compared to other methods. This gap is also larger compared to that exhibited in previous sections, suggesting that hand-engineered algorithms are more sub-optimal on
Figure 3: (a) Mean margin of victory of each algorithm for training neural net classifiers. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour. | 1606.01885#24 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 25 | Table 3: Large-scale text classification data sets used in our experiments. See (Zhang et al., 2015) for a detailed description.
This has the consequence that each example induces less gradient information, which may make it harder to train large architectures. It should also be noted that some of the tasks are very ambiguous, in particular sentiment analysis, for which it is difficult to clearly associate fine-grained labels. There are equal numbers of examples in each class for both training and test sets. The reader is referred to (Zhang et al., 2015) for more details on the construction of the data sets. Table 4 summarizes the best published results on these corpora that we are aware of. We do not use "Thesaurus data augmentation" or any other preprocessing, except lower-casing. Nevertheless, we still outperform the best convolutional neural networks of (Zhang et al., 2015) for all data sets. The main goal of our work is to show that it is possible and beneficial to train very deep convolutional networks as text encoders. Data augmentation may improve our results even further. We will investigate this in future research.
# 4.2 Common model settings | 1606.01781#25 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 25 | challenging optimization problems and so the potential for improvement from learning the algorithm is greater in such settings. Due to non-convexity, conjugate gradient and L-BFGS often diverge.
Performance on examples of objective functions from the test set is shown in Figures 3b and 3c. As shown, the autonomous optimizer is able to reach better optima than all other methods and largely avoids oscillations that other methods suffer from.
# 5 Conclusion
We presented a method for learning a better optimization algorithm. We formulated this as a reinforcement learning problem, in which any optimization algorithm can be represented as a policy. Learning an optimization algorithm then reduces to finding the optimal policy. We used guided policy search for this purpose and trained autonomous optimizers for different classes of convex and non-convex objective functions. We demonstrated that the autonomous optimizer converges faster and/or reaches better optima than hand-engineered optimizers. We hope autonomous optimizers learned using the proposed approach can be used to solve various common classes of optimization problems more quickly and help accelerate the pace of innovation in science and engineering.
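As a toy illustration of the optimizer-as-policy view (not the paper's exact parameterization; the state features and the baseline policy below are assumptions):

```python
import numpy as np

def rollout(policy, grad_fn, x0, n_steps=100):
    # Optimizer as policy: the state summarizes the current and previous
    # gradient, and the action is the parameter update step.
    x = x0.astype(float).copy()
    prev_g = np.zeros_like(x)
    for _ in range(n_steps):
        g = grad_fn(x)
        state = np.concatenate([g, prev_g])   # toy feature vector (assumed)
        x = x + policy(state)                 # apply the action
        prev_g = g
    return x

# Plain gradient descent is one fixed point in this policy class:
gd_policy = lambda s: -0.1 * s[: s.size // 2]
x_min = rollout(gd_policy, grad_fn=lambda x: 2 * x, x0=np.array([3.0, -2.0]))
```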
# References | 1606.01885#25 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 26 | # 4.2 Common model settings
rate of 0.01 and momentum of 0.9. We follow the same training procedure as in (Zhang et al., 2015). We initialize our convolutional layers following (He et al., 2015). One epoch took from 24 minutes to 2h45 for depth 9, and from 50 minutes to 7h (on the largest datasets) for depth 29. It took between 10 and 15 epochs to converge. The implementation is done using Torch 7. All experiments are performed on a single NVidia K40 GPU. Unlike previous research on the use of ConvNets for text processing, we use temporal batch norm without dropout.
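The paper's implementation uses Torch 7; a PyTorch-style sketch of the stated training settings (He initialization of the convolutions, SGD with learning rate 0.01 and momentum 0.9, temporal batch norm) might look like this:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(                         # a stand-in temporal conv block
    nn.Conv1d(16, 64, kernel_size=3, padding=1),
    nn.BatchNorm1d(64),                        # temporal batch norm, no dropout
    nn.ReLU(),
)

def he_init(m):
    if isinstance(m, nn.Conv1d):               # initialization of (He et al., 2015)
        nn.init.kaiming_normal_(m.weight)

model.apply(he_init)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```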
# 4.3 Experimental results
In this section, we evaluate several configurations of our model, namely three different depths and three different pooling types (see Section 3). Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with small temporal convolution filters with different types of pooling, which shows that a significant improvement on the state-of-the-art configurations can be achieved on text classification tasks by pushing the depth to 29 convolutional layers. | 1606.01781#26 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 26 | # References
[1] Jonathan Baxter, Rich Caruana, Tom Mitchell, Lorien Y Pratt, Daniel L Silver, and Sebastian Thrun. NIPS 1995 workshop on learning to learn: Knowledge consolidation and transfer in inductive systems. https://web.archive.org/web/20000618135816/http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transfer.html, 1995. Accessed: 2015-12-05.
[2] Yoshua Bengio. Gradient-based optimization of hyperparameters. Neural Computation, 12(8):1889–1900, 2000.
[3] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.
[4] James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.
[5] Pavel Brazdil, Christophe Giraud Carrier, Carlos Soares, and Ricardo Vilalta. Metalearning: applications to data mining. Springer Science & Business Media, 2008. | 1606.01885#26 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 27 | The following settings have been used in all our experiments. They were found to be best in initial experiments. Following (Zhang et al., 2015), all processing is done at the character level, which is the atomic representation of a sentence, same as pixels for images. The dictionary consists of the following characters "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'"/|_#$%^&*~'+=<>()[]{}" plus a special padding, space and unknown token, which add up to a total of 69 tokens (see the encoding sketch after this record). The input text is padded to a fixed size of 1014; longer texts are truncated. The character embedding is of size 16. Training is performed with SGD, using a mini-batch of size 128, an initial learning | 1606.01781#27 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
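A sketch of the character quantization described in the chunk above. The printed dictionary is partly garbled in this copy, so ALPHABET below is a best-effort approximation of the 69-token inventory:

```python
# Best-effort reconstruction of the character inventory; a few symbols are
# uncertain because the printed dictionary is garbled in this copy.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/|_#$%^&*~'+=<>()[]{}"
PAD, SPACE, UNK = 0, 1, 2                      # the three special tokens
char2idx = {ch: i + 3 for i, ch in enumerate(ALPHABET)}
char2idx[" "] = SPACE

def encode(text, length=1014):
    """Lower-case, map characters to ids, truncate or pad to a fixed 1014."""
    ids = [char2idx.get(ch, UNK) for ch in text.lower()[:length]]
    return ids + [PAD] * (length - len(ids))
```

An embedding table of size 16 over these roughly 69 ids would then feed the first convolutional layer.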
1606.01885 | 27 | [6] Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[7] Allen Cypher and Daniel Conrad Halbert. Watch what I do: programming by demonstration. MIT press, 1993.
[8] Justin Domke. Generic methods for optimization-based modeling. In AISTATS, volume 22, pages 318–326, 2012.
[9] Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Initializing bayesian hyperparameter optimization via meta-learning. In AAAI, pages 1128–1135, 2015.
[10] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015.
[11] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[12] Weiqiao Han, Sergey Levine, and Pieter Abbeel. Learning compound multi-step controllers under unknown dynamics. In International Conference on Intelligent Robots and Systems, 2015. | 1606.01885#27 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 28 | Our deep architecture works well on big data sets in particular, even for small depths. Table 5 shows the test errors for depths 9, 17 and 29 and for each type of pooling: convolution with stride 2, k-max pooling and temporal max-pooling. For the smallest depth we use (9 convolutional layers), we see that our model already performs better than Zhang's convolutional baselines (which include 6 convolutional layers and have a different architecture) on the biggest data sets: Yelp Full, Yahoo Answers and Amazon Full and Polarity. The most important decrease in classification error can be observed on the largest data set, Amazon Full, which has more than 3 million training samples.
Corpus:   Method     Author    Error
AG        n-TFIDF    [Zhang]    7.64
Sogou     n-TFIDF    [Zhang]    2.81
DBP.      n-TFIDF    [Zhang]    1.31
Yelp F.   Conv       [Zhang]   37.95*
Yah. A.   Conv+RNN   [Xiao]    28.26   ([Yang]: 24.2)
Amz. F.   Conv       [Zhang]   40.43*  ([Yang]: 36.4)
Amz. P.   Conv       [Zhang]    4.93* | 1606.01781#28 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 28 | [12] Weiqiao Han, Sergey Levine, and Pieter Abbeel. Learning compound multi-step controllers under unknown dynamics. In International Conference on Intelligent Robots and Systems, 2015.
[13] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization, pages 507–523. Springer, 2011.
[14] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.
[15] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
[16] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.
[17] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014. | 1606.01885#28 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01885 | 29 | [18] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[19] Sergey Levine, Nolan Wagener, and Pieter Abbeel. Learning contact-rich manipulation skills with guided policy search. arXiv preprint arXiv:1501.05611, 2015.
[20] Percy Liang, Michael I Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 639â646, 2010.
[21] Dougal Maclaurin, David Duvenaud, and Ryan P Adams. Gradient-based hyperparameter optimization through reversible learning. arXiv preprint arXiv:1502.03492, 2015.
[22] Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of bayesian methods for seeking the extremum. Towards global optimization, 2(117-129):2, 1978.
[23] Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015. | 1606.01885#29 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 30 | [Table 5 layout: one row per depth (9, 17, 29) and pooling type (Convolution, KMaxPooling, MaxPooling); one column per data set (AG, Sogou, DBP., Yelp P., Yelp F., Yah. A., Amz. F., Amz. P.). Most individual error values are not recoverable from this copy.]
Table 5: Testing error of our models on the 8 data sets. No data preprocessing or augmentation is used.
We also observe that for a small depth, temporal max-pooling works best on all data sets.
According to our experiments, it seems to hurt performance to perform this type of max operation at intermediate layers (with the exception of the smallest data sets). | 1606.01781#30 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01885 | 30 | [23] Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
[24] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
[25] Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004–2012, 2013.
[26] Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
[27] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002.
[28] Greg Yang. Lie access neural turing machine. arXiv preprint arXiv:1602.08671, 2016.
[29] Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015. | 1606.01885#30 | Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value. | http://arxiv.org/pdf/1606.01885 | Ke Li, Jitendra Malik | cs.LG, cs.AI, math.OC, stat.ML | 9 pages, 3 figures | null | cs.LG | 20160606 | 20160606 | [
{
"id": "1505.00521"
},
{
"id": "1511.07275"
},
{
"id": "1511.06279"
},
{
"id": "1602.08671"
},
{
"id": "1511.08228"
},
{
"id": "1502.03492"
},
{
"id": "1501.05611"
},
{
"id": "1504.00702"
},
{
"id": "1509.06113"
},
{
"id": "1511.06392"
}
] |
1606.01781 | 31 | According to our experiments, it seems to hurt performance to perform this type of max operation at intermediate layers (with the exception of the smallest data sets).
Depth improves performance. As we increase the network depth to 17 and 29, the test errors decrease on all data sets, for all types of pooling (with 2 exceptions for 48 comparisons). Going from depth 9 to 17 and 29 for Amazon Full reduces the error rate by 1% absolute. Since the test set is composed of 650K samples, 6.5K more test samples have been classified correctly. These improvements, especially on large data sets, are significant and show that increasing the depth is useful for text processing. Overall, compared to previous state-of-the-art, our best architecture with depth 29 and max-pooling has a test error of 37.0% compared to 40.43%. This represents a gain of 3.43% absolute accuracy. The significant improvements which we obtain on all data sets compared to Zhang's convolutional models do not include any data augmentation technique.
Max-pooling performs better than other pooling types. In terms of pooling, we can also see that max-pooling performs best overall, very close to convolutions with stride 2, but both are significantly superior to k-max pooling. | 1606.01781#31 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 32 | Both pooling mechanisms perform a max operation which is local and limited to three consecutive tokens, while k-max pooling considers the whole sentence at once (a code sketch follows this record). According to our experiments, it seems to hurt performance to perform this type of max operation at intermediate layers (with the exception of the smallest data sets). Our models outperform state-of-the-art ConvNets. We obtain state-of-the-art results for all data sets, except AG's news and Sogou news, which are the smallest ones. However, with our very deep architecture, we get closer to the state-of-the-art, which is ngrams TF-IDF for these data sets, and significantly surpass convolutional models presented in (Zhang et al., 2015). As observed in previous work, differences in accuracy between shallow (TF-IDF) and deep (convolutional) models are more significant on large data sets, but we still perform well on small data sets while getting closer to the non-convolutional state-of-the-art results on small data sets. The very deep models even perform as well as ngrams and ngrams-TF-IDF respectively on the sentiment analysis task of Yelp Review Polarity and the ontology classification task of the DBPedia data | 1606.01781#32 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
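A PyTorch-style sketch contrasting the two pooling mechanisms discussed in the chunk above: a local max over three consecutive tokens versus k-max pooling over the whole sentence (keeping the k largest activations per feature map in their original order):

```python
import torch
import torch.nn.functional as F

x = torch.randn(128, 64, 500)          # (batch, feature maps, sequence length)

# Local max over 3 consecutive positions, halving the temporal resolution:
local = F.max_pool1d(x, kernel_size=3, stride=2)

# k-max pooling: the k largest values per feature map over the whole
# sentence, kept in their original order:
def kmax_pool1d(x, k):
    idx = x.topk(k, dim=2).indices.sort(dim=2).values
    return x.gather(2, idx)

km = kmax_pool1d(x, k=8)               # shape (128, 64, 8)
```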
1606.01781 | 33 | IDF respectively on the sentiment analysis task of Yelp Review Polarity and the ontology classification task of the DBPedia data set. Results of Yang et al. (only on Yahoo Answers and Amazon Full) outperform our model on the Yahoo Answers dataset, which is probably linked to the fact that their model is task-specific to datasets whose samples contain multiple sentences, like (question, answer) pairs. They use a hierarchical attention mechanism that applies very well to documents (with multiple sentences). | 1606.01781#33 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 34 | Going even deeper degrades accuracy. Shortcut connections help reduce the degradation. As described in (He et al., 2016a), the gain in accuracy due to the increase of the depth is limited when using standard ConvNets. When the depth increases too much, the accuracy of the model gets saturated and starts degrading rapidly. This degradation problem was attributed to the fact that very deep models are harder to optimize. The gradients which are backpropagated through the very deep networks vanish, and SGD with momentum is not able to converge to a correct minimum of the loss function. To overcome this degradation of the model, the ResNet model introduced shortcut connections between convolutional blocks that allow the gradients to flow more easily in the network (He et al., 2016a). | 1606.01781#34 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 35 | We evaluate the impact of shortcut connections by increasing the number of convolutions to 49 layers. We present an adaptation of the ResNet model to the case of temporal convolutions for text (see Figure 1). Table 6 shows the evolution of the test errors on the Yelp Review Full data set with or without shortcut connections. When looking at the column "without shortcut", we observe the same degradation problem as in the original ResNet article: when going from 29 to 49 layers, the test error rate increases from 35.28 to 37.41 (while the training error goes up from 29.57 to 35.54). When using shortcut connections, we observe improved results when the network has 49 layers: both the training and test errors go down and the network is less prone to underfitting than it was without shortcut connections.
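A PyTorch-style sketch of a temporal convolutional block with an identity shortcut, in the spirit of the ResNet adaptation evaluated here; the placement of batch norm and the final ReLU are assumptions, as the original implementation is in Torch 7:

```python
import torch.nn as nn

class TemporalResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # The identity shortcut lets gradients bypass the convolutional body.
        return self.relu(self.body(x) + x)
```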
While shortcut connections give better results when the network is very deep (49 layers), we were not able to reach state-of-the-art results with them. We plan to further explore adaptations of residual networks to temporal convolutions, as we think this is a milestone for going deeper in NLP. Residual units (He et al., 2016a) better adapted to the text processing task may help train even deeper models for text processing; this is left for future research. | 1606.01781#35 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 36 | Exploring these models on text classification tasks with more classes sounds promising. Note that one of the most important differences between the classification tasks discussed in this work and ImageNet is that the latter deals with 1000 classes, and thus much more information is back-propagated to the network through the gradients. Exploring the impact of the depth of temporal convolutional models on categorization tasks with hundreds or thousands of classes would be an interesting challenge and is left for future research.
depth   without shortcut   with shortcut
  9          37.63              40.27
 17          36.10              39.18
 29          35.28              36.01
 49          37.41              36.15
Table 6: Test error on the Yelp Full data set for all depths, with or without residual connections.
# 5 Conclusion | 1606.01781#36 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 37 | # 5 Conclusion
We have presented a new architecture for NLP which follows two design principles: 1) operate at the lowest atomic representation of text, i.e. characters, and 2) use a deep stack of local operations, i.e. convolutions and max-pooling of size 3, to learn a high-level hierarchical representation of a sentence. This architecture has been evaluated on eight freely available large-scale data sets and we were able to show that increasing the depth up to 29 convolutional layers steadily improves performance. Our models are much deeper than previously published convolutional neural networks and they outperform those approaches on all data sets. To the best of our knowledge, this is the first time that the "benefit of depths" was shown for convolutional neural networks in NLP. | 1606.01781#37 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 38 | Even though text follows human-defined rules and images can be seen as raw signals of our environment, images and small texts have similar properties. Texts are also compositional for many languages. Characters combine to form n-grams, stems, words, phrases, sentences etc. These similar properties make the comparison between computer vision and natural language processing very profitable, and we believe future research should invest in making text processing models deeper. Our work is a first attempt towards this goal.
In this paper, we focus on the use of very deep convolutional neural networks for sentence classification tasks. Applying similar ideas to other sequence processing tasks, in particular neural machine translation, is left for future research. It needs to be investigated whether these also benefit from having deeper convolutional encoders.
# References
Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In NIPS, volume 13, pages 932–938, Vancouver, British Columbia, Canada. | 1606.01781#38 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 39 | Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155.
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, pages 160–167, Helsinki, Finland.
Ronan Collobert, Jason Weston, Léon Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, pages 2493–2537.
Cícero Nogueira dos Santos and Maira Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In COLING, pages 69–78, Dublin, Ireland.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, Santiago, Chile. | 1606.01781#39 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 40 | Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, Las Vegas, Nevada, USA.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645, Amsterdam, Netherlands. Springer.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448–456, Lille, France.
Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 655–665, Baltimore, Maryland, USA.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. | 1606.01781#40 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 41 | Doha, Qatar. Association for Computational Linguistics.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, Lake Tahoe, California, USA.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
David G Lowe. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110.
Pedro HO Pinheiro and Ronan Collobert. 2014. Recurrent convolutional neural networks for scene labeling. In ICML, pages 82–90, Beijing, China.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL, pages 1715–1725.
Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In ICLR, San Diego, California, USA. | 1606.01781#41 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 42 | Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In ICLR, San Diego, California, USA.
Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161, Edinburgh, UK. Association for Computational Linguistics.
Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Interspeech, pages 194–197, Portland, Oregon, USA.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, Montreal, Canada.
Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. | 1606.01781#42 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01781 | 43 | Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL-HLT, pages 1480–1489, San Diego, California, USA.
Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833, Zurich, Switzerland. Springer.
Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS, pages 649–657, Montreal, Canada. | 1606.01781#43 | Very Deep Convolutional Networks for Text Classification | The dominant approach for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing. | http://arxiv.org/pdf/1606.01781 | Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann Lecun | cs.CL, cs.LG, cs.NE | 10 pages, EACL 2017, camera-ready | null | cs.CL | 20160606 | 20170127 | [
{
"id": "1502.01710"
}
] |
1606.01540 | 0 | arXiv:1606.01540v1 [cs.LG] 5 Jun 2016
# OpenAI Gym
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba OpenAI
# Abstract
OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.
# 1 Introduction
Reinforcement learning (RL) is the branch of machine learning that is concerned with making sequences of decisions. RL has a rich mathematical theory and has found a variety of practical applications [1]. Recent advances that combine deep learning with reinforcement learning have led to a great deal of excitement in the field, as it has become evident that general algorithms such as policy gradients and Q-learning can achieve good performance on difficult problems, without problem-specific engineering [2, 3, 4]. | 1606.01540#0 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
1606.01541 | 0 | arXiv:1606.01541v4 [cs.CL] 29 Sep 2016
# Deep Reinforcement Learning for Dialogue Generation
Jiwei Li1, Will Monroe1, Alan Ritter2, Michel Galley3, Jianfeng Gao3 and Dan Jurafsky1 1Stanford University, Stanford, CA, USA 2Ohio State University, OH, USA 3Microsoft Research, Redmond, WA, USA {jiweil,wmonroe4,jurafsky}@stanford.edu, [email protected] {mgalley,jfgao}@microsoft.com
# Abstract | 1606.01541#0 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01540 | 1 | To build on recent progress in reinforcement learning, the research community needs good benchmarks on which to compare algorithms. A variety of benchmarks have been released, such as the Arcade Learning Environment (ALE) [5], which exposed a collection of Atari 2600 games as reinforcement learning problems, and recently the RLLab benchmark for continuous control [6], to which we refer the reader for a survey on other RL benchmarks, including [7, 8, 9, 10, 11]. OpenAI Gym aims to combine the best elements of these previous benchmark collections, in a software package that is maximally convenient and accessible. It includes a diverse collection of tasks (called environments) with a common interface, and this collection will grow over time. The environments are versioned in a way that will ensure that results remain meaningful and reproducible as the software is updated.
Alongside the software library, OpenAI Gym has a website (gym.openai.com) where one can find scoreboards for all of the environments, showcasing results submitted by users. Users are encouraged to provide links to source code and detailed instructions on how to reproduce their results.
# 2 Background | 1606.01540#1 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
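The chunk above describes Gym's common, versioned environment interface. A minimal sketch of exercising that interface, assuming the classic gym package and the CartPole-v0 environment (both illustrative choices, not mandated by the text):

import gym

env = gym.make("CartPole-v0")    # environment names carry a version suffix
print(env.action_space)           # e.g. Discrete(2)
print(env.observation_space)      # e.g. Box(4,)
ob = env.reset()                  # first observation of a new episode
ob, reward, done, info = env.step(env.action_space.sample())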
1606.01541 | 1 | # Abstract
Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be short-sighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity, coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.
context when mapping between consecutive dialogue turns (Sordoni et al., 2015) in a way not possible, for example, with MT-based dialogue models (Ritter et al., 2011). | 1606.01541#1 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01540 | 2 | # 2 Background
Reinforcement learning assumes that there is an agent that is situated in an environment. Each step, the agent takes an action, and it receives an observation and reward from the environment. An RL algorithm seeks to maximize some measure of the agent's total reward, as the agent interacts with the environment. In the RL literature, the environment is formalized as a partially observable Markov decision process (POMDP) [12]. OpenAI Gym focuses on the episodic setting of reinforcement learning, where the agent's experience is broken down into a series of episodes. In each episode, the agent's initial state is randomly sampled from a distribution, and the interaction proceeds until the environment reaches a terminal state. The goal in episodic reinforcement learning is to maximize the expectation of total reward per episode, and to achieve a high level of performance in as few episodes as possible.
The following code snippet shows a single episode with 100 timesteps. It assumes that there is an object called agent, which takes in the observation at each timestep, and an object called env, which is the
environment. OpenAI Gym does not include an agent class or specify what interface the agent should use; we just include an agent here for demonstration purposes. | 1606.01540#2 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
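The episodic objective described in the chunk above can be written compactly; a sketch in conventional notation (the symbols are ours, not the whitepaper's):

$$ \max_\pi \; \mathbb{E}\Big[\sum_{t=0}^{T-1} r_t\Big], $$

where the expectation is over the initial state drawn from the start-state distribution, actions sampled from the policy $\pi$ given the observations, and the environment's transitions, with $T$ the (random) length of the episode.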
1606.01541 | 2 | context when mapping between consecutive dialogue turns (Sordoni et al., 2015) in a way not possible, for example, with MT-based dialogue models (Ritter et al., 2011).
Despite the success of SEQ2SEQ models in dialogue generation, two problems emerge: First, SEQ2SEQ models are trained by predicting the next dialogue turn in a given conversational context using the maximum-likelihood estimation (MLE) objective function. However, it is not clear how well MLE approximates the real-world goal of chatbot development: teaching a machine to converse with humans, while providing interesting, diverse, and informative feedback that keeps users engaged. One concrete example is that SEQ2SEQ models tend to generate highly generic responses such as "I don't know" regardless of the input (Sordoni et al., 2015; Serban et al., 2016; Li et al., 2016a). This can be ascribed to the high frequency of generic responses found in the training set and their compatibility with a diverse range of conversational contexts. Yet "I don't know" is apparently not a good action to take, since it closes the conversation down.
# Introduction | 1606.01541#2 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
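The MLE objective mentioned in the chunk above is standard sequence cross-entropy; in conventional notation (symbols chosen here for illustration), for a source turn $s$ and a target turn $t = (y_1, \ldots, y_T)$:

$$ \mathcal{L}_{\mathrm{MLE}} = -\sum_{i=1}^{T} \log p(y_i \mid y_1, \ldots, y_{i-1}, s). $$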
1606.01540 | 3 | environment. OpenAI Gym does not include an agent class or specify what interface the agent should use; we just include an agent here for demonstration purposes.
ob0 = env.reset()       # sample environment state, return first observation
a0 = agent.act(ob0)     # agent chooses first action
ob1, rew0, done0, info0 = env.step(a0)  # environment returns observation,
                                        # reward, and boolean flag indicating
                                        # if the episode is complete.
a1 = agent.act(ob1)
ob2, rew1, done1, info1 = env.step(a1)
...
a99 = agent.act(ob99)
ob100, rew99, done99, info99 = env.step(a99)  # done99 == True => terminal
# 3 Design Decisions
The design of OpenAI Gym is based on the authors' experience developing and comparing reinforcement learning algorithms, and our experience using previous benchmark collections. Below, we will summarize some of our design decisions. | 1606.01540#3 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
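A concrete, runnable version of the 100-timestep episode sketched in the chunk above, with a random policy standing in for the unspecified agent object (the environment name is an illustrative choice):

import gym

class RandomAgent:
    # Stands in for the abstract `agent` object in the snippet above.
    def __init__(self, action_space):
        self.action_space = action_space

    def act(self, observation):
        return self.action_space.sample()   # ignore the observation

env = gym.make("CartPole-v0")
agent = RandomAgent(env.action_space)
ob = env.reset()
for t in range(100):
    ob, reward, done, info = env.step(agent.act(ob))
    if done:                # terminal state reached before 100 timesteps
        ob = env.reset()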
1606.01541 | 3 | # Introduction
Neural response generation (Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016a; Wen et al., 2015; Yao et al., 2015; Luan et al., 2016; Xu et al., 2016; Wen et al., 2016; Li et al., 2016b; Su et al., 2016) is of growing interest. The LSTM sequence-to-sequence (SEQ2SEQ) model (Sutskever et al., 2014) is one type of neural generation model that maximizes the probability of generating a response given the previous dialogue turn. This approach enables the incorporation of rich
Another common problem, illustrated in the two sample conversations on the left of Table 1, is that the system becomes stuck in an infinite loop of repetitive responses. This is due to MLE-based SEQ2SEQ models' inability to account for repetition. In example 2 (bottom left), the dialogue falls into an infinite loop after three turns, with both agents generating dull, generic utterances like i don't know what you are talking about and you don't know what you are saying. Looking at the entire conversation, utterance (4) turns out to be a bad action to take because it offers no way of continuing the conversation.1 | 1606.01541#3 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
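The summary above lists informativity (non-repetitive turns) among the rewarded properties. One simple way to score that property (a sketch, not the paper's exact reward) is to penalize similarity between consecutive turns produced by the same agent, e.g. via cosine similarity of their sentence vectors:

import numpy as np

def repetition_penalty(prev_vec, cur_vec):
    # Negative cosine similarity between representations of consecutive
    # turns; near-duplicate turns score close to -1. The sentence vectors
    # are assumed to come from the dialogue model's encoder.
    cos = np.dot(prev_vec, cur_vec) / (
        np.linalg.norm(prev_vec) * np.linalg.norm(cur_vec) + 1e-8)
    return -cos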
1606.01540 | 4 | The design of OpenAI Gym is based on the authors' experience developing and comparing reinforcement learning algorithms, and our experience using previous benchmark collections. Below, we will summarize some of our design decisions.
Environments, not agents. Two core concepts are the agent and the environment. We have chosen to only provide an abstraction for the environment, not for the agent. This choice was to maximize convenience for users and allow them to implement different styles of agent interface. First, one could imagine an "online learning" style, where the agent takes (observation, reward, done) as an input at each timestep and performs learning updates incrementally. In an alternative "batch update" style, an agent is called with observation as input, and the reward information is collected separately by the RL algorithm, and later it is used to compute an update. By only specifying the environment interface, we allow users to write their agents with either of these styles. | 1606.01540#4 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
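A sketch of the two agent styles contrasted in the chunk above (class names and the random policy are illustrative; Gym itself deliberately defines no agent class):

import random

class OnlineAgent:
    # "Online learning" style: consumes (observation, reward, done) each
    # timestep and performs an incremental learning update.
    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.total_reward = 0.0

    def act(self, observation, reward, done):
        self.total_reward += reward            # stand-in for a real update
        return random.randrange(self.n_actions)

class BatchAgent:
    # "Batch update" style: only maps observations to actions; the RL
    # algorithm collects rewards separately and applies updates later.
    def __init__(self, n_actions):
        self.n_actions = n_actions

    def act(self, observation):
        return random.randrange(self.n_actions)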
1606.01540 | 5 | Emphasize sample complexity, not just final performance. The performance of an RL algorithm on an environment can be measured along two axes: first, the final performance; second, the amount of time it takes to learn, i.e., the sample complexity. To be more specific, final performance refers to the average reward per episode, after learning is complete. Learning time can be measured in multiple ways; one simple scheme is to count the number of episodes before a threshold level of average performance is exceeded. This threshold is chosen per-environment in an ad-hoc way, for example, as 90% of the maximum performance achievable by a very heavily trained agent. Both final performance and sample complexity are very interesting; however, arbitrary amounts of computation can be used to boost final performance, making it a comparison of computational resources rather than algorithm quality. | 1606.01540#5 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
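A sketch of the simple learning-time scheme described above, counting episodes until a threshold level of average performance is exceeded (the window size and threshold are illustrative):

def episodes_to_threshold(episode_rewards, threshold, window=100):
    # Return the episode count at which the trailing-window average reward
    # first exceeds `threshold`, or None if it never does.
    for i in range(window, len(episode_rewards) + 1):
        if sum(episode_rewards[i - window:i]) / window >= threshold:
            return i
    return None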
1606.01541 | 5 | Baseline mutual information model (Li et al. 2015) A: Where are you going? (1) B: I'm going to the restroom. (2) A: See you later. (3) B: See you later. (4) A: See you later. (5) B: See you later. (6) ... ... A: how old are you? (1) B: I'm 16. (2) A: 16? (3) B: I don't know what you are talking about. (4) A: You don't know what you are saying. (5) B: I don't know what you are talking about. (6) A: You don't know what you are saying. (7) ... Proposed reinforcement learning model A: Where are you going? (1) B: I'm going to the police station. (2) A: I'll come with you. (3) B: No, no, no, no, you're not going anywhere. (4) A: Why? (5) B: I need you to stay here. (6) A: I don't know what you are talking about. (7) ... A: How old are you? (1) B: I'm 16. Why are you | 1606.01541#5 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01540 | 6 | Encourage peer review, not competition. The OpenAI Gym website allows users to compare the performance of their algorithms. One of its inspirations is Kaggle, which hosts a set of machine learning contests with leaderboards. However, the aim of the OpenAI Gym scoreboards is not to create a competition, but rather to stimulate the sharing of code and ideas, and to be a meaningful benchmark for assessing different methods. RL presents new challenges for benchmarking. In the supervised learning setting, performance is measured by prediction accuracy on a test set, where the correct outputs are hidden from contestants. In RL, it's less straightforward to measure generalization performance, except by running the users' code on a collection of unseen environments, which would be computationally expensive. Without a hidden test set, one must check that an algorithm did not "overfit" on the problems it was tested on (for example, through parameter tuning). We would like to encourage a peer review process for interpreting results submitted by users. Thus, OpenAI Gym asks users to create a Writeup describing their algorithm and the parameters used, with a link to code. Writeups should allow other users to reproduce the results. With the source code available, it is possible to make a nuanced judgement about whether the algorithm "overfit" to the task at hand. | 1606.01540#6 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
1606.01541 | 7 | Table 1: Left Column: Dialogue simulation between two agents using a 4-layer LSTM encoder-decoder trained on the OpenSubtitles dataset. The first turn (index 1) is input by the authors. Then the two agents take turns conversing, taking as input the other agent's prior generated turn. The output is generated using the mutual information model (Li et al., 2015) in which an N-best list is first obtained using beam search based on p(t|s) and reranked by linearly combining the backward probability p(s|t), where t and s respectively denote targets and sources. Right Column: Dialogue simulated using the proposed reinforcement learning model. The new model has more forward-looking utterances (questions like "Why are you asking?" and offers like "I'll come with you") and lasts longer before it falls into conversational black holes.
These challenges suggest we need a conversation framework that has the ability to (1) integrate developer-defined rewards that better mimic the true goal of chatbot development and (2) model the long-term influence of a generated response in an ongoing dialogue. | 1606.01541#7 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
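The reranking step in the Table 1 caption combines a forward score log p(t|s) with a backward score log p(s|t). A sketch (the scoring callables and the weight lam are assumptions, standing in for trained forward and backward SEQ2SEQ models):

def mi_rerank(nbest, source, log_p_fwd, log_p_bwd, lam=0.5):
    # Rerank an N-best list of candidate targets by
    # log p(t|s) + lam * log p(s|t) and return the best candidate.
    def score(t):
        return log_p_fwd(t, source) + lam * log_p_bwd(source, t)
    return max(nbest, key=score)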
1606.01540 | 8 | Figure 1: Images of some environments that are currently part of OpenAI Gym.
Monitoring by default. By default, environments are instrumented with a Monitor, which keeps track of every call to step (one step of simulation) and reset (sampling a new initial state). The Monitor's behavior is configurable, and it can record a video periodically. It also suffices to produce learning curves. The videos and learning curve data can be easily posted to the OpenAI Gym website.
# 4 Environments
OpenAI Gym contains a collection of Environments (POMDPs), which will grow over time. See Figure 1 for examples. At the time of Gym's initial beta release, the following environments were included:
• Classic control and toy text: small-scale tasks from the RL literature.
• Algorithmic: perform computations such as adding multi-digit numbers and reversing sequences. Most of these tasks require memory, and their difficulty can be chosen by varying the sequence length.
• Atari: classic Atari games, with screen images or RAM as input, using the Arcade Learning Environment [5].
• Board games: currently, we have included the game of Go on 9x9 and 19x19 boards, where the Pachi engine [13] serves as an opponent. | 1606.01540#8 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
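A sketch of wrapping an environment with the Monitor described above. This assumes the wrapper-style API (gym.wrappers.Monitor) from later Gym releases; the exact call has changed across versions:

import gym
from gym import wrappers

env = gym.make("CartPole-v0")
# Track every step/reset call, record periodic videos, and write the
# statistics needed for learning curves to a local directory.
env = wrappers.Monitor(env, "/tmp/cartpole-monitor", force=True)

ob = env.reset()
done = False
while not done:
    ob, reward, done, info = env.step(env.action_space.sample())
env.close()    # flush the monitor's files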
1606.01541 | 8 | To achieve these goals, we draw on the insights of reinforcement learning, which have been widely applied in MDP and POMDP dialogue systems (see Related Work section for details). We introduce a neural reinforcement learning (RL) generation method, which can optimize long-term rewards designed by system developers. Our model uses the encoder-decoder architecture as its backbone, and simulates conversation between two virtual agents to explore the space of possible actions while learning to maximize expected reward. We define simple heuristic approximations to rewards that characterize good conversations: good conversations are forward-looking (Allwood et al., 1992) or interactive (a turn suggests a following turn), informative, and coherent. The parameters of an encoder-decoder RNN define a policy over an infinite action space consisting of all possible
utterances. The agent learns a policy by optimizing the long-term developer-defined reward from ongoing dialogue simulations using policy gradient methods (Williams, 1992), rather than the MLE objective defined in standard SEQ2SEQ models. | 1606.01541#8 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
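The policy-gradient learning referred to above is, in its simplest REINFORCE form (Williams, 1992), the following estimator; this is a sketch in conventional notation, with $b$ a baseline used to reduce variance:

$$ \nabla_\theta J(\theta) \approx \big(R(a, s) - b\big)\, \nabla_\theta \log p_\theta(a \mid s), $$

where $s$ is the dialogue state (the preceding turns), $a$ a response sampled from the encoder-decoder policy $p_\theta$, and $R$ the developer-defined reward.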
1606.01540 | 9 | • Board games: currently, we have included the game of Go on 9x9 and 19x19 boards, where the Pachi engine [13] serves as an opponent.
• 2D and 3D robots: control a robot in simulation. These tasks use the MuJoCo physics engine, which was designed for fast and accurate robot simulation [14]. A few of the tasks are adapted from RLLab [6].
Since the initial release, more environments have been created, including ones based on the open source physics engine Box2D or the Doom game engine via VizDoom [15].
# 5 Future Directions
In the future, we hope to extend OpenAI Gym in several ways.
• Multi-agent setting. It will be interesting to eventually include tasks in which agents must collaborate or compete with other agents.
• Curriculum and transfer learning. Right now, the tasks are meant to be solved from scratch. Later, it will be more interesting to consider sequences of tasks, so that the algorithm is trained on one task after the other. Here, we will create sequences of increasingly difficult tasks, which are meant to be solved in order.
• Real-world operation. Eventually, we would like to integrate the Gym API with robotic hardware, validating reinforcement learning algorithms in the real world.
# References | 1606.01540#9 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
1606.01541 | 9 | Our model thus integrates the power of SEQ2SEQ systems to learn compositional semantic meanings of utterances with the strengths of reinforcement learning in optimizing for long-term goals across a conversation. Experimental results (sampled results at the right panel of Table 1) demonstrate that our approach fosters a more sustained dialogue and manages to produce more interactive responses than standard SEQ2SEQ models trained using the MLE objective.
# 2 Related Work
Efforts to build statistical dialog systems fall into two major categories. | 1606.01541#9 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01540 | 10 | • Real-world operation. Eventually, we would like to integrate the Gym API with robotic hardware, validating reinforcement learning algorithms in the real world.
# References
[1] Dimitri P. Bertsekas. Dynamic programming and optimal control. Athena Scientific, Belmont, MA, 1995.
[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[3] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015. | 1606.01540#10 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
1606.01541 | 10 | # 2 Related Work
Efforts to build statistical dialog systems fall into two major categories.
The first treats dialogue generation as a source-to-target transduction problem and learns mapping rules between input messages and responses from a massive amount of training data. Ritter et al. (2011) frame the response generation problem as a statistical machine translation (SMT) problem. Sordoni et al. (2015) improved Ritter et al.'s system by rescoring the outputs of a phrasal SMT-based conversation system with a neural model that incorporates prior context. Recent progress in SEQ2SEQ models has inspired several efforts (Vinyals and Le, 2015) to build end-to-end conversational systems which first apply an encoder to map a message to a distributed vector representing its semantics and then generate a response from the message vector. Serban et al. (2016) propose a hierarchical neural model that captures dependencies over an extended conversation history. Li et al. (2016a) propose mutual information between message and response as an alternative objective function in order to reduce the proportion of generic responses produced by SEQ2SEQ systems. | 1606.01541#10 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |
1606.01540 | 11 | [4] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[5] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res., 47:253–279, 2013.
[6] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
[7] A. Geramifard, C. Dann, R. H. Klein, W. Dabney, and J. P. How. RLPy: A value-function-based reinforcement learning framework for education and research. J. Mach. Learn. Res., 16:1573–1578, 2015. | 1606.01540#11 | OpenAI Gym | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software. | http://arxiv.org/pdf/1606.01540 | Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba | cs.LG, cs.AI | null | null | cs.LG | 20160605 | 20160605 | [
{
"id": "1602.01783"
},
{
"id": "1604.06778"
},
{
"id": "1605.02097"
}
] |
1606.01541 | 11 | The other line of statistical research focuses on building task-oriented dialogue systems to solve domain-specific tasks. Efforts include statistical models such as Markov Decision Processes (MDPs) (Levin et al., 1997; Levin et al., 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP (Young et al., 2010; Young et al., 2013; Gašić et al., 2013a; Gašić et al., 2014) models, and models that statistically learn generation rules (Oh and Rudnicky, 2000; Ratnaparkhi, 2002; Banchs and Li, 2012; Nio et al., 2014). This dialogue literature thus widely applies reinforcement learning (Walker, 2000; Schatzmann et al., 2006; Gašić et al., 2013b; Singh et al., 1999; Singh et al., 2000; Singh et al., 2002) to train dialogue policies. But task-oriented RL dialogue systems often rely on carefully limited dialogue parameters, or hand-built templates with state, action and reward signals designed by humans for each new domain, making the paradigm difficult to extend to open-domain scenarios. | 1606.01541#11 | Deep Reinforcement Learning for Dialogue Generation | Recent neural models of dialogue generation offer great promise for
generating responses for conversational agents, but tend to be shortsighted,
predicting utterances one at a time while ignoring their influence on future
outcomes. Modeling the future direction of a dialogue is crucial to generating
coherent, interesting dialogues, a need which led traditional NLP models of
dialogue to draw on reinforcement learning. In this paper, we show how to
integrate these goals, applying deep reinforcement learning to model future
reward in chatbot dialogue. The model simulates dialogues between two virtual
agents, using policy gradient methods to reward sequences that display three
useful conversational properties: informativity (non-repetitive turns),
coherence, and ease of answering (related to forward-looking function). We
evaluate our model on diversity, length as well as with human judges, showing
that the proposed algorithm generates more interactive responses and manages to
foster a more sustained conversation in dialogue simulation. This work marks a
first step towards learning a neural conversational model based on the
long-term success of dialogues. | http://arxiv.org/pdf/1606.01541 | Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky | cs.CL | null | null | cs.CL | 20160605 | 20160929 | [
{
"id": "1506.08941"
},
{
"id": "1505.00521"
},
{
"id": "1511.06732"
},
{
"id": "1605.05110"
},
{
"id": "1604.04562"
},
{
"id": "1603.09457"
},
{
"id": "1603.08023"
}
] |