Dataset columns (type, min–max):
doi: string, length 10–10
chunk-id: int64, 0–936
chunk: string, length 401–2.02k
id: string, length 12–14
title: string, length 8–162
summary: string, length 228–1.92k
source: string, length 31–31
authors: string, length 7–6.97k
categories: string, length 5–107
comment: string, length 4–398
journal_ref: string, length 8–194
primary_category: string, length 5–17
published: string, length 8–8
updated: string, length 8–8
references: list
1602.02410
6
In this section we describe previous work relevant to the approaches discussed in this paper. A more detailed discussion of language modeling research is provided in (Mikolov, 2012).

larger data sets. Further, given current hardware trends and vast amounts of text available on the Web, it is much more straightforward to tackle large scale modeling than it used to be. Thus, we hope that our work will help and motivate researchers to work on traditional LM beyond PTB – for this purpose, we will open-source our models and training recipes.

We focused on a well-known, large scale LM benchmark: the One Billion Word Benchmark data set (Chelba et al., 2013). This data set is much larger than PTB (a thousand-fold: an 800k word vocabulary and 1B words of training data) and far more challenging. Similar to Imagenet (Deng et al., 2009), which helped advance computer vision, we believe that releasing and working on large data sets and models with clear benchmarks will help advance Language Modeling.

The contributions of our work are as follows:

• We explored, extended and tried to unify some of the current research on large scale LM.

• Specifically, we designed a Softmax loss which is based on character-level CNNs, is efficient to train, and is as precise as a full Softmax that has orders of magnitude more parameters.

# 2.1. Language Models
1602.02410#6
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
7
Language Modeling (LM) has been a central task in NLP. The goal of LM is to learn a probability distribution over sequences of symbols pertaining to a language. Much work has been done on both parametric (e.g., log-linear models) and non-parametric approaches (e.g., count-based LMs). Count-based approaches (based on statistics of N-grams) typically add smoothing, which accounts for unseen (yet possible) sequences, and they have been quite successful. In this respect, Kneser-Ney smoothed 5-gram models (Kneser & Ney, 1995) are a fairly strong baseline which, for large amounts of training data, have challenged other parametric approaches based on Neural Networks (Bengio et al., 2006).
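As a concrete, much-simplified illustration of a smoothed count-based LM, the sketch below builds a bigram model with add-one smoothing; it is only a stand-in for the Kneser-Ney 5-gram baseline mentioned above, not a reimplementation of it, and the corpus is a toy example.

```python
# Illustrative sketch only: a count-based bigram LM with add-one smoothing,
# a simpler stand-in for the Kneser-Ney smoothed 5-gram baseline discussed
# in the text. Corpus and vocabulary are toy examples.
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
vocab = sorted(set(corpus))
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_bigram(w, prev, alpha=1.0):
    # Add-alpha smoothing assigns non-zero mass to unseen (yet possible) bigrams.
    return (bigrams[(prev, w)] + alpha) / (unigrams[prev] + alpha * len(vocab))

print(p_bigram("sat", "cat"))   # seen bigram
print(p_bigram("mat", "cat"))   # unseen bigram still gets probability > 0
```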
1602.02410#7
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
8
Most of our work is based on Recurrent Neural Network (RNN) models, which retain long term dependencies. To this end, we used the Long-Short Term Memory model (Hochreiter & Schmidhuber, 1997), which uses a gating mechanism (Gers et al., 2000) to ensure proper propagation of information through many time steps. Much work has been done on small and large scale RNN-based LMs (Mikolov et al., 2010; Mikolov, 2012; Chelba et al., 2013; Zaremba et al., 2014; Williams et al., 2015; Ji et al., 2015a; Wang & Cho, 2015; Ji et al., 2015b). The architectures that we considered in this paper are represented in Figure 1.

• Our study yielded significant improvements to the state-of-the-art on a well known, large scale LM task: from 51.3 down to 30.0 perplexity for single models, whilst reducing the number of parameters by a factor of 20.

In our work, we train models on the popular One Billion Word Benchmark, which can be considered a medium-sized data set for count-based LMs but a very large data set for NN-based LMs. This regime is most interesting to us as we believe learning a very good model of human language is a complex task which will require large models,
1602.02410#8
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
9
and thus large amounts of data. Further advances in data availability and computational resources helped our study. We argue this leap in scale enabled tremendous advances in deep learning. A clear example found in computer vision is Imagenet (Deng et al., 2009), which enabled learning complex vision models from large amounts of data (Krizhevsky et al., 2012).

A crucial aspect which we discuss in detail in later sections is the size of our models. Despite the large number of parameters, we try to minimize computation as much as possible by adopting a strategy proposed in (Sak et al., 2014) of projecting a relatively big recurrent state space down, so that the matrices involved remain relatively small yet the model has large memory capacity.

# 2.2. Convolutional Embedding Models

inner product z_w = h^T e_w where h is a context vector and e_w is a “word embedding” for w.
1602.02410#9
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
10
inner product z_w = h^T e_w where h is a context vector and e_w is a “word embedding” for w. The main challenge when |V| is very large (on the order of one million in this paper) is the fact that computing all inner products between h and all embeddings becomes prohibitively slow during training (even when exploiting matrix-matrix multiplications and modern GPUs). Several approaches have been proposed to cope with the scaling issue: importance sampling (Bengio et al., 2003; Bengio & Senécal, 2008), Noise Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013), self-normalizing partition functions (Vincent et al., 2015) or Hierarchical Softmax (Morin & Bengio, 2005; Mnih & Hinton, 2009) – they all offer good solutions to this problem. We found importance sampling to be quite effective on this task, and we explain the connection between it and NCE in the following section, as they are closely related.
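To make the scaling issue concrete, the numpy sketch below contrasts scoring a context vector against every output embedding with scoring it against only a sampled subset; the vocabulary size, embedding dimension, and sample count are illustrative toy values rather than the paper's settings (the paper's vocabulary is roughly 800k words).

```python
# Sketch (numpy, toy sizes): the cost of scoring a context vector h against every
# word embedding grows with |V|; sampling-based losses only score the target and
# a few thousand sampled words during training.
import numpy as np

V, d, k = 50_000, 128, 1024              # vocab, embedding size, sampled words (toy values)
rng = np.random.default_rng(0)
E = rng.standard_normal((V, d)) * 0.01   # full output embedding matrix e_w
h = rng.standard_normal(d)               # context vector from the LM

full_logits = E @ h                      # O(|V| * d) work per position
sample_ids = rng.choice(V, size=k, replace=False)
sampled_logits = E[sample_ids] @ h       # O(k * d) work per position
print(full_logits.shape, sampled_logits.shape)
```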
1602.02410#10
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
11
There is an increased interest in incorporating character-level inputs to build word embeddings for various NLP problems, including part-of-speech tagging, parsing and language modeling (Ling et al., 2015; Kim et al., 2015; Ballesteros et al., 2015). The additional character information has been shown useful on relatively small benchmark data sets.

The approach proposed in (Ling et al., 2015) builds word embeddings using bidirectional LSTMs (Schuster & Paliwal, 1997; Graves & Schmidhuber, 2005) over the characters. The recurrent networks process sequences of characters from both sides and their final state vectors are concatenated. The resulting representation is then fed to a Neural Network. This model achieved very good results on a part-of-speech tagging task.

# 3. Language Modeling Improvements

Recurrent Neural Network based LMs employ the chain rule to model joint probabilities over word sequences:

p(w_1, ..., w_N) = Π_{i=1}^{N} p(w_i | w_1, ..., w_{i-1})

where the context of all previous words is encoded with an LSTM, and the probability over words uses a Softmax (see Figure 1(a)).

# 3.1. Relationship between Noise Contrastive Estimation and Importance Sampling
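The chain-rule factorization above can be made concrete in a few lines of Python; the next_word_probs function here is a dummy uniform stand-in for the LSTM-plus-Softmax model of the paper, used only so the example runs end to end.

```python
# Sketch: the chain-rule factorization p(w_1..w_N) = prod_i p(w_i | w_<i).
# `next_word_probs` is a placeholder for the LSTM + Softmax; here it is a
# dummy uniform model so the example is self-contained.
import math

vocab = ["<s>", "the", "cat", "sat", "</s>"]

def next_word_probs(context):
    # Placeholder for p(. | w_1..w_{i-1}); a real model would condition on `context`.
    return {w: 1.0 / len(vocab) for w in vocab}

def sequence_log_prob(words):
    logp, context = 0.0, []
    for w in words:
        logp += math.log(next_word_probs(context)[w])
        context.append(w)
    return logp

print(sequence_log_prob(["the", "cat", "sat", "</s>"]))
```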
1602.02410#11
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
12
# 3.1. Relationship between Noise Contrastive Estimation and Importance Sampling

In (Kim et al., 2015), the word's characters are processed by a 1-d CNN (Le Cun et al., 1990) with max-pooling across the sequence for each convolutional feature. The resulting features are fed to a 2-layer highway network (Srivastava et al., 2015b), which allows the embedding to learn semantic representations. The model was evaluated on small-scale language modeling experiments for various languages and matched the best results on the PTB data set despite having 60% fewer parameters.

# 2.3. Softmax Over Large Vocabularies

As discussed in Section 2.3, a large scale Softmax is necessary for training good LMs because of the vocabulary size. A Hierarchical Softmax (Mnih & Hinton, 2009) employs a tree in which the probability distribution over words is decomposed into a product of two probabilities for each word, greatly reducing training and inference time as only the path specified by the hierarchy needs to be computed and updated. Choosing a good hierarchy is important for obtaining good results, and we did not explore this approach further for this paper, as sampling methods worked well for our setup.
1602.02410#12
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
13
Assigning probability distributions over large vocabularies is computationally challenging. For modeling language, maximizing the log-likelihood of a given word sequence leads to optimizing the cross-entropy between the target probability distribution (e.g., the target word we should be predicting) and our model predictions p. Generally, predictions come from a linear layer followed by a Softmax non-linearity: p(w) = exp(z_w) / Σ_{w' ∈ V} exp(z_{w'}), where z_w is the logit corresponding to word w.

Sampling approaches are only useful during training, as they propose an approximation to the loss which is cheap to compute (also in a distributed setting) – however, at inference time one still has to compute the normalization term over all words. Noise Contrastive Estimation (NCE) proposes to consider a surrogate binary classification task in which a classifier is trained to discriminate between true data and samples coming from some arbitrary (noise) distribution. If both the noise and data distributions were known, the optimal classifier would be:

p(Y = true | w) = p_d(w) / (p_d(w) + k p_n(w))

# 3.2. CNN Softmax
1602.02410#13
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
14
# 3.2. CNN Softmax

p(Y = true | w) = p_d(w) / (p_d(w) + k p_n(w))

where Y is the binary random variable indicating whether w comes from the true data distribution, k is the number of negative samples per positive word, and p_d and p_n are the data and noise distributions respectively (we dropped any dependency on previous words for notational simplicity). It is easy to show that if we train a logistic classifier p_θ(Y = true | w) = σ(s_θ(w, h) − log k p_n(w)), where σ is the logistic function, then p′(w) = softmax(s_θ(w, h)) is a good approximation of p_d(w) (s_θ is a logit which, e.g., an LSTM LM computes).

The other technique, which is based on importance sampling (IS), proposes to directly approximate the partition function (which comprises a sum over all words) with an estimate of it obtained through importance sampling. Though the methods look superficially similar, we will derive a surrogate classification task akin to NCE which arrives at IS, showing a strong connection between the two.
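The binary NCE task above can be sketched in a few lines of numpy; the score s(w, h) is a dummy dot product against random embeddings rather than an LSTM's output, and the uniform noise distribution and all sizes are toy assumptions.

```python
# Sketch (numpy): the NCE surrogate binary task described above. The score
# s(w, h) here is a dummy dot product; in the paper it would come from the
# LSTM LM. k noise words are drawn from a noise distribution p_n.
import numpy as np

rng = np.random.default_rng(0)
V, d, k = 1000, 64, 8
E = rng.standard_normal((V, d)) * 0.1    # output embeddings (stand-in)
h = rng.standard_normal(d)               # context vector
p_n = np.full(V, 1.0 / V)                # uniform noise distribution (toy choice)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

true_w = 42
noise_w = rng.choice(V, size=k, p=p_n)

def nce_logit(w):
    # s(w, h) - log(k * p_n(w)); training drives sigmoid(logit) -> 1 for data, 0 for noise.
    return E[w] @ h - np.log(k * p_n[w])

loss = -np.log(sigmoid(nce_logit(true_w))) - np.sum(np.log(1.0 - sigmoid(nce_logit(noise_w))))
print(float(loss))
```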
1602.02410#14
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
15
Suppose that, instead of having a binary task to decide if a word comes from the data or from the noise distribution, we want to identify the word coming from the true data distribution in a set W = {w_1, ..., w_{k+1}}, comprised of k noise samples and one data distribution sample. Thus, we can train a multiclass loss over a multinomial random variable Y which maximizes log p(Y = 1 | W), assuming w.l.o.g. that w_1 ∈ W is always the word coming from the true data. By Bayes' rule, and ignoring terms that are constant with respect to Y, we can write:
1602.02410#15
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
16
The character-level features allow for a smoother and more compact parametrization of the word embeddings. Recent efforts on small scale language modeling have used CNN character embeddings for the input embeddings (Kim et al., 2015). Although not as straightforward, we propose an extension to this idea to also reduce the number of parameters of the Softmax layer. Recall from Section 2.3 that the Softmax computes a logit as z_w = h^T e_w, where h is a context vector and e_w the word embedding. Instead of building a matrix of size |V| × |h| (whose rows correspond to e_w), we produce e_w with a CNN over the characters of w, as e_w = CNN(chars_w) – we call this a CNN Softmax. We used the same network architecture to dynamically generate the Softmax word embeddings without sharing the parameters with the input word-embedding sub-network. For inference, the vectors e_w can be precomputed, so there is no computational complexity increase w.r.t. the regular Softmax.
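A minimal numpy sketch of this idea follows, with made-up filter widths, feature counts and dimensions (the paper's CNN follows Kim et al. (2015) and is far larger): a 1-d convolution over character embeddings, max-pooled over positions, produces e_w, and the logit is then z_w = h^T e_w.

```python
# Sketch (numpy, toy dimensions): a CNN Softmax-style embedding e_w = CNN(chars_w),
# i.e. a 1-d convolution over character embeddings followed by max-pooling over
# positions, then the logit z_w = h^T e_w. Real filter widths/feature counts differ.
import numpy as np

rng = np.random.default_rng(0)
n_chars, c_dim, width, n_filters, h_dim = 256, 16, 3, 32, 32

char_emb = rng.standard_normal((n_chars, c_dim)) * 0.1
filters = rng.standard_normal((n_filters, width, c_dim)) * 0.1
proj = rng.standard_normal((n_filters, h_dim)) * 0.1   # map CNN features to h's size

def cnn_word_embedding(word):
    ids = [ord(c) % n_chars for c in "$" + word + "^"]     # begin/end markers
    x = char_emb[ids]                                      # (len, c_dim)
    feats = []
    for f in filters:
        conv = [np.sum(f * x[i:i + width]) for i in range(len(x) - width + 1)]
        feats.append(max(conv))                            # max-pool over positions
    return np.array(feats) @ proj                          # e_w, shape (h_dim,)

h = rng.standard_normal(h_dim)                             # context vector from the LM
z_cat = h @ cnn_word_embedding("cat")                      # logit for "cat"
print(float(z_cat))
```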
1602.02410#16
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
17
We note that, when using an importance sampling loss such as the one described in Section 3.1, only a few logits have a non-zero gradient (those corresponding to the true and sampled words). With a Softmax where the e_w are independently learned word embeddings, this is not a problem. But we observed that, when using a CNN, all the logits become tied, as the function mapping from w to e_w is quite smooth. As a result, a much smaller learning rate had to be used. Even with this, the model lacks capacity to differentiate between words that have very different meanings but are spelled similarly. Thus, a reasonable compromise was to add a small correction factor which is learned per word, such that:

p(Y = k | W) ∝ p_d(w_k) / p_n(w_k)
1602.02410#17
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
18
p(Y = k | W) ∝ p_d(w_k) / p_n(w_k)

and, following a similar argument to the one used for NCE, if we define p(Y = k | W) = softmax(s_θ(w_k) − log p_n(w_k)), then p(w) = softmax(s_θ(w, h)) is a good approximation of p_d(w). Note that the only difference between NCE and IS is that, in NCE, we define a binary classification task between true or noise words with a logistic loss, whereas in IS we define a multiclass classification problem with a Softmax and cross-entropy loss. We hope that our derivation helps clarify the similarities and differences between the two. In particular, we observe that IS, as it optimizes a multiclass classification task (in contrast to solving a binary task), may be a better choice. Indeed, the updates to the logits with IS are tied whereas in NCE they are independent.
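Under the same toy assumptions as the NCE sketch earlier (random stand-in scores, uniform proposal distribution), the IS-style multiclass loss described here can be sketched as follows.

```python
# Sketch (numpy): the importance-sampling-style multiclass loss described above.
# One true word plus k sampled words form the candidate set; each candidate's
# logit is s(w, h) - log p_n(w), and a softmax cross-entropy is taken over them.
import numpy as np

rng = np.random.default_rng(1)
V, d, k = 1000, 64, 8
E = rng.standard_normal((V, d)) * 0.1   # output embeddings (stand-in for the LM's scores)
h = rng.standard_normal(d)
p_n = np.full(V, 1.0 / V)               # noise/proposal distribution (toy: uniform)

true_w = 42
cand = np.concatenate(([true_w], rng.choice(V, size=k, p=p_n)))
logits = E[cand] @ h - np.log(p_n[cand])          # s(w, h) - log p_n(w)
log_probs = logits - np.log(np.sum(np.exp(logits)))
loss = -log_probs[0]                              # cross-entropy with the true word at index 0
print(float(loss))
```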
1602.02410#18
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
19
z_w = h^T CNN(chars_w) + h^T M corr_w

where M is a matrix projecting a low-dimensional embedding vector corr_w back up to the dimensionality of the projected LSTM hidden state h. This amounts to adding a bottleneck linear layer, and brings the CNN Softmax much closer to our best result, as can be seen in Table 1, where adding a 128-dim correction halves the gap between the regular and the CNN Softmax.

Aside from a big reduction in the number of parameters and incorporating morphological knowledge from words, the other benefit of this approach is that out-of-vocabulary (OOV) words can easily be scored. This may be useful for other problems such as Machine Translation, where handling out-of-vocabulary words is very important (Luong et al., 2014). This approach also allows parallel training over various data sets since the model is no longer explicitly parametrized by the vocabulary size – or the language. This has been shown to help when using byte-level input embeddings for named entity recognition (Gillick et al., 2015), and we hope it will enable similar gains when used to map onto words.

# 3.3. Char LSTM Predictions
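Before the char-LSTM alternative of Section 3.3, here is a small numpy sketch of the per-word correction above, z_w = h^T CNN(chars_w) + h^T M corr_w; the CNN output is replaced by a random vector and all sizes are illustrative assumptions (the paper uses a 128-dim correction).

```python
# Sketch (numpy, toy sizes): the low-rank per-word correction to the CNN Softmax.
# e_w stands in for CNN(chars_w); corr[w] is a small learned vector per word and
# M projects it up to the dimensionality of the projected LSTM state h.
import numpy as np

rng = np.random.default_rng(2)
V, h_dim, corr_dim = 1000, 64, 16

h = rng.standard_normal(h_dim)                      # context vector
e_w = rng.standard_normal(h_dim)                    # stand-in for CNN(chars_w)
corr = rng.standard_normal((V, corr_dim)) * 0.01    # per-word correction embeddings
M = rng.standard_normal((h_dim, corr_dim)) * 0.1    # bottleneck projection

w = 7
z_w = h @ e_w + h @ (M @ corr[w])                   # CNN term plus the learned correction
print(float(z_w))
```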
1602.02410#19
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
20
and we hope it will enable similar gains when used to map onto words.

# 3.3. Char LSTM Predictions

The CNN Softmax layer can handle arbitrary words and is much more efficient in terms of number of parameters than the full Softmax matrix. It is, though, still considerably slow, as to evaluate perplexities we need to compute the partition function. A class of models that solve this problem more efficiently are character-level LSTMs (Sutskever et al., 2011; Graves, 2013). They make predictions one character at a time, thus allowing probabilities to be computed over a much smaller vocabulary. On the other hand, these models are more difficult to train and seem to perform worse even on small tasks like PTB (Graves, 2013). Most likely this is because the sequences become much longer on average, as the LSTM reads the input character by character instead of word by word.

# 4.2. Model Setup

The typical measure used for reporting progress in language modeling is perplexity, the exponential of the average negative per-word log-probability on the holdout data set: exp(−(1/N) Σ_i ln p(w_i)). We follow the standard procedure and sum over all the words (including the end-of-sentence symbol).
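As a tiny worked example of the perplexity definition above (with made-up probabilities, summing over all words including the end-of-sentence symbol):

```python
# Sketch: per-word perplexity as defined above, exp(-(1/N) * sum_i ln p(w_i)).
# The probabilities below are made-up numbers purely to make the example run.
import math

word_probs = [0.1, 0.02, 0.3, 0.05]          # p(w_i | context) from some model, incl. </s>
N = len(word_probs)
perplexity = math.exp(-sum(math.log(p) for p in word_probs) / N)
print(perplexity)
```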
1602.02410#20
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
21
We used the 1B Word Benchmark data set without any preprocessing. Given the shuffled sentences, they are input to the network as a batch of independent streams of words. Whenever a sentence ends, a new one starts without any padding (thus maximizing the occupancy per batch).

For the models that consume characters as inputs or as targets, each word is fed to the model as a sequence of character IDs of prespecified length (see Figure 1(b)). The words were processed to include special begin-of-word and end-of-word tokens and were padded to reach the expected length; i.e., if the maximum word length was 10, the word “cat” would be transformed to “$catˆ ” (plus padding) due to the CNN model.

Thus, we combine the word and character-level models by feeding a word-level LSTM hidden state h into a small LSTM that predicts the target word one character at a time (see Figure 1(c)). In order to make the whole process reasonably efficient, we train the standard LSTM model until convergence, freeze its weights, and replace the standard word-level Softmax layer with the aforementioned character-level LSTM.
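A possible sketch of this word-to-character-ID preprocessing is below; the byte-level 256-symbol vocabulary matches the setup described later, but the space padding character and the modulo mapping are assumptions on our part.

```python
# Sketch: converting a word into a fixed-length sequence of character IDs with
# special begin ("$") and end ("^") markers plus padding, as described above.
# The padding character and the exact ID mapping are assumptions here.
def word_to_char_ids(word, max_len=10, pad=" "):
    marked = "$" + word + "^"
    padded = (marked + pad * max_len)[:max_len]      # pad or truncate to max_len
    return [ord(c) % 256 for c in padded]            # byte-level IDs, 256 symbols

print(word_to_char_ids("cat"))   # IDs for "$cat^" followed by padding
```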
1602.02410#21
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
22
In our experiments we found that limiting the maximum word length in training to 50 was sufficient to reach very good results, while 32 was clearly insufficient. We used 256 characters in our vocabulary, and the non-ASCII symbols were represented as a sequence of bytes.

# 4.3. Model Architecture

The resulting model scales independently of vocabulary size – both for training and inference. However, it does seem to be worse than the regular and CNN Softmax – we are hopeful that further research will enable these models to replace fixed-vocabulary models whilst being computationally attractive.

# 4. Experiments

We evaluated many variations of RNN LM architectures. These include the dimensionalities of the embedding layers, the state and projection sizes, and the number of LSTM layers to use. Exhaustively trying all combinations would be extremely time consuming for such a large data set, but our findings suggest that LSTMs with a projection layer (i.e., a bottleneck between hidden states as in (Sak et al., 2014)) trained with truncated BPTT (Williams & Peng, 1990) for 20 steps performed well (a minimal sketch of such a projected LSTM follows below).

All experiments were run using the TensorFlow system (Abadi et al., 2015), with the exception of some older models which were used in the ensemble.

# 4.1. Data Set
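The projected LSTM ("bottleneck between hidden states", Sak et al., 2014) referenced above under Model Architecture can be sketched in numpy as follows; the gate layout and all dimensions are illustrative assumptions, not the paper's configuration.

```python
# Sketch (numpy, toy sizes): one step of an LSTM with a projection layer (LSTMP,
# Sak et al., 2014). The large recurrent state c/h is projected down to a smaller
# vector r that is fed back and exposed to the Softmax, so the big matrices stay
# rectangular and comparatively small. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
x_dim, cell_dim, proj_dim = 32, 256, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gate weights act on [x, r] where r is the *projected* recurrent state.
W = rng.standard_normal((4 * cell_dim, x_dim + proj_dim)) * 0.05
b = np.zeros(4 * cell_dim)
P = rng.standard_normal((proj_dim, cell_dim)) * 0.05     # projection matrix

def lstmp_step(x, r_prev, c_prev):
    z = W @ np.concatenate([x, r_prev]) + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)    # cell state, size cell_dim
    h = sigmoid(o) * np.tanh(c)
    r = P @ h                                            # projected state, size proj_dim
    return r, c

r, c = np.zeros(proj_dim), np.zeros(cell_dim)
for _ in range(3):                                       # a few unrolled steps
    r, c = lstmp_step(rng.standard_normal(x_dim), r, c)
print(r.shape, c.shape)
```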
1602.02410#22
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
23
# 4.1. Data Set

The experiments are performed on the 1B Word Benchmark data set introduced by (Chelba et al., 2013), which is a publicly available benchmark for measuring progress in statistical language modeling. The data set contains about 0.8B words with a vocabulary of 793,471 words, including sentence boundary markers. All the sentences are shuffled and the duplicates are removed. Words that are out of vocabulary (OOV) are marked with a special UNK token (approximately 0.3% of the words).

Following (Zaremba et al., 2014), we use dropout (Srivastava, 2013) before and after every LSTM layer. The biases of the LSTM forget gates were initialized to 1.0 (Jozefowicz et al., 2015). The sizes of the models will be described in more detail in the following sections, and the choices of hyper-parameters will be released as open source upon publication.
1602.02410#23
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
24
For any model using character embedding CNNs, we closely follow the architecture from (Kim et al., 2015). The only important difference is that we use a larger number of convolutional features, 4096, to give enough capacity to the model. The resulting embedding is then linearly transformed to match the LSTM projection sizes. This allows it to match the performance of regular word embeddings while using only a small fraction of the parameters.

Table 1. Best results of single models on the 1B Word Benchmark. Our results are shown below previous work.
1602.02410#24
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
25
MODEL | TEST PERPLEXITY | NUMBER OF PARAMS [BILLIONS]
SIGMOID-RNN-2048 (JI ET AL., 2015A) | 68.3 | 4.1
INTERPOLATED KN 5-GRAM, 1.1B N-GRAMS (CHELBA ET AL., 2013) | 67.6 | 1.76
SPARSE NON-NEGATIVE MATRIX LM (SHAZEER ET AL., 2015) | 52.9 | 33
RNN-1024 + MAXENT 9-GRAM FEATURES (CHELBA ET AL., 2013) | 51.3 | 20
LSTM-512-512 | 54.1 | 0.82
LSTM-1024-512 | 48.2 |
LSTM-2048-512 | 43.7 |
LSTM-8192-2048 (NO DROPOUT) | 37.9 |
LSTM-8192-2048 (50% DROPOUT) | 32.2 |
2-LAYER LSTM-8192-1024 (BIG LSTM) | 30.6 |
BIG LSTM+CNN INPUTS | 30.0 |
BIG LSTM+CNN INPUTS + CNN SOFTMAX | 39.8 |
BIG LSTM+CNN INPUTS + CNN SOFTMAX + 128-DIM CORRECTION | 35.8 |
BIG LSTM+CNN INPUTS + CHAR LSTM PREDICTIONS | 47.9 |
1602.02410#25
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
27
Table 2. Best results of ensembles on the 1B Word Benchmark.

MODEL | TEST PERPLEXITY
LARGE ENSEMBLE (CHELBA ET AL., 2013) | 43.8
RNN+KN-5 (WILLIAMS ET AL., 2015) | 42.4
RNN+KN-5 (JI ET AL., 2015A) | 42.0
RNN+SNM10-SKIP (SHAZEER ET AL., 2015) | 41.3
LARGE ENSEMBLE (SHAZEER ET AL., 2015) | 41.0
OUR 10 BEST LSTM MODELS (EQUAL WEIGHTS) | 26.3
OUR 10 BEST LSTM MODELS (OPTIMAL WEIGHTS) | 26.1
10 LSTMS + KN-5 (EQUAL WEIGHTS) | 25.3
10 LSTMS + KN-5 (OPTIMAL WEIGHTS) | 25.1
10 LSTMS + SNM10-SKIP (SHAZEER ET AL., 2015) | 23.7

# 4.4. Training Procedure
1602.02410#27
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
28
# 4.4. Training Procedure

The models were trained until convergence with an AdaGrad optimizer using a learning rate of 0.2. In all the experiments the RNNs were unrolled for 20 steps without ever resetting the LSTM states. We used a batch size of 128. We clip the gradients of the LSTM weights such that their norm is bounded by 1.0 (Pascanu et al., 2012).

We used a large number of negative (or noise) samples: 8192 such samples were drawn per step, but were shared across all the target words in the batch (2560 in total, i.e., 128 times 20 unrolled steps). This results in multiplying a (2560 x 1024) matrix by a (1024 x (8192+1)) matrix instead of by a (1024 x 793471) matrix, i.e., about 100-fold less computation.

Using these hyper-parameters we found large LSTMs to be relatively easy to train. The same learning rate was used in almost all of the experiments. In a few cases we had to reduce it by an order of magnitude. Unless otherwise stated, the experiments were performed with 32 GPU workers and asynchronous gradient updates. Further details will be fully specified with the code upon publication.
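The roughly 100-fold figure follows directly from the shapes quoted above; a few lines of arithmetic make it explicit.

```python
# Sketch: the arithmetic behind the ~100x saving quoted above. With 8192 sampled
# words shared across the whole batch, the Softmax matmul is
# (2560 x 1024) @ (1024 x 8193) instead of (2560 x 1024) @ (1024 x 793471).
batch, steps, proj = 128, 20, 1024
targets = batch * steps                   # 2560 context vectors per update
vocab, sampled = 793471, 8192 + 1

full_cost = targets * proj * vocab        # multiply-adds for the full Softmax
sampled_cost = targets * proj * sampled   # multiply-adds with shared samples
print(full_cost / sampled_cost)           # ~97, i.e. about 100-fold less computation
```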
1602.02410#28
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
29
Training a model with such a large target vocabulary (793,471 words) required care with some details of the approximation to the full Softmax using importance sampling.

# 5. Results and Analysis

In this section we summarize the results of our experiments and provide an in-depth analysis. Table 1 contains all results for our models compared to previously published work. Table 2 shows previous and our own work on ensembles of models. We hope that our encouraging results, which improved the best perplexity of a single model from 51.3 to 30.0 (whilst reducing the model size considerably) and set a new record with ensembles at 23.7, will enable rapid research and progress to advance Language Modeling. For this purpose, we will release the model weights and recipes upon publication.

# 5.1. Size Matters

Table 3. The test perplexities of an LSTM-2048-512 trained with different losses versus the number of epochs. The model needs about 40 minutes per epoch. The first epoch is a bit slower because we slowly increase the number of workers.
1602.02410#29
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
30
Unsurprisingly, size matters: when training on a very large and complex data set, fitting the training data with an LSTM is fairly challenging. Thus, the size of the LSTM layer is a very important factor that influences the results, as seen in Table 1. The best models are the largest we were able to fit into GPU memory. Our largest model was a 2-layer LSTM with 8192+1024 dimensional recurrent state in each of the layers. Increasing the embedding and projection size also helps but causes a large increase in the number of parameters, which is less desirable. Lastly, training an RNN instead of an LSTM yields poorer results (about 5 perplexity worse) for a comparable model size.

EPOCHS | NCE | IS | TRAINING TIME [HOURS]
1 | 97 | 60 | 1
5 | 58 | 47.5 | 4
10 | 53 | 45 | 8
20 | 49 | 44 | 14
50 | 46.1 | 43.7 | 34

Table 4. Nearest neighbors in the character CNN embedding space of a few out-of-vocabulary words. Even for words that the model has never seen, the model usually still finds reasonable neighbors.

# 5.2. Regularization Importance
1602.02410#30
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
31
# 5.2. Regularization Importance

As shown in Table 1, using dropout improves the results. To our surprise, even relatively small models (e.g., a single-layer LSTM with 2048 units projected to 512-dimensional outputs) can over-fit the training set if trained long enough, eventually yielding holdout-set degradation.

WORD | TOP-1 | TOP-2 | TOP-3
INCERDIBLE | INCREDIBLE | NONEDIBLE | EXTENDIBLE
WWW.A.COM | WWW.AA.COM | WWW.AAA.COM | WWW.CA.COM
7546 | 7646 | 7534 | 8566
TOWNHAL1 | TOWNHALL | DJC2 | MOODSWING360
KOMARSKI | KOHARSKI | KONARSKI | KOMANSKI

Using dropout on non-recurrent connections largely mitigates these issues. While over-fitting still occurs, there is no more need for early stopping. For models that had 4096 or fewer units in the LSTM layer, we used a 10% dropout probability. For larger models, 25% was significantly better. Even with such regularization, perplexities on the training set can be as much as 6 points below test.
1602.02410#31
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
32
In one experiment we tried to use a smaller vocabulary comprising the 100,000 most frequent words and found the difference between train and test to be smaller – which suggests that too much capacity is given to rare words. This is less of an issue with character CNN embedding models, as the embeddings are shared across all words.

Using character-level embeddings is feasible and does not degrade performance – in fact, our best single model uses a Character CNN embedding. An additional advantage is that the number of parameters of the input layer is reduced by a factor of 11 (though training speed is slightly worse). For inference, the embeddings can be precomputed so there is no speed penalty. Overall, the embedding of the best model is parametrized by 72M weights (down from 820M weights).

Table 4 shows a few examples of nearest-neighbor embeddings for some out-of-vocabulary words when character CNNs are used.

# 5.3. Importance Sampling is Data Efficient

Table 3 shows the test perplexities of the NCE vs. IS loss after a few epochs of a 2048-unit LSTM with 512 projection. The IS objective significantly improves the speed and the overall performance of the model when compared to NCE.

# 5.5. Smaller Models with CNN Softmax
1602.02410#32
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
33
# 5.4. Word Embeddings vs Character CNN

Replacing the embedding layer with a parametrized neural network that processes the characters of a given word allows the model to consume arbitrary words and is not restricted to a fixed vocabulary. This property is useful for data sets with conversational or informal text as well as for morphologically rich languages.

# 5.5. Smaller Models with CNN Softmax

Even with character-level embeddings, the model is still fairly large (though much smaller than the best competing models from previous work). Most of the parameters are in the linear layer before the Softmax: 820M out of a total of 1.04B parameters.

In one of the experiments we froze the word-LSTM after convergence and replaced the Softmax layer with the CNN Softmax sub-network. Without any fine-tuning, that model was able to reach 39.8 perplexity with only 293M weights (as seen in Table 1).

As described in Section 3.2, adding a "correction" word embedding term alleviates the gap between the regular and the CNN Softmax. Indeed, we can trade off model size versus perplexity. For instance, by adding 100M weights (through a 128-dimensional bottleneck embedding) we achieve 35.8 perplexity (see Table 1).
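A minimal sketch of the CNN Softmax with the correction term just described, assuming PyTorch and a `char_cnn` module that maps each word's characters to a vector of the LSTM state size. The class and argument names are ours; only the 128-dimensional bottleneck figure comes from the text.

```python
import torch
import torch.nn as nn

class CNNSoftmax(nn.Module):
    # Output embedding of each word: e_w = CNN(chars_w) + M @ corr_w, where
    # corr_w is a small per-word bottleneck vector projected up to the state size.
    def __init__(self, char_cnn, vocab_size, hidden=1024, bottleneck=128):
        super().__init__()
        self.char_cnn = char_cnn                       # (V, L) char ids -> (V, hidden)
        self.correction = nn.Embedding(vocab_size, bottleneck)
        self.up_project = nn.Linear(bottleneck, hidden, bias=False)

    def forward(self, h, vocab_char_ids):
        # h: (B, hidden) word-level LSTM state; vocab_char_ids: (V, L)
        e = self.char_cnn(vocab_char_ids) + self.up_project(self.correction.weight)
        return h @ e.t()                               # (B, V) logits over the vocabulary
```

Dropping the correction term recovers the smaller 293M-weight variant; adding it spends roughly 100M extra weights, matching the trade-off quoted above.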
1602.02410#33
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
34
To contrast with the CNN Softmax, we also evaluated a model that replaces the Softmax layer with a smaller LSTM that predicts one character at a time (see Section 3.3). Such a model does not have to learn long dependencies because the base LSTM still operates at the word level (see Figure 1(c)). With a single-layer LSTM of 1024 units we reached 49.0 test perplexity, far worse than the best model. In order to make the comparison fairer, we performed a very expensive marginalization over the words in the vocabulary (to rule out character sequences that are not words in the dictionary, to which the character LSTM would otherwise assign some probability). With this marginalization, the perplexity improved slightly, down to 47.9.

Interestingly, including the best N-gram model in the ensemble reduces the perplexity by 1.2 points even though the model is rather weak on its own (67.6 perplexity). Most previous work had to either ensemble with the best N-gram model (as their RNNs only used a limited output vocabulary of a few thousand words), or use N-gram features as additional input to the RNN. Our results, on the contrary, suggest that N-grams are of limited benefit, and that a carefully trained LSTM LM is the most competitive model.
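As a rough illustration of the character-level LSTM Softmax evaluated at the start of this subsection, the sketch below conditions a small character LSTM on the word-level context vector and scores the next word by the likelihood of its character sequence (teacher-forced). All names and sizes here are our assumptions, and the expensive marginalization over the vocabulary mentioned above is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharLSTMSoftmax(nn.Module):
    # Spells out the next word one character at a time, conditioned on the
    # word-level context vector h_word produced by the base LSTM.
    def __init__(self, context_dim=1024, char_vocab=256, char_emb=16, hidden=1024):
        super().__init__()
        self.char_embed = nn.Embedding(char_vocab, char_emb)
        self.init_h = nn.Linear(context_dim, hidden)
        self.lstm = nn.LSTM(char_emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, char_vocab)

    def forward(self, h_word, char_ids):
        # h_word: (B, context_dim); char_ids: (B, T) characters of the next word
        h0 = torch.tanh(self.init_h(h_word)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        emb = self.char_embed(char_ids[:, :-1])             # teacher forcing
        out, _ = self.lstm(emb, (h0, c0))
        logits = self.out(out)                              # (B, T-1, char_vocab)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               char_ids[:, 1:].reshape(-1))
```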
1602.02410#34
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
35
# 5.8. LSTMs are best on the tail words

Figure 2 shows the difference in log probabilities between our best model (at 30.0 perplexity) and the KN-5. As can be seen from the plot, the LSTM is better across all the buckets and significantly outperforms KN-5 on the rare words. This is encouraging, as it seems to suggest that LSTM LMs may fare even better, relative to traditional N-gram models, on languages or data sets where the number of rare words is larger.

Figure 2. The difference in log probabilities between the best LSTM and KN-5 (higher is better). The words from the holdout set are grouped into 25 buckets of equal size based on their frequencies (x-axis: word buckets of equal size, with less frequent words on the right; y-axis: mean difference in log perplexity).

# 5.9. Samples from the model

To qualitatively evaluate the model, we sampled many sentences. We discarded short and politically incorrect ones, but the sample shown below is otherwise "raw" (i.e., not hand-picked). The samples are of high quality, which is not a surprise given the perplexities attained, but there are still some occasional mistakes. Sentences generated by the ensemble (about 26 perplexity):
1602.02410#35
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
36
< S > With even more new technologies coming onto the market quickly during the past three years , an increasing number of companies now must tackle the ever-changing and ever-changing environmental challenges online .

< S > Check back for updates on this breaking news story .

< S > About 800 people gathered at Hever Castle on Long Beach from noon to 2pm , three to four times that of the funeral cortège .

< S > We are aware of written instructions from the copyright holder not to , in any way , mention Rosenberg ’s negative comments if they are relevant as indicated in the documents , ” eBay said in a statement .

< S > It is now known that coffee and cacao products can do no harm on the body .

< S > Yuri Zhirkov was in attendance at the Stamford Bridge at the start of the second half but neither Drogba nor Malouda was able to push on through the Barcelona defence .

# 5.6. Training Speed
1602.02410#36
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
37
We used 32 Tesla K40 GPUs to train our models. The smaller version of the LSTM model, with 2048 units and 512 projections, needs less than 10 hours to reach below 45 perplexity, and after only 2 hours of training the model beats the previous state-of-the-art on this data set. The best model needs about 5 days to get to 35 perplexity and 10 days to 32.5. The best results were achieved after 3 weeks of training. See Table 3 for more details.

# 5.7. Ensembles

We averaged several of our best models and were able to reach 23.7 test perplexity (more details and results can be seen in Table 2), which is more than a 40% improvement over previous work.

# 6. Discussion and Conclusions

In this paper we have shown that RNN LMs can be trained on large amounts of data and outperform competing models, including carefully tuned N-grams. The reduction in perplexity from 51.3 to 30.0 is due to several key components which we studied in this paper. Thus, a large, regularized LSTM LM, with projection layers and trained with an approximation to the true Softmax via importance sampling, performs much better than N-grams. Unlike previous work, we do not need to interpolate the RNN LM with an N-gram model, and the gains of doing so are rather marginal.
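To make the ensembling recipe in Section 5.7 concrete, here is a small NumPy sketch that mixes the per-token predictive distributions of several already-trained models with fixed weights and reports the resulting perplexity. The array names and the uniform default weights are our own assumptions, not the paper's exact ensembling code.

```python
import numpy as np

def ensemble_perplexity(probs_list, targets, weights=None):
    # probs_list: one (num_tokens, vocab) array of predictive probabilities
    # per model, all evaluated on the same heldout token stream.
    # targets: (num_tokens,) integer ids of the actual next words.
    weights = weights or [1.0 / len(probs_list)] * len(probs_list)
    mixed = sum(w * p for w, p in zip(weights, probs_list))   # mixture of models
    token_logp = np.log(mixed[np.arange(len(targets)), targets])
    return float(np.exp(-token_logp.mean()))                  # perplexity
```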
1602.02410#37
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
38
By exploring recent advances in model architectures (e.g., LSTMs), exploiting small character CNNs, and by sharing our findings in this paper and accompanying code and models (to be released upon publication), we hope to inspire research on large scale Language Modeling, a problem we consider crucial to language understanding. We hope for future research to focus on reasonably sized datasets, taking inspiration from recent advances seen in the computer vision community thanks to efforts such as Imagenet (Deng et al., 2009).

# Acknowledgements

We thank Ciprian Chelba, Ilya Sutskever, and the Google Brain Team for their help and discussions. We also thank Koray Kavukcuoglu for his help with the manuscript.

# References

Bengio, Yoshua, Schwenk, Holger, Senécal, Jean-Sébastien, Morin, Fréderic, and Gauvain, Jean-Luc. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137–186. Springer, 2006.

Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
1602.02410#38
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
40
Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
1602.02410#40
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
41
Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.

Filippova, Katja, Alfonseca, Enrique, Colmenares, Carlos A, Kaiser, Lukasz, and Vinyals, Oriol. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 360–368, 2015.

Gers, Felix A, Schmidhuber, Jürgen, and Cummins, Fred. Learning to forget: Continual prediction with lstm. Neural computation, 12(10):2451–2471, 2000.

Gillick, Dan, Brunk, Cliff, Vinyals, Oriol, and Subramanya, Amarnag. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103, 2015.
1602.02410#41
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
42
Arisoy, Ebru, Sainath, Tara N, Kingsbury, Brian, and Ramabhadran, Bhuvana. Deep neural network language models. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pp. 20–28. Association for Computational Linguistics, 2012.

Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Graves, Alex and Schmidhuber, Jürgen. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602–610, 2005.

Ballesteros, Miguel, Dyer, Chris, and Smith, Noah A. Improved transition-based parsing by modeling characters instead of words with lstms. arXiv preprint arXiv:1508.00657, 2015.

Gutmann, Michael and Hyvärinen, Aapo. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, pp. 297–304, 2010.
1602.02410#42
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
43
Bengio, Yoshua and Senécal, Jean-Sébastien. Adaptive importance sampling to accelerate training of a neural probabilistic language model. Neural Networks, IEEE Transactions on, 19(4):713–722, 2008.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

Bengio, Yoshua, Senécal, Jean-Sébastien, et al. Quick training of probabilistic neural nets by importance sampling. In AISTATS, 2003.

Ji, Shihao, Vishwanathan, S. V. N., Satish, Nadathur, Anderson, Michael J., and Dubey, Pradeep. Blackout: Speeding up recurrent neural network language models with very large vocabularies. CoRR, abs/1511.06909, 2015a. URL http://arxiv.org/abs/1511.06909.

Mikolov, Tomas and Zweig, Geoffrey. Context dependent recurrent neural network language model. In SLT, pp. 234–239, 2012.
1602.02410#43
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
44
Ji, Yangfeng, Cohn, Trevor, Kong, Lingpeng, Dyer, Chris, and Eisenstein, Jacob. Document context language models. arXiv preprint arXiv:1511.03962, 2015b.

Jozefowicz, Rafal, Zaremba, Wojciech, and Sutskever, Ilya. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 2342–2350, 2015.

Mikolov, Tomas, Karafiát, Martin, Burget, Lukas, Cernocký, Jan, and Khudanpur, Sanjeev. Recurrent neural network based language model. In INTERSPEECH, volume 2, pp. 3, 2010.

Mikolov, Tomas, Deoras, Anoop, Kombrink, Stefan, Burget, Lukas, and Cernocký, Jan. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, number s 1, pp. 605–608, 2011.
1602.02410#44
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
45
Kalchbrenner, Nal, Grefenstette, Edward, and Blunsom, Phil. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188, 2014.

Mnih, Andriy and Hinton, Geoffrey E. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pp. 1081–1088, 2009.

Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush, Alexander M. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.

Kneser, Reinhard and Ney, Hermann. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pp. 181–184. IEEE, 1995.

Mnih, Andriy and Kavukcuoglu, Koray. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pp. 2265–2273, 2013.
1602.02410#45
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
46
Morin, Frederic and Bengio, Yoshua. Hierarchical probabilistic neural network language model. In Aistats, volume 5, pp. 246–252. Citeseer, 2005.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.

Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.

Le Cun, Yann, Boser, Bernhard, Denker, John S, Henderson, D, Howard, Richard E, Hubbard, W, and Jackel, Lawrence D. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems. Citeseer, 1990.
1602.02410#46
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
47
Ling, Wang, Luís, Tiago, Marujo, Luís, Astudillo, Ramón Fernandez, Amir, Silvio, Dyer, Chris, Black, Alan W, and Trancoso, Isabel. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096, 2015.

Rush, Alexander M, Chopra, Sumit, and Weston, Jason. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.

Sak, Hasim, Senior, Andrew W, and Beaufays, Françoise. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338–342, 2014.

Schuster, Mike and Paliwal, Kuldip K. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681, 1997.
1602.02410#47
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
48
Luong, Minh-Thang, Sutskever, Ilya, Le, Quoc V, Vinyals, Oriol, and Zaremba, Wojciech. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.

Marcus, Mitchell P, Marcinkiewicz, Mary Ann, and Santorini, Beatrice. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330, 1993.

Mikolov, Tomáš. Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 2012.

Schwenk, Holger, Rousseau, Anthony, and Attik, Mohammed. Large, pruned or continuous space language models on a gpu for statistical machine translation. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pp. 11–19. Association for Computational Linguistics, 2012.
1602.02410#48
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
49
Serban, Iulian Vlad, Sordoni, Alessandro, Bengio, Yoshua, Courville, Aaron C., and Pineau, Joelle. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808, 2015. URL http://arxiv.org/abs/1507.04808.

Shazeer, Noam, Pelemans, Joris, and Chelba, Ciprian. Sparse non-negative matrix language modeling for skip-grams. Proceedings of Interspeech, pp. 1428–1432, 2015.

Srivastava, Nitish. Improving neural networks with dropout. PhD thesis, University of Toronto, 2013.

Srivastava, Nitish, Mansimov, Elman, and Salakhutdinov, Ruslan. Unsupervised learning of video representations using lstms. arXiv preprint arXiv:1502.04681, 2015a.

Srivastava, Rupesh K, Greff, Klaus, and Schmidhuber, Jürgen. Training very deep networks. In Advances in Neural Information Processing Systems, pp. 2368–2376, 2015b.
1602.02410#49
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
50
Sutskever, Ilya, Martens, James, and Hinton, Geoffrey E. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1017–1024, 2011.

Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014.

Vaswani, Ashish, Zhao, Yinggong, Fossum, Victoria, and Chiang, David. Decoding with large-scale neural language models improves translation. Citeseer.

Vincent, Pascal, de Brébisson, Alexandre, and Bouthillier, Xavier. Efficient exact gradient update for training deep networks with very large sparse targets. In Advances in Neural Information Processing Systems, pp. 1108–1116, 2015.

Vinyals, Oriol and Le, Quoc. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.

Wang, Tian and Cho, Kyunghyun. Larger-context language modelling. arXiv preprint arXiv:1511.03729, 2015.
1602.02410#50
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.02410
51
Williams, Ronald J and Peng, Jing. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural computation, 2(4):490–501, 1990.

Williams, Will, Prasad, Niranjani, Mrva, David, Ash, Tom, and Robinson, Tony. Scaling recurrent neural network language models. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 5391–5395. IEEE, 2015.

Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
1602.02410#51
Exploring the Limits of Language Modeling
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.
http://arxiv.org/pdf/1602.02410
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu
cs.CL
null
null
cs.CL
20160207
20160211
[ { "id": "1512.00103" }, { "id": "1511.03729" }, { "id": "1508.06615" }, { "id": "1502.04681" }, { "id": "1509.00685" }, { "id": "1508.00657" }, { "id": "1511.03962" }, { "id": "1508.02096" }, { "id": "1506.05869" } ]
1602.01783
0
arXiv:1602.01783v2 [cs.LG] 16 Jun 2016

# Asynchronous Methods for Deep Reinforcement Learning

Volodymyr Mnih1, Adrià Puigdomènech Badia1, Mehdi Mirza1,2, Alex Graves1, Tim Harley1, Timothy P. Lillicrap1, David Silver1, Koray Kavukcuoglu1

[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

1 Google DeepMind
2 Montreal Institute for Learning Algorithms (MILA), University of Montreal
1602.01783#0
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
1
# Abstract

We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
1602.01783#1
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
2
# 1. Introduction

Deep neural networks provide rich representations that can enable reinforcement learning (RL) algorithms to perform effectively. However, it was previously thought that the combination of simple online RL algorithms with deep neural networks was fundamentally unstable. Instead, a variety of solutions have been proposed to stabilize the algorithm (Riedmiller, 2005; Mnih et al., 2013; 2015; Van Hasselt et al., 2015; Schulman et al., 2015a). These approaches share a common idea: the sequence of observed data encountered by an online RL agent is non-stationary, and online RL updates are strongly correlated. By storing the agent’s data in an experience replay memory, the data can be batched (Riedmiller, 2005; Schulman et al., 2015a) or randomly sampled (Mnih et al., 2013; 2015; Van Hasselt et al., 2015) from different time-steps. Aggregating over memory in this way reduces non-stationarity and decorrelates updates, but at the same time limits the methods to off-policy reinforcement learning algorithms.
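A minimal sketch of the experience replay memory described above: transitions are appended as the agent acts and later sampled uniformly at random, which decorrelates updates at the cost of requiring an off-policy learner. The capacity and batch size here are illustrative defaults, not values from any particular system.

```python
import random
from collections import deque

class ReplayBuffer:
    # Stores (state, action, reward, next_state, done) transitions and
    # returns uniformly sampled mini-batches of them.
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions are evicted

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        batch = random.sample(self.buffer, batch_size)
        # transpose into parallel lists: states, actions, rewards, next_states, dones
        return [list(column) for column in zip(*batch)]
```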
1602.01783#2
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
3
Deep RL algorithms based on experience replay have achieved unprecedented success in challenging domains such as Atari 2600. However, experience replay has several drawbacks: it uses more memory and computation per real interaction, and it requires off-policy learning algorithms that can update from data generated by an older policy.

In this paper we provide a very different paradigm for deep reinforcement learning. Instead of experience replay, we asynchronously execute multiple agents in parallel, on multiple instances of the environment. This parallelism also decorrelates the agents’ data into a more stationary process, since at any given time-step the parallel agents will be experiencing a variety of different states. This simple idea enables a much larger spectrum of fundamental on-policy RL algorithms, such as Sarsa, n-step methods, and actor-critic methods, as well as off-policy RL algorithms such as Q-learning, to be applied robustly and effectively using deep neural networks.
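A rough Python sketch of this asynchronous paradigm, using Hogwild-style updates: worker processes share one set of parameters in shared memory and apply gradient updates without locks. The `make_env` and `make_loss` callables are placeholders for an environment constructor and a per-rollout loss; the paper's own implementation uses multiple CPU threads with per-thread gradient accumulation, so this is only an approximation of the idea.

```python
import torch
import torch.multiprocessing as mp

def worker(shared_model, make_env, make_loss, steps=10000, lr=1e-4):
    # Each worker rolls out in its own environment instance and applies
    # gradient updates directly to the shared parameters (no locking).
    env = make_env()
    opt = torch.optim.RMSprop(shared_model.parameters(), lr=lr)
    for _ in range(steps):
        loss = make_loss(shared_model, env)   # collect a rollout, compute a loss
        opt.zero_grad()
        loss.backward()
        opt.step()                            # writes into shared memory

def train_async(shared_model, make_env, make_loss, num_workers=16):
    shared_model.share_memory()               # place parameters in shared memory
    procs = [mp.Process(target=worker, args=(shared_model, make_env, make_loss))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Because each worker sees different states at any given time, their combined updates are far less correlated than those of a single online agent.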
1602.01783#3
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
4
Our parallel reinforcement learning paradigm also offers practical benefits. Whereas previous approaches to deep reinforcement learning rely heavily on specialized hardware such as GPUs (Mnih et al., 2015; Van Hasselt et al., 2015; Schaul et al., 2015) or massively distributed architectures (Nair et al., 2015), our experiments run on a single machine with a standard multi-core CPU. When applied to a variety of Atari 2600 domains, on many games asynchronous reinforcement learning achieves better results, in far less time than previous GPU-based algorithms, using far less resource than massively distributed approaches. The best of the proposed methods, asynchronous advantage actor-critic (A3C), also mastered a variety of continuous motor control tasks as well as learned general strategies for exploring 3D mazes purely from visual inputs. We believe that the success of A3C on both 2D and 3D games, discrete and continuous action spaces, as well as its ability to train feedforward and recurrent agents, makes it the most general and successful reinforcement learning agent to date.
1602.01783#4
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
5
# 2. Related Work

The General Reinforcement Learning Architecture (Gorila) of (Nair et al., 2015) performs asynchronous training of reinforcement learning agents in a distributed setting. In Gorila, each process contains an actor that acts in its own copy of the environment, a separate replay memory, and a learner that samples data from the replay memory and computes gradients of the DQN loss (Mnih et al., 2015) with respect to the policy parameters. The gradients are asynchronously sent to a central parameter server which updates a central copy of the model. The updated policy parameters are sent to the actor-learners at fixed intervals. By using 100 separate actor-learner processes and 30 parameter server instances, a total of 130 machines, Gorila was able to significantly outperform DQN over 49 Atari games. On many games Gorila reached the score achieved by DQN over 20 times faster than DQN. We also note that a similar way of parallelizing DQN was proposed by (Chavez et al., 2015).
1602.01783#5
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
7
# 3. Reinforcement Learning Background

We consider the standard reinforcement learning setting where an agent interacts with an environment $\mathcal{E}$ over a number of discrete time steps. At each time step $t$, the agent receives a state $s_t$ and selects an action $a_t$ from some set of possible actions $\mathcal{A}$ according to its policy $\pi$, where $\pi$ is a mapping from states $s_t$ to actions $a_t$. In return, the agent receives the next state $s_{t+1}$ and receives a scalar reward $r_t$. The process continues until the agent reaches a terminal state, after which the process restarts. The return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$ is the total accumulated return from time step $t$ with discount factor $\gamma \in (0, 1]$. The goal of the agent is to maximize the expected return from each state $s_t$.

The action value $Q^{\pi}(s, a) = \mathbb{E}[R_t \mid s_t = s, a]$ is the expected return for selecting action $a$ in state $s$ and following policy $\pi$. The optimal value function $Q^{*}(s, a) = \max_{\pi} Q^{\pi}(s, a)$ gives the maximum action value for state $s$ and action $a$ achievable by any policy. Similarly, the value of state $s$ under policy $\pi$ is defined as $V^{\pi}(s) = \mathbb{E}[R_t \mid s_t = s]$ and is simply the expected return for following policy $\pi$ from state $s$.
1602.01783#7
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
8
In earlier work, (Li & Schuurmans, 2011) applied the Map Reduce framework to parallelizing batch reinforcement learning methods with linear function approximation. Parallelism was used to speed up large matrix operations but not to parallelize the collection of experience or stabilize learning. (Grounds & Kudenko, 2008) proposed a parallel version of the Sarsa algorithm that uses multiple separate actor-learners to accelerate training. Each actor-learner learns separately and periodically sends updates to weights that have changed significantly to the other learners using peer-to-peer communication.

In value-based model-free reinforcement learning methods, the action value function is represented using a function approximator, such as a neural network. Let $Q(s, a; \theta)$ be an approximate action-value function with parameters $\theta$. The updates to $\theta$ can be derived from a variety of reinforcement learning algorithms. One example of such an algorithm is Q-learning, which aims to directly approximate the optimal action value function: $Q^{*}(s, a) \approx Q(s, a; \theta)$. In one-step Q-learning, the parameters $\theta$ of the action value function $Q(s, a; \theta)$ are learned by iteratively minimizing a sequence of loss functions, where the $i$-th loss function is defined as
1602.01783#8
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
9
(Tsitsiklis, 1994) studied convergence properties of Q-learning in the asynchronous optimization setting. These results show that Q-learning is still guaranteed to converge when some of the information is outdated as long as outdated information is always eventually discarded and several other technical assumptions are satisfied. Even earlier, (Bertsekas, 1982) studied the related problem of distributed dynamic programming. Another related area of work is in evolutionary methods, which are often straightforward to parallelize by distributing fitness evaluations over multiple machines or threads (Tomassini, 1999). Such parallel evolutionary approaches

Li(θi) = E[(r + γ maxa′ Q(s′, a′; θi−1) − Q(s, a; θi))²]

where s′ is the state encountered after state s.
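As a hedged illustration of the reconstructed loss above (a sketch, not the paper's implementation), the snippet below computes the squared TD error for a single transition; `q_online` and `q_target` are hypothetical callables standing in for the networks with parameters θi and θi−1:

```python
import numpy as np

def one_step_q_loss(q_online, q_target, s, a, r, s_next, terminal, gamma=0.99):
    """(r + gamma * max_a' Q(s', a'; theta_{i-1}) - Q(s, a; theta_i))^2.
    The bootstrap term is treated as a fixed target (no gradient flows
    through q_target), matching the use of older parameters theta_{i-1}."""
    bootstrap = 0.0 if terminal else gamma * np.max(q_target(s_next))
    td_error = (r + bootstrap) - q_online(s)[a]
    return td_error ** 2
```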
1602.01783#9
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
10
where s′ is the state encountered after state s. We refer to the above method as one-step Q-learning because it updates the action value Q(s, a) toward the one-step return r + γ maxa′ Q(s′, a′; θ). One drawback of using one-step methods is that obtaining a reward r only directly affects the value of the state action pair s, a that led to the reward. The values of other state action pairs are affected only indirectly through the updated value Q(s, a). This can make the learning process slow since many updates are required to propagate a reward to the relevant preceding states and actions. One way of propagating rewards faster is by using n-step returns (Watkins, 1989; Peng & Williams, 1996). In n-step Q-learning, Q(s, a) is updated toward the n-step return defined as rt + γrt+1 + · · · + γn−1rt+n−1 + maxa γnQ(st+n, a). This results in a single reward r directly affecting the values of n preceding state action pairs. This makes the process of propagating rewards to relevant state-action pairs potentially much more efficient.
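A minimal sketch of the n-step target just described, under the assumption that `rewards` holds r_t, ..., r_{t+n−1} and `q_target` is a hypothetical stand-in for the target action-value function:

```python
import numpy as np

def n_step_q_target(rewards, s_boot, q_target, gamma=0.99, terminal=False):
    """r_t + gamma*r_{t+1} + ... + gamma^{n-1}*r_{t+n-1}
       + gamma^n * max_a Q(s_{t+n}, a), built by a backward recursion."""
    ret = 0.0 if terminal else np.max(q_target(s_boot))
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret
```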
1602.01783#10
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
11
In contrast to value-based methods, policy-based model-free methods directly parameterize the policy π(a|s; θ) and update the parameters θ by performing, typically approximate, gradient ascent on E[Rt]. One example of such a method is the REINFORCE family of algorithms due to Williams (1992). Standard REINFORCE updates the policy parameters θ in the direction ∇θ log π(at|st; θ)Rt, which is an unbiased estimate of ∇θE[Rt]. It is possible to reduce the variance of this estimate while keeping it unbiased by subtracting a learned function of the state bt(st), known as a baseline (Williams, 1992), from the return. The resulting gradient is ∇θ log π(at|st; θ) (Rt − bt(st)). Algorithm 1 Asynchronous one-step Q-learning - pseudocode for each actor-learner thread.
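The sketch below illustrates the REINFORCE-with-baseline gradient for a deliberately simplified linear-softmax policy (logits = W @ state); it shows the update direction only and is not the paper's neural-network policy:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def reinforce_baseline_grad(W, state, action, ret, baseline):
    """grad_theta log pi(a_t|s_t; theta) * (R_t - b_t(s_t)) for a policy
    with logits = W @ state; for softmax, d log pi(a)/d logits = onehot(a) - pi."""
    probs = softmax(W @ state)
    dlogits = -probs
    dlogits[action] += 1.0
    # Chain rule through the linear layer: outer product with the state.
    return np.outer(dlogits, state) * (ret - baseline)

# Gradient ascent step: W += learning_rate * reinforce_baseline_grad(W, s, a, R, b)
```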
1602.01783#11
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
12
Algorithm 1 Asynchronous one-step Q-learning - pseudocode for each actor-learner thread.
// Assume global shared θ, θ−, and counter T = 0.
Initialize thread step counter t ← 0
Initialize target network weights θ− ← θ
Initialize network gradients dθ ← 0
Get initial state s
repeat
  Take action a with ε-greedy policy based on Q(s, a; θ)
  Receive new state s′ and reward r
  y = r for terminal s′
  y = r + γ maxa′ Q(s′, a′; θ−) for non-terminal s′
  Accumulate gradients wrt θ: dθ ← dθ + ∂(y − Q(s, a; θ))²/∂θ
  s = s′
  T ← T + 1 and t ← t + 1
  if T mod Itarget == 0 then
    Update the target network θ− ← θ
  end if
  if t mod IAsyncUpdate == 0 or s is terminal then
    Perform asynchronous update of θ using dθ.
    Clear gradients dθ ← 0.
  end if
until T > Tmax
1602.01783#12
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
13
A learned estimate of the value function is commonly used as the baseline bt(st) ≈ V π(st), leading to a much lower variance estimate of the policy gradient. When an approximate value function is used as the baseline, the quantity Rt − bt used to scale the policy gradient can be seen as an estimate of the advantage of action at in state st, or A(at, st) = Q(at, st) − V(st), because Rt is an estimate of Qπ(at, st) and bt is an estimate of V π(st). This approach can be viewed as an actor-critic architecture where the policy π is the actor and the baseline bt is the critic (Sutton & Barto, 1998; Degris et al., 2012).
# 4. Asynchronous RL Framework
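A tiny sketch of this advantage estimate (illustrative names, not the paper's code):

```python
import numpy as np

def advantage_estimates(returns, values):
    """A(a_t, s_t) ~ R_t - V(s_t): the returns estimate Q^pi(a_t, s_t) and the
    learned baseline values estimate V^pi(s_t); the difference is what scales
    grad log pi(a_t|s_t) in the actor-critic update."""
    return np.asarray(returns, dtype=float) - np.asarray(values, dtype=float)
```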
1602.01783#13
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
14
# 4. Asynchronous RL Framework
learners running in parallel are likely to be exploring different parts of the environment. Moreover, one can explicitly use different exploration policies in each actor-learner to maximize this diversity. By running different exploration policies in different threads, the overall changes being made to the parameters by multiple actor-learners applying online updates in parallel are likely to be less correlated in time than a single agent applying online updates. Hence, we do not use a replay memory and rely on parallel actors employing different exploration policies to perform the stabilizing role undertaken by experience replay in the DQN training algorithm. We now present multi-threaded asynchronous variants of one-step Sarsa, one-step Q-learning, n-step Q-learning, and advantage actor-critic. The aim in designing these methods was to find RL algorithms that can train deep neural network policies reliably and without large resource requirements. While the underlying RL methods are quite different, with actor-critic being an on-policy policy search method and Q-learning being an off-policy value-based method, we use two main ideas to make all four algorithms practical given our design goal.
1602.01783#14
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
15
In addition to stabilizing learning, using multiple parallel actor-learners has multiple practical benefits. First, we obtain a reduction in training time that is roughly linear in the number of parallel actor-learners. Second, since we no longer rely on experience replay for stabilizing learning we are able to use on-policy reinforcement learning methods such as Sarsa and actor-critic to train neural networks in a stable way. We now describe our variants of one-step Q-learning, one-step Sarsa, n-step Q-learning and advantage actor-critic. First, we use asynchronous actor-learners, similarly to the Gorila framework (Nair et al., 2015), but instead of using separate machines and a parameter server, we use multiple CPU threads on a single machine. Keeping the learners on a single machine removes the communication costs of sending gradients and parameters and enables us to use Hogwild! (Recht et al., 2011) style updates for training.
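A minimal sketch of Hogwild!-style lock-free updates from several threads sharing one parameter vector; the gradient here is a placeholder, whereas a real actor-learner would compute it from its own interaction with its own copy of the environment:

```python
import threading
import numpy as np

# Globally shared parameters, updated Hogwild!-style: each thread writes its
# own update without locks, tolerating occasional overwrites between threads.
theta = np.zeros(8)

def actor_learner(seed, steps=1000, lr=1e-2):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        # Placeholder gradient standing in for a real Q-learning or
        # actor-critic gradient computed by this thread.
        grad = rng.normal(size=theta.shape)
        theta[:] = theta - lr * grad   # lock-free, in-place update

threads = [threading.Thread(target=actor_learner, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```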
1602.01783#15
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
16
Second, we make the observation that multiple actor-
Asynchronous one-step Q-learning: Pseudocode for our variant of Q-learning, which we call Asynchronous one-step Q-learning, is shown in Algorithm 1. Each thread interacts with its own copy of the environment and at each step computes a gradient of the Q-learning loss. We use a shared and slowly changing target network in computing the Q-learning loss, as was proposed in the DQN training method. We also accumulate gradients over multiple timesteps before they are applied, which is similar to using minibatches. This reduces the chances of multiple actor-learners overwriting each other’s updates. Accumulating updates over several steps also provides some ability to trade off computational efficiency for data efficiency. Finally, we found that giving each thread a different exploration policy helps improve robustness. Adding diversity to exploration in this manner also generally improves performance through better exploration. While there are many possible ways of making the exploration policies differ, we experiment with using ε-greedy exploration with ε periodically sampled from some distribution by each thread; a small sketch of this per-thread sampling follows below.
by tmax. The pseudocode for the algorithm is presented in Supplementary Algorithm S3.
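Returning to the per-thread ε-greedy exploration described above, a minimal sketch (the candidate ε values and their probabilities are illustrative, not the paper's exact choices):

```python
import numpy as np

def sample_epsilon(rng):
    """Each thread periodically draws its own epsilon so that threads explore
    differently; the values and probabilities below are assumptions made for
    illustration only."""
    return rng.choice([0.5, 0.1, 0.01], p=[0.3, 0.4, 0.3])

def epsilon_greedy(q_values, epsilon, rng):
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # explore: random action
    return int(np.argmax(q_values))               # exploit: greedy action
```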
1602.01783#16
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
17
by tmax. The pseudocode for the algorithm is presented in Supplementary Algorithm S3. As with the value-based methods we rely on parallel actor-learners and accumulated updates for improving training stability. Note that while the parameters θ of the policy and θv of the value function are shown as being separate for generality, we always share some of the parameters in practice. We typically use a convolutional neural network that has one softmax output for the policy π(at|st; θ) and one linear output for the value function V(st; θv), with all non-output layers shared. Asynchronous one-step Sarsa: The asynchronous one-step Sarsa algorithm is the same as asynchronous one-step Q-learning as given in Algorithm 1 except that it uses a different target value for Q(s, a). The target value used by one-step Sarsa is r + γQ(s′, a′; θ−) where a′ is the action taken in state s′ (Rummery & Niranjan, 1994; Sutton & Barto, 1998). We again use a target network and updates accumulated over multiple timesteps to stabilize learning.
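A one-line sketch of that Sarsa target (`q_target` is a hypothetical stand-in for the target network):

```python
def one_step_sarsa_target(r, s_next, a_next, q_target, gamma=0.99, terminal=False):
    """r + gamma * Q(s', a'; theta^-), where a' is the action actually taken
    in s' (contrast with the max over a' used by one-step Q-learning)."""
    return r if terminal else r + gamma * q_target(s_next)[a_next]
```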
1602.01783#17
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
18
Asynchronous n-step Q-learning: Pseudocode for our variant of multi-step Q-learning is shown in Supplementary Algorithm S2. The algorithm is somewhat unusual because it operates in the forward view by explicitly computing n-step returns, as opposed to the more common backward view used by techniques like eligibility traces (Sutton & Barto, 1998). We found that using the forward view is easier when training neural networks with momentum-based methods and backpropagation through time. In order to compute a single update, the algorithm first selects actions using its exploration policy for up to tmax steps or until a terminal state is reached. This process results in the agent receiving up to tmax rewards from the environment since its last update. The algorithm then computes gradients for n-step Q-learning updates for each of the state-action pairs encountered since the last update. Each n-step update uses the longest possible n-step return, resulting in a one-step update for the last state, a two-step update for the second last state, and so on for a total of up to tmax updates. The accumulated updates are applied in a single gradient step.
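A minimal sketch of these forward-view targets for one segment of up to tmax transitions (a simplification, not the paper's code):

```python
import numpy as np

def forward_view_targets(rewards, bootstrap, gamma=0.99):
    """Targets for a segment of up to t_max transitions: the last state gets a
    1-step target, the one before it a 2-step target, and so on, all sharing
    the same bootstrap value (0 for a terminal segment, else
    max_a Q(s_{t+n}, a; theta^-))."""
    R = bootstrap
    targets = np.zeros(len(rewards))
    for i in reversed(range(len(rewards))):
        R = rewards[i] + gamma * R
        targets[i] = R
    return targets
```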
1602.01783#18
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
19
We also found that adding the entropy of the policy π to the objective function improved exploration by discouraging premature convergence to suboptimal deterministic policies. This technique was originally proposed by (Williams & Peng, 1991), who found that it was particularly helpful on tasks requiring hierarchical behavior. The gradient of the full objective function including the entropy regularization term with respect to the policy parameters takes the form ∇θ′ log π(at|st; θ′)(Rt − V(st; θv)) + β∇θ′H(π(st; θ′)), where H is the entropy. The hyperparameter β controls the strength of the entropy regularization term. Optimization: We investigated three different optimization algorithms in our asynchronous framework – SGD with momentum, RMSProp (Tieleman & Hinton, 2012) without shared statistics, and RMSProp with shared statistics. We used the standard non-centered RMSProp update given by

g ← αg + (1 − α)Δθ² and θ ← θ − ηΔθ/√(g + ε)    (1)

where all operations are performed elementwise. A comparison on a subset of Atari 2600 games showed that a variant of RMSProp where statistics g are shared across threads is considerably more robust than the other two methods. Full details of the methods and comparisons are included in Supplementary Section 7.
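A sketch of the entropy bonus and of the non-centered RMSProp step of Eq. (1); when the statistics array g is shared by all threads this corresponds to the "shared statistics" variant. Hyperparameter values here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """H(pi) = -sum_a pi(a) log pi(a); adding beta * H to the objective
    discourages premature convergence to a deterministic policy."""
    return -np.sum(probs * np.log(probs + eps))

def shared_rmsprop_step(theta, g, dtheta, lr=1e-3, alpha=0.99, eps=1e-8):
    """Non-centered RMSProp of Eq. (1), applied elementwise; sharing the
    array g across threads gives the 'shared statistics' variant."""
    g[:] = alpha * g + (1.0 - alpha) * dtheta ** 2
    theta[:] = theta - lr * dtheta / np.sqrt(g + eps)
```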
1602.01783#19
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
20
Asynchronous advantage actor-critic: The algorithm, which we call asynchronous advantage actor-critic (A3C), maintains a policy π(at|st; θ) and an estimate of the value function V(st; θv). Like our variant of n-step Q-learning, our variant of actor-critic also operates in the forward view and uses the same mix of n-step returns to update both the policy and the value function. The policy and the value function are updated after every tmax actions or when a terminal state is reached. The update performed by the algorithm can be seen as ∇θ′ log π(at|st; θ′)A(st, at; θ, θv), where A(st, at; θ, θv) is an estimate of the advantage function given by Σ_{i=0}^{k−1} γ^i rt+i + γ^k V(st+k; θv) − V(st; θv), where k can vary from state to state and is upper-bounded
# 5. Experiments
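A minimal sketch of these mixed n-step advantage estimates for one rollout (illustrative, not the paper's implementation):

```python
import numpy as np

def a3c_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """Advantages sum_{i<k} gamma^i r_{t+i} + gamma^k V(s_{t+k}; theta_v)
    - V(s_t; theta_v) for a rollout of up to t_max steps. `values` holds
    V(s_t), ..., V(s_{t+k-1}); `bootstrap_value` is V(s_{t+k}) (0 if the
    rollout ended in a terminal state)."""
    R = bootstrap_value
    adv = np.zeros(len(rewards))
    for i in reversed(range(len(rewards))):
        R = rewards[i] + gamma * R
        adv[i] = R - values[i]
    return adv

# The policy update follows grad log pi(a_t|s_t; theta') * adv[t], while the
# value head is regressed toward the same n-step returns R.
```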
1602.01783#20
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
21
# 5. Experiments
We use four different platforms for assessing the properties of the proposed framework. We perform most of our experiments using the Arcade Learning Environment (Bellemare et al., 2012), which provides a simulator for Atari 2600 games. This is one of the most commonly used benchmark environments for RL algorithms. We use the Atari domain to compare against state of the art results (Van Hasselt et al., 2015; Wang et al., 2015; Schaul et al., 2015; Nair et al., 2015; Mnih et al., 2015), as well as to carry out a detailed stability and scalability analysis of the proposed methods. We performed further comparisons using the TORCS 3D car racing simulator (Wymann et al., 2013). We also use
1602.01783#21
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
22
[Figure 1: learning curves (score vs. training time in hours) for DQN, 1-step Q, 1-step SARSA, n-step Q, and A3C on Beamrider, Breakout, Pong, Q*bert, and Space Invaders; see the caption below.]
1602.01783#22
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
23
Figure 1. Learning speed comparison for DQN and the new asynchronous algorithms on five Atari 2600 games. DQN was trained on a single Nvidia K40 GPU while the asynchronous methods were trained using 16 CPU cores. The plots are averaged over 5 runs. In the case of DQN the runs were for different seeds with fixed hyperparameters. For asynchronous methods we average over the best 5 models from 50 experiments with learning rates sampled from LogUniform(10−4, 10−2) and all other hyperparameters fixed.
two additional domains to evaluate only the A3C algorithm – MuJoCo and Labyrinth. MuJoCo (Todorov, 2015) is a physics simulator for evaluating agents on continuous motor control tasks with contact dynamics. Labyrinth is a new 3D environment where the agent must learn to find rewards in randomly generated mazes from a visual input. The precise details of our experimental setup can be found in Supplementary Section 8.
1602.01783#23
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
25
# 5.1. Atari 2600 Games
We first present results on a subset of Atari 2600 games to demonstrate the training speed of the new methods. Figure 1 compares the learning speed of the DQN algorithm trained on an Nvidia K40 GPU with the asynchronous methods trained using 16 CPU cores on five Atari 2600 games. The results show that all four asynchronous methods we presented can successfully train neural network controllers on the Atari domain. The asynchronous methods tend to learn faster than DQN, with significantly faster learning on some games, while training on only 16 CPU cores. Additionally, the results suggest that n-step methods learn faster than one-step methods on some games. Overall, the policy-based advantage actor-critic method significantly outperforms all three value-based methods.
1602.01783#25
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
26
We then evaluated asynchronous advantage actor-critic on 57 Atari games. In order to compare with the state of the art in Atari game playing, we largely followed the training and evaluation protocol of (Van Hasselt et al., 2015). Specifically, we tuned hyperparameters (learning rate and amount of gradient norm clipping) using a search on six Atari games (Beamrider, Breakout, Pong, Q*bert, Seaquest and Space Invaders) and then fixed all hyperparameters for all 57 games. We trained both a feedforward agent with the same architecture as (Mnih et al., 2015; Nair et al., 2015; Van Hasselt et al., 2015) as well as a recurrent agent with an additional 256 LSTM cells after the final hidden layer. We additionally used the final network weights for evaluation to make the results more comparable to the original results
Table 1. Mean and median human-normalized scores on 57 Atari games using the human starts evaluation metric. Supplementary Table S3 shows the raw scores for all games.
1602.01783#26
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
27
Table 1. Mean and median human-normalized scores on 57 Atari games using the human starts evaluation metric. Supplementary Table S3 shows the raw scores for all games.
from (Bellemare et al., 2012). We trained our agents for four days using 16 CPU cores, while the other agents were trained for 8 to 10 days on Nvidia K40 GPUs. Table 1 shows the average and median human-normalized scores obtained by our agents trained by asynchronous advantage actor-critic (A3C) as well as the current state-of-the-art. Supplementary Table S3 shows the scores on all games. A3C significantly improves on the state of the art in average score over 57 games in half the training time of the other methods while using only 16 CPU cores and no GPU. Furthermore, after just one day of training, A3C matches the average human normalized score of Dueling Double DQN and almost reaches the median human normalized score of Gorila. We note that many of the improvements that are presented in Double DQN (Van Hasselt et al., 2015) and Dueling Double DQN (Wang et al., 2015) can be incorporated into the 1-step Q and n-step Q methods presented in this work with similar potential improvements.
# 5.2. TORCS Car Racing Simulator
1602.01783#27
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
28
# 5.2. TORCS Car Racing Simulator
We also compared the four asynchronous methods on the TORCS 3D car racing game (Wymann et al., 2013). TORCS not only has more realistic graphics than Atari 2600 games, but also requires the agent to learn the dynamics of the car it is controlling. At each step, an agent received only a visual input in the form of an RGB image of the current frame as well as a reward proportional to the agent’s velocity along the center of the track at the agent’s current position. We used the same neural network architecture as the one used in the Atari experiments specified in Supplementary Section 8. We performed experiments using four different settings – the agent controlling a slow car with and without opponent bots, and the agent controlling a fast car with and without opponent bots. Full results can be found in Supplementary Figure S6. A3C was the best performing agent, reaching between roughly 75% and 90% of the score obtained by a human tester on all four game configurations in about 12 hours of training. A video showing the learned driving behavior of the A3C agent can be found at https://youtu.be/0xo1Ldx3L5Q.
1602.01783#28
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
29
| Method | 1 thread | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| 1-step Q | 1.0 | 3.0 | 6.3 | 13.3 | 24.1 |
| 1-step SARSA | 1.0 | 2.8 | 5.9 | 13.1 | 22.1 |
| n-step Q | 1.0 | 2.7 | 5.9 | 10.7 | 17.2 |
| A3C | 1.0 | 2.1 | 3.7 | 6.9 | 12.5 |

Table 2. The average training speedup for each method and number of threads averaged over seven Atari games. To compute the training speed-up on a single game we measured the time required to reach a fixed reference score using each method and number of threads. The speedup from using n threads on a game was defined as the time required to reach a fixed reference score using one thread divided by the time required to reach the reference score using n threads. The table shows the speedups averaged over seven Atari games (Beamrider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders).
# 5.3. Continuous Action Control Using the MuJoCo Physics Simulator
1602.01783#29
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
30
# 5.3. Continuous Action Control Using the MuJoCo Physics Simulator
We also examined a set of tasks where the action space is continuous. In particular, we looked at a set of rigid body physics domains with contact dynamics where the tasks include many examples of manipulation and locomotion. These tasks were simulated using the MuJoCo physics engine. We evaluated only the asynchronous advantage actor-critic algorithm since, unlike the value-based methods, it is easily extended to continuous actions. In all problems, using either the physical state or pixels as input, asynchronous advantage actor-critic found good solutions in less than 24 hours of training and typically in under a few hours. Some successful policies learned by our agent can be seen in the following video https://youtu.be/Ajjc08-iPx8. Further details about this experiment can be found in Supplementary Section 9.
# 5.4. Labyrinth
1602.01783#30
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
31
# 5.4. Labyrinth
We performed an additional set of experiments with A3C on a new 3D environment called Labyrinth. The specific task we considered involved the agent learning to find rewards in randomly generated mazes. At the beginning of each episode the agent was placed in a new randomly generated maze consisting of rooms and corridors. Each maze contained two types of objects that the agent was rewarded for finding – apples and portals. Picking up an apple led to a reward of 1. Entering a portal led to a reward of 10 after which the agent was respawned in a new random location in the maze and all previously collected apples were regenerated. An episode terminated after 60 seconds after which a new episode would begin. The aim of the agent is to collect as many points as possible in the time limit and the optimal strategy involves first finding the portal and then repeatedly going back to it after each respawn. This task is much more challenging than the TORCS driving domain because the agent is faced with a new maze in each episode and must learn a general strategy for exploring random mazes.
1602.01783#31
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
33
# 5.5. Scalability and Data Efficiency
We analyzed the effectiveness of our proposed framework by looking at how the training time and data efficiency change with the number of parallel actor-learners. When using multiple workers in parallel and updating a shared model, one would expect that in an ideal case, for a given task and algorithm, the number of training steps to achieve a certain score would remain the same with varying numbers of workers. Therefore, the advantage would be solely due to the ability of the system to consume more data in the same amount of wall clock time and possibly improved exploration. Table 2 shows the training speed-up achieved by using increasing numbers of parallel actor-learners averaged over seven Atari games. These results show that all four methods achieve substantial speedups from using multiple worker threads, with 16 threads leading to at least an order of magnitude speedup. This confirms that our proposed framework scales well with the number of parallel workers, making efficient use of resources.
1602.01783#33
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
34
Somewhat surprisingly, the asynchronous one-step Q-learning and Sarsa algorithms exhibit superlinear speedups that cannot be explained by purely computational gains. We observe that one-step methods (one-step Q and one-step Sarsa) often require less data to achieve a particular score when using more parallel actor-learners. We believe this is due to the positive effect of multiple threads reducing the bias in one-step methods. These effects are shown more clearly in Figure 3, which shows plots of the average score against the total number of training frames for different numbers of actor-learners and training methods on five Atari games, and Figure 4, which shows plots of the average score against wall-clock time.
Figure 2. Scatter plots of scores obtained by asynchronous advantage actor-critic on five games (Beamrider, Breakout, Pong, Q*bert, Space Invaders) for 50 different learning rates and random initializations. On each game, there is a wide range of learning rates for which all random initializations achieve good scores. This shows that A3C is quite robust to learning rates and initial random weights.
# 5.6. Robustness and Stability
1602.01783#34
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
35
# 5.6. Robustness and Stability
substantially improve the data efficiency of these methods by reusing old data. This could in turn lead to much faster training times in domains like TORCS where interacting with the environment is more expensive than updating the model for the architecture we used. Finally, we analyzed the stability and robustness of the four proposed asynchronous algorithms. For each of the four algorithms we trained models on five games (Breakout, Beamrider, Pong, Q*bert, Space Invaders) using 50 different learning rates and random initializations. Figure 2 shows scatter plots of the resulting scores for A3C, while Supplementary Figure S11 shows plots for the other three methods. There is usually a range of learning rates for each method and game combination that leads to good scores, indicating that all methods are quite robust to the choice of learning rate and random initialization. The fact that there are virtually no points with scores of 0 in regions with good learning rates indicates that the methods are stable and do not collapse or diverge once they are learning.
# 6. Conclusions and Discussion
1602.01783#35
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
36
# 6. Conclusions and Discussion
We have presented asynchronous versions of four standard reinforcement learning algorithms and showed that they are able to train neural network controllers on a variety of domains in a stable manner. Our results show that in our proposed framework stable training of neural networks through reinforcement learning is possible with both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as continuous domains. When trained on the Atari domain using 16 CPU cores, the proposed asynchronous algorithms train faster than DQN trained on an Nvidia K40 GPU, with A3C surpassing the current state-of-the-art in half the training time.
1602.01783#36
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
37
Combining other existing reinforcement learning methods or recent advances in deep reinforcement learning with our asynchronous framework presents many possibilities for immediate improvements to the methods we presented. While our n-step methods operate in the forward view (Sutton & Barto, 1998) by using corrected n-step returns directly as targets, it has been more common to use the backward view to implicitly combine different returns through eligibility traces (Watkins, 1989; Sutton & Barto, 1998; Peng & Williams, 1996). The asynchronous advantage actor-critic method could be potentially improved by using other ways of estimating the advantage function, such as generalized advantage estimation of (Schulman et al., 2015b). All of the value-based methods we investigated could benefit from different ways of reducing overestimation bias of Q-values (Van Hasselt et al., 2015; Bellemare et al., 2016). Yet another, more speculative, direction is to try and combine the recent work on true online temporal difference methods (van Seijen et al., 2015) with non-linear function approximation.
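As a concrete illustration of the forward-view targets mentioned above, here is a minimal sketch of how corrected n-step return targets can be computed from a short rollout of rewards and a bootstrap value; the function name and arguments are hypothetical and not taken from the authors' code.

```python
# Minimal sketch (hypothetical helper): corrected n-step return targets
# computed in the forward view from a short rollout, as used by the
# asynchronous n-step methods.
def n_step_targets(rewards, bootstrap_value, gamma=0.99):
    """rewards: [r_tstart, ..., r_{t-1}]; bootstrap_value: 0 for a terminal
    state, otherwise V(s_t) or max_a Q(s_t, a) depending on the method."""
    R = bootstrap_value
    targets = [0.0] * len(rewards)
    # Work backwards so each target becomes r_i + gamma * R.
    for i in reversed(range(len(rewards))):
        R = rewards[i] + gamma * R
        targets[i] = R
    return targets
```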
1602.01783#37
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
38
In addition to these algorithmic improvements, a number of complementary improvements to the neural network architecture are possible. The dueling architecture of (Wang et al., 2015) has been shown to produce more accurate estimates of Q-values by including separate streams for the state value and advantage in the network. The spatial softmax proposed by (Levine et al., 2015) could improve both value-based and policy-based methods by making it easier for the network to represent feature coordinates.

One of our main findings is that using parallel actor-learners to update a shared model had a stabilizing effect on the learning process of the three value-based methods we considered. While this shows that stable online Q-learning is possible without experience replay, which was used for this purpose in DQN, it does not mean that experience replay is not useful. Incorporating experience replay into the asynchronous reinforcement learning framework could

# ACKNOWLEDGMENTS

We thank Thomas Degris, Remi Munos, Marc Lanctot, Sasha Vezhnevets and Joseph Modayil for many helpful discussions, suggestions and comments on the paper. We also thank the DeepMind evaluation team for setting up the environments used to evaluate the agents in the paper.
1602.01783#38
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
39
Figure 3. Data efficiency comparison of different numbers of actor-learners for three asynchronous methods on five Atari games. The x-axis shows the total number of training epochs where an epoch corresponds to four million frames (across all threads). The y-axis shows the average score. Each curve shows the average over the three best learning rates. Single step methods show increased data efficiency from more parallel workers. Results for Sarsa are shown in Supplementary Figure S9.

Figure 4. Training speed comparison of different numbers of actor-learners on five Atari games. The x-axis shows training time in hours while the y-axis shows the average score. Each curve shows the average over the three best learning rates. All asynchronous methods show significant speedups from using greater numbers of parallel actor-learners. Results for Sarsa are shown in Supplementary Figure S10.

# References
1602.01783#39
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
40
# References

Bellemare, Marc G, Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Bellemare, Marc G., Ostrovski, Georg, Guez, Arthur, Thomas, Philip S., and Munos, Rémi. Increasing the action gap: New operators for reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2016.

Bertsekas, Dimitri P. Distributed dynamic programming. Automatic Control, IEEE Transactions on, 27(3):610–616, 1982.

Chavez, Kevin, Ong, Hao Yi, and Hong, Augustus. Distributed deep q-learning. Technical report, Stanford University, June 2015.

Degris, Thomas, Pilarski, Patrick M, and Sutton, Richard S. Model-free reinforcement learning with continuous action in practice. In American Control Conference (ACC), 2012, pp. 2177–2182. IEEE, 2012.
1602.01783#40
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
41
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig, Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wierstra, Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 02 2015. URL http://dx.doi.org/10.1038/nature14236.

Nair, Arun, Srinivasan, Praveen, Blackwell, Sam, Alcicek, Cagdas, Fearon, Rory, Maria, Alessandro De, Panneershelvam, Vedavyas, Suleyman, Mustafa, Beattie, Charles, Petersen, Stig, Legg, Shane, Mnih, Volodymyr, Kavukcuoglu, Koray, and Silver, David. Massively parallel methods for deep reinforcement learning. In ICML Deep Learning Workshop. 2015.
1602.01783#41
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
42
Peng, Jing and Williams, Ronald J. Incremental multi-step q-learning. Machine Learning, 22(1-3):283–290, 1996.

Recht, Benjamin, Re, Christopher, Wright, Stephen, and Niu, Feng. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 693–701, 2011.

Grounds, Matthew and Kudenko, Daniel. Parallel reinforcement learning with linear function approximation. In Proceedings of the 5th, 6th and 7th European Conference on Adaptive and Learning Agents and Multi-agent Systems: Adaptation and Multi-agent Learning, pp. 60–74. Springer-Verlag, 2008.

Riedmiller, Martin. Neural fitted q iteration – first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pp. 317–328. Springer Berlin Heidelberg, 2005.

Koutník, Jan, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 conference on Genetic and evolutionary computation, pp. 541–548. ACM, 2014.
1602.01783#42
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
43
Rummery, Gavin A and Niranjan, Mahesan. On-line q-learning using connectionist systems. 1994.

Schaul, Tom, Quan, John, Antonoglou, Ioannis, and Silver, David. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.

Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.

Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015a.

Li, Yuxi and Schuurmans, Dale. Mapreduce for parallel reinforcement learning. In Recent Advances in Reinforcement Learning - 9th European Workshop, EWRL 2011, Athens, Greece, September 9-11, 2011, Revised Selected Papers, pp. 309–320, 2011.
1602.01783#43
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
44
Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.

Lillicrap, Timothy P, Hunt, Jonathan J, Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Sutton, R. and Barto, A. Reinforcement Learning: an Introduction. MIT Press, 1998.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.

Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing atari with deep reinforcement learning. In NIPS Deep Learning Workshop. 2013.
1602.01783#44
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
45
Todorov, E. MuJoCo: Modeling, Simulation and Visualization of Multi-Joint Dynamics with Contact (ed 1.0). Roboti Publishing, 2015.

Tomassini, Marco. Parallel and distributed evolutionary algorithms: A review. Technical report, 1999.

Tsitsiklis, John N. Asynchronous stochastic approximation and q-learning. Machine Learning, 16(3):185–202, 1994.

Van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double q-learning. arXiv preprint arXiv:1509.06461, 2015.

van Seijen, H., Rupam Mahmood, A., Pilarski, P. M., Machado, M. C., and Sutton, R. S. True Online Temporal-Difference Learning. ArXiv e-prints, December 2015.

Wang, Z., de Freitas, N., and Lanctot, M. Dueling Network Architectures for Deep Reinforcement Learning. ArXiv e-prints, November 2015.

Watkins, Christopher John Cornish Hellaby. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.
1602.01783#45
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
46
Watkins, Christopher John Cornish Hellaby. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.

Williams, R.J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.

Williams, Ronald J and Peng, Jing. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241–268, 1991.

Wymann, B., Espié, E., Guionneau, C., Dimitrakakis, C., Coulom, R., and Sumner, A. Torcs: The open racing car simulator, v1.3.5, 2013.

# Supplementary Material for "Asynchronous Methods for Deep Reinforcement Learning"

# 7. Optimization Details

We investigated two different optimization algorithms with our asynchronous framework – stochastic gradient descent and RMSProp. Our implementations of these algorithms do not use any locking in order to maximize throughput when using a large number of threads.
1602.01783#46
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
47
Momentum SGD: The implementation of SGD in an asynchronous setting is relatively straightforward and well studied (Recht et al., 2011). Let θ be the parameter vector that is shared across all threads and let ∆θi be the accumulated gradients of the loss with respect to parameters θ computed by thread number i. Each thread i independently applies the standard momentum SGD update mi = αmi + (1 − α)∆θi followed by θ ← θ − ηmi with learning rate η, momentum α and without any locks. Note that in this setting, each thread maintains its own separate gradient and momentum vector.

RMSProp: While RMSProp (Tieleman & Hinton, 2012) has been widely used in the deep learning literature, it has not been extensively studied in the asynchronous optimization setting. The standard non-centered RMSProp update is given by

g = αg + (1 − α)∆θ²   (S2)

θ ← θ − η ∆θ / √(g + ε)   (S3)
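As an illustration of the lock-free per-thread momentum SGD update described above, here is a minimal Python sketch; it is not the authors' implementation, and the shared NumPy array, function name, and default hyperparameters are assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' code) of the lock-free
# momentum SGD update: `theta` is shared by all threads, while each thread
# keeps its own momentum vector m_i and accumulated gradient d_theta_i.
def momentum_sgd_step(theta, m_i, d_theta_i, lr=1e-3, alpha=0.9):
    m_i[:] = alpha * m_i + (1.0 - alpha) * d_theta_i  # per-thread momentum
    theta -= lr * m_i                                 # unsynchronized shared update
```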
1602.01783#47
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
48
g = αg + (1 − α)∆θ²   (S2)

θ ← θ − η ∆θ / √(g + ε)   (S3)

where all operations are performed elementwise. In order to apply RMSProp in the asynchronous optimization setting one must decide whether the moving average of elementwise squared gradients g is shared or per-thread. We experimented with two versions of the algorithm. In one version, which we refer to as RMSProp, each thread maintains its own g shown in Equation S2. In the other version, which we call Shared RMSProp, the vector g is shared among threads and is updated asynchronously and without locking. Sharing statistics among threads also reduces memory requirements by using one fewer copy of the parameter vector per thread.
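A matching sketch of the Shared RMSProp variant described above, where the running average g lives in an array shared by all threads and is updated without locking; the function name, epsilon constant and default step size are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of Shared RMSProp: both the parameters `theta` and the
# squared-gradient statistics `g` are shared across actor-learner threads and
# updated without locks; only the accumulated gradient d_theta is thread-local.
def shared_rmsprop_step(theta, g, d_theta, lr=7e-4, alpha=0.99, eps=1e-8):
    g[:] = alpha * g + (1.0 - alpha) * d_theta ** 2  # Equation (S2), shared g
    theta -= lr * d_theta / np.sqrt(g + eps)         # Equation (S3)
```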
1602.01783#48
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
49
We compared these three asynchronous optimization algorithms in terms of their sensitivity to different learning rates and random network initializations. Figure S5 shows a comparison of the methods for two different reinforcement learning methods (Async n-step Q and Async Advantage Actor-Critic) on four different games (Breakout, Beamrider, Seaquest and Space Invaders). Each curve shows the scores for 50 experiments that correspond to 50 different random learning rates and initializations. The x-axis shows the rank of the model after sorting in descending order by final average score and the y-axis shows the final average score achieved by the corresponding model. In this representation, the algorithm that performs better would achieve higher maximum rewards on the y-axis and the algorithm that is most robust would have its slope closest to horizontal, thus maximizing the area under the curve. RMSProp with shared statistics tends to be more robust than RMSProp with per-thread statistics, which is in turn more robust than Momentum SGD.

# 8. Experimental Setup
1602.01783#49
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
50
The experiments performed on a subset of Atari games (Figures 1, 3, 4 and Table 2) as well as the TORCS experiments (Figure S6) used the following setup. Each experiment used 16 actor-learner threads running on a single machine and no GPUs. All methods performed updates after every 5 actions (tmax = 5 and IUpdate = 5) and shared RMSProp was used for optimization. The three asynchronous value-based methods used a shared target network that was updated every 40000 frames. The Atari experiments used the same input preprocessing as (Mnih et al., 2015) and an action repeat of 4. The agents used the network architecture from (Mnih et al., 2013), sketched in code below. The network used a convolutional layer with 16 filters of size 8 × 8 with stride 4, followed by a convolutional layer with 32 filters of size 4 × 4 with stride 2, followed by a fully connected layer with 256 hidden units. All three hidden layers were followed by a rectifier nonlinearity. The value-based methods had a single linear output unit for each action representing the action-value. The model used by actor-critic agents had two sets of outputs – a softmax output with one entry per action representing the
1602.01783#50
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
52
The value-based methods sampled the exploration rate ε from a distribution taking three values ε1, ε2, ε3 with probabilities 0.4, 0.3, 0.3. The values of ε1, ε2, ε3 were annealed from 1 to 0.1, 0.01, 0.5 respectively over the first four million frames. Advantage actor-critic used entropy regularization with a weight β = 0.01 for all Atari and TORCS experiments. We performed a set of 50 experiments for five Atari games and every TORCS level, each using a different random initialization and initial learning rate. The initial learning rate was sampled from a LogUniform(10⁻⁴, 10⁻²) distribution and annealed to 0 over the course of training (these schedules are sketched in code below). Note that in comparisons to prior work (Tables 1 and S3) we followed standard evaluation protocol and used fixed hyperparameters.

# 9. Continuous Action Control Using the MuJoCo Physics Simulator
1602.01783#52
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
53
# 9. Continuous Action Control Using the MuJoCo Physics Simulator

To apply the asynchronous advantage actor-critic algorithm to the MuJoCo tasks the necessary setup is nearly identical to that used in the discrete action domains, so here we enumerate only the differences required for the continuous action domains. The essential elements for many of the tasks (i.e. the physics models and task objectives) are nearly identical to the tasks examined in (Lillicrap et al., 2015). However, the rewards and thus performance are not comparable for most of the tasks due to changes made by the developers of MuJoCo which altered the contact model.
1602.01783#53
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
54
For all the domains we attempted to learn the task using the physical state as input. The physical state consisted of the joint positions and velocities as well as the target position if the task required a target. In addition, for three of the tasks (pendulum, pointmass2D, and gripper) we also examined training directly from RGB pixel inputs. In the low dimensional physical state case, the inputs are mapped to a hidden state using one hidden layer with 200 ReLU units. In the cases where we used pixels, the input was passed through two layers of spatial convolutions without any non-linearity or pooling. In either case, the output of the encoder layers was fed to a single layer of 128 LSTM cells. The most important difference in the architecture is in the output layer of the policy network. Unlike the discrete action domain where the action output is a Softmax, here the two outputs of the policy network are two real number vectors which we treat as the mean vector µ and scalar variance σ² of a multidimensional normal distribution with a spherical covariance. To act, the input is passed through the model to the output layer where we sample from the normal distribution determined by µ and σ². In practice, µ is modeled
1602.01783#54
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
55
To act, the input is passed through the model to the output layer where we sample from the normal distribution determined by µ and σ². In practice, µ is modeled by a linear layer and σ² by a SoftPlus operation, log(1 + exp(x)), applied to the output of a linear layer. In our experiments with continuous control problems the policy network and value network do not share any parameters, though this detail is unlikely to be crucial. Finally, since the episodes were typically at most several hundred time steps long, we did not use any bootstrapping in the policy or value function updates and batched each episode into a single update.
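The continuous-action policy head just described can be sketched as follows; this is an illustrative module rather than the authors' code, and the layer sizes and the small variance floor are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the continuous-action policy head: a linear layer
# produces the mean vector mu, and a SoftPlus-activated linear layer produces
# the scalar variance sigma^2 of a spherical Gaussian over actions.
class GaussianPolicyHead(nn.Module):
    def __init__(self, hidden_size=128, action_dim=6):  # sizes are assumptions
        super().__init__()
        self.mu = nn.Linear(hidden_size, action_dim)
        self.var = nn.Sequential(nn.Linear(hidden_size, 1), nn.Softplus())

    def forward(self, h):
        mu = self.mu(h)
        sigma2 = self.var(h) + 1e-5                      # small floor (assumption)
        dist = torch.distributions.Normal(mu, sigma2.sqrt())
        action = dist.sample()                           # sample to act
        return action, dist
```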
1602.01783#55
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
56
As in the discrete action case, we included an entropy cost which encouraged exploration. In the continuous case we used a cost on the differential entropy of the normal distribution defined by the output of the actor network, −½(log(2πσ²) + 1); we used a constant multiplier of 10⁻⁴ for this cost across all of the tasks examined (a one-line sketch is given below). The asynchronous advantage actor-critic algorithm finds solutions for all the domains. Figure S8 shows learning curves against wall-clock time, and demonstrates that most of the domains from states can be solved within a few hours. All of the experiments, including those done from pixel based observations, were run on CPU. Even in the case of solving the domains directly from pixel inputs we found that it was possible to reliably discover solutions within 24 hours. Figure S7 shows scatter plots of the top scores against the sampled learning rates. In most of the domains there is a large range of learning rates that consistently achieve good performance on the task.

# Algorithm S2 Asynchronous n-step Q-learning - pseudocode for each actor-learner thread.
1602.01783#56
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
57
# Algorithm S2 Asynchronous n-step Q-learning - pseudocode for each actor-learner thread.

// Assume global shared parameter vector θ.
// Assume global shared target parameter vector θ⁻.
// Assume global shared counter T = 0.
Initialize thread step counter t ← 1
Initialize target network parameters θ⁻ ← θ
Initialize thread-specific parameters θ' = θ
Initialize network gradients dθ ← 0
repeat
    Clear gradients dθ ← 0
    Synchronize thread-specific parameters θ' = θ
    t_start = t
    Get state s_t
    repeat
        Take action a_t according to the ε-greedy policy based on Q(s_t, a; θ')
        Receive reward r_t and new state s_{t+1}
        t ← t + 1
        T ← T + 1
    until terminal s_t or t − t_start == t_max
    R = 0 for terminal s_t, max_a Q(s_t, a; θ⁻) for non-terminal s_t
    for i ∈ {t − 1, ..., t_start} do
        R ← r_i + γR
        Accumulate gradients wrt θ': dθ ← dθ + ∂(R − Q(s_i, a_i; θ'))²/∂θ'
    end for
    Perform asynchronous update of θ using dθ.
    if T mod I_target == 0 then
        θ⁻ ← θ
    end if
until T > T_max
1602.01783#57
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
59
# Algorithm S3 Asynchronous advantage actor-critic - pseudocode for each actor-learner thread.

// Assume global shared parameter vectors θ and θv and global shared counter T = 0
// Assume thread-specific parameter vectors θ' and θ'v
Initialize thread step counter t ← 1
repeat
    Reset gradients: dθ ← 0 and dθv ← 0.
    Synchronize thread-specific parameters θ' = θ and θ'v = θv
    t_start = t
    Get state s_t
    repeat
        Perform a_t according to policy π(a_t|s_t; θ')
        Receive reward r_t and new state s_{t+1}
        t ← t + 1
        T ← T + 1
    until terminal s_t or t − t_start == t_max
    R = 0 for terminal s_t, V(s_t; θ'v) for non-terminal s_t  // Bootstrap from last state
    for i ∈ {t − 1, ..., t_start} do
        R ← r_i + γR
        Accumulate gradients wrt θ': dθ ← dθ + ∇θ' log π(a_i|s_i; θ')(R − V(s_i; θ'v))
        Accumulate gradients wrt θ'v: dθv ← dθv + ∂(R − V(s_i; θ'v))²/∂θ'v
    end for
    Perform asynchronous update of θ using dθ and of θv using dθv.
until T > T_max
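For readers who prefer executable code, here is a minimal PyTorch sketch of the per-rollout loss implied by Algorithm S3; it is an illustration only (the 0.5 value-loss weight, the detached bootstrap, and the function interface are assumptions, and the entropy term is omitted).

```python
import torch

# Illustrative sketch of the inner update of Algorithm S3 for one rollout.
# log_probs and values are lists of tensors recorded while acting with the
# thread-local network copy; rewards is a list of floats; bootstrap is 0 for a
# terminal state, otherwise the (detached) value estimate V(s_t; θ'v).
def a3c_rollout_loss(log_probs, values, rewards, bootstrap, gamma=0.99):
    R = bootstrap
    policy_loss, value_loss = 0.0, 0.0
    for i in reversed(range(len(rewards))):
        R = rewards[i] + gamma * R
        advantage = R - values[i]
        policy_loss = policy_loss - log_probs[i] * advantage.detach()
        value_loss = value_loss + advantage.pow(2)
    # Backpropagating this sum yields dθ and dθv, which are then applied
    # asynchronously to the shared parameters.
    return policy_loss + 0.5 * value_loss
```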
1602.01783#59
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]
1602.01783
60
until T > T_max

Figure S5. Comparison of three different optimization methods (Momentum SGD, RMSProp, Shared RMSProp) tested using two different algorithms (Async n-step Q and Async Advantage Actor-Critic) on four different Atari games (Breakout, Beamrider, Seaquest and Space Invaders). Each curve shows the final scores for 50 experiments, sorted in descending order, covering a search over 50 random initializations and learning rates. The top row shows results using the Async n-step Q algorithm and the bottom row shows results with Async Advantage Actor-Critic. Each individual graph shows results for one of the four games and three different optimization methods. Shared RMSProp tends to be more robust to different learning rates and random initializations than Momentum SGD and RMSProp without sharing.
1602.01783#60
Asynchronous Methods for Deep Reinforcement Learning
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
http://arxiv.org/pdf/1602.01783
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
cs.LG
null
ICML 2016
cs.LG
20160204
20160616
[ { "id": "1509.02971" }, { "id": "1509.06461" }, { "id": "1511.05952" }, { "id": "1506.02438" }, { "id": "1504.00702" } ]