id
stringlengths 12
15
| title
stringlengths 8
162
| content
stringlengths 1
17.6k
| prechunk_id
stringlengths 0
15
| postchunk_id
stringlengths 0
15
| arxiv_id
stringlengths 10
10
| references
listlengths 1
1
|
---|---|---|---|---|---|---|
1611.01578#42 | Neural Architecture Search with Reinforcement Learning | David G. Lowe. Object recognition from local scale-invariant features. In CVPR, 1999. Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. To- wards automatically-tuned neural networks. In Proceedings of the 2016 Workshop on Automatic Machine Learning, pp. 58â 65, 2016. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016. Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, pp. 234â 239, 2012. 12 | 1611.01578#41 | 1611.01578#43 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#43 | Neural Architecture Search with Reinforcement Learning | # Under review as a conference paper at ICLR 2017 Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling. In ICML, 2007. Vinod Nair and Geoffrey E. Hinton. Rectiï¬ ed linear units improve restricted Boltzmann machines. In ICML, 2010. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In ICLR, 2015. Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013. | 1611.01578#42 | 1611.01578#44 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#44 | Neural Architecture Search with Reinforcement Learning | Oï¬ r Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016. Marcâ Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level train- ing with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015. Scott Reed and Nando de Freitas. Neural programmer-interpreters. In ICLR, 2015. Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPS, 2016. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. In ACL, 2016. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Jasper Snoek, Hugo Larochelle, and Ryan P. | 1611.01578#43 | 1611.01578#45 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#45 | Neural Architecture Search with Reinforcement Learning | Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012. Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Mostofa Ali, Ryan P. Adams, et al. Scalable bayesian optimization using deep neural networks. In ICML, 2015. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. | 1611.01578#44 | 1611.01578#46 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#46 | Neural Architecture Search with Reinforcement Learning | Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015. Kenneth O. Stanley, David B. Dâ Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. | 1611.01578#45 | 1611.01578#47 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#47 | Neural Architecture Search with Reinforcement Learning | Artiï¬ cial Life, 2009. Phillip D. Summers. A methodology for LISP program construction from examples. Journal of the ACM, 1977. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initializa- tion and momentum in deep learning. In ICML, 2013. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015. Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015. Daan Wierstra, Faustino J Gomez, and J¨urgen Schmidhuber. Modeling systems with internal state using evolino. In GECCO, 2005. | 1611.01578#46 | 1611.01578#48 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#48 | Neural Architecture Search with Reinforcement Learning | Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, 1992. 13 # Under review as a conference paper at ICLR 2017 Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, et al. Googleâ s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. | 1611.01578#47 | 1611.01578#49 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#49 | Neural Architecture Search with Reinforcement Learning | Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014. Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016. | 1611.01578#48 | 1611.01578#50 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#50 | Neural Architecture Search with Reinforcement Learning | 14 # Under review as a conference paper at ICLR 2017 # A APPENDIX Softmax FH: 7 FW: 5 N: 48 FH: 7 FW: 5 N: 48 FH: 7 FW: 5 N: 48 FH: 7 FW: 7 N: 48 FH: 5 FW: 7 N: 36 FH: 7 FW: 7 N: 36, FH: 7 FW: 1 N: 36 FH: 7 FW: 3 N: 36 FH: 7 FW: 7 N: 48 FH: 7 FW: 7 N: 48 FH: 3 FW: 7 N: 48 FH: 5 FW: 5 N: 36 FH: 3 FW: 3 N: 36 FH: 3 FW: 3 N: 48 FH: 3 FW: 3 N: 36 Image Figure 7: | 1611.01578#49 | 1611.01578#51 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#51 | Neural Architecture Search with Reinforcement Learning | Convolutional architecture discovered by our method, when the search space does not have strides or pooling layers. FH is ï¬ lter height, FW is ï¬ lter width and N is number of ï¬ lters. Note that the skip connections are not residual connections. If one layer has many input layers then all input layers are concatenated in the depth dimension. 15 # Under review as a conference paper at ICLR 2017 elem_mult . elem_mult identity add elem_mult tanh add sigmoid sigmoid tanh elem_mult elem_mult sigmoid identity add identity Figure 8: A comparison of the original LSTM cell vs. two good cells our model found. Top left: LSTM cell. Top right: Cell found by our model when the search space does not include max and sin. Bottom: Cell found by our model when the search space includes max and sin (the controller did not choose to use the sin function). | 1611.01578#50 | 1611.01578#52 | 1611.01578 | [
"1611.01462"
]
|
1611.01578#52 | Neural Architecture Search with Reinforcement Learning | 16 | 1611.01578#51 | 1611.01578 | [
"1611.01462"
]
|
|
1611.01603#0 | Bidirectional Attention Flow for Machine Comprehension | 8 1 0 2 n u J 1 2 ] L C . s c [ 6 v 3 0 6 1 0 . 1 1 6 1 : v i X r a Published as a conference paper at ICLR 2017 # BI-DIRECTIONAL ATTENTION FLOW FOR MACHINE COMPREHENSION Minjoon Seo1â University of Washington1, Allen Institute for Artiï¬ cial Intelligence2 {minjoon,ali,hannaneh}@cs.washington.edu, {anik}@allenai.org # ABSTRACT Machine comprehension (MC), answering a query about a given context para- graph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typ- ically these methods use attention to focus on a small portion of the con- text and summarize it with a ï¬ xed-size vector, couple attentions temporally, In this paper we introduce the and/or often form a uni-directional attention. Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical pro- cess that represents the context at different levels of granularity and uses bi- directional attention ï¬ ow mechanism to obtain a query-aware context represen- tation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. # INTRODUCTION The tasks of machine comprehension (MC) and question answering (QA) have gained signiï¬ cant popularity over the past few years within the natural language processing and computer vision com- munities. Systems trained end-to-end now achieve promising results on a variety of tasks in the text and image domains. One of the key factors to the advancement has been the use of neural attention mechanism, which enables the system to focus on a targeted area within a context paragraph (for MC) or within an image (for Visual QA), that is most relevant to answer the question (Weston et al., 2015; Antol et al., 2015; Xiong et al., 2016a). Attention mechanisms in previous works typically have one or more of the following characteristics. First, the computed attention weights are often used to extract the most relevant information from the context for answering the question by sum- marizing the context into a ï¬ | 1611.01603#1 | 1611.01603 | [
"1606.02245"
]
|
|
1611.01603#1 | Bidirectional Attention Flow for Machine Comprehension | xed-size vector. Second, in the text domain, they are often temporally dynamic, whereby the attention weights at the current time step are a function of the attended vector at the previous time step. Third, they are usually uni-directional, wherein the query attends on the context paragraph or the image. In this paper, we introduce the Bi-Directional Attention Flow (BIDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels of granularity (Figure 1). BIDAF includes character-level, word-level, and contextual embeddings, and uses bi-directional attention ï¬ ow to obtain a query-aware context representation. Our attention mechanism offers following improvements to the previously popular attention paradigms. First, our attention layer is not used to summarize the context paragraph into a ï¬ | 1611.01603#0 | 1611.01603#2 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#2 | Bidirectional Attention Flow for Machine Comprehension | xed-size vector. Instead, the attention is computed for every time step, and the attended vector at each time step, along with the representations from previous layers, is allowed to ï¬ ow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. Second, we use a memory-less attention mechanism. That is, while we iteratively compute attention through time as in Bahdanau et al. (2015), the attention at each time step is a function of only the query and the context para- graph at the current time step and does not directly depend on the attention at the previous time step. We hypothesize that this simpliï¬ cation leads to the division of labor between the attention layer and the modeling layer. It forces the attention layer to focus on learning the attention between the query and the context, and enables the modeling layer to focus on learning the interaction within the | 1611.01603#1 | 1611.01603#3 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#3 | Bidirectional Attention Flow for Machine Comprehension | â The majority of the work was done while the author was interning at the Allen Institute for AI. 1 Published as a conference paper at ICLR 2017 Start End Query2Context Tec | wamecens _ as Output Layer Soameatuy ces My me my g = 4 <= Slelsteteti eld up 01000 Modeinglaye | = 7. EE 5 hy hp hr il vy ry ry a [iio ffl | : o Context2Query ASitenitsin Gea fr Query2Context and Context2Query t{ttttt ey Attention fi fi fi fi fi fi uy SSeS LS rot a ES Embed Layer 5 L | L] L] aL | LC] SHeeterterter tee â Uy Word Embed [ hy he pr Layer Charact Word Character pentecallciier c =) ] ! Embedding Embedding xy X2 X3 XT qh qu u J t 7 GLOVE Char-CNN Context Query Figure 1: BiDirectional Attention Flow Model (best viewed in color) query-aware context representation (the output of the attention layer). It also allows the attention at each time step to be unaffected from incorrect attendances at previous time steps. Our experi- ments show that memory-less attention gives a clear advantage over dynamic attention. Third, we use attention mechanisms in both directions, query-to-context and context-to-query, which provide complimentary information to each other. Our BIDAF model1 outperforms all previous approaches on the highly-competitive Stanford Ques- tion Answering Dataset (SQuAD) test set leaderboard at the time of submission. | 1611.01603#2 | 1611.01603#4 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#4 | Bidirectional Attention Flow for Machine Comprehension | With a modiï¬ cation to only the output layer, BIDAF achieves the state-of-the-art results on the CNN/DailyMail cloze test. We also provide an in-depth ablation study of our model on the SQuAD development set, vi- sualize the intermediate feature spaces in our model, and analyse its performance as compared to a more traditional language model for machine comprehension (Rajpurkar et al., 2016). 2 MODEL Our machine comprehension model is a hierarchical multi-stage process and consists of six layers (Figure 1): 1. Character Embedding Layer maps each word to a vector space using character-level CNNs. 2. Word Embedding Layer maps each word to a vector space using a pre-trained word em- bedding model. 3. Contextual Embedding Layer utilizes contextual cues from surrounding words to reï¬ ne the embedding of the words. These ï¬ rst three layers are applied to both the query and context. 4. Attention Flow Layer couples the query and context vectors and produces a set of query- aware feature vectors for each word in the context. 5. Modeling Layer employs a Recurrent Neural Network to scan the context. 6. Output Layer provides an answer to the query. 1Our code and interactive demo are available at: allenai.github.io/bi-att-flow/ 2 Published as a conference paper at ICLR 2017 1. Character Embedding Layer. Character embedding layer is responsible for mapping each word to a high-dimensional vector space. Let {x1, . . . xT } and {q1, . . . qJ } represent the words in the input context paragraph and query, respectively. Following Kim (2014), we obtain the character- level embedding of each word using Convolutional Neural Networks (CNN). Characters are embed- ded into vectors, which can be considered as 1D inputs to the CNN, and whose size is the input channel size of the CNN. The outputs of the CNN are max-pooled over the entire width to obtain a ï¬ xed-size vector for each word. | 1611.01603#3 | 1611.01603#5 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#5 | Bidirectional Attention Flow for Machine Comprehension | 2. Word Embedding Layer. Word embedding layer also maps each word to a high-dimensional vector space. We use pre-trained word vectors, GloVe (Pennington et al., 2014), to obtain the ï¬ xed word embedding of each word. The concatenation of the character and word embedding vectors is passed to a two-layer Highway Network (Srivastava et al., 2015). The outputs of the Highway Network are two sequences of d- dimensional vectors, or more conveniently, two matrices: | 1611.01603#4 | 1611.01603#6 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#6 | Bidirectional Attention Flow for Machine Comprehension | X â Rdà T for the context and Q â Rdà J for the query. 3. Contextual Embedding Layer. We use a Long Short-Term Memory Network (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the embeddings provided by the previous layers to model the temporal interactions between words. We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. Hence we obtain H â R2dà T from the context word vectors X, and U â R2dà J from query word vectors Q. Note that each column vector of H and U is 2d-dimensional because of the concatenation of the outputs of the forward and backward LSTMs, each with d-dimensional output. It is worth noting that the ï¬ rst three layers of the model are computing features from the query and context at different levels of granularity, akin to the multi-stage feature computation of convolutional neural networks in the computer vision ï¬ | 1611.01603#5 | 1611.01603#7 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#7 | Bidirectional Attention Flow for Machine Comprehension | eld. 4. Attention Flow Layer. Attention ï¬ ow layer is responsible for linking and fusing information from the context and the query words. Unlike previously popular attention mechanisms (Weston et al., 2015; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2016), the attention ï¬ ow layer is not used to summarize the query and context into single feature vectors. Instead, the attention vector at each time step, along with the embeddings from previous layers, are allowed to ï¬ ow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. The inputs to the layer are contextual vector representations of the context H and the query U. The outputs of the layer are the query-aware vector representations of the context words, G, along with the contextual embeddings from the previous layer. In this layer, we compute attentions in two directions: from context to query as well as from query to context. Both of these attentions, which will be discussed below, are derived from a shared similarity matrix, S â RT à J , between the contextual embeddings of the context (H) and the query (U), where Stj indicates the similarity between t-th context word and j-th query word. The similarity matrix is computed by S,; = 0(H.,,U.;) ER dd) where a is a trainable scalar function that encodes the similarity between its two input vectors, H., is t-th column vector of H, and U.,; is j-th column vector of U, We choose a(h, u) = Wis) {h; u; ho ul], where wis) â | 1611.01603#6 | 1611.01603#8 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#8 | Bidirectional Attention Flow for Machine Comprehension | ¬ Râ is a trainable weight vector, o is elementwise multiplication, {;] is vector concatenation across row, and implicit multiplication is matrix multiplication. Now we use S to obtain the attentions and the attended vectors in both directions. Context-to-query Attention. Context-to-query (C2Q) attention signifies which query words are most relevant to each context word. Let a, â ¬ R/ represent the attention weights on the query words by t-th context word, }> a,; = 1 for all t. The attention weight is computed by a, = softmax(S;,) â ¬ R/â , and subsequently each attended query vector is U.. = yj a,j U.;. Hence U is a 2d-by-T matrix containing the attended query vectors for the entire context. Query-to-context Attention. Query-to-context (Q2C) attention signiï¬ es which context words have the closest similarity to one of the query words and are hence critical for answering the query. | 1611.01603#7 | 1611.01603#9 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#9 | Bidirectional Attention Flow for Machine Comprehension | 3 Published as a conference paper at ICLR 2017 We obtain the attention weights on the context words by b = softmax(max,o1(S)) â ¬ R7, where the maximum function (max,,;) is performed across the column. Then the attended context vector ish = > ,bH., â ¬ R24. This vector indicates the weighted sum of the most important words in the context with respect to the query. his tiled T times across the column, thus giving He RT, Finally, the contextual embeddings and the attention vectors are combined together to yield G, where each column vector can be considered as the query-aware representation of each context word. | 1611.01603#8 | 1611.01603#10 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#10 | Bidirectional Attention Flow for Machine Comprehension | We deï¬ ne G by G:t = β(H:t, Ë U:t, Ë H:t) â RdG (2) where G:t is the t-th column vector (corresponding to t-th context word), β is a trainable vector function that fuses its (three) input vectors, and dG is the output dimension of the β function. While the β function can be an arbitrary trainable neural network, such as multi-layer perceptron, a simple concatenation as following still shows good performance in our experiments: β(h, Ë u, Ë h) = [h; Ë u; h â ¦ Ë u; h â ¦ Ë h] â R8dà T (i.e., dG = 8d). 5. Modeling Layer. The input to the modeling layer is G, which encodes the query-aware rep- resentations of context words. The output of the modeling layer captures the interaction among the context words conditioned on the query. This is different from the contextual embedding layer, which captures the interaction among context words independent of the query. We use two layers of bi-directional LSTM, with the output size of d for each direction. Hence we obtain a matrix M â R2dà T , which is passed onto the output layer to predict the answer. Each column vector of M is expected to contain contextual information about the word with respect to the entire context paragraph and the query. 6. Output Layer. The output layer is application-speciï¬ c. The modular nature of BIDAF allows us to easily swap out the output layer based on the task, with the rest of the architecture remaining exactly the same. Here, we describe the output layer for the QA task. In section 5, we use a slight modiï¬ cation of this output layer for cloze-style comprehension. The QA task requires the model to ï¬ nd a sub-phrase of the paragraph to answer the query. The phrase is derived by predicting the start and the end indices of the phrase in the paragraph. We obtain the probability distribution of the start index over the entire paragraph by | 1611.01603#9 | 1611.01603#11 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#11 | Bidirectional Attention Flow for Machine Comprehension | p= softmax(w (1) [G; M]), (3) where w(p1) â R10d is a trainable weight vector. For the end index of the answer phrase, we pass M to another bidirectional LSTM layer and obtain M2 â R2dà T . Then we use M2 to obtain the probability distribution of the end index in a similar manner: p= softmax(w/(,2) [G; Mâ )) 4 Training. We deï¬ ne the training loss (to be minimized) as the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples: 1 N L(@) =~ Do log(pj:) + los (pee) (5) where θ is the set of all trainable weights in the model (the weights and biases of CNN ï¬ lters and LSTM cells, w(S), w(p1) and w(p2)), N is the number of examples in the dataset, y1 i are the true start and end indices of the i-th example, respectively, and pk indicates the k-th value of the vector p. | 1611.01603#10 | 1611.01603#12 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#12 | Bidirectional Attention Flow for Machine Comprehension | Test. The answer span (k, l) where k â ¤ l with the maximum value of p1 computed in linear time with dynamic programming. 3 RELATED WORK Machine comprehension. A signiï¬ cant contributor to the advancement of MC models has been the availability of large datasets. Early datasets such as MCTest (Richardson et al., 2013) were too 4 Published as a conference paper at ICLR 2017 small to train end-to-end neural models. Massive cloze test datasets (CNN/DailyMail by Hermann et al. (2015) and Childrens Book Test by Hill et al. (2016)), enabled the application of deep neural architectures to this task. More recently, Rajpurkar et al. (2016) released the Stanford Question Answering (SQuAD) dataset with over 100,000 questions. We evaluate the performance of our comprehension system on both SQuAD and CNN/DailyMail datasets. Previous works in end-to-end machine comprehension use attention mechanisms in three distinct ways. | 1611.01603#11 | 1611.01603#13 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#13 | Bidirectional Attention Flow for Machine Comprehension | The ï¬ rst group (largely inspired by Bahdanau et al. (2015)) uses a dynamic attention mech- anism, in which the attention weights are updated dynamically given the query and the context as well as the previous attention. Hermann et al. (2015) argue that the dynamic attention model per- forms better than using a single ï¬ xed query vector to attend on context words on CNN & DailyMail datasets. Chen et al. (2016) show that simply using bilinear term for computing the attention weights in the same model drastically improves the accuracy. Wang & Jiang (2016) reverse the direction of the attention (attending on query words as the context RNN progresses) for SQuAD. In contrast to these models, BIDAF uses a memory-less attention mechanism. The second group computes the attention weights once, which are then fed into an output layer for ï¬ nal prediction (e.g., Kadlec et al. (2016)). Attention-over-attention model (Cui et al., 2016) uses a 2D similarity matrix between the query and context words (similar to Equation 1) to compute the weighted average of query-to-context attention. In contrast to these models, BIDAF does not summarize the two modalities in the attention layer and instead lets the attention vectors ï¬ ow into the modeling (RNN) layer. The third group (considered as variants of Memory Network (Weston et al., 2015)) repeats comput- ing an attention vector between the query and the context through multiple layers, typically referred to as multi-hop (Sordoni et al., 2016; Dhingra et al., 2016). Shen et al. (2016) combine Memory Networks with Reinforcement Learning in order to dynamically control the number of hops. One can also extend our BIDAF model to incorporate multiple hops. Visual question answering. The task of question answering has also gained a lot of interest in the computer vision community. Early works on visual question answering (VQA) involved encoding the question using an RNN, encoding the image using a CNN and combining them to answer the question (Antol et al., 2015; Malinowski et al., 2015). | 1611.01603#12 | 1611.01603#14 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#14 | Bidirectional Attention Flow for Machine Comprehension | Attention mechanisms have also been suc- cessfully employed for the VQA task and can be broadly clustered based on the granularity of their attention and the approach to construct the attention matrix. At the coarse level of granularity, the question attends to different patches in the image (Zhu et al., 2016; Xiong et al., 2016a). At a ï¬ ner level, each question word attends to each image patch and the highest attention value for each spatial location (Xu & Saenko, 2016) is adopted. A hybrid approach is to combine questions representa- tions at multiple levels of granularity (unigrams, bigrams, trigrams) (Yang et al., 2015). Several approaches to constructing the attention matrix have been used including element-wise product, element-wise sum, concatenation and Multimodal Compact Bilinear Pooling (Fukui et al., 2016). Lu et al. (2016) have recently shown that in addition to attending from the question to image patches, attending from the image back to the question words provides an improvement on the VQA task. | 1611.01603#13 | 1611.01603#15 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#15 | Bidirectional Attention Flow for Machine Comprehension | This ï¬ nding in the visual domain is consistent with our ï¬ nding in the language domain, where our bi-directional attention between the query and context provides improved results. Their model, however, uses the attention weights directly in the output layer and does not take advantage of the attention ï¬ ow to the modeling layer. # 4 QUESTION ANSWERING EXPERIMENTS In this section, we evaluate our model on the task of question answering using the recently released SQuAD (Rajpurkar et al., 2016), which has gained a huge attention over a few months. In the next section, we evaluate our model on the task of cloze-style reading comprehension. Dataset. SQuAD is a machine comprehension dataset on a large set of Wikipedia articles, with more than 100,000 questions. The answer to each question is always a span in the context. The model is given a credit if its answer matches one of the human written answers. | 1611.01603#14 | 1611.01603#16 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#16 | Bidirectional Attention Flow for Machine Comprehension | Two metrics are used to evaluate models: Exact Match (EM) and a softer metric, F1 score, which measures the weighted average of the precision and recall rate at character level. The dataset consists of 90k/10k 5 Published as a conference paper at ICLR 2017 Logistic Regression Baselinea Dynamic Chunk Readerb Fine-Grained Gatingc Match-LSTMd Multi-Perspective Matchinge Dynamic Coattention Networksf R-Netg BIDAF (Ours) Single Model EM 40.4 62.5 62.5 64.7 65.5 66.2 68.4 68.0 F1 51.0 71.0 73.3 73.7 75.1 75.9 77.5 77.3 Ensemble EM F1 - - - 77.0 77.2 80.4 79.7 81.1 - - - 67.9 68.2 71.6 72.1 73.3 No char embedding No word embedding No C2Q attention No Q2C attention Dynamic attention BIDAF (single) BIDAF (ensemble) EM F1 75.4 65.0 66.8 55.5 67.7 57.2 73.7 63.6 73.6 63.5 77.3 67.7 80.7 72.6 (a) Results on the SQuAD test set Table 1: (1a) The performance of our model BIDAF and competing approaches by Rajpurkar et al. (2016)a, Yu et al. (2016)b, Yang et al. (2016)c, Wang & Jiang (2016)d, IBM Watsone (unpublished), Xiong et al. (2016b)f , and Microsoft Research Asiag (unpublished) on the SQuAD test set. A concurrent work by Lee et al. (2016) does not report the test scores. All results shown here reï¬ ect the SQuAD leaderboard (stanford-qa.com) as of 6 Dec 2016, 12pm PST. (1b) The performance of our model and its ablations on the SQuAD dev set. Ablation results are presented only for single runs. | 1611.01603#15 | 1611.01603#17 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#17 | Bidirectional Attention Flow for Machine Comprehension | train/dev question-context tuples with a large hidden test set. It is one of the largest available MC datasets with human-written questions and serves as a great test bed for our model. Model Details. The model architecture used for this task is depicted in Figure 1. Each paragraph and question are tokenized by a regular-expression-based word tokenizer (PTB Tokenizer) and fed into the model. We use 100 1D ï¬ lters for CNN char embedding, each with a width of 5. The hidden state size (d) of the model is 100. The model has about 2.6 million parameters. We use the AdaDelta (Zeiler, 2012) optimizer, with a minibatch size of 60 and an initial learning rate of 0.5, for 12 epochs. A dropout (Srivastava et al., 2014) rate of 0.2 is used for the CNN, all LSTM layers, and the linear transformation before the softmax for the answers. During training, the moving averages of all weights of the model are maintained with the exponential decay rate of 0.999. At test time, the moving averages instead of the raw weights are used. The training process takes roughly 20 hours on a single Titan X GPU. We also train an ensemble model consisting of 12 training runs with the identical architecture and hyper-parameters. | 1611.01603#16 | 1611.01603#18 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#18 | Bidirectional Attention Flow for Machine Comprehension | At test time, we choose the answer with the highest sum of conï¬ dence scores amongst the 12 runs for each question. Results. The results of our model and competing approaches on the hidden test are summarized in Table 1a. BIDAF (ensemble) achieves an EM score of 73.3 and an F1 score of 81.1, outperforming all previous approaches. Ablations. Table 1b shows the performance of our model and its ablations on the SQuAD dev set. Both char-level and word-level embeddings contribute towards the modelâ s performance. We conjecture that word-level embedding is better at representing the semantics of each word as a whole, while char-level embedding can better handle out-of-vocab (OOV) or rare words. To evaluate bi- directional attention, we remove C2Q and Q2C attentions. For ablating C2Q attention, we replace the attended question vector Ë U with the average of the output vectors of the questionâ s contextual embedding layer (LSTM). C2Q attention proves to be critical with a drop of more than 10 points on both metrics. For ablating Q2C attention, the output of the attention layer, G, does not include terms that have the attended Q2C vectors, Ë | 1611.01603#17 | 1611.01603#19 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#19 | Bidirectional Attention Flow for Machine Comprehension | H. To evaluate the attention ï¬ ow, we study a dynamic attention model, where the attention is dynamically computed within the modeling layerâ s LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention is pre-computed before ï¬ owing to the modeling layer. Despite being a simpler attention mechanism, our proposed static attention outperforms the dynamically computed attention by more than 3 points. We conjecture that separating out the attention layer results in a richer set of features computed in the ï¬ rst 4 layers which are then incorporated by the modeling layer. We also show the performance of BIDAF with several different deï¬ nitions of α and β functions (Equation 1 and 2) in Appendix B. 6 | 1611.01603#18 | 1611.01603#20 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#20 | Bidirectional Attention Flow for Machine Comprehension | Published as a conference paper at ICLR 2017 Layer Query Closest words in the Context using cosine similarity Word When Contextual When Word Where Contextual Where Word Who Contextual Who city Word city Contextual January Word January Contextual Seahawks Word Seahawks Contextual date Word date Contextual when, When, After, after, He, he, But, but, before, Before When, when, 1945, 1991, 1971, 1967, 1990, 1972, 1965, 1953 Where, where, It, IT, it, they, They, that, That, city where, Where, Rotterdam, area, Nearby, location, outside, Area, across, locations Who, who, He, he, had, have, she, She, They, they who, whose, whom, Guiscard, person, John, Thomas, families, Elway, Louis City, city, town, Town, Capital, capital, district, cities, province, Downtown city, City, Angeles, Paris, Prague, Chicago, Port, Pittsburgh, London, Manhattan July, December, June, October, January, September, February, April, November, March January, March, December, August, December, July, July, July, March, December Seahawks, Broncos, 49ers, Ravens, Chargers, Steelers, quarterback, Vikings, Colts, NFL Seahawks, Broncos, Panthers, Vikings, Packers, Ravens, Patriots, Falcons, Steelers, Chargers date, dates, until, Until, June, July, Year, year, December, deadline date, dates, December, July, January, October, June, November, March, February | 1611.01603#19 | 1611.01603#21 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#21 | Bidirectional Attention Flow for Machine Comprehension | Table 2: Closest context words to a given query word, using a cosine similarity metric computed in the Word Embedding feature space and the Phrase Embedding feature space. _15,___Word Embed Space 1s,___ Phrase Embed Space Questions answered correctly by our BIDAF model = sor ows and the more traditional baseline model tol what (4752) May how (1090)} 7 @ Sk from 28 January to 29 ~ may | but by September had beén who (1061) 5 _os debut on May 5 when (696 B 3 Opening in May 1852 at whieh (654) a -19] 509 3734 3585 in taal w â 39] â Januay 5 of these may be moreâ where (433) | seston Hy | 2577 ann mes as on (aa) August . Baseline toa -a0) -39) â to -5 0 5 10 15 20 25 24 26 28 30 32 34 36 38 40 42 BIDAF | ep t-SNE Dimension 1 t-SNE Dimension 1 â of questions witn correct answers {a) (b) (c) Figure 2: (a) t-SNE visualizations of the months names embedded in the two feature spaces. The contextual embedding layer is able to distinguish the two usages of the word May using context from the surrounding text. (b) Venn diagram of the questions answered correctly by our model and the more traditional baseline (Rajpurkar et al., 2016). (c) Correctly answered questions broken down by the 10 most frequent ï¬ rst words in the question. Visualizations. We now provide a qualitative analysis of our model on the SQuAD dev set. First, we visualize the feature spaces after the word and contextual embedding layers. These two layers are responsible for aligning the embeddings between the query and context words which are the inputs to the subsequent attention layer. To visualize the embeddings, we choose a few frequent query words in the dev data and look at the context words that have the highest cosine similarity to the query words (Table 2). | 1611.01603#20 | 1611.01603#22 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#22 | Bidirectional Attention Flow for Machine Comprehension | At the word embedding layer, query words such as When, Where and Who are not well aligned to possible answers in the context, but this dramatically changes in the contextual embedding layer which has access to context from surrounding words and is just 1 layer below the attention layer. When begins to match years, Where matches locations, and Who matches names. We also visualize these two feature spaces using t-SNE in Figure 2. t-SNE is performed on a large fraction of dev data but we only plot data points corresponding to the months of the year. An interesting pattern emerges in the Word space, where May is separated from the rest of the months because May has multiple meanings in the English language. The contextual embedding layer uses contextual cues from surrounding words and is able to separate the usages of the word May. Finally we visualize the attention matrices for some question-context tuples in the dev data in Figure 3. In the ï¬ rst example, Where matches locations and in the second example, many matches quantities and numerical symbols. Also, entities in the question typically attend to the same entities in the context, thus providing a feature for the model to localize possible answers. | 1611.01603#21 | 1611.01603#23 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#23 | Bidirectional Attention Flow for Machine Comprehension | Discussions. We analyse the performance of our our model with a traditional language-feature- based baseline (Rajpurkar et al., 2016). Figure 2b shows a Venn diagram of the dev set questions correctly answered by the models. Our model is able to answer more than 86% of the questions 7 Published as a conference paper at ICLR 2017 Super Bow! SO was an American football game cemmememeareee | wer SUMP ITIMIIN SEMMII LUTEIUEY Ulf. 2: t,t sadn Lovin, soma, Ane Football League { NFL] forthe 2015 s The American Footbal Conference ( did it champion Denver Broncos defeated the Net Footal ofeence(NFC}chanoion | Super Super, Super, Super, Super, Super SuperBowl title. The game was played on | February 72016 at levlestecurinthesan | BOW! Bowl, Bowl, Bowl, Bowl, Bow! | 1611.01603#22 | 1611.01603#24 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#24 | Bidirectional Attention Flow for Machine Comprehension | Francisco Bay Area at Sant Ciara, California. â As this was the 50th Super Sow, the leg 50 50 emphasized the â golden anniversary" with various gold-themed inlatves as well as take temporarily suspending the tradition of raming each Super Bow! game with Roman place numerals (under whieh the game would have | been known as "Super Bowl") so that the > (lll | I il WW logo could prominently feature the Arabic numerals 5. WL LIME nitatves â â â â â ] hm Ten Te â : many | ll | | | | 1] | | hundreds, few, among, 15, several, only, {3s} fom Warsaw, the Vistula rivers environment changes natural | | | natural, of Stkingy and featres a perfecty preserved ecosystem, with ahabitatof animalsthat | TES@rves reserves are mn WN NNN HAWN are, are, are, are, are, includes there i iak6w Lake, the lakes in th w Parks, Karnionek Lake. There are lakes inthe parks, but only afew in it them ot ponte jarsaw, Warsaw, Warsaw before winter to clean them of plants and Warsaw We â 2] TOTO AO EEE TMA ie species Figure 3: Attention matrices for question-context tuples. The left palette shows the context paragraph (correct answer in red and underlined), the middle palette shows the attention matrix (each row is a question word, each column is a context word), and the right palette shows the top attention points for each question word, above a threshold. correctly answered by the baseline. The 14% that are incorrectly answered does not have a clear pattern. | 1611.01603#23 | 1611.01603#25 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#25 | Bidirectional Attention Flow for Machine Comprehension | This suggests that neural architectures are able to exploit much of the information captured by the language features. We also break this comparison down by the ï¬ rst words in the questions (Figure 2c). Our model outperforms the traditional baseline comfortably in every category. Error Analysis. We randomly select 50 incorrect questions (based on EM) and categorize them into 6 classes. 50% of errors are due to the imprecise boundaries of the answers, 28% involve syntactic complications and ambiguities, 14% are paraphrase problems, 4% require external knowl- edge, 2% need multiple sentences to answer, and 2% are due to mistakes during tokenization. See Appendix A for the examples of the error modes. # 5 CLOZE TEST EXPERIMENTS We also evaluate our model on the task of cloze-style reading comprehension using the CNN and Daily Mail datasets (Hermann et al., 2015). Dataset. In a cloze test, the reader is asked to ï¬ ll in words that have been removed from a passage, for measuring oneâ s ability to comprehend text. Hermann et al. (2015) have recently compiled a mas- sive Cloze-style comprehension dataset, consisting of 300k/4k/3k and 879k/65k/53k (train/dev/test) examples from CNN and DailyMail news articles, respectively. Each example has a news article and an incomplete sentence extracted from the human-written summary of the article. To distinguish this task from language modeling and force one to refer to the article to predict the correct missing word, the missing word is always a named entity, anonymized with a random ID. | 1611.01603#24 | 1611.01603#26 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#26 | Bidirectional Attention Flow for Machine Comprehension | Also, the IDs must be shufï¬ ed constantly during test, which is also critical for full anonymization. Model Details. The model architecture used for this task is very similar to that for SQuAD (Sec- tion 4) with only a few small changes to adapt it to the cloze test. Since each answer in the CNN/DailyMail datasets is always a single word (entity), we only need to predict the start index (p1); the prediction for the end index (p2) is omitted from the loss function. Also, we mask out all non-entity words in the ï¬ | 1611.01603#25 | 1611.01603#27 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#27 | Bidirectional Attention Flow for Machine Comprehension | nal classiï¬ cation layer so that they are forced to be excluded from possible answers. Another important difference from SQuAD is that the answer entity might appear more than once in the context paragraph. To address this, we follow a similar strategy from Kadlec et al. (2016). During training, after we obtain p1, we sum all probability values of the entity instances 8 13, 9 Published as a conference paper at ICLR 2017 in the context that correspond to the correct answer. Then the loss function is computed from the summed probability. | 1611.01603#26 | 1611.01603#28 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#28 | Bidirectional Attention Flow for Machine Comprehension | We use a minibatch size of 48 and train for 8 epochs, with early stop when the accuracy on validation data starts to drop. Inspired by the window-based method (Hill et al., 2016), we split each article into short sentences where each sentence is a 19-word window around each entity (hence the same word might appear in multiple sentences). The RNNs in BIDAF are not feed-forwarded or back-propagated across sentences, which speed up the training process by par- allelization. The entire training process takes roughly 60 hours on eight Titan X GPUs. The other hyper-parameters are identical to the model described in Section 4. | 1611.01603#27 | 1611.01603#29 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#29 | Bidirectional Attention Flow for Machine Comprehension | Results. The results of our single-run models and competing approaches on the CNN/DailyMail datasets are summarized in Table 3. â indicates ensemble methods. BIDAF outperforms previous single-run models on both datasets for both val and test data. On the DailyMail test, our single-run model even outperforms the best ensemble method. Attentive Reader (Hermann et al., 2015) MemNN (Hill et al., 2016) AS Reader (Kadlec et al., 2016) DER Network (Kobayashi et al., 2016) Iterative Attention (Sordoni et al., 2016) EpiReader (Trischler et al., 2016) Stanford AR (Chen et al., 2016) GAReader (Dhingra et al., 2016) AoA Reader (Cui et al., 2016) ReasoNet (Shen et al., 2016) BIDAF (Ours) MemNNâ (Hill et al., 2016) ASReaderâ (Kadlec et al., 2016) Iterative Attentionâ (Sordoni et al., 2016) GA Readerâ (Dhingra et al., 2016) Stanford ARâ (Chen et al., 2016) CNN val 61.6 63.4 68.6 71.3 72.6 73.4 73.8 73.0 73.1 72.9 76.3 66.2 73.9 74.5 76.4 77.2 test 63.0 6.8 69.5 72.9 73.3 74.0 73.6 73.8 74.4 74.7 76.9 69.4 75.4 75.7 77.4 77.6 DailyMail test val 69.0 70.5 - - 73.9 75.0 - - - - - - 76.6 77.6 75.7 76.7 - - 76.6 77.6 79.6 80.3 - - 77.7 78.7 - - 78.1 79.1 79.2 80.2 | 1611.01603#28 | 1611.01603#30 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#30 | Bidirectional Attention Flow for Machine Comprehension | Table 3: Results on CNN/DailyMail datasets. We also include the results of previous ensemble methods (marked with â ) for completeness. # 6 CONCLUSION In this paper, we introduce BIDAF, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention ï¬ ow mechanism to achieve a query- aware context representation without early summarization. The experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. The ablation analyses demonstrate the importance of each compo- nent in our model. The visualizations and discussions show that our model is learning a suitable representation for MC and is capable of answering complex questions by attending to correct loca- tions in the given paragraph. Future work involves extending our approach to incorporate multiple hops of the attention layer. ACKNOWLEDGMENTS This research was supported by the NSF (IIS 1616112), NSF (III 1703166), Allen Institute for AI (66-9175), Allen Distinguished Investigator Award, Google Research Faculty Award, and Samsung GRO Award. We thank the anonymous reviewers for their helpful comments. | 1611.01603#29 | 1611.01603#31 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#31 | Bidirectional Attention Flow for Machine Comprehension | 9 Published as a conference paper at ICLR 2017 # REFERENCES Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit- nick, and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn/daily mail reading comprehension task. In ACL, 2016. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over- attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016. Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP, 2016. | 1611.01603#30 | 1611.01603#32 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#32 | Bidirectional Attention Flow for Machine Comprehension | Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading childrenâ s books with explicit memory representations. In ICLR, 2016. Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997. | 1611.01603#31 | 1611.01603#33 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#33 | Bidirectional Attention Flow for Machine Comprehension | Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In ACL, 2016. Yoon Kim. Convolutional neural networks for sentence classiï¬ cation. In EMNLP, 2014. Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representation with max-pooling improves machine reading. In NAACL-HLT, 2016. Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. Learning recurrent span repre- sentations for extractive question answering. arXiv preprint arXiv:1611.01436, 2016. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, 2016. | 1611.01603#32 | 1611.01603#34 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#34 | Bidirectional Attention Flow for Machine Comprehension | Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based ap- proach to answering questions about images. In ICCV, 2015. Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, 2014. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2016. | 1611.01603#33 | 1611.01603#35 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#35 | Bidirectional Attention Flow for Machine Comprehension | Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, 2013. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284, 2016. Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overï¬ | 1611.01603#34 | 1611.01603#36 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#36 | Bidirectional Attention Flow for Machine Comprehension | tting. JMLR, 2014. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015. 10 Published as a conference paper at ICLR 2017 Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016. | 1611.01603#35 | 1611.01603#37 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#37 | Bidirectional Attention Flow for Machine Comprehension | Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015. Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016a. Caiming Xiong, Victor Zhong, and Richard Socher. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604, 2016b. Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. | 1611.01603#36 | 1611.01603#38 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#38 | Bidirectional Attention Flow for Machine Comprehension | In ECCV, 2016. Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W Cohen, and Ruslan Salakhut- dinov. Words or characters? ï¬ ne-grained gating for reading comprehension. arXiv preprint arXiv:1611.01724, 2016. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015. Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end reading comprehension with dynamic answer chunk ranking. arXiv preprint arXiv:1610.09996, 2016. Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012. Yuke Zhu, Oliver Groth, Michael S. Bernstein, and Li Fei-Fei. Visual7w: Grounded question an- swering in images. In CVPR, 2016. | 1611.01603#37 | 1611.01603#39 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#39 | Bidirectional Attention Flow for Machine Comprehension | 11 Published as a conference paper at ICLR 2017 # A ERROR ANALYSIS Table 4 summarizes the modes of errors by BIDAF and shows examples for each category of error in SQuAD. Error type Imprecise answer boundaries Ratio (%) 50 Example Context: â The Free Movement of Workers Regulation articles 1 to 7 set out the main provisions on equal treatment of workers.â Question: â Which articles of the Free Movement of Workers Regulation set out the primary provisions on equal treatment of workers?â Prediction: â 1 to 7â , Answer: â articles 1 to 7â Syntactic complications and ambiguities 28 Context: â A piece of paper was later found on which Luther had written his last statement. â Question: â What was later discovered written by Luther?â Prediction: â A piece of paperâ , Answer: â his last statementâ Paraphrase problems 14 Context: â | 1611.01603#38 | 1611.01603#40 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#40 | Bidirectional Attention Flow for Machine Comprehension | Generally, education in Australia follows the three- tier model which includes primary education (primary schools), followed by secondary education (secondary schools/high schools) and tertiary education (universities and/or TAFE colleges).â Question: â What is the ï¬ rst model of education, in the Aus- tralian system?â Prediction: â three-tierâ , Answer: â primary educationâ External knowledge 4 Context: â On June 4, 2014, the NFL announced that the practice of branding Super Bowl games with Roman numerals, a practice established at Super Bowl V, would be temporarily suspended, and that the game would be named using Arabic numerals as Super Bowl 50 as opposed to Super Bowl L.â Question: â If Roman numerals were used in the naming of the 50th Super Bowl, which one would have been used?â Prediction: â Super Bowl 50â , Answer: â Lâ Multi- sentence 2 Context: â Over the next several years in addition to host to host interactive connections the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch ï¬ le transfer), interactive ï¬ le transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the network. | 1611.01603#39 | 1611.01603#41 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#41 | Bidirectional Attention Flow for Machine Comprehension | All of this set the stage for Meritâ s role in the NSFNET project starting in the mid-1980s.â Question: â What set the stage for Merits role in NSFNETâ Prediction: â All of this set the stage for Merit â s role in the NSFNET project starting in the mid-1980sâ , Answer: â Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the networkâ Incorrect preprocessing 2 Context: â | 1611.01603#40 | 1611.01603#42 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#42 | Bidirectional Attention Flow for Machine Comprehension | English chemist John Mayow (1641-1679) reï¬ ned this work by showing that ï¬ re requires only a part of air that he called spiritus nitroaereus or just nitroaereus.â Question: â John Mayow died in what year?â Prediction: â 1641-1679â , Answer: â 1679â Table 4: Error analysis on SQuAD. We randomly selected EM-incorrect answers and classiï¬ ed them into 6 different categories. | 1611.01603#41 | 1611.01603#43 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#43 | Bidirectional Attention Flow for Machine Comprehension | Only relevant sentence(s) from the context shown for brevity. 12 Published as a conference paper at ICLR 2017 # B VARIATIONS OF SIMILARITY AND FUSION FUNCTIONS Eqn. 1: dot product Eqn. 1: linear Eqn. 1: bilinear Eqn. 1: linear after MLP Eqn. 2: MLP after concat BIDAF (single) EM F1 75.5 65.5 69.7 59.5 71.8 61.6 76.4 66.2 77.0 67.1 77.3 68.0 Table 5: Variations of similarity function α (Equation 1) and fusion function β (Equation 2) and their performance on the dev data of SQuAD. See Appendix B for the details of each variation. In this appendix section, we experimentally demonstrate how different choices of the similarity function α (Equation 1) and the fusion function β (Equation 2) impact the performance of our model. Each variation is deï¬ ned as following: Eqn. 1: dot product. Dot product α is deï¬ ned as a(h,u) =h'u (6) where T indicates matrix transpose. Dot product has been used for the measurement of similarity between two vectors by {Hill et al.|(2016). Eqn. 1: linear. Linear α is deï¬ ned as (7) lin â | 1611.01603#42 | 1611.01603#44 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#44 | Bidirectional Attention Flow for Machine Comprehension | R4d is a trainable weight matrix. This can be considered as the simpliï¬ cation of where wi â ¬ R*4 is a trainable weight matrix. This can be considered as the simplification of Equation|1|by dropping the term h o u in the concatenation. Eqn. 1: bilinear. Bilinear α is deï¬ ned as a(h, u) = h' Wu (8) where Wj; â ¬ R?¢*?4 is a trainable weight matrix. Bilinear term has been used by |Chen et al. (2016). Eqn. 1: linear after MLP. We can also perform linear mapping after single layer of perceptron: a(h, u) = wi, tanh(Wip[h; u] + Drip) (9) where Wmlp and bmlp are trainable weight matrix and bias, respectively. Linear mapping after perceptron layer has been used by Hermann et al. (2015). | 1611.01603#43 | 1611.01603#45 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#45 | Bidirectional Attention Flow for Machine Comprehension | # Eqn. 2: MLP after concatenation. We can deï¬ ne β as β(h, Ë u, Ë h) = max(0, Wmlp[h; Ë u; h â ¦ Ë u; h â ¦ Ë h] + bmlp) where Wmlp â R2dà 8d and bmlp â R2d are trainable weight matrix and bias. This is equivalent to adding ReLU after linearly transforming the original deï¬ nition of β. Since the output dimension of β changes, the input dimension of the ï¬ rst LSTM of the modeling layer will change as well. The results of these variations on the dev data of SQuAD are shown in Table 5. It is important to note that there are non-trivial gaps between our deï¬ nition of α and other deï¬ nitions employed by previous work. Adding MLP in β does not seem to help, yielding slightly worse result than β without MLP. | 1611.01603#44 | 1611.01603#46 | 1611.01603 | [
"1606.02245"
]
|
1611.01603#46 | Bidirectional Attention Flow for Machine Comprehension | 13 | 1611.01603#45 | 1611.01603 | [
"1606.02245"
]
|
|
1611.01626#0 | Combining policy gradient and Q-learning | 7 1 0 2 r p A 7 ] G L . s c [ 3 v 6 2 6 1 0 . 1 1 6 1 : v i X r a Published as a conference paper at ICLR 2017 # COMBINING POLICY GRADIENT AND Q-LEARNING # Brendan Oâ Donoghue, R´emi Munos, Koray Kavukcuoglu & Volodymyr Mnih Deepmind {bodonoghue,munos,korayk,vmnih}@google.com # ABSTRACT | 1611.01626#1 | 1611.01626 | [
"1602.01783"
]
|
|
1611.01626#1 | Combining policy gradient and Q-learning | Policy gradient is an efï¬ cient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the ï¬ xed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. | 1611.01626#0 | 1611.01626#2 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#2 | Combining policy gradient and Q-learning | We refer to the new technique as â PGQLâ , for policy gradient and Q-learning. We also establish an equivalency between action-value ï¬ tting techniques and actor-critic algorithms, showing that regular- ized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate im- proved data efï¬ ciency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning. | 1611.01626#1 | 1611.01626#3 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#3 | Combining policy gradient and Q-learning | # INTRODUCTION In reinforcement learning an agent explores an environment and through the use of a reward signal learns to optimize its behavior to maximize the expected long-term return. Reinforcement learning has seen success in several areas including robotics (Lin, 1993; Levine et al., 2015), computer games (Mnih et al., 2013; 2015), online advertising (Pednault et al., 2002), board games (Tesauro, 1995; Silver et al., 2016), and many others. For an introduction to reinforcement learning we refer to the classic text by Sutton & Barto (1998). In this paper we consider model-free reinforcement learning, where the state-transition function is not known or learned. There are many different algorithms for model-free reinforcement learning, but most fall into one of two families: action-value ï¬ tting and policy gradient techniques. Action-value techniques involve ï¬ tting a function, called the Q-values, that captures the expected return for taking a particular action at a particular state, and then following a particular policy there- after. Two alternatives we discuss in this paper are SARSA (Rummery & Niranjan, 1994) and Q-learning (Watkins, 1989), although there are many others. SARSA is an on-policy algorithm whereby the action-value function is ï¬ t to the current policy, which is then reï¬ ned by being mostly greedy with respect to those action-values. On the other hand, Q-learning attempts to ï¬ nd the Q- values associated with the optimal policy directly and does not ï¬ t to the policy that was used to generate the data. Q-learning is an off-policy algorithm that can use data generated by another agent or from a replay buffer of old experience. Under certain conditions both SARSA and Q-learning can be shown to converge to the optimal Q-values, from which we can derive the optimal policy (Sutton, 1988; Bertsekas & Tsitsiklis, 1996). In policy gradient techniques the policy is represented explicitly and we improve the policy by updating the parameters in the direction of the gradient of the performance (Sutton et al., 1999; Silver et al., 2014; Kakade, 2001). Online policy gradient typically requires an estimate of the action-value function of the current policy. | 1611.01626#2 | 1611.01626#4 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#4 | Combining policy gradient and Q-learning | For this reason they are often referred to as actor-critic methods, where the actor refers to the policy and the critic to the estimate of the action-value function (Konda & Tsitsiklis, 2003). Vanilla actor-critic methods are on-policy only, although some attempts have been made to extend them to off-policy data (Degris et al., 2012; Levine & Koltun, 2013). 1 Published as a conference paper at ICLR 2017 | 1611.01626#3 | 1611.01626#5 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#5 | Combining policy gradient and Q-learning | In this paper we derive a link between the Q-values induced by a policy and the policy itself when the policy is the ï¬ xed point of a regularized policy gradient algorithm (where the gradient van- ishes). This connection allows us to derive an estimate of the Q-values from the current policy, which we can reï¬ ne using off-policy data and Q-learning. We show in the tabular setting that when the regularization penalty is small (the usual case) the resulting policy is close to the policy that would be found without the addition of the Q-learning update. Separately, we show that regularized actor-critic methods can be interpreted as action-value ï¬ tting methods, where the Q-values have been parameterized in a particular way. We conclude with some numerical examples that provide empirical evidence of improved data efï¬ ciency and stability of PGQL. 1.1 PRIOR WORK Here we highlight various axes along which our work can be compared to others. | 1611.01626#4 | 1611.01626#6 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#6 | Combining policy gradient and Q-learning | In this paper we use entropy regularization to ensure exploration in the policy, which is a common practice in policy gradient (Williams & Peng, 1991; Mnih et al., 2016). An alternative is to use KL-divergence instead of entropy as a regularizer, or as a constraint on how much deviation is permitted from a prior policy (Bagnell & Schneider, 2003; Peters et al., 2010; Schulman et al., 2015; Fox et al., 2015). Natural policy gradient can also be interpreted as putting a constraint on the KL-divergence at each step of the policy improvement (Amari, 1998; Kakade, 2001; Pascanu & Bengio, 2013). In Sallans & Hinton (2004) the authors use a Boltzmann exploration policy over estimated Q-values which they update using TD-learning. In Heess et al. (2012) this was extended to use an actor-critic algorithm instead of TD-learning, however the two updates were not combined as we have done in this paper. In Azar et al. (2012) the authors develop an algorithm called dynamic policy programming, whereby they apply a Bellman-like update to the action-preferences of a policy, which is similar in spirit to the update we describe here. In Norouzi et al. (2016) the authors augment a maximum likelihood objective with a reward in a supervised learning setting, and develop a connection that resembles the one we develop here between the policy and the Q-values. Other works have attempted to com- bine on and off-policy learning, primarily using action-value ï¬ tting methods (Wang et al., 2013; Hausknecht & Stone, 2016; Lehnert & Precup, 2015), with varying degrees of success. In this paper we establish a connection between actor-critic algorithms and action-value learning algorithms. In particular we show that TD-actor-critic (Konda & Tsitsiklis, 2003) is equivalent to expected-SARSA (Sutton & Barto, 1998, Exercise 6.10) with Boltzmann exploration where the Q-values are decom- posed into advantage function and value function. | 1611.01626#5 | 1611.01626#7 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#7 | Combining policy gradient and Q-learning | The algorithm we develop extends actor-critic with a Q-learning style update that, due to the decomposition of the Q-values, resembles the update of the dueling architecture (Wang et al., 2016). Recently, the ï¬ eld of deep reinforcement learning, i.e., the use of deep neural networks to represent action-values or a policy, has seen a lot of success (Mnih et al., 2015; 2016; Silver et al., 2016; Riedmiller, 2005; Lillicrap et al., 2015; Van Hasselt et al., 2016). In the examples section we use a neural network with PGQL to play the Atari games suite. # 2 REINFORCEMENT LEARNING We consider the infinite horizon, discounted, finite state and action space Markov decision process, with state space S, action space A and rewards at each time period denoted by r; â ¬ R. A policy am: Sx A â R, is a mapping from state-action pair to the probability of taking that action at that state, so it must satisfy }>,. 4 7(s,@) = 1 for all states s â ¬ S. Any policy m induces a probability distribution over visited states, dâ ¢ : S â + R, (which may depend on the initial state), so the probability of seeing state-action pair (s,a) â ¬ S x Ais dâ ¢(s)r(s,a). In reinforcement learning an â agentâ interacts with an environment over a number of times steps. At each time step t the agent receives a state s; and a reward r;, and selects an action a; from the policy mt, at which point the agent moves to the next state 5:4; ~ P(-,s:,a1), where P(sâ ,s,a) is the probability of transitioning from state s to state sâ after taking action a. This continues until the agent encounters a terminal state (after which the process is typically restarted). The goal of the agent is to find a policy 7 that maximizes the expected total discounted return J(7) = E()0P29 y'r: | 7), where the expectation is with respect to the initial state distribution, the state-transition probabilities, and the policy, and where 7 â | 1611.01626#6 | 1611.01626#8 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#8 | Combining policy gradient and Q-learning | ¬ (0, 1) is the discount factor that, loosely speaking, controls how much the agent prioritizes long-term versus short-term rewards. Since the agent starts with no knowledge 2 Published as a conference paper at ICLR 2017 of the environment it must continually explore the state space and so will typically use a stochastic policy. Action-values. The action-value, or Q-value, of a particular state under policy a is the ex- pected total discounted return from taking that action at that state and following 7 thereafter, i.e., Qâ ¢(s,a) = EQ Y'r: | 80 = 8,40 = a, 7). The value of state s under policy 7 is denoted by Vâ ¢(s) = E(0207'7: | 80 = 8,7), which is the expected total discounted return of policy 7 from state s. The optimal action-value function is denoted Q* and satisfies Q*(s,a) = max, Qâ ¢(s, a) for each (s, a). The policy that achieves the maximum is the optimal policy 7*, with value function V*. The advantage function is the difference between the action-value and the value function, i.e., Aâ ¢(s,a) = Q*(s, a) â V%(s), and represents the additional expected reward of taking action a over the average performance of the policy from state s. Since V"(s) = S>, 7(s,a)Q*(s,a) we have the identity }>, 7(s,a)A*(s,a) = 0, which simply states that the policy 7 has no advantage over itself. Bellman equation. The Bellman operator T Ï (Bellman, 1957) for policy Ï is deï¬ ned as | 1611.01626#7 | 1611.01626#9 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#9 | Combining policy gradient and Q-learning | T*Q(s,a) = EB (r(s,a) +7Q(s',0)) where the expectation is over next state sâ ~ P(-,s,a), the reward r(s, a), and the action b from policy 7,. The Q-value function for policy 7 is the fixed point of the Bellman operator for 7, i.e., T7 Qâ ¢ = Qâ . The optimal Bellman operator 7* is defined as T*Q(s,a) = Ells, a) + ymax Q(s',b)), where the expectation is over the next state sâ ~ P(-,s,a), and the reward r(s,a). The optimal Q-value function is the fixed point of the optimal Bellman equation, i.e, 7*Q* = Q*. Both the m-Bellman operator and the optimal Bellman operator are y-contraction mappings in the sup-norm, ie., ||TQi â TQalloo < YI]Qi â Qalloo, for any Qi, Q2 â | 1611.01626#8 | 1611.01626#10 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#10 | Combining policy gradient and Q-learning | ¬ RS*4. From this fact one can show that the fixed point of each operator is unique, and that value iteration converges, i.e., (T")*Q > Qâ and (T*)*Q â Q* from any initial Q. 2005). 2.1 ACTION-VALUE LEARNING In value based reinforcement learning we approximate the Q-values using a function approximator. We then update the parameters so that the Q-values are as close to the fixed point of a Bellman equation as possible. If we denote by Q(s,a;0) the approximate Q-values parameterized by 0, then Q-learning updates the Q-values along direction Es ,4(T*Q(s, a; 4) â Q(s, a; 6))VoQ(s, a; 4) and SARSA updates the Q-values along direction E, «(77 Q(s, a; 0) â Q(s, a; 6))VoQ(s, a; 6). In the online setting the Bellman operator is approximated by sampling and bootstrapping, whereby the Q-values at any state are updated using the Q-values from the next visited state. Exploration is achieved by not always taking the action with the highest Q-value at each time step. One common technique called â epsilon greedyâ is to sample a random action with probability « > 0, where eâ ¬ starts high and decreases over time. Another popular technique is â Boltzmann explo- rationâ , where the policy is given by the softmax over the Q-values with a temperature T, i.e., m(s,a) = exp(Q(s,a)/T)/ >>, exp(Q(s, 6)/T), where it is common to decrease the temperature over time. 2.2 POLICY GRADIENT Alternatively, we can parameterize the policy directly and attempt to improve it via gradient ascent on the performance J. The policy gradient theorem (Sutton et al., 1999) states that the gradient of J with respect to the parameters of the policy is given by | 1611.01626#9 | 1611.01626#11 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#11 | Combining policy gradient and Q-learning | â θJ(Ï ) = E s,a QÏ (s, a)â θ log Ï (s, a), (1) where the expectation is over (s, a) with probability dÏ (s)Ï (s, a). In the original derivation of the policy gradient theorem the expectation is over the discounted distribution of states, i.e., over dÏ ,s0 t=0 γtP r{st = s | s0, Ï }. However, the gradient update in that case will assign a low γ 3 | 1611.01626#10 | 1611.01626#12 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#12 | Combining policy gradient and Q-learning | Published as a conference paper at ICLR 2017 weight to states that take a long time to reach and can therefore have poor empirical performance. In practice the non-discounted distribution of states is frequently used instead. In certain cases this is equivalent to maximizing the average (i.e., non-discounted) policy performance, even when QÏ uses a discount factor (Thomas, 2014). Throughout this paper we will use the non-discounted distribution of states. In the online case it is common to add an entropy regularizer to the gradient in order to prevent the policy becoming deterministic. This ensures that the agent will explore continually. In that case the (batch) update becomes | 1611.01626#11 | 1611.01626#13 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#13 | Combining policy gradient and Q-learning | â θ â E s,a QÏ (s, a)â θ log Ï (s, a) + α E s â θH Ï (s), (2) where H7(s) = â >, 7(s, a) log 7(s, a) denotes the entropy of policy 7, and a > 0 is the reg- ularization penalty parameter. Throughout this paper we will make use of entropy regularization, however many of the results are true for other choices of regularizers with only minor modification, e.g., KL-divergence. Note that equation (2) requires exact knowledge of the Q-values. In practice they can be estimated, e.g., by the sum of discounted rewards along an observed trajectory 1992), and the policy gradient will still perform well (Konda & Tsitsiklis] |2003). # 3 REGULARIZED POLICY GRADIENT ALGORITHM In this section we derive a relationship between the policy and the Q-values when using a regularized policy gradient algorithm. This allows us to transform a policy into an estimate of the Q-values. We then show that for small regularization the Q-values induced by the policy at the ï¬ xed point of the algorithm have a small Bellman error in the tabular case. 3.1 TABULAR CASE Consider the fixed points of the entropy regularized policy gradient update Qh. Let us define f(@) = Es, Q" (8, a)Vo log 7(s, a) + aE, VoH (5), and gs(7) = 3°, (s, a) for each s. A fixed point is one where we can no longer update 6 in the direction of f (0) without violating one of the constraints gs(7) = 1, i.e, where f(@) is in the span of the vectors {Vogs(7)}. In other words, any fixed point must satisfy f(0) = >>, AsVogs(), where for each s the Lagrange multiplier \, â ¬ R ensures that gs(7) = 1. Substituting in terms to this equation we obtain E s,a (QÏ (s, a) â α log Ï (s, a) â cs) â θ log Ï (s, a) = 0, (3) | 1611.01626#12 | 1611.01626#14 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#14 | Combining policy gradient and Q-learning | where we have absorbed all constants into c â R|S|. Any solution Ï to this equation is strictly positive element-wise since it must lie in the domain of the entropy function. In the tabular case Ï is represented by a single number for each state and action pair and the gradient of the policy with respect to the parameters is the indicator function, i.e., â θ(t,b)Ï (s, a) = 1(t,b)=(s,a). From this we obtain QÏ (s, a) â α log Ï (s, a) â cs = 0 for each s (assuming that the measure dÏ (s) > 0). Multiplying by Ï (a, s) and summing over a â A we get cs = αH Ï (s) + V Ï (s). Substituting c into equation (3) we have the following formulation for the policy: Ï (s, a) = exp(AÏ (s, a)/α â H Ï (s)), (4) for all s â S and a â A. In other words, the policy at the ï¬ xed point is a softmax over the advantage function induced by that policy, where the regularization parameter α can be interpreted as the temperature. Therefore, we can use the policy to derive an estimate of the Q-values, | 1611.01626#13 | 1611.01626#15 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#15 | Combining policy gradient and Q-learning | Ë QÏ (s, a) = Ë AÏ (s, a) + V Ï (s) = α(log Ï (s, a) + H Ï (s)) + V Ï (s). (5) With this we can rewrite the gradient update (2) as â θ â E s,a (QÏ (s, a) â Ë QÏ (s, a))â θ log Ï (s, a), (6) | 1611.01626#14 | 1611.01626#16 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#16 | Combining policy gradient and Q-learning | since the update is unchanged by per-state constant offsets. When the policy is parameterized as a softmax, i.e., 7(s,a) = exp(W(s,a))/ >>, exp W(s,b), the quantity W is sometimes referred to as the action-preferences of the policy (Sutton & Barto} Chapter 6.6). Equation (7) states that the action preferences are equal to the Q-values scaled by 1/a, up to an additive per-state constant. | 1611.01626#15 | 1611.01626#17 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#17 | Combining policy gradient and Q-learning | 4 Published as a conference paper at ICLR 2017 3.2 GENERAL CASE Consider the following optimization problem: minimize Es,4(q(s,a) â alog 7(s, a))? a) subjectto S°,a(s,a)=1, sES over variable 6 which parameterizes 7, where we consider both the measure in the expectation and the values q(s, a) to be independent of @. The optimality condition for this problem is # E s,a (q(s, a) â α log Ï (s, a) + cs)â θ log Ï (s, a) = 0, where c â R|S| is the Lagrange multiplier associated with the constraint that the policy sum to one at each state. Comparing this to equation (3), we see that if q = QÏ and the measure in the expectation is the same then they describe the same set of ï¬ xed points. This suggests an interpretation of the ï¬ xed points of the regularized policy gradient as a regression of the log-policy onto the Q-values. In the general case of using an approximation architecture we can interpret equation (3) as indicating that the error between QÏ and Ë QÏ is orthogonal to â θi log Ï for each i, and so cannot be reduced further by changing the parameters, at least locally. In this case equation (4) is unlikely to hold at a solution to (3), however with a good approximation architecture it may hold approximately, so that the we can derive an estimate of the Q-values from the policy using equation (5). We will use this estimate of the Q-values in the next section. 3.3 CONNECTION TO ACTION-VALUE METHODS The previous section made a connection between regularized policy gradient and a regression onto the Q-values at the ï¬ xed point. In this section we go one step further, showing that actor-critic methods can be interpreted as action-value ï¬ tting methods, where the exact method depends on the choice of critic. Actor-critic methods. Consider an agent using an actor-critic method to learn both a policy Ï and a value function V . At any iteration k, the value function V k has parameters wk, and the policy is of the form | 1611.01626#16 | 1611.01626#18 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#18 | Combining policy gradient and Q-learning | a*(s,a) = exp(W*(s, a)/a)/ S> exp(W*(s,b)/a), 8) b where W* is parameterized by 6" and a > 0 is the entropy regularization penalty. In this case Vo log r*(s,a) = (1/a)(VeW*(s, a) â 0, 7(s,b)VeW*(s, b)). Using equation a) the parame- ters are updated as AO x E dac(VoW*(s,a) â S> m*(s,b)VoW*(s,b)), Aw oc E dacVwV*(s) (9) sa sa b where δac is the critic minus baseline term, which depends on the variant of actor-critic being used (see the remark below). Action-value methods. Compare this to the case where an agent is learning Q-values with a du- eling architecture (Wang et al., 2016), which at iteration k is given by Qk(s, a) = Y k(s, a) â µ(s, b)Y k(s, b) + V k(s), b where µ is a probability distribution, Y k is parameterized by θk, V k is parameterized by wk, and the exploration policy is Boltzmann with temperature α, i.e., Ï k(s, a) = exp(Y k(s, a)/α)/ exp(Y k(s, b)/α). b (10) In action value ï¬ tting methods at each iteration the parameters are updated to reduce some error, where the update is given by AO x E dbav(VoÂ¥*(s,a) â Ss (s,b)VeY*(s,b)), Aw ox E davVwV*(s) (11) 3a 7 sa where δav is the action-value error term and depends on which algorithm is being used (see the remark below). | 1611.01626#17 | 1611.01626#19 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#19 | Combining policy gradient and Q-learning | 5 Published as a conference paper at ICLR 2017 Equivalence. The two policies (8) and (10) are identical if W* = Y* for all k. Since X° and Y° can be initialized and parameterized in the same way, and assuming the two value function estimates are initialized and parameterized in the same way, all that remains is to show that the updates in equations and (9p are identical. Comparing the two, and assuming that dac = day (see remark), we see that the only difference is that the measure is not fixed in (9). but is equal to the current policy and therefore changes after each update. Replacing ju in (11) with 7* makes the updates identical, in which case W* = Y* at all iterations and the two policies and are always the same. In other words, the slightly modified action-value method is equivalent to an actor-critic policy gradient method, and vice-versa (modulo using the non-discounted distribu- tion of states, as discussed in 2.2). In particular, regularized policy gradient methods can be inter- preted as advantage function learning techniques Cretan, since at the optimum the quantity W(s,a) â do, 7(s,b)W(s,b) = a(log 7(s, a) + Hâ ¢(s)) will be equal to the advantage function values in the tabular case. Remark. In SARSA (Rummery & Niranjan] 1994) we set day = r(s,a) + yQ(sâ ,b) â Q(s, a), where b is the action selected at state sâ , which would be equivalent to using a bootstrap critic in equation (6) where Qâ (s,a) = r(s,a) + yQ(sâ ,b). In expected-SARSA (Sutton & Barto} {1998 Exercise 6.10), (Van Seijen et al.|[2009)) we take the expectation over the Q-values at the next state, $0 day = T(s,a)+7V(sâ ) â Q(s, a). This is equivalent to TD-actor-critic (Konda & Tsitsiklis}/2003) r V In where we use the value function to provide the critic, which is given by Q* = r(s,a) + yV(sâ ). | 1611.01626#18 | 1611.01626#20 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#20 | Combining policy gradient and Q-learning | Q-learning Say = 7 (8, a) + ymax, Q(sâ , b) â Q(s, a), which would be equivalent to using an optimizing critic that bootstraps using the max Q-value at the next state, i.e, Q7(s,a) = r(s,a) + ymaxp Q (sâ ,b). In REINFORCE the critic is the Monte Carlo return from that state on, ie, Q"(s,a) = (Pg 7'Tt | 80 = 8,a9 = a). If the return trace is truncated and a bootstrap is performed after n-steps, this is equivalent to n-step SARSA or n-step Q-learning, depending on the form of the bootstrap (Peng & Williams} |T996p. 3.4 BELLMAN RESIDUAL In this section we show that ||7*Q** â Q7«|| > 0 with decreasing regularization penalty a, where Tq is the policy defined by (4) and Q* is the corresponding Q-value function, both of which are functions of a. We shall show that it converges to zero by bounding the sequence below by zero and above with a sequence that converges to zero. First, we have that T*Q7* > T⠢°Qâ ¢* = Qâ ¢, since J* is greedy with respect to the Q-values. So T*Q7« â Q7* > 0. Now, to bound from above we need the fact that 7.(s,a) = exp(Q**(s, a)/a)/ >>, exp(Q7*(s, b)/a) < exp((Q7*(s, a) â max, Q7*(s,c))/a). Using this we have 0 < T*Q7(s,a) â Qâ ¢(s,a) = TOF (s,a) â TQ*(s,a) = E, (max, Qre (sâ ,c) -â do, Tals! b)Qâ ¢= (s', b)) = By Sy nals! dylimax, Q*(s!,c) â Q*(s!,0)) < Ey d7, exp((Qâ ¢ (s', b) â Q* (s/, b*))/a) (max. Q**(s', c) â | 1611.01626#19 | 1611.01626#21 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#21 | Combining policy gradient and Q-learning | Q*(s',b)) Ey oy fa (max, Q7*(sâ ,c) â Q7*(s',b)), where we deï¬ ne fα(x) = x exp(â x/α). To conclude our proof we use the fact that fα(x) â ¤ supx fα(x) = fα(α) = αeâ 1, which yields 0< T*Qâ ¢(s,a) â Q**(s,a) < |Alaeâ ¢* for all (s,a), and so the Bellman residual converges to zero with decreasing a. In other words, for small enough a (which is the regime we are interested in) the Q-values induced by the policy will have a small Bellman residual. Moreover, this implies that limy_,9 Q7* = Q*, as one might expect. # 4 PGQL In this section we introduce the main contribution of the paper, which is a technique to combine pol- icy gradient with Q-learning. We call our technique â PGQLâ , for policy gradient and Q-learning. In the previous section we showed that the Bellman residual is small at the ï¬ xed point of a regularized | 1611.01626#20 | 1611.01626#22 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#22 | Combining policy gradient and Q-learning | 6 Published as a conference paper at ICLR 2017 policy gradient algorithm when the regularization penalty is sufï¬ ciently small. This suggests adding an auxiliary update where we explicitly attempt to reduce the Bellman residual as estimated from the policy, i.e., a hybrid between policy gradient and Q-learning. We ï¬ rst present the technique in a batch update setting, with a perfect knowledge of QÏ (i.e., a perfect critic). Later we discuss the practical implementation of the technique in a reinforcement learning setting with function approximation, where the agent generates experience from interacting with the environment and needs to estimate a critic simultaneously with the policy. | 1611.01626#21 | 1611.01626#23 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#23 | Combining policy gradient and Q-learning | 4.1 PGQL UPDATE Deï¬ ne the estimate of Q using the policy as Ë QÏ (s, a) = α(log Ï (s, a) + H Ï (s)) + V (s), (12) where V has parameters w and is not necessarily V Ï as it was in equation (5). In (2) it was unneces- sary to estimate the constant since the update was invariant to constant offsets, although in practice it is often estimated for use in a variance reduction technique (Williams, 1992; Sutton et al., 1999). Since we know that at the ï¬ xed point the Bellman residual will be small for small α, we can consider updating the parameters to reduce the Bellman residual in a fashion similar to Q-learning, i.e., Aé x E(T*Q"(s,a) â Q"(s,a))Vologn(s,a), Aw x E(7*Q"(s,a) â Q7(s,a))VwV(s). sa s,a (13) This is Q-learning applied to a particular form of the Q-values, and can also be interpreted as an actor-critic algorithm with an optimizing (and therefore biased) critic. The full scheme simply combines two updates to the policy, the regularized policy gradient update (2) and the Q-learning update (13). Assuming we have an architecture that provides a policy Ï , a value function estimate V , and an action-value critic QÏ , then the parameter updates can be written as (suppressing the (s, a) notation) AO x (1 =) Ex,a(Qâ ¢ â Q") Vo log 7 + 7 Es,a(T*Q" â Q7)Vo log, (14) Aw « (1 = 1) Es,a(Qâ â Q")VuV + 1 Es.a(T*Q" â Q7)VuV, here η â | 1611.01626#22 | 1611.01626#24 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#24 | Combining policy gradient and Q-learning | [0, 1] is a weighting parameter that controls how much of each update we apply. In the case where η = 0 the above scheme reduces to entropy regularized policy gradient. If η = 1 then it becomes a variant of (batch) Q-learning with an architecture similar to the dueling architecture (Wang et al., 2016). Intermediate values of η produce a hybrid between the two. Examining the update we see that two error terms are trading off. The ï¬ rst term encourages consistency with critic, and the second term encourages optimality over time. However, since we know that under standard policy gradient the Bellman residual will be small, then it follows that adding a term that reduces that error should not make much difference at the ï¬ | 1611.01626#23 | 1611.01626#25 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#25 | Combining policy gradient and Q-learning | xed point. That is, the updates should be complementary, pointing in the same general direction, at least far away from a ï¬ xed point. This update can also be interpreted as an actor-critic update where the critic is given by a weighted combination of a standard critic and an optimizing critic. Yet another interpretation of the update is a combination of expected-SARSA and Q-learning, where the Q-values are parameterized as the sum of an advantage function and a value function. # 4.2 PRACTICAL IMPLEMENTATION The updates presented in (14) are batch updates, with an exact critic QÏ . In practice we want to run this scheme online, with an estimate of the critic, where we donâ t necessarily apply the policy gradient update at the same time or from same data source as the Q-learning update. Our proposal scheme is as follows. One or more agents interact with an environment, encountering states and rewards and performing on-policy updates of (shared) parameters using an actor-critic algorithm where both the policy and the critic are being updated online. Each time an agent receives new data from the environment it writes it to a shared replay memory buffer. Periodically a separate learner process samples from the replay buffer and performs a step of Q-learning on the parameters of the policy using (13). | 1611.01626#24 | 1611.01626#26 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#26 | Combining policy gradient and Q-learning | This scheme has several advantages. The critic can accumulate the Monte 7 Published as a conference paper at ICLR 2017 (a) Grid world. (b) Performance versus agent steps in grid world. Figure 1: Grid world experiment. Carlo return over many time periods, allowing us to spread the inï¬ uence of a reward received in the future backwards in time. Furthermore, the replay buffer can be used to store and replay â importantâ past experiences by prioritizing those samples (Schaul et al., 2015). The use of the replay buffer can help to reduce problems associated with correlated training data, as generated by an agent explor- ing an environment where the states are likely to be similar from one time step to the next. Also the use of replay can act as a kind of regularizer, preventing the policy from moving too far from satisfying the Bellman equation, thereby improving stability, in a similar sense to that of a policy â trust-regionâ | 1611.01626#25 | 1611.01626#27 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#27 | Combining policy gradient and Q-learning | (Schulman et al., 2015). Moreover, by batching up replay samples to update the net- work we can leverage GPUs to perform the updates quickly, this is in comparison to pure policy gradient techniques which are generally implemented on CPU (Mnih et al., 2016). Since we perform Q-learning using samples from a replay buffer that were generated by a old policy we are performing (slightly) off-policy learning. However, Q-learning is known to converge to the optimal Q-values in the off-policy tabular case (under certain conditions) (Sutton & Barto, 1998), and has shown good performance off-policy in the function approximation case (Mnih et al., 2013). 4.3 MODIFIED FIXED POINT The PGQL updates in equation (14) have modiï¬ ed the ï¬ xed point of the algorithm, so the analysis of §3 is no longer valid. Considering the tabular case once again, it is still the case that the policy Ï â exp( Ë QÏ /α) as before, where Ë QÏ is deï¬ ned by (12), however where previously the ï¬ xed point satisï¬ ed Ë QÏ = QÏ , with QÏ corresponding to the Q-values induced by Ï , now we have Qâ ¢ = (1 n)Q" +nT*Q", (1s) Or equivalently, if 7 < 1, we have Qâ ¢ = (1 â 7) ro 1*(T*)*Qâ ¢. In the appendix we show that |Qâ ¢ â Qâ ¢|| > 0 and that ||7*Q* â Qâ || > 0 with decreasing a in the tabular case. That is, for small a the induced Q-values and the Q-values estimated from the policy are close, and we still have the guarantee that in the limit the Q-values are optimal. In other words, we have not perturbed the policy very much by the addition of the auxiliary update. | 1611.01626#26 | 1611.01626#28 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#28 | Combining policy gradient and Q-learning | # 5 NUMERICAL EXPERIMENTS 5.1 GRID WORLD In this section we discuss the results of running PGQL on a toy 4 by 6 grid world, as shown in Figure 1a. The agent always begins in the square marked â Sâ and the episode continues until it reaches the square marked â Tâ , upon which it receives a reward of 1. All other times it receives no reward. For this experiment we chose regularization parameter α = 0.001 and discount factor γ = 0.95. Figure 1b shows the performance traces of three different agents learning in the grid world, running from the same initial random seed. The lines show the true expected performance of the policy | 1611.01626#27 | 1611.01626#29 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#29 | Combining policy gradient and Q-learning | 8 Published as a conference paper at ICLR 2017 Q-learning . Q Policy XY, t TD learning gradient NS Policy / a, A Input Figure 2: PGQL network augmentation. from the start state, as calculated by value iteration after each update. The blue-line is standard TD-actor-critic (Konda & Tsitsiklis, 2003), where we maintain an estimate of the value function and use that to generate an estimate of the Q-values for use as the critic. The green line is Q-learning where at each step an update is performed using data drawn from a replay buffer of prior experience and where the Q-values are parameterized as in equation (12). The policy is a softmax over the Q-value estimates with temperature α. The red line is PGQL, which at each step ï¬ rst performs the TD-actor-critic update, then performs the Q-learning update as in (14). The grid world was totally deterministic, so the step size could be large and was chosen to be 1. A step-size any larger than this made the pure actor-critic agent fail to learn, but both PGQL and Q-learning could handle some increase in the step-size, possibly due to the stabilizing effect of using replay. It is clear that PGQL outperforms the other two. At any point along the x-axis the agents have seen the same amount of data, which would indicate that PGQL is more data efï¬ cient than either of the vanilla methods since it has the highest performance at practically every point. # 5.2 ATARI We tested our algorithm on the full suite of Atari benchmarks (Bellemare et al., 2012), using a neural network to parameterize the policy. | 1611.01626#28 | 1611.01626#30 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#30 | Combining policy gradient and Q-learning | In ï¬ gure 2 we show how a policy network can be augmented with a parameterless additional layer which outputs the Q-value estimate. With the exception of the extra layer, the architecture and parameters were chosen to exactly match the asynchronous advantage actor-critic (A3C) algorithm presented in Mnih et al. (2016), which in turn reused many of the settings from Mnih et al. (2015). Speciï¬ cally we used the exact same learning rate, number of workers, entropy penalty, bootstrap horizon, and network architecture. This allows a fair comparison between A3C and PGQL, since the only difference is the addition of the Q-learning step. Our technique augmented A3C with the following change: After each actor-learner has accumulated the gradient for the policy update, it performs a single step of Q-learning from replay data as described in equation (13), where the minibatch size was 32 and the Q-learning learning rate was chosen to be 0.5 times the actor-critic learning rate (we mention learning rate ratios rather than choice of η in (14) because the updates happen at different frequencies and from different data sources). Each actor-learner thread maintained a replay buffer of the last 100k transitions seen by that thread. We ran the learning for 50 million agent steps (200 million Atari frames), as in (Mnih et al., 2016). In the results we compare against both A3C and a variant of asynchronous deep Q-learning. The changes we made to Q-learning are to make it similar to our method, with some tuning of the hyper- parameters for performance. We use the exact same network, the exploration policy is a softmax over the Q-values with a temperature of 0.1, and the Q-values are parameterized as in equation (12) (i.e., similar to the dueling architecture (Wang et al., 2016)), where α = 0.1. The Q-value updates are performed every 4 steps with a minibatch of 32 (roughly 5 times more frequently than PGQL). For each method, all games used identical hyper-parameters. The results across all games are given in table 3 in the appendix. All scores have been normal- ized by subtracting the average score achieved by an agent that takes actions uniformly at random. | 1611.01626#29 | 1611.01626#31 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#31 | Combining policy gradient and Q-learning | 9 Published as a conference paper at ICLR 2017 Each game was tested 5 times per method with the same hyper-parameters but with different ran- dom seeds. The scores presented correspond to the best score obtained by any run from a random start evaluation condition (Mnih et al., 2016). Overall, PGQL performed best in 34 games, A3C performed best in 7 games, and Q-learning was best in 10 games. In 6 games two or more methods tied. In tables 1 and 2 we give the mean and median normalized scores as percentage of an expert human normalized score across all games for each tested algorithm from random and human-start conditions respectively. In a human-start condition the agent takes over control of the game from randomly selected human-play starting points, which generally leads to lower performance since the agent may not have found itself in that state during training. In both cases, PGQL has both the highest mean and median, and the median score exceeds 100%, the human performance threshold. It is worth noting that PGQL was the worst performer in only one game, in cases where it was not the outright winner it was generally somewhere in between the performance of the other two algorithms. Figure 3 shows some sample traces of games where PGQL was the best performer. In these cases PGQL has far better data efï¬ ciency than the other methods. In ï¬ gure 4 we show some of the games where PGQL under-performed. In practically every case where PGQL did not perform well it had better data efï¬ ciency early on in the learning, but performance saturated or collapsed. We hypothesize that in these cases the policy has reached a local optimum, or over-ï¬ t to the early data, and might perform better were the hyper-parameters to be tuned. Mean Median A3C Q-learning 636.8 107.3 756.3 58.9 PGQL 877.2 145.6 Table 1: Mean and median normalized scores for the Atari suite from random starts, as a percentage of human normalized score. Mean Median A3C Q-learning 266.6 58.3 246.6 30.5 PGQL 416.7 103.3 Table 2: Mean and median normalized scores for the Atari suite from human starts, as a percentage of human normalized score. | 1611.01626#30 | 1611.01626#32 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#32 | Combining policy gradient and Q-learning | 12000 assault 16000 battle zone â asc â asc 10000 â â Q-learning 14000 QJearning â PGQL 12000 â â PGQL 8000 10000 6000 8000 4000 6000 4000 2000 2000 0 1 2 3 4 5 0 1 2 3 4 5 agent steps le7 agent steps le7 12000 chopper command Lovoo0 yars revenge â asc â asc 10000 â â Q-learning â Qlearning 80000 PGQL â PGQL 8000 60000 6000 40000 4000 Pry 20000 oO oO oO 1 2 3 4 5 oO 1 2 3 4 5 agent steps 1e7 agent steps 1e7 Figure 3: Some Atari runs where PGQL performed well. | 1611.01626#31 | 1611.01626#33 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#33 | Combining policy gradient and Q-learning | 10 Published as a conference paper at ICLR 2017 Py) breakout 35000 hero â A3c â A3c 700 | Q-learning 30000 â Qlearning 600 â PGaL 25000 500 2 20000 5 400 & 15000 300 200 10000 100 5000 0 1 2 3 4 5 agent steps le7 agent steps le7 25000 qbert 80000 up n down â _â age goo00 | â | Ase 20000 â â Qlearning â @learning PGQL 60000 â â PGQL 15000 50000 40000 10000 30000 20000 5000 10000 ° ° 0 1 2 3 4 5 0 1 2 3 4 5 agent steps le7 agent steps le7 Figure 4: Some Atari runs where PGQL performed poorly. # 6 CONCLUSIONS | 1611.01626#32 | 1611.01626#34 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#34 | Combining policy gradient and Q-learning | We have made a connection between the ï¬ xed point of regularized policy gradient techniques and the Q-values of the resulting policy. For small regularization (the usual case) we have shown that the Bellman residual of the induced Q-values must be small. This leads us to consider adding an auxiliary update to the policy gradient which is related to the Bellman residual evaluated on a transformation of the policy. This update can be performed off-policy, using stored experience. We call the resulting method â PGQLâ , for policy gradient and Q-learning. Empirically, we observe better data efï¬ ciency and stability of PGQL when compared to actor-critic or Q-learning alone. We veriï¬ ed the performance of PGQL on a suite of Atari games, where we parameterize the policy using a neural network, and achieved performance exceeding that of both A3C and Q-learning. # 7 ACKNOWLEDGMENTS We thank Joseph Modayil for many comments and suggestions on the paper, and Hubert Soyer for help with performance evaluation. We would also like to thank the anonymous reviewers for their constructive feedback. | 1611.01626#33 | 1611.01626#35 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#35 | Combining policy gradient and Q-learning | 11 Published as a conference paper at ICLR 2017 # REFERENCES Shun-Ichi Amari. Natural gradient works efï¬ ciently in learning. Neural computation, 10(2):251â 276, 1998. Mohammad Gheshlaghi Azar, Vicenc¸ G´omez, and Hilbert J Kappen. Dynamic policy programming. Journal of Machine Learning Research, 13(Nov):3207â 3245, 2012. | 1611.01626#34 | 1611.01626#36 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#36 | Combining policy gradient and Q-learning | J Andrew Bagnell and Jeff Schneider. Covariant policy search. In IJCAI, 2003. Leemon C Baird III. Advantage updating. Technical Report WL-TR-93-1146, Wright-Patterson Air Force Base Ohio: Wright Laboratory, 1993. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning envi- ronment: An evaluation platform for general agents. | 1611.01626#35 | 1611.01626#37 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#37 | Combining policy gradient and Q-learning | Journal of Artiï¬ cial Intelligence Research, 2012. # Richard Bellman. Dynamic programming. Princeton University Press, 1957. Dimitri P Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientiï¬ c, 2005. Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientiï¬ c, 1996. Thomas Degris, Martha White, and Richard S Sutton. | 1611.01626#36 | 1611.01626#38 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#38 | Combining policy gradient and Q-learning | Off-policy actor-critic. 2012. Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1207.4708, 2015. Matthew Hausknecht and Peter Stone. On-policy vs. off-policy updates for deep reinforcement learning. Deep Reinforcement Learning: Frontiers and Challenges, IJCAI 2016 Workshop, 2016. Nicolas Heess, David Silver, and Yee Whye Teh. Actor-critic reinforcement learning with energy- based policies. In JMLR: | 1611.01626#37 | 1611.01626#39 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#39 | Combining policy gradient and Q-learning | Workshop and Conference Proceedings 24, pp. 43â 57, 2012. Sham Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, volume 14, pp. 1531â 1538, 2001. Vijay R Konda and John N Tsitsiklis. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143â 1166, 2003. Lucas Lehnert and Doina Precup. | 1611.01626#38 | 1611.01626#40 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#40 | Combining policy gradient and Q-learning | Policy gradient methods for off-policy control. arXiv preprint arXiv:1512.04105, 2015. Sergey Levine and Vladlen Koltun. Guided policy search. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 1â 9, 2013. Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuo- motor policies. arXiv preprint arXiv:1504.00702, 2015. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Long-Ji Lin. Reinforcement learning for robots using neural networks. | 1611.01626#39 | 1611.01626#41 | 1611.01626 | [
"1602.01783"
]
|
1611.01626#41 | Combining policy gradient and Q-learning | Technical report, DTIC Document, 1993. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learn- ing Workshop. 2013. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Pe- tersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. | 1611.01626#40 | 1611.01626#42 | 1611.01626 | [
"1602.01783"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.