# Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction

Hyeonwoo Noh, Paul Hongsuck Seo, Bohyung Han

Abstract. We tackle the image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of a gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for the large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique: the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network, the joint network of the ImageQA CNN and the parameter prediction network, is trained end-to-end through back-propagation, and its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm achieves state-of-the-art performance on all available public ImageQA benchmarks.
[Figure 4. Sample images and questions in the VQA dataset [1]; panel (b) shows results of the proposed algorithm on a single common question for multiple images (e.g. DPPnet answers "resting" and "swimming" where the ground truths are "relaxing" and "fishing"). Each question requires a different type and/or level of understanding of the corresponding input image to find the correct answer. Answers in blue are correct while answers in red are incorrect; for the incorrect answers, ground-truth answers are provided within the parentheses.]
# 7. Conclusion
We proposed a novel architecture for image question answering based on two subnetworks: a classification network and a parameter prediction network. The classification network has a dynamic parameter layer, which enables it to determine its weights adaptively through the parameter prediction network. While predicting all entries of the weight matrix is infeasible due to its large dimensionality, we relieve this limitation using parameter hashing and weight sharing. The effectiveness of the proposed architecture is supported by experimental results showing state-of-the-art performance on three different datasets. Note that the proposed method achieves outstanding performance even without more complex recognition processes such as referencing objects. We believe that the proposed algorithm can be extended further by integrating an attention model [29] to solve such difficult problems.
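As a rough sketch of the mechanism summarized above (not the authors' implementation; all names and sizes below are illustrative), the parameter prediction network only has to output a small pool of candidate weights, and a fixed hash function decides which candidate fills each entry of the much larger dynamic weight matrix:

```python
import numpy as np

def hashed_weight_matrix(candidates, out_dim, in_dim, seed=0):
    """Fill an (out_dim x in_dim) weight matrix from a small pool of candidate
    weights. A fixed pseudo-random index per position stands in for the
    predefined hash function, so many positions share one candidate weight."""
    rng = np.random.RandomState(seed)
    idx = rng.randint(0, len(candidates), size=(out_dim, in_dim))
    return candidates[idx]

# Toy sizes: 64 predicted candidates populate a 10 x 512 dynamic layer.
num_candidates, out_dim, in_dim = 64, 10, 512

# In the paper the candidates come from a GRU + fully-connected layer applied
# to the question; here they are just random numbers for illustration.
candidate_weights = np.random.randn(num_candidates).astype(np.float32)

W_dynamic = hashed_weight_matrix(candidate_weights, out_dim, in_dim)
image_feature = np.random.randn(in_dim).astype(np.float32)
scores = W_dynamic @ image_feature   # question-conditioned answer scores
print(scores.shape)                  # (10,)
```

Because positions that hash to the same index share a single candidate, the number of values the prediction network must output stays fixed regardless of the size of the dynamic layer.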
# References
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: visual question answering. In ICCV, 2015.
[2] J. Ba, K. Swersky, S. Fidler, and R. Salakhutdinov. Predicting deep zero-shot convolutional neural networks using textual descriptions. In ICCV, 2015.
[3] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In ICML, 2015.
[4] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop, 2014.
[5] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In CVPR, 2014.
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: a large-scale hierarchical image database. In CVPR, 2009.
[7] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013.
[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: a deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[9] C. Fellbaum. WordNet: an electronic database, 1998.
[10] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? Dataset and methods for multilingual image question answering. In NIPS, 2015.
[11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[12] S. Ioffe and C. Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[13] D. Kingma and J. Ba. Adam: a method for stochastic optimization. In ICLR, 2015.
[14] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-thought vectors. In NIPS, 2015.
[15] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: common objects in context. In ECCV, 2014.
[16] L. Ma, Z. Lu, and H. Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015.
[17] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In NIPS, 2014.
[18] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: a neural-based approach to answering questions about images. In ICCV, 2015.
[19] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048, 2010.
[20] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012.
[21] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, 2014.
[22] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013.
[23] M. Ren, R. Kiros, and R. S. Zemel. Exploring models and data for image question answering. In NIPS, 2015.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[27] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: closing the gap to human-level performance in face verification. In CVPR, 2014.
[28] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In ACL, pages 133–138, 1994.
[29] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: neural image caption generation with visual attention. In ICML, 2015.
[30] J. Yao, S. Fidler, and R. Urtasun. Describing the scene as a whole: joint object detection, scene classification and semantic segmentation. In CVPR, 2012.
[31] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014.
# Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering

Huijuan Xu, Kate Saenko

Abstract. We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process which constitutes a single "hop" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3].
Keywords: Visual Question Answering, Spatial Attention, Memory Network, Deep Learning

# 1 Introduction

Visual Question Answering (VQA) is an emerging interdisciplinary research problem at the intersection of computer vision, natural language processing and artificial intelligence. It has many real-life applications, such as automatic querying of surveillance video [4] or assisting the visually impaired [5]. Compared to the recently popular image captioning task [6,7,8,9], VQA requires a deeper understanding of the image, but is considerably easier to evaluate. It also puts more focus on artificial intelligence, namely the inference process needed to produce the answer to the visual question.
[Fig. 1. We propose a Spatial Memory Network for VQA (SMem-VQA) that answers questions about images using spatial inference. The figure shows the inference process of our two-hop model on examples from the VQA dataset [2] (e.g. "What is the child standing on? skateboard", "What color is the phone booth? blue"). In the first hop (middle), the attention process captures the correspondence between individual words in the question and image regions. High attention regions (bright areas) are marked with bounding boxes and the corresponding words are highlighted using the same color. In the second hop (right), the fine-grained evidence gathered in the first hop, as well as an embedding of the entire question, are used to collect more exact evidence to predict the answer. (Best viewed in color.)]
In one of the early works [1], VQA is seen as a Turing test proxy. The authors propose an approach based on handcrafted features, using a semantic parse of the question and scene analysis of the image combined in a latent-world Bayesian framework. More recently, several end-to-end deep neural networks that learn features directly from data have been applied to this problem [10,11]. Most of these are directly adapted from captioning models [6,7,8], and utilize a recurrent LSTM network which takes the question and convolutional neural network (CNN) image features as input and outputs the answer. Though the deep learning methods in [10,11] have shown great improvement compared to the handcrafted feature method [1], they have their own drawbacks. Models based on an LSTM reading in both the question and the image features do not show a clear improvement over an LSTM reading in the question only [10,11]. Furthermore, these rather complicated LSTM models obtain accuracy similar to or worse than that of a baseline model which concatenates CNN features and a bag-of-words question embedding to predict the answer; see the IMG+BOW model in [11] and the iBOWIMG model in [3].
A major drawback of existing models is that they do not have any explicit notion of object position, and do not support the computation of intermediate results based on spatial attention. Our intuition is that answering visual questions often involves looking at different spatial regions and comparing their contents and/or locations. For example, to answer the questions in Fig. 1, we need to look at a portion of the image, such as the child or the phone booth. Similarly, to answer the question "Is there a cat in the basket?" in Fig. 2, we can first find the basket and the cat objects, and then compare their locations.
We propose a new deep learning approach to VQA that incorporates explicit spatial attention, which we call the Spatial Memory Network VQA (SMem-VQA). Our approach is based on memory networks, which have recently been proposed for text Question Answering (QA) [12,13]. Memory networks combine learned text embeddings with an attention mechanism and multi-step inference. The text QA memory network stores textual knowledge in its "memory" in the form of sentences, and selects relevant sentences to infer the answer. However, in VQA the knowledge is in the form of an image, thus the memory and the question come from different modalities. We adapt the end-to-end memory network [13] to visual question answering by storing the convolutional network outputs obtained from different receptive fields into the memory, which explicitly allows spatial attention over the image. We also propose to repeat the process of gathering evidence from attended regions, enabling the model to update the answer based on several attention steps, or "hops". The entire model is trained end-to-end and the evidence for the computed answer can be visualized using the attention weights.
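The first hop can be sketched in a few lines. This is only an illustration under our own naming and toy dimensions, not the paper's exact architecture: the question embedding is matched against the CNN activation at every spatial location, the scores are softmax-normalized into attention weights, and the visual evidence is aggregated as a weighted sum.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_hop(question_vec, spatial_feats, W_q, W_m):
    """One spatial attention 'hop' (illustrative sketch, our naming).
    question_vec : (d_q,)   embedding of the question
    spatial_feats: (N, d_v) CNN activations for N image regions
    W_q, W_m     : learned projections into a common d-dimensional space
    Returns the attention weights and the attended visual evidence."""
    q = W_q @ question_vec            # (d,)
    m = spatial_feats @ W_m.T         # (N, d) memory embeddings
    scores = m @ q                    # dot-product alignment per region
    att = softmax(scores)             # (N,) spatial attention weights
    return att, att @ spatial_feats   # weighted sum of region features

# Toy example: a 14 x 14 grid of 512-d conv features and a 300-d question vector.
rng = np.random.RandomState(0)
d_q, d_v, d, N = 300, 512, 256, 14 * 14
att, evidence = attention_hop(rng.randn(d_q), rng.randn(N, d_v),
                              rng.randn(d, d_q), rng.randn(d, d_v))
print(att.shape, evidence.shape)      # (196,) (512,)
```

A second hop would repeat the same computation with a query updated by the evidence gathered here.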
To summarize our contributions, in this paper we

– propose a novel multi-hop memory network with spatial attention for the VQA task which allows one to visualize the spatial inference process used by the deep network (a CAFFE [14] implementation will be made available),
– design an attention architecture in the first hop which uses each word embedding to capture fine-grained alignment between the image and question,
– create a series of synthetic questions that explicitly require spatial inference to analyze the working principles of the network, and show that it learns logical inference rules by visualizing the attention weights,
– provide an extensive evaluation of several existing models and our own model on the same publicly available datasets.

Sec. 2 introduces relevant work on memory networks and attention models. Sec. 3 describes our design of the multi-hop memory network architecture for visual question answering (SMem-VQA). Sec. 4 visualizes the inference rules learned by the network for synthetic spatial questions and shows the experimental results on the DAQUAR [1] and VQA [2] datasets. Sec. 5 concludes the paper.
# 2 Related work

Before the popularity of visual question answering (VQA), text question answering (QA) had already been established as a mature research problem in the area of natural language processing. Previous QA methods include searching for the key words of the question in a search engine [15]; parsing the question as a knowledge base (KB) query [16]; or embedding the question and using a similarity measurement to find evidence for the answer [17]. Recently, memory networks were proposed for solving the QA problem. [12] first introduces the memory network as a general model that consists of a memory and four components: input feature map, generalization, output feature map and response. The model is investigated in the context of question answering, where the long-term
memory acts as a dynamic knowledge base and the output is a textual response. [13] proposes a competitive memory network model that uses less supervision, called the end-to-end memory network, which has a recurrent attention model over a large external memory. The Neural Turing Machine (NTM) [18] couples a neural network to external memory and interacts with it by attentional processes to infer simple algorithms such as copying, sorting, and associative recall from input and output examples. In this paper, we solve the VQA problem using a multimodal memory network architecture that applies a spatial attention mechanism over an input image guided by an input text question.
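A toy sketch of the multi-hop idea (ours, not the cited implementations): the query is repeatedly refined by whatever the previous attention step retrieved from memory.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_read(query, memory, W_update, hops=2):
    """Repeat dot-product attention over the memory slots, folding the
    retrieved vector back into the query after each hop (toy sketch)."""
    u = query
    for _ in range(hops):
        att = softmax(memory @ u)   # attention over memory slots
        o = att @ memory            # retrieved evidence
        u = W_update @ u + o        # refine the query for the next hop
    return u

rng = np.random.RandomState(0)
d, slots = 64, 10
answer_state = multi_hop_read(rng.randn(d), rng.randn(slots, d), rng.randn(d, d))
print(answer_state.shape)           # (64,)
```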
The neural attention mechanism has been widely used in different areas of computer vision and natural language processing; see, for example, the attention models in image captioning [19], video description generation [20], machine translation [21][22] and machine reading systems [23]. Most methods use the soft attention mechanism first proposed in [21], which adds a layer to the network that predicts soft weights and uses them to compute a weighted combination of the items in memory. The two main types of soft attention mechanisms differ in the function that aligns the input feature vector and the candidate feature vectors in order to compute the soft attention weights. The first type uses an alignment function based on "concatenation" of the input and each candidate (we use the term "concatenation" as described in [22]), and the second type uses an alignment function based on the dot product of the input and each candidate. The "concatenation" alignment function adds one input vector (e.g. the hidden state vector of
the LSTM) to each candidate feature vector, embeds the resulting vectors into scalar values, and then applies the softmax function to generate the attention weight for each candidate. [19][20][21][23] use the "concatenation" alignment function in their soft attention models, and [24] gives a literature review of such models applied to different tasks. On the other hand, the dot product alignment function first projects both inputs to a common vector embedding space, then takes the dot product of the two input vectors, and applies a softmax function to the resulting scalar value to produce the attention weight for each candidate. The end-to-end memory network [13] uses the dot product alignment function. In [22], the authors compare these two alignment functions in an attention model for the neural machine translation task, and find that their implementation of the "concatenation" alignment function does not yield good performance on their task. Motivated by this, in this paper we use the dot product alignment function in our Spatial Memory Network.
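The two alignment functions can be contrasted directly; the sketch below is illustrative only (names, shapes and parameters are ours, not from either paper). The "concatenation" (additive) variant adds the projected input to each projected candidate and embeds the result into a scalar score, while the dot-product variant projects both into a common space and takes their inner product; in both cases a softmax over the scores gives the attention weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def concat_attention(h, M, W_h, W_m, v):
    """'Concatenation' (additive) alignment: add the projected input to each
    projected candidate, squash, and embed the result into a scalar score."""
    scores = np.tanh(h @ W_h + M @ W_m) @ v   # (N,)
    return softmax(scores)

def dot_attention(h, M, P_h, P_m):
    """Dot-product alignment: project both sides into a common space and take
    the inner product (the variant used by the end-to-end memory network)."""
    scores = (M @ P_m) @ (h @ P_h)            # (N,)
    return softmax(scores)

# Toy shapes: a 100-d input vector attending over N = 6 candidates of size 50.
rng = np.random.RandomState(0)
d_h, d_m, d, N = 100, 50, 32, 6
h, M = rng.randn(d_h), rng.randn(N, d_m)
print(concat_attention(h, M, rng.randn(d_h, d), rng.randn(d_m, d), rng.randn(d)))
print(dot_attention(h, M, rng.randn(d_h, d), rng.randn(d_m, d)))
```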
VQA is related to image captioning. Several early papers about VQA directly adapt image captioning models to solve the VQA problem [10][11] by generating the answer with a recurrent LSTM network conditioned on the CNN output, but these models' performance is still limited [10][11]. [25] proposes a new dataset and uses a similar attention model to that in image captioning [19], but does not give results on the more common VQA benchmark [2], and our own implementation of this model is less accurate on [2] than other baseline models. [3] summarizes several recent papers reporting results on the VQA dataset [2] on arxiv.org and gives a simple but strong baseline model (iBOWIMG) on this
dataset. This simple baseline concatenates the image features with the bag-of-words question embedding and feeds them into a softmax classifier to predict the answer. The iBOWIMG model beats most VQA models considered in the paper. Here, we compare our proposed model to the VQA models (namely, the ACK model [26] and the DPPnet model [27]) which have comparable or better results than the iBOWIMG model. The ACK model in [26] is essentially the same as the LSTM model in [11], except that it uses image attribute features, the generated image caption and relevant external knowledge from a knowledge base as the input to the LSTM's first time step. The DPPnet model in [27] tackles VQA by learning a convolutional neural network (CNN) with some parameters predicted from a separate parameter prediction network. Their parameter prediction network uses a Gated Recurrent Unit (GRU) to generate a question representation and, based on this question
1511.05234 | 15 | input, maps the predicted weights to the CNN via hashing. Neither of these models [26][27] contains a spatial attention mechanism, and they both use external data in addition to the VQA dataset [2], e.g. the knowledge base in [26] and the large-scale text corpus used to pre-train the GRU question representation [27]. In this paper, we explore a complementary approach of spatial attention to both improve performance and visualize the network's inference process, and obtain improved results without using external data compared to the iBOWIMG model [3] as well as the ACK model [26] and the DPPnet model [27] which use external data. | 1511.05234#15 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 16 | # 3 Spatial Memory Network for VQA
We first give an overview of the proposed SMem-VQA network, illustrated in Fig. 2 (a). Sec. 3.1 details the word-guided spatial attention process of the first hop shown in Fig. 2 (b), and Sec. 3.2 describes adding a second hop into the SMem-VQA network.
The input to our network is a question comprised of a variable-length sequence of words, and an image of fixed size. Each word in the question is first represented as a one-hot vector in the size of the vocabulary, with a value of one only in the corresponding word position and zeros in the other positions. Each one-hot vector is then embedded into a real-valued word vector, V = {v_j | v_j ∈ R^N; j = 1, ···, T}, where T is the maximum number of words in the question and N is the dimensionality of the embedding space. Sentences with length less than T are padded with a special −1 value, which is embedded to the all-zero word vector. | 1511.05234#16 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 17 | The words in questions are used to compute attention over the visual memory, which contains extracted image features. The input image is processed by a convolutional neural network (CNN) to extract high-level M-dimensional visual features on a grid of spatial locations. Specifically, we use S = {s_i | s_i ∈ R^M; i = 1, ···, L} to represent the spatial CNN features at each of the L grid locations. In this paper, the spatial feature outputs of the last convolutional layer of GoogLeNet (inception 5b/output) [28] are used as the visual features for the image.
[Figure 2 diagram: (a) Overview of the SMem-VQA network; (b) word-guided attention.] | 1511.05234#17 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 18 | Fig. 2. Our proposed Spatial Memory Network for Visual Question Answering (SMem-VQA). (a) Overview. First, the CNN activation vectors S = {s_i} at image locations i are projected into the semantic space of the question word vectors v_j using the "attention" visual embedding W_A (Sec. 3). The results are then used to infer spatial attention weights W_att using the word-guided attention process shown in (b). (b) Word-guided attention. This process predicts attention determined by the question word that has the maximum correlation with embedded visual features at each location, e.g. choosing the word basket to attend to the location of the basket in the above image (Sec. 3.1). The resulting spatial attention weights W_att are then used to compute a weighted sum over the visual features embedded via a separate "evidence" transformation W_E, e.g., selecting evidence for the cat concept at the basket location. Finally, the weighted evidence vector S_att is combined with the full question embedding Q to predict the answer. An additional hop can repeat the process to gather more evidence (Sec. 3.2). | 1511.05234#18 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 19 | The convolutional image feature vectors at each location are embedded into a common semantic space with the word vectors. Two different embeddings are used: the "attention" embedding W_A and the "evidence" embedding W_E. The attention embedding projects each visual feature vector such that its combination with the embedded question words generates the attention weight at that location. The evidence embedding detects the presence of semantic concepts or objects, and the embedding results are multiplied with attention weights and summed over all locations to generate the visual evidence vector S_att.
Finally, the visual evidence vector is combined with the question representation and used to predict the answer for the given image and question. In the next section, we describe the one-hop Spatial Memory network model and the specific attention mechanism it uses in more detail.
# 3.1 Word Guided Spatial Attention in One-Hop Model
Rather than using the bag-of-words question representation to guide attention, the attention architecture in the first hop (Fig. 2(b)) uses each word vector separately to extract correlated visual features in memory. The intuition is that the BOW representation may be too coarse, and letting each word select a related region may provide more fine-grained attention. The correlation matrix C ∈ R^{T×L} between word vectors V and visual features S is computed as | 1511.05234#19 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 20 | C = V · (S · W_A + b_A)^T    (1)
where W_A ∈ R^{M×N} contains the attention embedding weights of visual features S, and b_A ∈ R^{L×N} is the bias term. This correlation matrix is the dot product result of each word embedding and each spatial location's visual feature, thus each value in the correlation matrix C measures the similarity between each word and each location's visual feature.
The spatial attention weights W_att are calculated by taking the maximum over the word dimension T of the correlation matrix C, selecting the highest correlation value for each spatial location, and then applying the softmax function
W_att = softmax(max_{i=1,···,T}(C_i)),  C_i ∈ R^L    (2) | 1511.05234#20 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
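A minimal NumPy sketch of the word-guided attention step described in the chunk above (Eqs. 1–2). The toy dimensions and the names T, L, M, N, W_A, b_A follow the paper's notation, but the code is only an illustration of the computation, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, L, M, N = 5, 49, 1024, 300        # words, grid locations, CNN feature dim, embedding dim
V = np.random.randn(T, N)            # embedded question words
S = np.random.randn(L, M)            # spatial CNN features (e.g. a 7x7 grid, flattened)
W_A = np.random.randn(M, N) * 0.01   # "attention" visual embedding
b_A = np.zeros((L, N))

# Eq. (1): correlation between every word and every spatial location
C = V @ (S @ W_A + b_A).T            # shape (T, L)

# Eq. (2): per-location max over words, then softmax over locations
W_att = softmax(C.max(axis=0))       # shape (L,), non-negative, sums to 1
```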
1511.05234 | 21 | W_att = softmax(max_{i=1,···,T}(C_i)),  C_i ∈ R^L    (2)
The resulting attention weights W_att ∈ R^L are high for selected locations and low for other locations, with the sum of weights equal to 1. For instance, in the example shown in Fig. 2, the question "Is there a cat in the basket?" produces high attention weights for the location of the basket because of the high correlation of the word vector for basket with the visual features at that location. The evidence embedding W_E projects visual features S to produce high activations for certain semantic concepts. E.g., in Fig. 2, it has high activations in the region containing the cat. The results of this evidence embedding are then multiplied by the generated attention weights W_att, and summed to produce the selected visual "evidence" vector S_att ∈ R^N,
S_att = W_att · (S · W_E + b_E), where W_E ∈ R^{M×N} are the evidence embedding weights of the visual features S, and b_E ∈ R^{L×N} is the bias term. In our running example, this step accumulates cat presence features at the basket location. | 1511.05234#21 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 22 | Finally, the sum of this evidence vector S_att and the question embedding Q is used to predict the answer for the given image and question. For the question representation Q, we choose the bag-of-words (BOW). Other question representations, such as an LSTM, can also be used; however, BOW has fewer parameters yet has shown good performance. As noted in [29], the simple BOW model performs roughly as well if not better than the sequence-based LSTM for the VQA task. Specifically, we compute
Q = W_Q · V + b_Q, where W_Q ∈ R^T represents the BOW weights for word vectors V, and b_Q ∈ R^N is the bias term. The final prediction P is | 1511.05234#22 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 23 | P = softmax(W_P · f(S_att + Q) + b_P), where W_P ∈ R^{K×N}, bias term b_P ∈ R^K, and K represents the number of possible prediction answers. f is the activation function, and we use ReLU here. In our running example, this step adds the evidence gathered for cat near the basket location to the question, and, since the cat was not found, predicts the answer "no". The attention and evidence computation steps can be optionally repeated in another hop, before predicting the final answer, as detailed in the next section.
# 3.2 Spatial Attention in Two-Hop Model
We can repeat hops to promote deeper inference, gathering additional evidence at each hop. Recall that the visual evidence vector S_att is added to the question representation Q in the first hop to produce an updated question vector,
O_hop1 = S_att + Q    (6)
On the next hop, this vector O_hop1 ∈ R^N is used in place of the individual word vectors V to extract additional visual features correlated with the whole question from memory and to update the visual evidence. | 1511.05234#23 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
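To tie the one-hop pipeline of the preceding chunks together (correlation, spatial attention, evidence vector S_att, BOW question embedding Q, and the final softmax with ReLU as f), here is a hedged NumPy sketch; the random weights and toy dimensions are placeholders, since in the paper these parameters are learned end-to-end.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

T, L, M, N, K = 5, 49, 1024, 300, 1000   # words, locations, CNN dim, embed dim, answer classes
V = np.random.randn(T, N)                # embedded question words
S = np.random.randn(L, M)                # spatial CNN features
W_A, W_E = np.random.randn(M, N) * 0.01, np.random.randn(M, N) * 0.01
b_A = b_E = np.zeros((L, N))
W_Q, b_Q = np.random.randn(T) * 0.01, np.zeros(N)
W_P, b_P = np.random.randn(K, N) * 0.01, np.zeros(K)

C = V @ (S @ W_A + b_A).T                # Eq. (1): word/location correlations
W_att = softmax(C.max(axis=0))           # Eq. (2): spatial attention weights
S_att = W_att @ (S @ W_E + b_E)          # attended visual evidence, shape (N,)
Q = W_Q @ V + b_Q                        # BOW question embedding, shape (N,)
P = softmax(W_P @ np.maximum(S_att + Q, 0) + b_P)   # answer distribution over K classes
```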
1511.05234 | 24 | The correlation matrix C in the first hop provides fine-grained local evidence from each word vector in the question, while the correlation vector C_hop2 in the next hop considers the global evidence from the whole question representation Q. The correlation vector C_hop2 ∈ R^L in the second hop is calculated by
C_hop2 = (S · W_E + b_E) · O_hop1    (7)
where W_E ∈ R^{M×N} are the attention embedding weights of visual features S in the second hop and b_E ∈ R^{L×N} is the bias term. Since the attention embedding weights in the second hop are shared with the evidence embedding in the first hop, we directly use W_E and b_E from the first hop here.
The attention weights in the second hop W_att2 are obtained by applying the softmax function to the correlation vector C_hop2.
W_att2 = softmax(C_hop2)    (8)
Then, the correlated visual information in the second hop S_att2 ∈ R^N is extracted using attention weights W_att2.
S_att2 = W_att2 · (S · W_E2 + b_E2)    (9)
where W_E2 ∈ R^{M×N} are the evidence embedding weights of visual features S in the second hop, and b_E2 ∈ R^{L×N} is the bias term. | 1511.05234#24 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 25 | where W_E2 ∈ R^{M×N} are the evidence embedding weights of visual features S in the second hop, and b_E2 ∈ R^{L×N} is the bias term.
The final answer P is predicted by combining the whole question representation Q, the local visual evidence S_att from each word vector in the first hop and the global visual evidence S_att2 from the whole question in the second hop,
P = softmax(W_P · f(O_hop1 + S_att2) + b_P)    (10)
where W_P ∈ R^{K×N}, bias term b_P ∈ R^K, and K represents the number of possible prediction answers. f is the activation function. More hops can be added in this manner.
The entire network is differentiable and is trained using stochastic gradient descent via standard backpropagation, allowing image feature extraction, image embedding, word embedding and answer prediction to be jointly optimized on the training image/question/answer triples. | 1511.05234#25 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
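A sketch of the second hop (Eqs. 6–10 above) continuing from a first-hop result, again in NumPy with placeholder values. As stated in the chunk, Eq. (7) reuses the first hop's evidence embedding W_E, while Eq. (9) introduces a separate embedding W_E2.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

L, M, N, K = 49, 1024, 300, 1000
S = np.random.randn(L, M)                 # spatial CNN features
S_att = np.random.randn(N)                # first-hop visual evidence
Q = np.random.randn(N)                    # BOW question embedding
W_E, W_E2 = np.random.randn(M, N) * 0.01, np.random.randn(M, N) * 0.01
b_E = b_E2 = np.zeros((L, N))
W_P, b_P = np.random.randn(K, N) * 0.01, np.zeros(K)

O_hop1 = S_att + Q                        # Eq. (6): updated question vector
C_hop2 = (S @ W_E + b_E) @ O_hop1         # Eq. (7): correlation with the whole question
W_att2 = softmax(C_hop2)                  # Eq. (8): second-hop attention weights
S_att2 = W_att2 @ (S @ W_E2 + b_E2)       # Eq. (9): second-hop visual evidence
P = softmax(W_P @ np.maximum(O_hop1 + S_att2, 0) + b_P)   # Eq. (10): final answer
```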
1511.05234 | 26 | [Figure 3 panels: synthetic images paired with the questions "Is there a red square on the [top|bottom|left|right]?", each shown with its ground-truth and predicted answer; the predictions match the ground truth in all panels.] | 1511.05234#26 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 27 | Fig. 3. Absolute position experiment: for each image and question pair, we show the original image (left) and the attention weights W_att (right). The attention follows the following rules. The first rule (top row) looks at the position specified in the question (top|bottom|right|left): if it contains a square, answer "yes"; otherwise answer "no". The second rule (bottom row) looks at the region where there is a square, and answers "yes" if the question contains that position and "no" for the other three positions.
# 4 Experiments
In this section, we conduct a series of experiments to evaluate our model. To explore whether the model learns to perform the spatial inference necessary for answering visual questions that explicitly require spatial reasoning, we design a set of experiments using synthetic visual question/answer data in Sec. 4.1. The experimental results of our model in standard datasets (DAQUAR [1] and VQA [2] datasets) are reported in Sec. 4.2.
# 4.1 Exploring Attention on Synthetic Data | 1511.05234#27 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 28 | # 4.1 Exploring Attention on Synthetic Data
The questions in the public VQA datasets are quite varied and difficult and often require common sense knowledge to answer (e.g., "Does this man have 20/20 vision?" about a person wearing glasses). Furthermore, past work [10,11] showed that the question text alone (no image) is a very strong predictor of the answer. Therefore, before evaluating on standard datasets, we would first like to understand how the proposed model uses spatial attention to answer simple visual questions where the answer cannot be predicted from the question alone. Our visualization demonstrates that the attention mechanism does learn to attend to objects and gather evidence via certain inference rules.
Absolute Position Recognition. We investigate whether the model has the ability to recognize the absolute location of the object in the image. We explore this by designing a simple task where an object (a red square) appears in some region of a white-background image, and the question is "Is there a red square on the [top|bottom|left|right]?" For each image, we randomly place the square in one of the four regions, and generate the four questions above, together with three "no" answers and one "yes" answer. The generated data is split into training and testing sets. | 1511.05234#28 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
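A small generator for the absolute-position task described in the chunk above. The image size, square size, and placement scheme are assumptions for illustration (the paper does not specify them); the labels follow the paper's rule of one "yes" for the sampled region and "no" for the other three.

```python
import random
import numpy as np

def make_absolute_position_example(img_size=64, sq=8):
    img = np.full((img_size, img_size, 3), 255, dtype=np.uint8)    # white background
    region = random.choice(["top", "bottom", "left", "right"])
    half = img_size // 2
    # sample the square's top-left corner inside the chosen region
    if region == "top":
        y, x = random.randint(0, half - sq), random.randint(0, img_size - sq)
    elif region == "bottom":
        y, x = random.randint(half, img_size - sq), random.randint(0, img_size - sq)
    elif region == "left":
        y, x = random.randint(0, img_size - sq), random.randint(0, half - sq)
    else:  # right
        y, x = random.randint(0, img_size - sq), random.randint(half, img_size - sq)
    img[y:y + sq, x:x + sq] = (255, 0, 0)                          # red square
    qa = [("Is there a red square on the %s?" % r, "yes" if r == region else "no")
          for r in ("top", "bottom", "left", "right")]
    return img, qa

image, qa_pairs = make_absolute_position_example()
```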
1511.05234 | 29 | Due to the simplicity of this synthetic dataset, the SMem-VQA one-hop model achieves 100% test accuracy. However, the baseline model (iBOWIMG) [3]
[Figure 4 panels: cat images with a red square placed on one side of the cat, paired with the questions "Is there a red square on the [top|bottom|left|right] of the cat?" and their ground-truth and predicted answers.] | 1511.05234#29 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 30 | Fig. 4. Relative position experiment: for each image and question pair, we show the original image (left), the evidence embedding W_E of the convolutional layer (middle) and the attention weights W_att (right). The evidence embedding W_E has high activations on both the cat and the red square. The attention weights follow similar inference rules as in Fig. 3, with the difference that the attention position is around the cat. | 1511.05234#30 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 31 | cannot infer the answer and only obtains accuracy of around 75%, which is the prior probability of the answer "no" in the training set. The SMem-VQA one-hop model is equivalent to the iBOWIMG model if the attention weights in our one-hop model are set equally for each location, since the iBOWIMG model uses the mean pool of the convolutional feature (inception 5b/output) in GoogLeNet that we use in the SMem-VQA model. We check the visualization of the attention weights and find that the relationship between the high attention position and the answer can be expressed by logical expressions. We show the attention weights of several typical examples in Fig. 3, which reflect two logic rules: 1) Look at the position specified in the question (top|bottom|right|left); if it contains a square, then answer "yes"; if it does not contain a square, then answer "no". 2) Look at the region where there is a square, then answer "yes" for the question about that position and "no" for the questions about the other three positions. | 1511.05234#31 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 32 | In the iBOWIMG model, the mean-pooled GoogLeNet visual features lose spatial information and thus cannot distinguish images with a square in different positions. On the contrary, our SMem-VQA model can pay high attention to different regions according to the question, and generate an answer based on the selected region, using some learned inference rules. This experiment demonstrates that the attention mechanism in our model is able to make absolute spatial location inferences based on the spatial attention.
Relative Position Recognition. In order to check whether the model has the ability to infer the position of one object relative to another object, we collect all the cat images from the MS COCO Detection dataset [30], and add a red square on the [top|bottom|left|right] of the bounding box of the cat in the images. For each generated image, we create four questions, "Is there a red square on the [top|bottom|left|right] of the cat?", together with three "no" answers and one "yes" answer. We select 2639 training cat images and 1395 testing cat images from the MS COCO Detection dataset. | 1511.05234#32 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
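A sketch of the relative-position question generation described in the chunk above, given a cat bounding box from MS COCO. The square size and the exact offsets from the bounding box are assumptions, and loading the COCO annotations is omitted.

```python
import random

def relative_position_questions(bbox, sq=20):
    """bbox is (x, y, w, h) of the cat; returns the red square's center and four QA pairs."""
    x, y, w, h = bbox
    side = random.choice(["top", "bottom", "left", "right"])
    centers = {
        "top":    (x + w / 2.0, y - sq / 2.0),
        "bottom": (x + w / 2.0, y + h + sq / 2.0),
        "left":   (x - sq / 2.0, y + h / 2.0),
        "right":  (x + w + sq / 2.0, y + h / 2.0),
    }
    qa = [("Is there a red square on the %s of the cat?" % s, "yes" if s == side else "no")
          for s in ("top", "bottom", "left", "right")]
    return centers[side], qa

center, qa_pairs = relative_position_questions((50, 40, 100, 80))   # toy bounding box
```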
1511.05234 | 33 | Our SMem-VQA one-hop model achieves 96% test accuracy on this synthetic task, while the baseline model (iBOWIMG) accuracy is around 75%. We also check that another simple baseline that predicts the answer based on the abso-
Table 1. Accuracy results on the DAQUAR dataset (in percentage).
Multi-World [1]: 12.73
Neural-Image-QA [10]: 29.27
Question LSTM [10]: 32.32
VIS+LSTM [11]: 34.41
Question BOW [11]: 32.67
IMG+BOW [11]: 34.17
SMem-VQA One-Hop: 36.03
SMem-VQA Two-Hop: 40.07 | 1511.05234#33 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 34 | lute position of the square in the image gets around 70% accuracy. We visualize the evidence embedding WE features and the attention weights Watt of several typical examples in Fig. 4. The evidence embedding WE has high activations on the cat and the red square, while the attention weights pay high attention to certain locations around the cat. We can analyze the attention in the correctly predicted examples using the same rules as in the absolute position recognition experiment. These rules still work, but the position is relative to the cat object: 1) Check the specified position relative to the cat; if it finds the square, then answer "yes", otherwise "no"; 2) Find the square, then answer "yes" for the specified position, and answer "no" for the other positions around the cat. We also check the images where our model makes mistakes, and find that the mistakes mainly occur in images with more than one cat. The red square appears near only one of the cats in the image, but our model might make mistakes by focusing on the other cats. We conclude that our SMem-VQA model can infer the relative spatial position based on the spatial attention around the specified object, which can also be represented by some logical inference rules.
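A minimal sketch of the synthetic relative-position protocol described above: given a cat bounding box, paste a red square on one side and emit the four [top|bottom|left|right] questions, only one of which is labeled "yes". The image size, square size, and offsets below are illustrative assumptions rather than the exact generation parameters.

```python
import numpy as np

def make_example(img, box, side, size=16, gap=4):
    """Draw a red square on one side of box = (x0, y0, x1, y1) and
    return the edited image plus the four question/answer pairs."""
    h, w, _ = img.shape
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    px, py = {
        "top":    (cx, max(y0 - gap - size // 2, size // 2)),
        "bottom": (cx, min(y1 + gap + size // 2, h - size // 2)),
        "left":   (max(x0 - gap - size // 2, size // 2), cy),
        "right":  (min(x1 + gap + size // 2, w - size // 2), cy),
    }[side]
    img = img.copy()
    img[py - size // 2: py + size // 2, px - size // 2: px + size // 2] = (255, 0, 0)
    qas = [("Is there a red square on the {} of the cat?".format(s),
            "yes" if s == side else "no")
           for s in ("top", "bottom", "left", "right")]
    return img, qas

canvas = np.zeros((256, 256, 3), dtype=np.uint8)
image, qa_pairs = make_example(canvas, box=(100, 100, 180, 200), side="left")
for q, a in qa_pairs:
    print(q, "->", a)
```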
# 4.2 Experiments on Standard Datasets | 1511.05234#34 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 35 | # 4.2 Experiments on Standard Datasets
Results on DAQUAR The DAQUAR dataset is a relatively small dataset which builds on the NYU Depth Dataset V2 [31]. We use the reduced DAQUAR dataset [1]. The evaluation metric for this dataset is 0-1 accuracy. The embedding dimension is 512 for our models running on the DAQUAR dataset. We use several reported models on DAQUAR as baselines, which are listed below:
• Multi-World [1]: an approach based on handcrafted features using a semantic parse of the question and scene analysis of the image combined in a latent-world Bayesian framework.
• Neural-Image-QA [10]: uses an LSTM to encode the question and then decode the hidden information into the answer. The image CNN feature vector is shown at each time step of the encoding phase.
• Question LSTM [10]: only shows the question to the LSTM to predict the answer without any image information.
• VIS+LSTM [11]: similar to Neural-Image-QA, but only shows the image features to the LSTM at the first time step, and the question in the remaining time steps to predict the answer.
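Neural-Image-QA and VIS+LSTM in the list above differ mainly in where the image feature enters the recurrent encoder. The sketch below only builds the two input sequences such an encoder would consume; concatenation as the way to "show" the image at each step, the toy embeddings, and the dimensions are assumptions for illustration, not the originals' exact interfaces.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64                                   # illustrative embedding size
img_feat = rng.standard_normal(d)        # CNN feature of the image
q_tokens = "what is beneath the framed picture ?".split()
embed = {w: rng.standard_normal(d) for w in q_tokens}   # toy word embeddings

# Neural-Image-QA style: the image feature accompanies every word embedding.
neural_image_qa_inputs = [np.concatenate([embed[w], img_feat]) for w in q_tokens]

# VIS+LSTM style: the image feature is only the first "token", then the words follow.
vis_lstm_inputs = [img_feat] + [embed[w] for w in q_tokens]

print(len(neural_image_qa_inputs), neural_image_qa_inputs[0].shape)  # question-length steps, 2d dims
print(len(vis_lstm_inputs), vis_lstm_inputs[0].shape)                # one extra step, d dims
```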
| 1511.05234#35 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 36 | [Figure 5 example questions: "what electrical …?" GT: blender; "which way can you not turn?" GT: left, One Hop: right, Two Hop: left; "what is the colour of the object near the bed?" GT: pink, One Hop: bed, Two Hop: pink; "what is beneath the framed picture?" GT: sofa, One Hop: table, Two Hop: sofa.]
Fig. 5. Visualization of the spatial attention weights in the SMem-VQA One-Hop and Two-Hop models on VQA (top row) and DAQUAR (bottom row) datasets. For each image and question pair, we show the original image, the attention weights Watt of the One-Hop model, and the two attention weights Watt and Watt2 of the Two-Hop model in order. | 1511.05234#36 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 38 | Results of our SMem-VQA model on the DAQUAR dataset and the baseline model results reported in previous work are shown in Tab. 1. From the DAQUAR result in Tab. 1, we see that models based on deep features significantly outperform the Multi-World approach based on hand-crafted features. Modeling the question only with either the LSTM model or Question BOW model does equally well in comparison, indicating that the question text contains important prior information for predicting the answer. Also, on this dataset, the VIS+LSTM model achieves better accuracy than the Neural-Image-QA model; the former shows the image only at the first timestep of the LSTM, while the latter does so at each timestep. In comparison, both our One-Hop model and Two-Hop spatial attention models outperform the IMG+BOW, as well as the other baseline models. A major advantage of our model is the ability to visualize the inference process in the deep network. To illustrate this, two attention weights visualization examples in SMem-VQA One-Hop and Two-Hop models on the DAQUAR dataset are shown in Fig. 5 (bottom row). | 1511.05234#38 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 39 | Results on VQA The VQA dataset is a recent large dataset based on MS COCO [30]. We use the full release (V1.0) open-ended dataset, which contains a train set and a val set. Following standard practice, we choose the top 1000 answers in train and val sets as possible prediction answers, and only keep the examples whose answers belong to these 1000 answers as training data. The question vocabulary size is 7477 with the word frequency of at least three. Because of the larger training size, the embedding dimension is 1000 on the VQA dataset. We report the test-dev and test-standard results from the VQA evaluation server. The server evaluation uses the evaluation met-
Table 2. Test-dev and test-standard results on the Open-Ended VQA dataset (in percentage). Models with * use external training data in addition to the VQA dataset. | 1511.05234#39 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
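A sketch of the answer-vocabulary preprocessing described in the preceding chunk: keep the 1000 most frequent answers and drop training examples whose answer falls outside that set. The toy data and the Counter-based selection are assumptions about one reasonable implementation, not the authors' exact preprocessing code.

```python
from collections import Counter

train = [  # toy (question, answer) pairs standing in for the VQA train+val data
    ("what color is the fork", "green"),
    ("is the man surfing", "yes"),
    ("how many dogs are there", "2"),
    ("is the man surfing", "no"),
    ("what color is the sky", "blue"),
]

TOP_K = 1000  # the text keeps the 1000 most frequent answers as the prediction classes
answer_counts = Counter(a for _, a in train)
answer_vocab = {a for a, _ in answer_counts.most_common(TOP_K)}

# Examples whose answer is outside the answer vocabulary are discarded from training.
filtered = [(q, a) for q, a in train if a in answer_vocab]
print(len(train), "->", len(filtered), "examples kept")
```

The question-side vocabulary would be built analogously, keeping words with frequency at least three as stated above.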
1511.05234 | 40 | Test-dev and test-standard Open-Ended VQA accuracy (%), columns Overall / yes-no / number / others:
LSTM Q+I [2]: test-dev 53.74 / 78.94 / 35.24 / 36.42; test-standard 54.06 / - / - / -
ACK* [26]: test-dev 55.72 / 79.23 / 36.13 / 40.08; test-standard 55.98 / 79.05 / 36.10 / 40.61
DPPnet* [27]: test-dev 57.22 / 80.71 / 37.24 / 41.69; test-standard 57.36 / 80.28 / 36.92 / 42.24
iBOWIMG [3]: test-dev 55.72 / 76.55 / 35.03 / 42.62; test-standard 55.89 / 76.76 / 34.98 / 42.62
SMem-VQA One-Hop: test-dev 56.56 / 78.98 / 35.93 / 42.09; test-standard - / - / - / -
SMem-VQA Two-Hop: test-dev 57.99 / 80.87 / 37.32 / 43.12; test-standard 58.24 / 80.8 / 37.53 / 43.48
ric introduced by [2], which gives partial credit to certain synonym answers: Acc(ans) = min {(# humans that said ans)/3, 1}. | 1511.05234#40 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 41 | ric introduced by [2], which gives partial credit to certain synonym answers: Acc(ans) = min {(# humans that said ans)/3, 1}.
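The open-ended metric quoted above is easy to state in code. The sketch below implements exactly the min((# matching humans)/3, 1) credit; the answer-string normalization and the leave-one-out averaging used by the official evaluation server are omitted for brevity and are not shown here.

```python
def vqa_accuracy(predicted, human_answers):
    """Acc(ans) = min((# humans that said ans) / 3, 1), as quoted above."""
    matches = sum(a == predicted for a in human_answers)
    return min(matches / 3.0, 1.0)

humans = ["yes"] * 8 + ["maybe", "no"]   # ten annotator answers for one question
print(vqa_accuracy("yes", humans))       # 1.0      (at least three humans agree)
print(vqa_accuracy("no", humans))        # 0.333... (only one human said "no")
```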
For the attention models, we do not mirror the input image when using the CNN to extract convolutional features, since this might cause confusion about the spatial locations of objects in the input image. The optimization algorithm used is stochastic gradient descent (SGD) with a minibatch of size 50 and momentum of 0.9. The base learning rate is set to be 0.01 which is halved every six epochs. Regularization, dropout and L2 norm are cross-validated and used. | 1511.05234#41 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
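A sketch of the optimization schedule described in the preceding chunk (SGD with momentum 0.9, base learning rate 0.01 halved every six epochs, minibatches of 50 examples), applied here to a stand-in quadratic loss. The model, loss, and number of batches are placeholders; only the hyperparameters come from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(10)            # stand-in parameter vector
velocity = np.zeros_like(w)

base_lr, momentum = 0.01, 0.9          # values quoted in the text
batches_per_epoch = 200                # illustrative; each batch would hold 50 examples

for epoch in range(18):
    lr = base_lr * (0.5 ** (epoch // 6))      # halve the learning rate every six epochs
    for _ in range(batches_per_epoch):
        grad = 2.0 * w + 0.01 * rng.standard_normal(w.shape)   # noisy gradient of ||w||^2
        velocity = momentum * velocity - lr * grad             # SGD with momentum update
        w = w + velocity
    if epoch % 6 == 0:
        print("epoch", epoch, "lr", lr, "loss", float(w @ w))
```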
1511.05234 | 42 | For the VQA dataset, we use the simple iBOWIMG model in [3] as one baseline model, which beats most existing VQA models currently on arxiv.org. We also compare to two models in [26][27] which have comparable or better results than the iBOWIMG model. These three baseline models, as well as the best model in the VQA dataset paper [2], are listed in the following:
• LSTM Q+I [2]: uses the element-wise multiplication of the LSTM encoding of the question and the image feature vector to predict the answer. This is the best model in the VQA dataset paper.
• ACK [26]: shows the image attribute features, the generated image caption and relevant external knowledge from a knowledge base to the LSTM at the first time step, and the question in the remaining time steps to predict the answer.
• DPPnet [27]: uses the Gated Recurrent Unit (GRU) representation of the question to predict certain parameters for a CNN classification network. They pre-train the GRU for question representation on a large-scale text corpus to improve the GRU generalization performance.
• iBOWIMG [3]: concatenates the BOW question representation with the image feature (GoogLeNet), and uses a softmax classification to predict the answer. | 1511.05234#42 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
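A minimal sketch of the iBOWIMG-style baseline listed in the preceding chunk: bag-of-words question features are concatenated with an image feature and fed to a softmax classifier over the answer vocabulary. The vocabulary, dimensions, and random weights are illustrative assumptions, and the GoogLeNet feature is replaced by a random vector.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(3)
vocab = ["what", "color", "is", "the", "fork", "woman", "doing"]
answers = ["green", "eating", "yes", "no"]
word_index = {w: i for i, w in enumerate(vocab)}

img_dim = 1024                                   # stand-in for a GoogLeNet feature size
W = rng.standard_normal((len(answers), len(vocab) + img_dim)) * 0.01
b = np.zeros(len(answers))

def predict(question, img_feat):
    bow = np.zeros(len(vocab))
    for w in question.lower().split():
        if w in word_index:
            bow[word_index[w]] += 1.0            # bag-of-words question encoding
    x = np.concatenate([bow, img_feat])          # concatenate text and image features
    return answers[int(np.argmax(softmax(W @ x + b)))]

print(predict("what color is the fork ?", rng.standard_normal(img_dim)))
```

With untrained weights the prediction is arbitrary; the point is only the feature layout that the attention models above are compared against.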
1511.05234 | 43 | The overall accuracy and per-answer category accuracy for our SMem-VQA models and the four baseline models on the VQA dataset are shown in Tab. 2. From the table, we can see that the SMem-VQA One-Hop model obtains slightly better results compared to the iBOWIMG model. However, the SMem-VQA Two-Hop model achieves an improvement of 2.27% on test-dev and 2.35% on test-standard compared to the iBOWIMG model, demonstrating the value of spatial attention. The SMem-VQA Two-Hop model also shows the best performance in the per-answer category accuracy. The SMem-VQA Two-Hop model has a slightly better result than the DPPnet model. The DPPnet model uses a large-scale text corpus to pre-train the Gated Recurrent Unit (GRU) network for question representation. Similar pre-training work on extra data to improve model accuracy has been
[Figure 6 example questions: "do tourist enjoy a day at the beach?" yes; "what color is the fork?" green; "what game are they playing?" baseball; "what is the woman doing?" eating.] | 1511.05234#43 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 44 |
Fig. 6. Visualization of the original image (left), the spatial attention weights Watt in the first hop (middle) and one correlation vector from the correlation matrix C for the location with highest attention weight in the SMem-VQA Two-Hop model on the VQA dataset. Higher values in the correlation vector indicate stronger correlation of that word with the chosen location's image features.
done in [32]. Considering the fact that our model does not use extra data to pre-train the word embeddings, its results are very competitive. We also experiment with adding a third hop into our model on the VQA dataset, but the result does not improve further. | 1511.05234#44 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 45 | The attention weights visualization examples for the SMem-VQA One-Hop and Two-Hop models on the VQA dataset are shown in Fig. 5 (top row). From the visualization, we can see that the two-hop model collects supplementary evidence for inferring the answer, which may be necessary to achieve an improvement on these complicated real-world datasets. We also visualize the fine-grained alignment in the first hop of our SMem-VQA Two-Hop model in Fig. 6. The correlation vector values (blue bars) measure the correlation between image regions and each word vector in the question. Higher values indicate stronger correlation of that particular word with the specific location's image features. We observe that the fine-grained visual evidence collected using each local word vector, together with the global visual evidence from the whole question, complement each other to infer the correct answer for the given image and question, as shown in Fig. 1. | 1511.05234#45 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
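The correlation vectors plotted as blue bars in Fig. 6 come from a word-by-region correlation matrix. Below is a small NumPy sketch of that computation: embedded question words are correlated against embedded region features, and the column for the most-attended region is read off. The shapes, random embeddings, and the way attention is derived here are illustrative assumptions rather than the trained model's values.

```python
import numpy as np

rng = np.random.default_rng(4)
num_words, num_regions, d = 6, 49, 512          # illustrative sizes

word_vecs = rng.standard_normal((num_words, d))      # embedded question words
region_feats = rng.standard_normal((num_regions, d)) # embedded image regions

C = word_vecs @ region_feats.T          # correlation matrix, shape (words, regions)
att = np.exp(C.max(axis=0))             # toy spatial attention built from per-region maxima
att = att / att.sum()

best_region = int(att.argmax())
correlation_vector = C[:, best_region]  # one value per question word (the "blue bars")
print(best_region, np.round(correlation_vector, 2))
```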
1511.05234 | 46 | # 5 Conclusion
In this paper, we proposed the Spatial Memory Network for VQA, a memory network architecture with a spatial attention mechanism adapted to the visual question answering task. We proposed a set of synthetic spatial questions and demonstrated that our model learns inference rules based on spatial attention through attention weight visualization. Evaluation on the challenging DAQUAR and VQA datasets showed improved results over previously published models. Our model can be used to visualize the inference steps learned by the deep network, giving some insight into its processing. Future work may include further exploring the inference ability of our SMem-VQA model and exploring other VQA attention models.
# References
1. Malinowski, M., Fritz, M.: A multi-world approach to question answering about real-world scenes based on uncertain input. CoRR abs/1410.0210 (2014)
2. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: visual question answering. CoRR abs/1505.00468 (2015)
3. Zhou, B., Tian, Y., Sukhbaatar, S., Szlam, A., Fergus, R.: Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167 (2015) | 1511.05234#46 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 47 | 4. Tu, K., Meng, M., Lee, M.W., Choe, T.E., Zhu, S.C.: Joint video and text parsing for understanding events and answering queries. MultiMedia, IEEE 21(2) (2014) 42â70
5. Lasecki, W.S., Zhong, Y., Bigham, J.P.: Increasing the bandwidth of crowdsourced visual question answering to better support blind users. In: Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility, ACM (2014) 263â264
6. Donahue, J., Hendricks, L.A., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description. arXiv preprint arXiv:1411.4389 (2014)
7. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555 (2014)
8. Karpathy, A., Joulin, A., Li, F.F.F.: Deep fragment embeddings for bidirectional image sentence mapping. In: Advances in neural information processing systems. (2014) 1889â1897 | 1511.05234#47 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 48 | 9. Fang, H., Gupta, S., Iandola, F., Srivastava, R., Deng, L., Dollár, P., Gao, J., He, X., Mitchell, M., Platt, J., et al.: From captions to visual concepts and back. arXiv preprint arXiv:1411.4952 (2014)
10. Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: A neural-based approach to answering questions about images. arXiv preprint arXiv:1505.01121 (2015)
11. Ren, M., Kiros, R., Zemel, R.S.: Exploring models and data for image question answering. CoRR abs/1505.02074 (2015)
12. Weston, J., Chopra, S., Bordes, A.: Memory networks. CoRR abs/1410.3916 (2014)
13. Sukhbaatar, S., Szlam, A., Weston, J., Fergus, R.: End-to-end memory networks. arXiv preprint arXiv:1503.08895 (2015) | 1511.05234#48 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 49 | 14. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
15. Yahya, M., Berberich, K., Elbassuoni, S., Ramanath, M., Tresp, V., Weikum, G.: Natural language questions for the web of data. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Association for Computational Linguistics (2012) 379-390
16. Berant, J., Liang, P.: Semantic parsing via paraphrasing. In: Proceedings of ACL. Volume 7. (2014) 92
17. Bordes, A., Chopra, S., Weston, J.: Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676 (2014)
18. Graves, A., Wayne, G., Danihelka, I.: Neural turing machines. arXiv preprint arXiv:1410.5401 (2014)
| 1511.05234#49 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 50 | 18. Graves, A., Wayne, G., Danihelka, I.: Neural turing machines. arXiv preprint arXiv:1410.5401 (2014)
19. Xu, K., Ba, J., Kiros, R., Courville, A., Salakhutdinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044 (2015)
20. Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C., Larochelle, H., Courville, A.: Describing videos by exploiting temporal structure. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 4507â4515
21. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
22. Luong, M.T., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015) | 1511.05234#50 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 51 | 23. Hermann, K.M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., Blunsom, P.: Teaching machines to read and comprehend. In: Advances in Neural Information Processing Systems. (2015) 1684â1692
24. Cho, K., Courville, A., Bengio, Y.: Describing multimedia content using attention-based encoder-decoder networks. (2015)
25. Zhu, Y., Groth, O., Bernstein, M., Fei-Fei, L.: Visual7w: Grounded question answering in images. arXiv preprint arXiv:1511.03416 (2015)
26. Wu, Q., Wang, P., Shen, C., Hengel, A.v.d., Dick, A.: Ask me anything: Free-form visual question answering based on knowledge from external sources. arXiv preprint arXiv:1511.06973 (2015)
27. Noh, H., Seo, P.H., Han, B.: Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756 (2015) | 1511.05234#51 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.05234 | 52 | 28. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR 2015. (2015)
29. Shih, K.J., Singh, S., Hoiem, D.: Where to look: Focus regions for visual question answering. arXiv preprint arXiv:1511.07394 (2015)
30. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: Computer Vision - ECCV 2014. Springer (2014) 740-755
31. Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: ECCV (2012)
32. Venugopalan, S., Xu, H., Donahue, J., Rohrbach, M., Mooney, R., Saenko, K.: Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729 (2014) | 1511.05234#52 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3]. | http://arxiv.org/pdf/1511.05234 | Huijuan Xu, Kate Saenko | cs.CV, cs.AI, cs.CL, cs.NE | include test-standard result on VQA full release (V1.0) dataset | null | cs.CV | 20151117 | 20160319 | [
{
"id": "1511.03416"
},
{
"id": "1511.07394"
},
{
"id": "1512.02167"
},
{
"id": "1508.04025"
},
{
"id": "1503.08895"
},
{
"id": "1511.05756"
},
{
"id": "1511.06973"
},
{
"id": "1505.01121"
},
{
"id": "1502.03044"
}
] |
1511.04636 | 0 | arXiv:1511.04636v5 [cs.AI] 8 Jun 2016
# Deep Reinforcement Learning with a Natural Language Action Space
# Ji He†, Jianshu Chen*, Xiaodong He*, Jianfeng Gao*, Lihong Li*, Li Deng* and Mari Ostendorf†
†Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
{jvking, ostendor}@uw.edu
*Microsoft Research, Redmond, WA 98052, USA
{jianshuc, xiaohe, jfgao, lihongli, deng}@microsoft.com
# Abstract
This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.
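A sketch of the relevance idea in the abstract above: the state text and each candidate action text are embedded by separate small networks and combined with an inner-product interaction, giving one Q value per action. The hash-based bag-of-words embedding, the two-layer tanh towers, the dimensions, and the random weights are illustrative assumptions, not the trained DRRN.

```python
import numpy as np

def embed_text(text, dim=32):
    """Toy deterministic bag-of-words embedding (stands in for learned word vectors)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        tok_rng = np.random.default_rng(abs(hash(tok)) % (2 ** 32))
        v += tok_rng.standard_normal(dim)
    return v

rng = np.random.default_rng(6)
d_in, d_hid = 32, 16
Ws1, Ws2 = rng.standard_normal((d_hid, d_in)), rng.standard_normal((d_hid, d_hid))  # state tower
Wa1, Wa2 = rng.standard_normal((d_hid, d_in)), rng.standard_normal((d_hid, d_hid))  # action tower

def q_values(state_text, action_texts):
    hs = np.tanh(Ws2 @ np.tanh(Ws1 @ embed_text(state_text)))
    qs = []
    for a in action_texts:
        ha = np.tanh(Wa2 @ np.tanh(Wa1 @ embed_text(a)))
        qs.append(float(hs @ ha))           # inner-product interaction as the relevance score
    return qs

state = "You are standing in a dark cave. Water drips from the ceiling."
actions = ["light the torch", "eat the torch", "go north"]
print(dict(zip(actions, q_values(state, actions))))
```

Keeping the state and action towers separate is what lets the same network score an arbitrary, changing set of natural-language actions.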
# 1 Introduction | 1511.04636#0 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 1 | # 1 Introduction
This work is concerned with learning strategies for sequential decision-making tasks, where a system takes actions at a particular state with the goal of maximizing a long-term reward. More specifically, we consider tasks where both the states and the actions are characterized by natural language, such as in human-computer dialog systems, tutoring systems, or text-based games. In a text-based game, for example, the player (or system, in this case) is given a text string that describes the current state of the game and several text strings that describe possible actions one could take. After selecting one of the actions, the environment state is updated and revealed in a new textual description. A reward is given either at each transition or in the end. The objective is to understand, at each step, the state text and all the action texts to pick the most relevant action, navigating through the sequence of texts so as to obtain the highest long-term reward. Here the notion of relevance is based on the joint state/action impact on the reward: an action text string is said to be "more relevant" (to a state text string) than the other action texts if taking that action would lead to a higher long-term reward. Because a player's action changes the environment, reinforcement learning (Sutton and Barto, 1998) is appropriate for modeling long-term dependency in text games. | 1511.04636#1 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 2 | There is a large body of work on reinforcement learning. Of most interest here are approaches leveraging neural networks because of their success in handling a large state space. Early work (TD-gammon) used a neural network to approximate the state value function (Tesauro, 1995). Recently, inspired by advances in deep learning (LeCun et al., 2015; Hinton et al., 2012; Krizhevsky et al., 2012; Dahl et al., 2012), significant progress has been made by combining deep learning with reinforcement learning. Building on the approach of Q-learning (Watkins and Dayan, 1992), the "Deep Q-Network" (DQN) was developed and applied to Atari games (Mnih et al., 2013; Mnih et al., 2015) and shown to achieve human level performance by applying convolutional neural networks to the raw image pixels. Narasimhan et al. (2015) applied a Long Short-Term Memory network to characterize the state space in a DQN framework for learning control policies for parser-based text games. More recently, Nogueira and Cho (2016) have also proposed a goal-driven web navigation task for language based sequential decision making | 1511.04636#2 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 4 | # space.
Inspired by these successes and recent work using neural networks to learn phrase- or sentence-level embeddings (Collobert and Weston, 2008; Huang et al., 2013; Le and Mikolov, 2014; Sutskever et al., 2014; Kiros et al., 2015), we propose a novel deep architecture for text understanding, which we call a deep reinforcement relevance network (DRRN). The DRRN uses separate deep neural networks to map state and action text strings into embedding vectors, from which "relevance" is measured numerically by a general interaction function, such as their inner product. The output of this interaction function defines the value of the Q-function for the current state-action pair, which characterizes the optimal long-term reward for pairing these two text strings. The Q-function approximation is learned in an end-to-end manner by Q-learning. | 1511.04636#4 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 5 | The DRRN differs from prior work in that earlier studies mostly considered action spaces that are bounded and known. For actions described by natural language text strings, the action space is inherently discrete and potentially unbounded due to the exponential complexity of language with respect to sentence length. A distinguishing aspect of the DRRN architecture, compared to simple DQN extensions, is that two different types of meaning representations are learned, reflecting the tendency for state texts to describe scenes and action texts to describe potential actions from the user. We show that the DRRN learns a continuous space representation of actions that successfully generalize to paraphrased descriptions of actions unseen in training.
# 2 Deep Reinforcement Relevance Network
# 2.1 Text Games and Q-learning | 1511.04636#5 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 6 | # 2 Deep Reinforcement Relevance Network
# 2.1 Text Games and Q-learning
We consider the sequential decision making problem for text understanding. At each time step t, the agent will receive a string of text that describes the state s_t (i.e., "state-text") and several strings of text that describe all the potential actions a_t (i.e., "action-text"). The agent attempts to understand the texts from both the state side and the action side, measuring their relevance to the current context s_t for the purpose of maximizing the long-term reward, and then picking the best action. Then, the environment state is updated s_{t+1} = s' according to the probability p(s'|s,a), and the agent receives a reward r_t for that particular transition. The policy of the agent is defined to be the probability \pi(a_t|s_t) of taking action a_t
at state s_t. Define the Q-function Q^\pi(s,a) as the expected return starting from s, taking the action a, and thereafter following policy \pi(a|s) to be:
Q^\pi(s,a) = \mathbb{E}\Big[\sum_{k=0}^{+\infty} \gamma^k r_{t+k} \,\Big|\, s_t = s, a_t = a\Big] | 1511.04636#6 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
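The interaction loop described in the chunk above (state-text, feasible action-texts, reward, next state) can be sketched concretely. This is a minimal illustration, not the paper's simulator: the TextGameEnv class, its reset/step methods, and the toy texts are assumptions.

```python
# Minimal sketch of the text-game interaction described above (hypothetical API).
from typing import List, Tuple

class TextGameEnv:
    """Toy stand-in for a choice-based text game simulator."""
    def reset(self) -> Tuple[str, List[str]]:
        # Initial state-text and the list of feasible action-texts.
        return "You stand at a fork in the road.", ["Go left", "Go right"]

    def step(self, action_index: int) -> Tuple[str, List[str], float, bool]:
        # Returns (next state-text, next action-texts, reward r_t, terminal flag).
        if action_index == 0:
            return "You reach a quiet village.", [], 10.0, True
        return "You fall into a ditch.", [], -10.0, True

env = TextGameEnv()
state_text, action_texts = env.reset()
next_state_text, next_action_texts, reward, done = env.step(0)
print(reward, done)
```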
1511.04636 | 7 | Q^\pi(s,a) = \mathbb{E}\Big[\sum_{k=0}^{+\infty} \gamma^k r_{t+k} \,\Big|\, s_t = s, a_t = a\Big]
where \gamma denotes a discount factor. The optimal policy and Q-function can be found by using the Q-learning algorithm (Watkins and Dayan, 1992):
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \eta_t \cdot \big(r_{t+1} + \gamma \cdot \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t)\big) \quad (1)
where \eta_t is the learning rate of the algorithm. In this paper, we use a softmax selection strategy as the exploration policy during the learning stage, which chooses the action a_t at state s_t according to the following probability:
\pi(a_t = a_t^i \mid s_t) = \frac{\exp(\alpha \cdot Q(s_t, a_t^i))}{\sum_{j=1}^{|A_t|} \exp(\alpha \cdot Q(s_t, a_t^j))} \quad (2) | 1511.04636#7 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
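Equation (2) in the chunk above is a standard Boltzmann (softmax) selection rule over the Q-values of the currently feasible actions. A minimal sketch, assuming q_values already holds Q(s_t, a_t^i) for each feasible action and alpha is the fixed scaling factor:

```python
# Softmax (Boltzmann) exploration over per-action Q-values, as in equation (2).
import numpy as np

def softmax_action(q_values: np.ndarray, alpha: float, rng: np.random.Generator) -> int:
    logits = alpha * q_values
    logits = logits - logits.max()      # subtract max for numerical stability
    probs = np.exp(logits)
    probs = probs / probs.sum()
    return int(rng.choice(len(q_values), p=probs))

rng = np.random.default_rng(0)
print(softmax_action(np.array([1.0, 0.2, -0.5]), alpha=1.0, rng=rng))
```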
1511.04636 | 8 | \pi(a_t = a_t^i \mid s_t) = \frac{\exp(\alpha \cdot Q(s_t, a_t^i))}{\sum_{j=1}^{|A_t|} \exp(\alpha \cdot Q(s_t, a_t^j))} \quad (2)
where A_t is the set of feasible actions at state s_t, a_t^i is the i-th feasible action in A_t, |\cdot| denotes the cardinality of the set, and \alpha is the scaling factor in the softmax operation. \alpha is kept constant throughout the learning period. All methods are initialized with small random weights, so initial Q-value differences will be small, thus making the Q-learning algorithm more explorative initially. As Q-values better approximate the true values, a reasonable \alpha will make action selection put high probability on the optimal action (exploitation), but still maintain a small exploration probability.
# 2.2. Natural language action space | 1511.04636#8 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 9 | # 2.2. Natural language action space
Let S denote the state space, and let A denote the entire action space that includes all the unique actions over time. A vanilla Q-learning recursion (1) needs to maintain a table of size |S| x |A|, which is problematic for a large state/action space. Prior work using a DNN in Q-function approximation has shown high capacity and scalability for handling a large state space, but most studies have used a network that generates |A| outputs, each of which represents the value of Q(s, a) for a particular action a. It is not practical to have a DQN architecture whose size depends explicitly on the large number of natural language actions. Further, in many text games, the feasible action set A_t at each time t is an unknown subset of the unbounded action space A that varies over time. | 1511.04636#9 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 10 | For the case where the maximum number of possible actions at any point in time (max_t |A_t|) is known, the DQN can be modified to simply use that number of outputs ("Max-action DQN"), as illustrated in Figure 1(a), where the state and action vectors are concatenated (i.e., as an extended state vector) as its input. The network computes the Q-function values for the actions in the current feasible set as its outputs. For a complex game, max_t |A_t| may be difficult to obtain, because A_t is usually unknown beforehand. Nevertheless, we will use this modified DQN as a baseline.
An alternative approach is to use a function approximation using a neural network that takes a state-action pair as input, and outputs a single Q-value for each possible action ("Per-action DQN" in Figure 1(b)). This architecture easily handles a varying number of actions and represents a second baseline. | 1511.04636#10 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
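The "Per-action DQN" baseline above scores one (state, action) pair at a time, so a varying number of feasible actions is handled by looping over the feasible set. A sketch under assumed bag-of-words dimensions and layer sizes (not the authors' exact configuration):

```python
# Sketch of a Per-action DQN: one network maps a (state, action) pair to a scalar Q-value.
import torch
import torch.nn as nn

class PerActionDQN(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),                      # single Q-value output
        )

    def forward(self, state_bow: torch.Tensor, action_bow: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state_bow, action_bow], dim=-1)).squeeze(-1)

model = PerActionDQN(state_dim=2000, action_dim=400)
s = torch.rand(1, 2000)
# One forward pass per feasible action, so the feasible set can change size freely.
q_per_action = [model(s, torch.rand(1, 400)) for _ in range(3)]
```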
1511.04636 | 11 | We propose an alternative architecture for handling a natural language action space in sequential text understanding: the deep reinforcement relevance network (DRRN). As shown in Figure 1(c), the DRRN consists of a pair of DNNs, one for the state text embedding and the other for action text embeddings, which are combined using a pairwise interaction function. The texts used to describe states and actions could be very different in nature, e.g., a state text could be long, containing sentences with complex linguistic structure, whereas an action text could be very concise or just a verb phrase. Therefore, it is desirable to use two networks with different structures to handle state/action texts, respectively. As we will see in the experimental sections, by using two separate deep neural networks for state and action sides, we obtain much better results.
# 2.3 DRRN architecture: Forward activation | 1511.04636#11 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 12 | # 2.3 DRRN architecture: Forward activation
Given any state/action text pair (s_t, a_t^i), the DRRN estimates the Q-function Q(s_t, a_t^i) in two steps. First, map both s_t and a_t^i to their embedding vectors using the corresponding DNNs, respectively. Second, approximate Q(s_t, a_t^i) using an interaction function such as the inner product of the embedding vectors. Then, given a particular state s_t, we can select the optimal action a_t among the set of actions via a_t = \arg\max_{a_t^i} Q(s_t, a_t^i).
More formally, let h_{l,s} and h_{l,a} denote the l-th hidden layer for the state and action side neural networks, respectively. For the state side, W_{l,s} and
b_{l,s} denote the linear transformation weight matrix and bias vector between the (l-1)-th and l-th hidden layers. W_{l,a} and b_{l,a} denote the equivalent parameters for the action side. In this study, the DRRN has L hidden layers on each side.
h_{1,s} = f(W_{1,s} s_t + b_{1,s}) \quad (3)
h_{1,a}^i = f(W_{1,a} a_t^i + b_{1,a}) \quad (4)
h_{l,s} = f(W_{l-1,s} h_{l-1,s} + b_{l-1,s}) \quad (5) | 1511.04636#12 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 13 | h_{1,a}^i = f(W_{1,a} a_t^i + b_{1,a}) \quad (4)
h_{l,s} = f(W_{l-1,s} h_{l-1,s} + b_{l-1,s}) \quad (5)
h_{l,a}^i = f(W_{l-1,a} h_{l-1,a}^i + b_{l-1,a}) \quad (6)
where f(\cdot) is the nonlinear activation function at the hidden layers, which, for example, could be chosen as \tanh(x), and i = 1, 2, 3, \ldots, |A_t| is the action index. A general interaction function g(\cdot) is used to approximate the Q-function values, Q(s, a), in the following parametric form:
Q(s, a^i; \Theta) = g\big(h_{L,s}, h_{L,a}^i\big) \quad (7)
where \Theta denotes all the model parameters. The interaction function could be an inner product, a bilinear operation, or a nonlinear function such as a deep neural network. In our experiments, the inner product and bilinear operation gave similar results. For simplicity, we present our experiments mostly using the inner product interaction function. | 1511.04636#13 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
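Equations (3)-(7) above amount to two independent MLP towers followed by an inner-product interaction. A sketch of that forward pass; the vocabulary sizes, depth, and hidden width are placeholders:

```python
# Sketch of the DRRN forward pass: separate state/action towers plus an inner product.
import torch
import torch.nn as nn

class DRRN(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 100, layers: int = 2):
        super().__init__()
        def tower(in_dim: int) -> nn.Sequential:
            mods, d = [], in_dim
            for _ in range(layers):
                mods += [nn.Linear(d, hidden), nn.Tanh()]
                d = hidden
            return nn.Sequential(*mods)
        self.state_tower = tower(state_dim)    # produces h_{L,s}
        self.action_tower = tower(action_dim)  # produces h_{L,a}^i

    def forward(self, state_bow: torch.Tensor, action_bows: torch.Tensor) -> torch.Tensor:
        # state_bow: (state_dim,); action_bows: (num_actions, action_dim)
        hs = self.state_tower(state_bow)        # (hidden,)
        ha = self.action_tower(action_bows)     # (num_actions, hidden)
        return ha @ hs                          # inner-product interaction -> one Q per action

drrn = DRRN(state_dim=2258, action_dim=419)
q = drrn(torch.rand(2258), torch.rand(4, 419))
best_action = int(q.argmax())
```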
1511.04636 | 14 | The success of the DRRN in handling a natural language action space A lies in the fact that the state-text and the action-texts are mapped into separate finite-dimensional embedding spaces. The end-to-end learning process (discussed next) makes the embedding vectors in the two spaces more aligned for "good" (or relevant) action texts compared to "bad" (or irrelevant) choices, resulting in a higher interaction function output (Q-function value).
# 2.4 Learning the DRRN: Back propagation
To learn the DRRN, we use the "experience-replay" strategy (Lin, 1993), which uses a fixed exploration policy to interact with the environment to obtain a sample trajectory. Then, we randomly sample a transition tuple (s_k, a_k, r_k, s_{k+1}), compute the temporal difference error for sample k:
d_k = r_k + \gamma \cdot \max_{a} Q(s_{k+1}, a; \Theta_{k-1}) - Q(s_k, a_k; \Theta_{k-1}),
and update the model according to the recursions:
W_{v,k} = W_{v,k-1} + \eta_k \cdot d_k \cdot \frac{\partial Q(s_k, a_k; \Theta_{k-1})}{\partial W_v} \quad (8) \qquad b_{v,k} = b_{v,k-1} + \eta_k \cdot d_k \cdot \frac{\partial Q(s_k, a_k; \Theta_{k-1})}{\partial b_v} \quad (9) | 1511.04636#14 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
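The recursions (8)-(9) above are a gradient step on the squared temporal-difference error of one sampled transition, back-propagated through both towers. A hedged sketch, assuming the two-tower drrn module sketched earlier and replay entries of the form (s, a_idx, actions, r, s_next, next_actions, done):

```python
# One experience-replay update: sample a transition, form the TD target,
# and back-propagate the squared TD error through the chosen action's Q-value only.
import random
import torch

def replay_update(drrn, optimizer, memory, gamma: float = 0.9):
    s, a_idx, actions, r, s_next, next_actions, done = random.choice(memory)
    q_sa = drrn(s, actions)[a_idx]                       # Q(s_k, a_k; Theta)
    with torch.no_grad():
        target = torch.tensor(float(r))                  # y_k = r_k ...
        if not done and len(next_actions) > 0:
            target = target + gamma * drrn(s_next, next_actions).max()
    loss = (target - q_sa) ** 2                          # (y_k - Q(s_k, a_k))^2
    optimizer.zero_grad()
    loss.backward()                                      # gradients flow into both towers
    optimizer.step()
```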
1511.04636 | 15 | [Figure 1 diagram: (a) Max-action DQN, (b) Per-action DQN, (c) DRRN with a pairwise interaction function (e.g. inner product) between state and action embeddings.]
Figure 1: Different deep Q-learning architectures: Max-action DQN and Per-action DQN both treat input text as concatenated vectors and compute output Q-values with a single NN. DRRN models text embeddings from state/action sides separately, and uses an interaction function to compute Q-values.
[Figure 2 plot: PCA projections of the state embedding and two action embeddings after 200, 400 and 600 training episodes, with the good action converging toward the state vector and the poor action away from it.] | 1511.04636#15 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 16 | Figure 2: PCA projections of text embedding vectors for state and associated action vectors after 200, 400 and 600 training episodes. The state is "As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street." Action 1 (good choice) is "Look up", and action 2 (poor choice) is "Ignore the alarm of others and continue moving forward."
for v \in \{s, a\}. Expressions for \partial Q / \partial W_v, \partial Q / \partial b_v and other algorithm details are given in supplementary materials. Random sampling essentially scrambles the trajectory from experience-replay into a "bag-of-transitions", which has been shown to avoid oscillations or divergence and achieve faster convergence in Q-learning (Mnih et al., 2015). Since the models on the action side share the same parameters, models associated with all actions are effectively updated even though the back propagation is only over one action. We apply back propagation to learn how to pair the text strings from the reward signals in an end-to-end manner. The representation vectors for the state-text and the action-text are automatically learned to be aligned with each other in the text embedding space from the reward signals. A summary of the full learning algorithm is given in Algorithm 1. | 1511.04636#16 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
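The PCA view referenced above (Figure 2) projects the last hidden-layer embeddings of the state and of each candidate action onto their first two principal components. A small sketch using random stand-in embeddings rather than trained ones:

```python
# Project state/action embeddings to 2-D, as in the Figure 2 style visualization.
import numpy as np
from sklearn.decomposition import PCA

state_vec = np.random.randn(1, 100)      # stand-in for h_{L,s}
action_vecs = np.random.randn(2, 100)    # stand-ins for h_{L,a} of two candidate actions

pca = PCA(n_components=2)
points_2d = pca.fit_transform(np.vstack([state_vec, action_vecs]))
print(points_2d)                         # rows: state, action 1, action 2
```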
1511.04636 | 17 | Figure 2 illustrates learning with an inner product interaction function. We used Principal Component Analysis (PCA) to project the 100-dimension last hidden layer representation (before the inner product) to a 2-D plane. The vector embeddings start with small values, and after 600 episodes of experience-replay training, the embeddings are very close to the converged embedding (4000 episodes). The embedding vector of the optimal action (Action 1) converges to a positive inner product with the state embedding vector, while Action 2 converges to a negative inner product.
# 3 Experimental Results
# 3.1 Text games
Text games, although simple compared to video games, still enjoy high popularity in online communities, with annual competitions held online
Algorithm 1 Learning algorithm for DRRN
1: Initialize replay memory D to capacity N.
2: Initialize DRRN with small random weights.
3: Initialize game simulator and load dictionary.
4: for episode = 1,...,M do
5: Restart game simulator.
6: Read raw state text and a list of action texts from the simulator, and convert them to representation s_1 and a_1^1, a_1^2, \ldots, a_1^{|A_1|}.
7: for t = 1, \ldots, T do
8: Compute Q(s_t, a_t^i; \Theta) for the list of actions using DRRN forward activation (Section 2.3). | 1511.04636#17 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 18 | 7: for t = 1, \ldots, T do
8: Compute Q(s_t, a_t^i; \Theta) for the list of actions using DRRN forward activation (Section 2.3).
9: Select an action a_t based on probability distribution \pi(a_t = a_t^i \mid s_t) (Equation 2).
10: Execute action a_t in simulator.
11: Observe reward r_t. Read the next state text and the next list of action texts, and convert them to representation s_{t+1} and a_{t+1}^1, a_{t+1}^2, \ldots, a_{t+1}^{|A_{t+1}|}.
12: Store transition (s_t, a_t, r_t, s_{t+1}, A_{t+1}) in D.
13: Sample random mini batch of transitions (s_k, a_k, r_k, s_{k+1}, A_{k+1}) from D.
14: Set y_k = r_k if s_{k+1} is terminal, and y_k = r_k + \gamma \max_{a' \in A_{k+1}} Q(s_{k+1}, a'; \Theta) otherwise.
15: Perform a gradient descent step on (y_k - Q(s_k, a_k; \Theta))^2 with respect to the network parameters \Theta (Section 2.4). Back-propagation is performed only for a_k even though there are |A_k| actions at time k.
16: end for
17: end for | 1511.04636#18 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
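Algorithm 1 above ties together softmax exploration, a replay memory, and per-transition gradient steps. The skeleton below shows how the earlier hypothetical pieces (env, the featurizers, softmax_action, replay_update) could fit together; all hyperparameters are placeholders:

```python
# Skeleton of Algorithm 1: episodes of softmax exploration with experience replay.
import numpy as np
import torch
from collections import deque

def train(env, drrn, featurize_state, featurize_actions, softmax_action, replay_update,
          episodes: int = 4000, alpha: float = 0.2, capacity: int = 100000):
    memory = deque(maxlen=capacity)                        # replay memory D
    optimizer = torch.optim.SGD(drrn.parameters(), lr=0.001)
    rng = np.random.default_rng(0)
    for _ in range(episodes):                              # for episode = 1, ..., M
        state_text, action_texts = env.reset()
        done = False
        while not done and action_texts:                   # for t = 1, ..., T
            s = featurize_state(state_text)
            acts = featurize_actions(action_texts)
            q = drrn(s, acts).detach().numpy()             # DRRN forward activation
            a_idx = softmax_action(q, alpha, rng)          # softmax exploration (Eq. 2)
            next_text, next_action_texts, r, done = env.step(a_idx)
            s_next = featurize_state(next_text)
            next_acts = (featurize_actions(next_action_texts)
                         if next_action_texts else torch.zeros((0, acts.shape[-1])))
            memory.append((s, a_idx, acts, r, s_next, next_acts, done))
            replay_update(drrn, optimizer, memory)         # gradient step on a sampled transition
            state_text, action_texts = next_text, next_action_texts
```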
1511.04636 | 19 | 16: end for
17: end for
since 1995. Text games communicate to players in the form of a text display, which players have to understand and respond to by typing or clicking text (Adams, 2014). There are three types of text games: parser-based (Figure 3(a)), choice-based (Figure 3(b)), and hypertext-based (Figure 3(c)). Parser-based games accept typed-in commands from the player, usually in the form of verb phrases, such as "eat apple", "get key", or "go east". They involve the least complex action language. Choice-based and hypertext-based games present actions after or embedded within the state text. The player chooses an action, and the story continues based on the action taken at this particular state. With the development of web browsing and richer HTML display, choice-based and hypertext-based text games have become more popular, increasing in percentage from 8% in 2010 to 62% in 2014. | 1511.04636#19 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 20 | For parser-based text games, Narasimhan et al. (2015) have defined a fixed set of 222 actions, which is the total number of possible phrases the parser accepts. Thus the parser-based text game is reduced to a problem that is well suited to a fixed-action-set DQN. However, for choice-based and hypertext-based text games, the size of the action space could be exponential with the length of the action sentences, which is handled here by using a continuous representation of the action space.
Game | Saving John | Machine of Death
Text game type | Choice | Choice & Hypertext
Vocab size | 1762 | 2258
Action vocab size | 171 | 419
Avg. words/description | 76.67 | 67.80
State transitions | Deterministic | Stochastic
# of states (underlying) | > 70 | > 200
Table 1: Statistics for the games "Saving John" and "Machine of Death". | 1511.04636#20 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 21 | In this study, we evaluate the DRRN with two games: a deterministic text game task called "Saving John" and a larger-scale stochastic text game called "Machine of Death" from a public archive.² The basic text statistics of these tasks are shown in Table 1. The maximum value of feasible actions (i.e., max_t |A_t|) is four in "Saving John", and nine in "Machine of Death". We manually annotate fi-
¹Statistics obtained from http://www.ifarchive.org
²Simulators are available at https://github.com/jvking/text-games | 1511.04636#21 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 22 | [Figure 3 panels: (a) Parser-based, (b) Choice-based, (c) Hypertext-based example game screens.] Figure 3: Different types of text games.
nal rewards for all distinct endings in both games (as shown in supplementary materials). The magnitude of reward scores are given to describe sentiment polarity of good/bad endings. On the other hand, each non-terminating step is assigned a small negative reward, to encourage the learner to finish the game as soon as possible. For the text game "Machine of Death", we restrict an episode to be no longer than 500 steps.
In "Saving John" all actions are choice-based, for which the mapping from text strings to a_t is clear. In "Machine of Death", when actions are hypertext, the actions are substrings of the state. In this case s_t is associated with the full state description, and a_t are given by the substrings without any surrounding context. For text input, we use raw bag-of-words as features, with different vocabularies for the state side and action side.
3.2 Experiment setup
We apply DRRNs with both 1 and 2 hidden layer structures. In most experiments, we use dot-product as the interaction function and set the hidden dimension to be the same for each hidden layer. We use DRRNs with 20, 50 and 100-dimension hidden layer(s) and build learning curves during experience-replay training. The learning rate is constant: \eta_t = 0.001. In testing, as in training, we apply softmax selection. We record | 1511.04636#22 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
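The raw bag-of-words featurization with separate state-side and action-side vocabularies, mentioned in the chunk above, can be sketched as follows. The whitespace tokenizer and the tiny example vocabularies are assumptions for illustration:

```python
# Bag-of-words features with separate vocabularies for state texts and action texts.
import torch

def build_vocab(texts):
    return {w: i for i, w in enumerate(sorted({w for t in texts for w in t.lower().split()}))}

def bow_vector(text: str, vocab: dict) -> torch.Tensor:
    vec = torch.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            vec[vocab[w]] += 1.0
    return vec

state_vocab = build_vocab(["You stand at a fork in the road."])
action_vocab = build_vocab(["Go left", "Go right"])
s = bow_vector("You stand at a fork in the road.", state_vocab)
acts = torch.stack([bow_vector(a, action_vocab) for a in ["Go left", "Go right"]])
```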
1511.04636 | 23 | ing curves during experience-replay training. The learning rate is constant: \eta_t = 0.001. In testing, as in training, we apply softmax selection. We record average final rewards as performance of the model. The DRRN is compared to multiple baselines: a linear model, two max-action DQNs (MA DQN) (L = 1 or 2 hidden layers), and two per-action DQNs (PA DQN) (again, L = 1, 2). All baselines use the same Q-learning framework with different function approximators to predict Q(s_t, a_t) given the current state and actions. For the linear and MA DQN baselines, the input is the text-based state and action descriptions, each as a bag of words, with the number of outputs equal to the maximum number of actions. When there are fewer actions than the maximum, the highest scoring available action is used. The PA DQN baseline Eval metric Average reward hidden dimension 20 50 100 Linear 44 (0.4) PA DQN (L = 1) 2.0(1.5) | 4.014) | 44 (2.0) PA DQN (L = 2) 1.5.0) | 45(2.5) | | 1511.04636#23 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 24 | ing curves during experience-replay training. The learning rate is constant: 7; = 0.001. In testing, as in training, we apply softmax selection. We record average final rewards as performance of the model. The DRRN is compared to multiple baselines: a linear model, two max-action DQNs (MA DQN) (L = 1 or 2 hidden layers), and two per-action DQNs (PA DQN) (again, L = 1,2). All base- lines use the same Q-learning framework with dif- ferent function approximators to predict Q(s;, at) given the current state and actions. For the lin- ear and MA DQN baselines, the input is the text- based state and action descriptions, each as a bag of words, with the number of outputs equal to the maximum number of actions. When there are fewer actions than the maximum, the highest scor- ing available action is used. The PA DQN baseline Eval metric Average reward hidden dimension 20 50 100 Linear 44 (0.4) PA DQN (£ = 1) 2.0(1.5) | 4.014) | 44 (2.0) PA DQN (ZL = 2) 1.5.0) | 45(2.5) | | 1511.04636#24 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 25 | | 4.014) | 44 (2.0) PA DQN (ZL = 2) 1.5.0) | 45(2.5) | [email protected]) MA DQN(L=1) | 2.9.1) | 4.0 (4.2) 5.9 (2.5) MA DQN (LZ = 2) | 4.93.2) | 9.0.2) | 7.1G.1) DRRN (L = 1) 17.1 (0.6) | 18.3 (0.2) | 18.2 (0.2) DRRN (L = 2) 18.4 (0.1) | 18.5 (0.3) | 18.7 (0.4) Table 2: The final average rewards and standard deviations on âSaving Johnâ. takes each pair of state-action texts as input, and generates a corresponding Q-value. We use softmax selection, which is widely applied in practice, to trade-off exploration vs. exploitation. Specifically, for each experience- replay, we first generate 200 episodes of data (about 3K tuples in âSaving Johnâ and 16K tuples in âMachine of Deathâ) using the softmax | 1511.04636#25 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 26 | first generate 200 episodes of data (about 3K tuples in âSaving Johnâ and 16K tuples in âMachine of Deathâ) using the softmax selec- tion rule in (2), where we set a = 0.2 for the first game and a = 1.0 for the second game. The a is picked according to an estimation of range of the optimal Q-values. We then shuffle the generated data tuples (s:, a¢, 1, +41) update the model as described in Section 2.4. The model is trained with multiple epochs for all configurations, and is eval- uated after each experience-replay. The discount factor Â¥ is set to 0.9. For DRRN and all baselines, network weights are initialized with small random values. To prevent algorithms from âremember- ingâ state-action ordering and make choices based on action wording, each time the algorithm/player reads text from the simulator, we randomly shuffle the list of actions. This will encourage the algo- rithms to make decisions based on the understand- ing of the texts that describe the states and actions. 3.3. Performance In Figure 4, we show the learning curves of dif- ferent models, where the dimension of the hid- 3When in | 1511.04636#26 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 29 | Average reward t â2âDRAN (2-hidden) â4â DRRN (1-hidden) =o PADON (2-hidden) =o MADON (2-hidden FT ty â st |X LHe A$ Average reward ° âE=DRAN (@hiddeny âaâ DRRN (1-hidden) âo- PA DON (2-hidden) =o MADON (2-hidden) 500 1000 1500 2000 Number of episodes 2500 3000 3500 (a) Game 1: âSaving Johnâ ia) 500. 1000 1500 2000 2500 Number of episodes (b) Game 2: âMachine of Deathâ 3000 3500 4000
Figure 4: Learning curves of the two text games. | 1511.04636#29 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 30 | Figure 4: Learning curves of the two text games.
Eval metric: Average reward (hidden dimension 20 / 50 / 100)
Linear | 3.3 (1.0)
PA DQN (L = 1) | 0.9 (2.4) | 2.3 (0.9) | 3.1 (1.3)
PA DQN (L = 2) | 1.3 (1.2) | 2.3 (1.6) | 3.4 (1.7)
MA DQN (L = 1) | 2.0 (1.2) | 3.7 (1.6) | 4.8 (2.9)
MA DQN (L = 2) | 2.8 (0.9) | 4.3 (0.9) | 5.2 (1.2)
DRRN (L = 1) | 7.2 (1.5) | 8.4 (1.3) | 8.7 (0.9)
DRRN (L = 2) | 9.2 (2.1) | 10.7 (2.7) | 11.2 (0.6)
Table 3: The final average rewards and standard deviations on âMachine of Deathâ. | 1511.04636#30 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 31 | Table 3: The final average rewards and standard deviations on âMachine of Deathâ.
Game 2, due to the complexity of the underly- ing state transition function, we cannot compute the exact optimal policy score. To provide more insight into the performance, we averaged scores of 8 human players for initial trials (novice) and after gaining experience, yielding scores of â5.5 and 16.0, respectively. The experienced players do outperform our algorithm. The converged per- formance is higher with two hidden layers for all models. However, deep models also converge more slowly than their | hidden layer versions, as shown for the DRRN in Figure 4. | 1511.04636#31 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 32 | den layers in the DQNs and DRRN are all set to 100. The error bars are obtained by running 5 independent experiments. The proposed meth- ods and baselines all start at about the same per- formance (roughly -7 average rewards for Game 1, and roughly -8 average rewards for Game 2), which is the random guess policy. After around 4000 episodes of experience-replay training, all methods converge. The DRRN converges much faster than the other three baselines and achieves a higher average reward. We hypothesize this is be- cause the DRRN architecture is better at capturing relevance between state text and action text. The faster convergence for âSaving Johnâ may be due to the smaller observation space and/or the deter- ministic nature of its state transitions (in contrast to the stochastic transitions in the other game). | 1511.04636#32 | Deep Reinforcement Learning with a Natural Language Action Space | This paper introduces a novel architecture for reinforcement learning with
deep neural networks designed to handle state and action spaces characterized
by natural language, as found in text-based games. Termed a deep reinforcement
relevance network (DRRN), the architecture represents action and state spaces
with separate embedding vectors, which are combined with an interaction
function to approximate the Q-function in reinforcement learning. We evaluate
the DRRN on two popular text games, showing superior performance over other
deep Q-learning architectures. Experiments with paraphrased action descriptions
show that the model is extracting meaning rather than simply memorizing strings
of text. | http://arxiv.org/pdf/1511.04636 | Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf | cs.AI, cs.CL, cs.LG | accepted by ACL 2016 | null | cs.AI | 20151114 | 20160608 | [
{
"id": "1511.04636"
}
] |
1511.04636 | 33 | Besides an inner-product, we also experimented with more complex interaction functions: a) a bi- linear operation with different action side dimen- sions; and b) a non-linear deep neural network us- ing the concatenated state and action space embed- dings as input and trained in an end-to-end fash- ion to predict Q values. For different configura- tions, we fix the state side embedding to be 100 dimensions and vary the action side embedding dimensions. The bilinear operation gave similar results, but the concatenation input to a DNN de- graded performance. Similar behaviors have been observed on a different task (Luong et al., 2015).
# 3.4. Actions with paraphrased descriptions
The final performance (at convergence) for both baselines and proposed methods is shown in Tables 2 and 3. We test for different model sizes with 20, 50, and 100 dimensions in the hidden layers. The DRRN performs consistently better than all baselines, and often with a lower variance.
To investigate how our models handle actions with "unseen" natural language descriptions, we had two people paraphrase all actions in the game "Machine of Death" (used in the testing phase), except a few single-word actions whose synonyms are out-of-vocabulary (OOV). The word-level OOV rate of paraphrased actions is 18.6%.
[Figure 5 plot: Q-values scatterplot for state-action pairs, paraphrased actions (y-axis) versus original actions (x-axis); fitted line y = 0.85x + 0.24, pR² = 0.95.]
Figure 5: Scatterplot and strong correlation between Q-values of paraphrased actions versus original actions
The standard 4-gram BLEU score between the paraphrased and original actions is 0.325. The resulting 153 paraphrased action descriptions are associated with 532 unique state-action pairs.
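For reference, the sketch below shows one way to compute such a 4-gram BLEU score over the paired action descriptions with NLTK; whitespace tokenization and the smoothing choice are assumptions, since the paper only reports the aggregate score.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def mean_4gram_bleu(original_actions, paraphrased_actions):
    """Average sentence-level 4-gram BLEU of paraphrases against the originals."""
    smooth = SmoothingFunction().method1
    scores = [
        sentence_bleu([orig.split()], para.split(),
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
        for orig, para in zip(original_actions, paraphrased_actions)
    ]
    return sum(scores) / len(scores)
```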
We apply a well-trained 2-layer DRRN model (with hidden dimension 100), and predict Q-values for each state-action pair with fixed model parameters. Figure 5 shows the correlation between Q-values associated with paraphrased actions versus original actions. The predictive R-squared is 0.95, showing a strong positive correlation. We also run the Q-value correlation for the NN interaction, giving pR² = 0.90. For the baseline MA-DQN and PA-DQN, the corresponding pR² values are 0.84 and 0.97, indicating that they also have some generalization ability. This is confirmed in the paraphrasing-based experiments too, where the test reward on the paraphrased setup is close to the original setup. This supports the claim that deep learning is useful in general for this language understanding task, and our findings show that a decoupled architecture most effectively leverages that approach.
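The paper reports these pR² values without spelling out the fitting procedure; the sketch below shows one straightforward way to compute a predictive R² from paired Q-values, by fitting a least-squares line from original-action Q-values to paraphrased-action Q-values. Function and variable names are illustrative.

```python
import numpy as np

def predictive_r_squared(q_original, q_paraphrased):
    """R^2 of a least-squares fit q_paraphrased ~ a * q_original + b."""
    x = np.asarray(q_original, dtype=float)
    y = np.asarray(q_paraphrased, dtype=float)
    a, b = np.polyfit(x, y, deg=1)           # slope and intercept of the fit
    y_hat = a * x + b
    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot
```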
In Table 4 we provide examples with predicted Q-values of original descriptions and paraphrased descriptions. We also include alternative action descriptions with in-vocabulary words that will lead to positive / negative / irrelevant game development at that particular state. Table 4 shows that actions more likely to result in good endings are predicted with high Q-values. This indicates that the DRRN has some generalization ability and gains a useful level of language understanding in the game scenario.
Table 5: The final average rewards and standard deviations on paraphrased game "Machine of Death".
We use the baseline models and proposed DRRN model trained with the original action descriptions for "Machine of Death", and test on paraphrased action descriptions. For this game, the underlying state transition mechanism has not changed. The only change to the game interface is that during testing, every time the player reads the actions from the game simulator, it reads the paraphrased descriptions and performs selection based on these paraphrases. Since the texts at test time are "unseen" to the player, a good model needs to have some level of language understanding, while a naive model that memorizes all unique action texts in the original game will do poorly. The results for these models are shown in Table 5. All methods have a slightly lower average reward in this setting (10.5 vs. 11.2 for the original actions), but the DRRN still gives a high reward and significantly outperforms other methods. This shows that the DRRN can generalize well to "unseen" natural language descriptions of actions.
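In other words, the only difference at test time is the set of strings handed to the agent; action selection itself is unchanged. A minimal sketch of that greedy selection step is shown below, assuming the trained model exposes a `q_value(state_text, action_text)` scoring method; this interface is illustrative rather than the authors' actual API.

```python
import numpy as np

def select_action(model, state_text, action_texts):
    # Score every candidate action description against the current state text
    # and act greedily; `action_texts` may be original or paraphrased strings.
    q_values = np.array([model.q_value(state_text, a) for a in action_texts])
    return int(np.argmax(q_values))
```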
# 4 Related Work
There has been increasing interest in applying deep reinforcement learning to a variety of problems, but only a few studies address problems with natural language state or action spaces. In language processing, reinforcement learning has been applied to a dialogue management system that converses with a human user by taking actions that generate natural language (Scheffler and Young, 2002; Young et al., 2013). There has also been interest in extracting textual knowledge to improve game control performance (Branavan et al., 2011), and mapping text instructions to sequences of executable actions (Branavan et al., 2009). In some applications, it is possible to manually design features for state-action pairs, which are then used in reinforcement learning to learn a near-optimal policy (Li et al., 2009). Designing such features, however, requires substantial domain knowledge.
| | Text (with predicted Q-values) |
| --- | --- |
| State | As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street. |
| Actions in the original game | Ignore the alarm of others and continue moving forward. (-21.5) / Look up. (16.6) |
| Paraphrased actions (not original) | Disregard the caution of others and keep pushing ahead. (-11.9) / Turn up and look. (17.5) |
| Positive actions (not original) | Stay there. (2.8) / Stay calmly. (2.0) |
| Negative actions (not original) | Screw it. I'm going carefully. (-17.4) / Yell at everyone. (-13.5) |
| Irrelevant actions (not original) | Insert a coin. (-1.4) / Throw a coin to the ground. (-3.6) |
Table 4: Predicted Q-value examples
The work most closely related to our study involves application of deep reinforcement learning to learning decision policies for parser-based text games. Narasimhan et al. (2015) applied a Long Short-Term Memory DQN framework, which achieves higher average reward than the random and Bag-of-Words DQN baselines. In this work, actions are constrained to a set of known fixed command structures (one action and one argument object), based on a limited action-side vocabulary size. The overall action space is defined by the action-argument product space. This pre-specified product space is not feasible for the more complex text strings in other forms of text-based games. Our proposed DRRN, on the other hand, can handle the more complex text strings, as well as parser-based games. In preliminary experiments with the parser-based game from (Narasimhan et al., 2015), we find that the DRRN using a bag-of-words (BOW) input achieves results on par with their BOW DQN. The main advantage of the DRRN is that it can also handle actions described with more complex language.

The DRRN experiments described here leverage only a simple bag-of-words representation of phrases and sentences. As observed in (Narasimhan et al., 2015), more complex sentence-based models can give further improvements. In preliminary experiments with "Machine of Death", we did not find LSTMs to give improved performance, but we conjecture that they would be useful in larger-scale tasks, or when the word embeddings are initialized by training on large data sets.

As mentioned earlier, other work has applied deep reinforcement learning to a problem with a continuous action space (Lillicrap et al., 2016). In the DRRN, the action space is inherently discrete, but we learn a continuous representation of it. As indicated by the paraphrasing experiment, the continuous space representation seems to generalize reasonably well.
# 5 Conclusion
In this paper we develop a deep reinforcement relevance network, a novel DNN architecture for handling actions described by natural language in decision-making tasks such as text games. We show that the DRRN converges faster and to a better solution for Q-learning than alternative architectures that do not use separate embeddings for the state and action spaces. Future work includes: (i) adding an attention model to robustly analyze which part of the state/action text corresponds to strategic planning, and (ii) applying the proposed methods to more complex text games or other tasks with actions defined through natural language.
# Acknowledgments
We thank Karthik Narasimhan and Tejas Kulkarni for providing instructions on setting up their parser-based games.
# References
[Adams2014] E. Adams. 2014. Fundamentals of game design. Pearson Education.
[Branavan et al.2009] S.R.K. Branavan, H. Chen, L. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proc. of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pages 82-90, August.
[Branavan et al.2011] S.R.K. Branavan, D. Silver, and R. Barzilay. 2011. Learning to win by reading manuals in a Monte-Carlo framework. In Proc. of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 268-277. Association for Computational Linguistics.
[Collobert and Weston2008] R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of the 25th International Conference on Machine Learning, pages 160-167. ACM.
[Dahl et al.2012] G. E. Dahl, D. Yu, L. Deng, and A. Acero. 2012. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30-42.
[Hinton et al.2012] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82-97.
[Huang et al.2013] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proc. of the ACM International Conference on Information & Knowledge Management, pages 2333-2338. ACM.
[Kiros et al.2015] R. Kiros, Y. Zhu, R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3276-3284.
[Krizhevsky et al.2012] A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105.
[Le and Mikolov2014] Q. V. Le and T. Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning.
[LeCun et al.2015] Y. LeCun, Y. Bengio, and G. Hinton. 2015. Deep learning. Nature, 521(7553):436-444.
[Li et al.2009] L. Li, J. D. Williams, and S. Balakrishnan. 2009. Reinforcement learning for spoken dialog management using least-squares policy iteration and fast feature selection. In Proceedings of the Tenth Annual Conference of the International Speech Communication Association (INTERSPEECH-09), pages 2475-2478.
[Lillicrap et al.2016] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. 2016. Continuous control with deep reinforcement learning. In International Conference on Learning Representations.
[Lin1993] L-J. Lin. 1993. Reinforcement learning for robots using neural networks. Technical report, DTIC Document.