id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
1511.06342#51 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | 18 actions. All layers except the final one were followed with a rectifier non-linearity. 13 Published as a conference paper at ICLR 2016 [Figure 4: per-game learning-curve plots for atlantis, assault, beam rider, boxing, crazy climber, enduro, fishing derby, kangaroo, name this game, pong, seaquest and space invaders, comparing AMN-policy against DQN, DQN-Max and DQN-Mean] Figure 4: | 1511.06342#50 | 1511.06342#52 | 1511.06342 | [
"1503.02531"
] |
1511.06342#52 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The Actor-Mimic training curves for the network trained solely with the policy regression objective (AMN-policy). The AMN-policy is trained for 16 epochs, or 4 million frames per game. We compare against the (smaller network) expert DQNs, which are trained until convergence. We also report the maximum test reward the expert DQN achieved over all training epochs, as well as the mean testing reward achieved over the last 10 epochs. 14 Published as a conference paper at ICLR 2016 | 1511.06342#51 | 1511.06342#53 | 1511.06342 | [
"1503.02531"
] |
1511.06342#53 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | [Figure 5: per-game learning-curve plots (atlantis, assault, beam rider, enduro, fishing derby, kangaroo, pong, seaquest, space invaders and the remaining games), comparing AMN-feature against DQN, DQN-Max and DQN-Mean] Figure 5: | 1511.06342#52 | 1511.06342#54 | 1511.06342 | [
"1503.02531"
] |
1511.06342#54 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The Actor-Mimic training curves for the network trained with both the feature and policy regression objective (AMN-feature). The AMN-feature is trained for 16 epochs, or 4 million frames per game. We compare against the (smaller network) expert DQNs, which are trained until convergence. We also report the maximum test reward the expert DQN achieved over all training epochs, as well as the mean testing reward achieved over the last 10 epochs. 15 Published as a conference paper at ICLR 2016 # APPENDIX E TABLE 1 BARPLOT | 1511.06342#53 | 1511.06342#55 | 1511.06342 | [
"1503.02531"
] |
1511.06342#55 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | [Figure 6: bar plots of relative mean score and relative max score (100% x AMN) for atlantis, boxing, breakout, crazy climber, enduro, pong, seaquest and space invaders] Figure 6: Plots showing relative mean reward improvement (left) and relative max reward improvement (right) of the multitask AMN over the expert DQNs. See Table 1 for details on how these values were calculated. APPENDIX F TABLE 2 LEARNING CURVES [Figure 7: learning-curve plots for Breakout, Gopher, Krull, Road Runner, Robotank, Star Gunner and Video Pinball, comparing Random, AMN-Policy and AMN-Feature initializations] Figure 7: Learning curve plots of the results in Table 2. | 1511.06342#54 | 1511.06342#56 | 1511.06342 | [
"1503.02531"
] |
1511.06342#56 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | 16 | 1511.06342#55 | 1511.06342 | [
"1503.02531"
] |
|
1511.05756#0 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | arXiv:1511.05756v1 [cs.CV] 18 Nov 2015 # Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction Hyeonwoo Noh Bohyung Han Paul Hongsuck Seo Department of Computer Science and Engineering, POSTECH, Korea {hyeonwoonoh, hsseo, bhhan}@postech.ac.kr | 1511.05756#1 | 1511.05756 | [
"1506.00333"
] |
|
1511.05756#1 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | # Abstract We tackle the image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of a gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network, the joint network with the CNN for ImageQA and the parameter prediction network, is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm achieves state-of-the-art performance on all available public ImageQA benchmarks. | 1511.05756#0 | 1511.05756#2 | 1511.05756 | [
"1506.00333"
] |
1511.05756#2 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | # 1. Introduction One of the ultimate goals in computer vision is holistic scene understanding [30], which requires a system to capture various kinds of information such as objects, actions, events, scene, atmosphere, and their relations at many different levels of semantics. Although significant progress on various recognition tasks [5, 8, 21, 24, 26, 27, 31] has been made in recent years, these works focus only on solving relatively simple recognition problems in controlled settings, where each dataset consists of concepts with a similar level of understanding (e.g. object, scene, bird species, face identity, action, texture, etc.). There have been fewer efforts to solve various recognition problems simultaneously, which is more complex and realistic, even though this is a crucial step toward holistic scene understanding. | 1511.05756#1 | 1511.05756#3 | 1511.05756 | [
"1506.00333"
] |
1511.05756#3 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Q: What type of animal is this? Q: Is this animal alone? Q: Is it snowing? Q: Is this picture taken during the day? Q: What kind of oranges are these? Q: Is the fruit sliced? Q: What is leaning on the wall? Q: How many boards are there? Figure 1. Sample images and questions in the VQA dataset [1]. Each question requires a different type and/or level of understanding of the corresponding input image to find correct answers. Image question answering (ImageQA) [1, 17, 23] aims to solve the holistic scene understanding problem by proposing a task that unifies various recognition problems. ImageQA is the task of automatically answering questions about an input image, as illustrated in Figure 1. The critical challenge of this problem is that different questions require different types and levels of understanding of an image to find correct answers. For example, to answer a question like "how is the weather?" we need to perform classification on multiple choices related to weather, while we should decide between yes and no for a question like "is this picture taken during the day?" | 1511.05756#2 | 1511.05756#4 | 1511.05756 | [
"1506.00333"
] |
1511.05756#4 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | For this reason, not only the performance on a single recognition task but also the capability to select a proper task is important for solving the ImageQA problem. The ImageQA problem has a short history in the computer vision and machine learning community, but there already exist several approaches [10, 16, 17, 18, 23]. Among these methods, simple deep learning based approaches that perform classification on a combination of features extracted from image and question currently demonstrate the state-of- | 1511.05756#3 | 1511.05756#5 | 1511.05756 | [
"1506.00333"
] |
1511.05756#5 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | the-art accuracy on public benchmarks [23, 16]; these approaches extract image features using a convolutional neural network (CNN), and use a CNN or bag-of-words to obtain feature descriptors from the question. They can be interpreted as methods in which the answer is given by the co-occurrence of a particular combination of features extracted from an image and a question. Contrary to the existing approaches, we define a different recognition task depending on the question. To realize this idea, we propose a deep CNN with a dynamic parameter layer whose weights are determined adaptively based on questions. We claim that a single deep CNN architecture can take care of various tasks by allowing adaptive weight assignment in the dynamic parameter layer. For the adaptive parameter prediction, we employ a parameter prediction network, which consists of gated recurrent units (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights for the dynamic parameter layer. The entire network, including the CNN for ImageQA and the parameter prediction network, is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. | 1511.05756#4 | 1511.05756#6 | 1511.05756 | [
"1506.00333"
] |
1511.05756#6 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Our main contributions in this work are summarized below: • We successfully adopt a deep CNN with a dynamic parameter layer for ImageQA, which is a fully-connected layer whose parameters are determined dynamically based on a given question. • To predict a large number of weights in the dynamic parameter layer effectively and efficiently, we apply the hashing trick [3], which reduces the number of parameters significantly with little impact on network capacity. | 1511.05756#5 | 1511.05756#7 | 1511.05756 | [
"1506.00333"
] |
1511.05756#7 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | • We fine-tune a GRU pre-trained on a large-scale text corpus [14] to improve the generalization performance of our network. Pre-training the GRU on a large corpus is a natural way to deal with a small amount of training data, but to our knowledge no one has attempted it yet. • This is the first work to report results on all currently available benchmark datasets such as DAQUAR, COCO-QA and VQA. Our algorithm achieves state-of-the-art performance on all three datasets. The rest of this paper is organized as follows. | 1511.05756#6 | 1511.05756#8 | 1511.05756 | [
"1506.00333"
] |
1511.05756#8 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | We first review related work in Section 2. Sections 3 and 4 describe the overview of our algorithm and the architecture of our network, respectively. We discuss the detailed procedure to train the proposed network in Section 5. Experimental results are demonstrated in Section 6. # 2. Related Work There are several recent papers addressing ImageQA [1, 10, 16, 17, 18, 23]; most of them are based on deep learning except [17]. Malinowski and Fritz [17] propose a Bayesian framework, which exploits recent advances in computer vision and natural language processing. Specifically, it employs semantic image segmentation and symbolic question reasoning to solve the ImageQA problem. | 1511.05756#7 | 1511.05756#9 | 1511.05756 | [
"1506.00333"
] |
1511.05756#9 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | However, this method depends on a pre-defined set of predicates, which makes it difficult to represent the complex models required to understand input images. Deep learning based approaches demonstrate competitive performance in ImageQA [18, 10, 23, 16, 1]. Most approaches based on deep learning commonly use CNNs to extract features from the image, while they use different strategies to handle question sentences. Some algorithms employ an embedding of joint features based on image and question [1, 10, 18]. However, learning a softmax classifier on simple joint features (the concatenation of CNN-based image features and a continuous bag-of-words representation of a question) performs better than LSTM-based embedding on the COCO-QA [23] dataset. Another line of research is to utilize CNNs for feature extraction from both image and question and combine the two features [16]; this approach demonstrates an impressive performance improvement on the DAQUAR [17] dataset by allowing fine-tuning of all parameters. | 1511.05756#8 | 1511.05756#10 | 1511.05756 | [
"1506.00333"
] |
1511.05756#10 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | The prediction of the weight parameters in deep neural networks has been explored in [2] in the context of zero-shot learning. To perform classification of unseen classes, it trains a multi-layer perceptron to predict a binary classifier from a class-specific description in text. However, this method is not directly applicable to ImageQA since finding solutions based on the combination of question and answer is a more complex problem than the one discussed in [2], and ImageQA involves a significantly larger set of candidate answers, which requires many more parameters than the binary classification case. Recently, a parameter reduction technique based on a hashing trick was proposed by Chen et al. [3] to fit a large neural network in a limited memory budget. However, to our knowledge, applying this technique to the dynamic prediction of parameters in deep neural networks has not been attempted yet. | 1511.05756#9 | 1511.05756#11 | 1511.05756 | [
"1506.00333"
] |
1511.05756#11 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | # 3. Algorithm Overview We briefly describe the motivation and formulation of our approach in this section. # 3.1. Motivation Although ImageQA requires different types and levels of image understanding, existing approaches [1, 10, 18] pose the problem as a flat classification task. However, we believe that it is difficult to solve ImageQA using a single deep neural network with fixed parameters. In many CNN-based recognition problems, it is well known that fine-tuning a few layers helps adaptation to new tasks. In addition, some | 1511.05756#10 | 1511.05756#12 | 1511.05756 | [
"1506.00333"
] |
1511.05756#12 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Since the task is deï¬ ned by the question in ImageQA, the weights in the layer are determined depending on the question sen- tence. In addition, a hashing trick is employed to predict a large number of weights in the dynamic parameter layer and avoid parameter explosion. # 3.2. Problem Formulation ImageQA systems predict the best answer Ë a given an im- age I and a question q. Conventional approaches [16, 23] typically construct a joint feature vector based on two inputs I and q and solve a classiï¬ cation problem for ImageQA us- ing the following equation: network. The classiï¬ cation network is a CNN. One of the fully-connected layers in the CNN is the dynamic parame- ter layer, and the weights in the layer are determined adap- tively by the parameter prediction network. The parame- ter prediction network has GRU cells and a fully-connected layer. It takes a question as its input, and generates a real- valued vector, which corresponds to candidate weights for the dynamic parameter layer in the classiï¬ cation network. Given an image and a question, our algorithm estimates the weights in the dynamic parameter layer through hash- ing with the candidate weights obtained from the parameter prediction network. Then, it feeds the input image to the classiï¬ cation network to obtain the ï¬ | 1511.05756#11 | 1511.05756#13 | 1511.05756 | [
"1506.00333"
] |
1511.05756#13 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | nal answer. More de- tails of the proposed network are discussed in the following subsections. # 4.1. Classiï¬ cation Network Ë a = argmax p(a|I, q; θ) aâ â ¦ (1) where â ¦ is a set of all possible answers and θ is a vector for the parameters in the network. On the contrary, we use the question to predict weights in the classiï¬ er and solve the problem. We ï¬ nd the solution by Ë a = argmax p(a|I; θs, θd(q)) aâ â ¦ (2) where θs and θd(q) denote static and dynamic parameters, respectively. Note that the values of θd(q) are determined by the question q. # 4. | 1511.05756#12 | 1511.05756#14 | 1511.05756 | [
"1506.00333"
] |
1511.05756#14 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Network Architecture Figure 2 illustrates the overall architecture of the pro- posed algorithm. The network is composed of two sub- networks: classiï¬ cation network and parameter prediction The classiï¬ cation network is constructed based on VGG 16-layer net [24], which is pre-trained on ImageNet [6]. We remove the last layer in the network and attach three fully- connected layers. The second last fully-connected layer of the network is the dynamic parameter layer whose weights are determined by the parameter prediction network, and the last fully-connected layer is the classiï¬ cation layer whose output dimensionality is equal to the number of possible answers. The probability for each answer is computed by applying a softmax function to the output vector of the ï¬ nal layer. We put the dynamic parameter layer in the second last fully-connected layer instead of the classiï¬ cation layer be- cause it involves the smallest number of parameters. As the number of parameters in the classiï¬ cation layer increases in proportion to the number of possible answers, predicting the weights for the classiï¬ cation layer may not be a good op- tion to general ImageQA problems in terms of scalability. Our choice for the dynamic parameter layer can be inter- preted as follows. By ï¬ xing the classiï¬ cation layer while adapting the immediately preceding layer, we obtain the task-independent semantic embedding of all possible an- swers and use the representation of an input embedded in the answer space to solve an ImageQA problem. Therefore, the relationships of the answers globally learned from all recognition tasks can help solve new ones involving unseen classes, especially in multiple choice questions. For exam- ple, when not the exact ground-truth word (e.g., kitten) but similar words (e.g., cat and kitty) are shown at training time, the network can still predict the close answers (e.g., kit- ten) based on the globally learned answer embedding. | 1511.05756#13 | 1511.05756#15 | 1511.05756 | [
"1506.00333"
] |
1511.05756#15 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Even though we could also exploit the beneï¬ t of answer embed- ding based on the relations among answers to deï¬ ne a loss function, we leave it as our future work. # 4.2. Parameter Prediction Network As mentioned earlier, our classification network has a dynamic parameter layer. That is, for an input vector of the dynamic parameter layer f* = [f/,..., f4]â , its output vector denoted by f° = [f?,..., f2]â is given by f o = Wd(q)f i + b (3) where b denotes a bias and Wd(q) â RM à N denotes the matrix constructed dynamically using the parameter predic- tion network given the input question. In other words, the weight matrix corresponding to the layer is parametrized by a function of the input question q. The parameter prediction network is composed of GRU cells [4] followed by a fully-connected layer, which pro- duces the candidate weights to be used for the construction of weight matrix in the dynamic parameter layer within the classiï¬ cation network. GRU, which is similar to LSTM, is designed to model dependency in multiple time scales. As illustrated in Figure 3, such dependency is captured by adaptively updating its hidden states with gate units. How- ever, contrary to LSTM, which maintains a separate mem- ory cell explicitly, GRU directly updates its hidden states with a reset gate and an update gate. The detailed proce- dure of the update is described below. | 1511.05756#14 | 1511.05756#16 | 1511.05756 | [
"1506.00333"
] |
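Section 4.1 attaches three fully-connected layers to a truncated VGG-16, with the second-to-last one being the dynamic parameter layer whose weights come from the question (Eq. 3, $f^o = W_d(q) f^i + b$). The following is a schematic forward pass of that head; it is only a sketch with illustrative names and shapes (the ReLU placement is an assumption), not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def classification_head(img_feat, W1, b1, Wd_q, bd, Wc, bc):
    """Schematic head of the classification network.

    img_feat : CNN feature vector of the image (from the truncated VGG-16)
    Wd_q, bd : weights of the dynamic parameter layer, predicted from the question
    Wc, bc   : static classification layer over the candidate answers
    """
    h = relu(W1 @ img_feat + b1)   # first added fully-connected layer
    h = relu(Wd_q @ h + bd)        # dynamic parameter layer: f_o = W_d(q) f_i + b (ReLU is an assumption of this sketch)
    logits = Wc @ h + bc           # classification layer, one logit per answer
    e = np.exp(logits - logits.max())
    return e / e.sum()             # softmax over all possible answers
```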
1511.05756#16 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Let w1, ..., wT be the words in a question q, where T is the number of words in the question. In each time step t, given the embedded vector xt for a word wt, the GRU encoder updates its hidden state at time t, denoted by ht, using the following equations: (4) ry, = 0(W,x;, + U,hy_1) a = o(W-x, + Uzhy-1) h, = tanh(W;,x; + Un(ri © hi-1)) hy = (1 â 2) © hy-1 + Zt © by (6) (6) (7) f Input â Output, | input Gate Gate (@)) | | | Modulation | Forget Gate Candidate Activation GRU LSTM Figure 3. Comparison of GRU and LSTM. Contrary to LSTM that contains memory cell explicitly, GRU updates the hidden state di- rectly. where r; and z, respectively denote the reset and update gates at time t, and h, is candidate activation at time ¢. In addition, © indicates element-wise multiplication operator and o(-) is a sigmoid function. Note that the coefficient matrices related to GRU such as W,., W., Wp, U,, Uz, and U), are learned by our training algorithm. By applying this encoder to a question sentence through a series of GRU cells, we obtain the final embedding vector h, â ¬ R* of the question sentence. Once the question embedding is obtained by GRU, the candidate weight vector, p = [p1, . . . , pK]T, is given by applying a fully-connected layer to the embedded question hT as p = WphT where p â RK is the output of the parameter prediction net- work, and Wp is the weight matrix of the fully-connected layer in the parameter prediction network. Note that even though we employ GRU for a parameter prediction network since the pre-trained network for sentence embeddingâ skip-thought vector model [14]â is based on GRU, any form of neural networks, e.g., fully-connected and convo- lutional neural network, can be used to construct the pa- rameter prediction network. # 4.3. Parameter Hashing The weights in the dynamic parameter layers are deter- mined based on the learned model in the parameter predic- tion network given a question. | 1511.05756#15 | 1511.05756#17 | 1511.05756 | [
"1506.00333"
] |
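For reference, the GRU update used by the parameter prediction network (Section 4.2, Eqs. 4-7) is $r_t = \sigma(W_r x_t + U_r h_{t-1})$, $z_t = \sigma(W_z x_t + U_z h_{t-1})$, $\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}))$, and $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$. Below is a minimal NumPy sketch of one step, with illustrative parameter names and no claim of matching the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wr, Ur, Wz, Uz, Wh, Uh):
    """One GRU step over a word embedding x_t and previous hidden state h_prev."""
    r = sigmoid(Wr @ x_t + Ur @ h_prev)              # reset gate, Eq. (4)
    z = sigmoid(Wz @ x_t + Uz @ h_prev)              # update gate, Eq. (5)
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r * h_prev))  # candidate activation, Eq. (6)
    return (1.0 - z) * h_prev + z * h_tilde          # new hidden state, Eq. (7)

# Encoding a question: run the step over the word embeddings and keep the last
# state h_T, which a fully-connected layer then maps to the candidate weights p.
```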
1511.05756#17 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | The most straightforward approach to obtain the weights is to generate the whole ma- trix Wd(q) using the parameter prediction network. How- ever, the size of the matrix is very large, and the network may be overï¬ tted easily given the limited number of train- ing examples. In addition, since we need quadratically more parameters between GRU and the fully-connected layer in the parameter prediction network to increase the dimension- ality of its output, it is not desirable to predict full weight matrix using the network. Therefore, it is preferable to con- struct Wd(q) based on a small number of candidate weights using a hashing trick. We employ the recently proposed random weight sharing technique based on hashing [3] to construct the weights in the dynamic parameter layer. | 1511.05756#16 | 1511.05756#18 | 1511.05756 | [
"1506.00333"
] |
1511.05756#18 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Speciï¬ cally, a single param- eter in the candidate weight vector p is shared by multiple elements of Wd(q), which is done by applying a predeï¬ ned hash function that converts the 2D location in Wd(q) to the 1D index in p. By this simple hashing trick, we can reduce the number of parameters in Wd(q) while maintaining the accuracy of the network [3]. mn be the element at (m, n) in Wd(q), which cor- responds to the weight between mth output and nth input neuron. Denote by Ï (m, n) a hash function mapping a key (m, n) to a natural number in {1, . . . , K}, where K is the dimensionality of p. The ï¬ nal hash function is given by mn = pÏ (m,n) · ξ(m, n) where ξ(m, n) : N à N â {+1, â 1} is another hash func- tion independent of Ï (m, n). This function is useful to re- move the bias of hashed inner product [3]. In our imple- mentation of the hash function, we adopt an open-source implementation of xxHash1. We believe that it is reasonable to reduce the number of free parameters based on the hashing technique as there are many redundant parameters in deep neural networks [7] and the network can be parametrized using a smaller set of can- didate weights. Instead of training a huge number of pa- rameters without any constraint, it would be advantageous practically to allow multiple elements in the weight matrix It is also demonstrated that the to share the same value. number of free parameter can be reduced substantially with little loss of network performance [3]. | 1511.05756#17 | 1511.05756#19 | 1511.05756 | [
"1506.00333"
] |
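Section 4.3 fills the dynamic weight matrix from a much smaller candidate vector: $w^d_{mn} = p_{\psi(m,n)} \cdot \xi(m,n)$ (Eq. 9), where $\psi$ hashes an index pair to one of the K slots of p and $\xi$ is an independent sign hash that removes the bias of the hashed inner product. The sketch below illustrates the mechanism only; the CRC32-based hashes are stand-ins for the xxHash implementation mentioned in the paper.

```python
import numpy as np
import zlib

def psi(m, n, K):
    """Hash a 2-D weight index to a slot of the candidate weight vector p."""
    return zlib.crc32(f"psi:{m},{n}".encode()) % K

def xi(m, n):
    """Independent sign hash in {+1, -1}."""
    return 1.0 if zlib.crc32(f"xi:{m},{n}".encode()) % 2 == 0 else -1.0

def build_dynamic_weights(p, M, N):
    """Construct the M x N matrix W_d(q) from the K predicted candidate weights (Eq. 9)."""
    K = p.shape[0]
    W = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            W[m, n] = p[psi(m, n, K)] * xi(m, n)
    return W
```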
1511.05756#19 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | # 5. Training Algorithm This section discusses the error back-propagation algo- rithm in the proposed network and introduces the tech- niques adopted to enhance performance of the network. # 5.1. Training by Error Back-Propagation The proposed network is trained end-to-end to minimize the error between the ground-truths and the estimated an- swers. The error is back-propagated by chain rule through both the classiï¬ cation network and the parameter prediction network and they are jointly trained by a ï¬ rst-order opti- mization method. Let L denote the loss function. The partial derivatives of L with respect to the kth element in the input and output of the dynamic parameter layer are given respectively by | 1511.05756#18 | 1511.05756#20 | 1511.05756 | [
"1506.00333"
] |
1511.05756#20 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | δi k â ¡ â L â f i k and δo k â ¡ â L â f o k . (10) The two derivatives have the following relation: M 5 = So who, (11) m=1 # 1https://code.google.com/p/xxhash/ Likewise, the derivative with respect to the assigned weights in the dynamic parameter layer is given by â L â wd mn = f i nδo m. (12) As a single output value of the parameter prediction net- work is shared by multiple connections in the dynamic parameter layer, the derivatives with respect to all shared weights need to be accumulated to compute the derivative with respect to an element in the output of the parameter prediction network as follows: OL â Mo OL dwt mn Opr 2 y Ow!,,, OV: m=1n=1 M N OL = =â â &(m, n)I[y(m,n) =k], (13) yy » Ow! where I[·] denotes the indicator function. The gradients of all the preceding layers in the classiï¬ cation and parame- ter prediction networks are computed by the standard back- propagation algorithm. # 5.2. Using Pre-trained GRU Although encoders based on recurrent neural networks (RNNs) such as LSTM [11] and GRU [4] demonstrate im- pressive performance on sentence embedding [19, 25], their beneï¬ ts in the ImageQA task are marginal in comparison to bag-of-words model [23]. One of the reasons for this fact is the lack of language data in ImageQA dataset. Contrary to the tasks that have large-scale training corpora, even the largest ImageQA dataset contains relatively small amount of language data; for example, [1] contains 750K questions in total. Note that the model in [25] is trained using a corpus with more than 12M sentences. | 1511.05756#19 | 1511.05756#21 | 1511.05756 | [
"1506.00333"
] |
1511.05756#21 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | To deal with the deï¬ ciency of linguistic information in ImageQA problem, we transfer the information acquired from a large language corpus by ï¬ ne-tuning the pre-trained embedding network. We initialize the GRU with the skip- thought vector model trained on a book-collection corpus containing more than 74M sentences [14]. Note that the GRU of the skip-thought vector model is trained in an un- supervised manner by predicting the surrounding sentences from the embedded sentences. As this task requires to un- derstand context, the pre-trained model produces a generic sentence embedding, which is difï¬ cult to be trained with a limited number of training examples. By ï¬ ne-tuning our GRU initialized with a generic sentence embedding model for ImageQA, we obtain the representations for questions that are generalized better. # 5.3. Fine-tuning CNN It is very common to transfer CNNs for new tasks in classiï¬ | 1511.05756#20 | 1511.05756#22 | 1511.05756 | [
"1506.00333"
] |
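During back-propagation (Section 5.1), the gradient with respect to a candidate weight accumulates the gradients of every matrix entry that hashes to it: $\partial L / \partial p_k = \sum_{m=1}^{M} \sum_{n=1}^{N} \frac{\partial L}{\partial w^d_{mn}} \, \xi(m,n) \, \mathbb{I}[\psi(m,n) = k]$ (Eq. 13), with $\partial L / \partial w^d_{mn} = f^i_n \delta^o_m$ from Eq. (12). A sketch of that accumulation, reusing the stand-in hash helpers from the hashing sketch above:

```python
import numpy as np

def candidate_weight_grad(f_i, delta_o, K, psi, xi):
    """Accumulate dL/dp over all (m, n) sharing the same hashed slot (Eq. 13).

    f_i     : input vector of the dynamic parameter layer (length N)
    delta_o : gradient of the loss w.r.t. the layer's output (length M)
    """
    dL_dp = np.zeros(K)
    for m in range(delta_o.shape[0]):
        for n in range(f_i.shape[0]):
            dL_dW_mn = f_i[n] * delta_o[m]            # Eq. (12)
            dL_dp[psi(m, n, K)] += dL_dW_mn * xi(m, n)
    return dL_dp
```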
1511.05756#22 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | cation problems, but it is not trivial to ï¬ ne-tune the CNN in our problem. We observe that the gradients below the dynamic parameter layer in the CNN are noisy since the weights are predicted by the parameter prediction net- work. Hence, a straightforward approach to ï¬ ne-tune the CNN typically fails to improve performance, and we em- ploy a slightly different technique for CNN ï¬ ne-tuning to sidestep the observed problem. We update the parameters of the network using new datasets except the part transferred from VGG 16-layer net at the beginning, and start to update the weights in the subnetwork if the validation accuracy is saturated. # 5.4. Training Details Before training, question sentences are normalized to lower cases and preprocessed by a simple tokenization tech- nique as in [29]. We normalize the answers to lower cases and regard a whole answer in a single or multiple words as a separate class. The network is trained end-to-end by back-propagation. Adam [13] is used for optimization with initial learning rate 0.01. We clip the gradient to 0.1 to handle the gradient ex- plosion from the recurrent structure of GRU [22]. Training is terminated when there is no progress on validation accu- racy for 5 epochs. Optimizing the dynamic parameter layer is not straight- forward since the distribution of the outputs in the dynamic parameter layer is likely to change signiï¬ cantly in each batch. Therefore, we apply batch-normalization [12] to the output activations of the layer to alleviate this problem. In addition, we observe that GRU tends to converge fast and overï¬ t data easily if training continues without any restric- tion. We stop ï¬ ne-tuning GRU when the network start to overï¬ t and continue to train the other parts of the network; this strategy improves performance in practice. | 1511.05756#21 | 1511.05756#23 | 1511.05756 | [
"1506.00333"
] |
1511.05756#23 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | # 6. Experiments We now describe the details of our implementation and evaluate the proposed method in various aspects. # 6.1. Datasets We evaluate the proposed network on all public Im- ageQA benchmark datasets such as DAQUAR [17], COCO- QA [23] and VQA [1]. They collected question-answer pairs from existing image datasets and most of the answers are single words or short phrases. DAQUAR is based on NYUDv2 [20] dataset, which is originally designed for indoor segmentation using RGBD images. DAQUAR provides two benchmarks, which are distinguished by the number of classes and the amount of data; DAQUAR-all consists of 6,795 and 5,673 questions for training and testing respectively, and includes 894 cate- gories in answer. DAQUAR-reduced includes only 37 an- swer categories for 3,876 training and 297 testing questions. Some questions in this dataset are associated with a set of multiple answers instead of a single one. The questions in COCO-QA are automatically gener- ated from the image descriptions in MS COCO dataset [15] using the constituency parser with simple question-answer generation rules. The questions in this dataset are typi- cally long and explicitly classiï¬ ed into 4 types depending on the generation rules: object questions, number questions, color questions and location questions. All answers are with one-words and there are 78,736 questions for training and 38,948 questions for testing. Similar to COCO-QA, VQA is also constructed on MS COCO [15] but each question is associated with multiple answers annotated by different people. This dataset con- tains the largest number of questions: 248,349 for train- ing, 121,512 for validation, and 244,302 for testing, where the testing data is splited into test-dev, test-standard, test- challenge and test-reserve as in [15]. Each question is pro- vided with 10 answers to take the consensus of annotators into account. About 90% of answers have single words and 98% of answers do not exceed three words. | 1511.05756#22 | 1511.05756#24 | 1511.05756 | [
"1506.00333"
] |
1511.05756#24 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | # 6.2. Evaluation Metrics DAQUAR and COCO-QA employ both classification accuracy and its relaxed version based on word similarity, WUPS [17]. It uses thresholded Wu-Palmer similarity [28] based on the WordNet [9] taxonomy to compute the similarity between words. For predicted answer set $A^i$ and ground-truth answer set $T^i$ of the $i$th example, WUPS is given by $\mathrm{WUPS} = \frac{1}{N} \sum_{i=1}^{N} \min \left\{ \prod_{a \in A^i} \max_{t \in T^i} \mu(a, t), \; \prod_{t \in T^i} \max_{a \in A^i} \mu(a, t) \right\}$ (14) where $\mu(\cdot, \cdot)$ denotes the thresholded Wu-Palmer similarity between prediction and ground-truth. We use two threshold values (0.9 and 0.0) in our evaluation. The VQA dataset provides an open-ended task and a multiple-choice task for evaluation. For the open-ended task, the answer can be any word or phrase, while an answer should be chosen out of 18 candidate answers in the multiple-choice task. In both cases, answers are evaluated by an accuracy reflecting human consensus. For predicted answer $a_i$ and target answer set $T^i$ of the $i$th example, the accuracy is given by $\mathrm{Acc}_{\mathrm{VQA}} = \frac{1}{N} \sum_{i=1}^{N} \min \left\{ \frac{\sum_{t \in T^i} \mathbb{I}[a_i = t]}{3}, 1 \right\}$ (15) where $\mathbb{I}[\cdot]$ denotes an indicator function. | 1511.05756#23 | 1511.05756#25 | 1511.05756 | [
"1506.00333"
] |
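Equation (15) above gives a prediction credit of min(number of agreeing annotators / 3, 1), averaged over examples. A small self-contained sketch, assuming each example's ten annotator answers are available as plain strings:

```python
def vqa_accuracy(predictions, annotator_answers):
    """VQA consensus accuracy (Eq. 15): full credit once 3+ annotators agree."""
    total = 0.0
    for pred, answers in zip(predictions, annotator_answers):
        agreements = sum(1 for t in answers if t == pred)
        total += min(agreements / 3.0, 1.0)
    return total / len(predictions)

# Example: two of ten annotators agreeing yields a score of 2/3 for that question.
print(vqa_accuracy(["cat"], [["cat", "cat", "dog", "dog", "dog", "dog", "dog", "dog", "dog", "dog"]]))
```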
1511.05756#25 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | In other words, a predicted answer is regarded as a correct one if at least three annotators agree, and the score depends on the number of agreements if the predicted answer is not correct. Table 1. Evaluation results on VQA test-dev in terms of AccVQA All Y/N Num Others All Y/N Num Others Question [1] 48.09 75.66 36.70 27.14 53.68 75.71 37.05 38.64 28.13 64.01 00.42 03.77 30.53 69.87 00.45 03.76 52.64 75.55 33.67 37.37 58.97 75.59 34.35 50.33 LSTM Q [1] 48.76 78.20 35.68 26.59 54.75 78.22 36.82 38.78 LSTM Q+I [1] 53.74 78.94 35.24 36.42 57.17 78.95 35.80 43.41 54.70 77.09 36.62 39.67 59.92 77.10 37.48 50.31 RAND-GRU 55.46 79.58 36.20 39.23 61.18 79.64 38.07 50.63 CNN-FIXED 56.74 80.48 37.20 40.90 61.95 80.56 38.32 51.40 57.22 80.71 37.24 41.69 62.48 80.79 38.94 52.16 Table 2. Evaluation results on VQA test-standard Open-Ended Multiple-Choice All Y/N Num Others All Y/N Num Others 83.30 95.77 83.39 72.67 Human [1] - - - - - - - - - DPPnet 57.36 80.28 36.92 42.24 62.69 80.35 38.79 52.79 - - # 6.3. Results We test three independent datasets, VQA, COCO-QA, and DAQUAR, and ï¬ | 1511.05756#24 | 1511.05756#26 | 1511.05756 | [
"1506.00333"
] |
1511.05756#26 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | rst present the results for VQA dataset in Table 1. The proposed Dynamic Parameter Prediction network (DPPnet) outperforms all existing methods non- trivially. We performed controlled experiments to ana- lyze the contribution of individual components in the pro- posed algorithmâ dynamic parameter prediction, use of pre-trained GRU and CNN ï¬ ne-tuning, and trained 3 addi- tional models, CONCAT, RAND-GRU, and CNN-FIXED. CNN-FIXED is useful to see the impact of CNN ï¬ ne-tuning since it is identical to DPPnet except that the weights in CNN are ï¬ | 1511.05756#25 | 1511.05756#27 | 1511.05756 | [
"1506.00333"
] |
1511.05756#27 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | xed. RAND-GRU is the model without GRU pre-training, where the weights of GRU and word embed- ding model are initialized randomly. It does not ï¬ ne-tune CNN either. CONCAT is the most basic model, which predicts answers using the two fully-connected layers for a combination of CNN and GRU features. Obviously, it does not employ any of new components such as parameter prediction, pre-trained GRU and CNN ï¬ ne-tuning. The results of the controlled experiment are also illus- trated in Table 1. | 1511.05756#26 | 1511.05756#28 | 1511.05756 | [
"1506.00333"
] |
1511.05756#28 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | CONCAT already outperforms LSTM Q+I by integrating GRU instead of LSTM [4] and batch normalization. RAND-GRU achieves better accuracy by employing dynamic parameter prediction additionally. It is interesting that most of the improvement comes from yes/no questions, which may involve various kinds of tasks since it is easy to ask many different aspects in an input image for binary classiï¬ cation. CNN-FIXED improves accuracy further by adding GRU pre-training, and our ï¬ nal model DPPnet achieves the state-of-the-art performance on VQA dataset with large margins as illustrated in Table 1 and 2. Table 3, 4, and 5 illustrate the results by all algorithms in- cluding ours that have reported performance on COCO-QA, DAQUAR-reduced, DAQUAR-all datasets. | 1511.05756#27 | 1511.05756#29 | 1511.05756 | [
"1506.00333"
] |
1511.05756#29 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | The proposed Table 3. Evaluation results on COCO-QA IMG+BOW [23] 2VIS+BLSTM [23] Ensemble [23] ConvQA [16] DPPnet Acc 55.92 55.09 57.84 54.95 61.19 WUPS 0.9 66.78 65.34 67.90 65.36 70.84 WUPS 0.0 88.99 88.64 89.52 88.58 90.61 Table 4. Evaluation results on DAQUAR reduced Acc - 34.68 34.17 2VIS+BLSTM [23] 35.78 36.94 39.66 44.48 Multiworld [17] Askneuron [18] IMG+BOW [23] Ensemble [23] ConvQA [16] DPPnet Single answer 0.9 - Multiple answers 0.9 Acc 0.0 - 12.73 40.76 79.54 29.27 44.99 81.48 46.83 82.15 48.15 82.68 44.86 83.06 38.72 49.56 83.95 44.44 0.0 18.10 51.47 36.50 79.47 - - - - - - - - - 44.19 79.52 49.06 82.57 Table 5. Evaluation results on DAQUAR all Human [17] Multiworld [17] Askneuron [18] ConvQA [16] DPPnet Single answer 0.9 - - Multiple answers 0.9 Acc - - 19.43 23.40 28.98 Acc 0.0 50.20 - - 07.86 25.28 62.00 17.49 29.59 62.95 20.69 34.80 67.81 25.60 0.0 50.82 67.27 11.86 38.79 23.28 57.76 25.89 55.48 31.03 60.77 algorithm outperforms all existing approaches consistently in all benchmarks. | 1511.05756#28 | 1511.05756#30 | 1511.05756 | [
"1506.00333"
] |
1511.05756#30 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | In Table 4 and 5, single answer and mul- tiple answers denote the two subsets of questions divided by the number of ground-truth answers. Also, the numbers (0.9 and 0.0) in the second rows are WUPS thresholds. To understand how the parameter prediction network un- derstand questions, we present several representative ques- tions before and after ï¬ ne-tuning GRU in a descending or- der based on their cosine similarities to the query ques- tion in Table 6. The retrieved sentences are frequently de- termined by common subjective or objective words before ï¬ ne-tuning while they rely more on the tasks to be solved after ï¬ ne-tuning. The qualitative results of the proposed algorithm are pre- sented in Figure 4. In general, the proposed network is suc- cessful to handle various types of questions that need differ- ent levels of semantic understanding. Figure 4(a) shows that the network is able to adapt recognition tasks depending on questions. However, it often fails in the questions asking the number of occurrences since these questions involve the dif- ï¬ cult tasks (e.g., object detection) to learn only with image level annotations. On the other hand, the proposed network is effective to ï¬ nd the answers for the same question on dif- ferent images fairly well as illustrated in Figure 4(b). Refer to our project website2 for more comprehensive qualitative results. # 2http://cvlab.postech.ac.kr/research/dppnet/ Table 6. Retrieved sentences before and after ï¬ ne-tuning GRU | 1511.05756#29 | 1511.05756#31 | 1511.05756 | [
"1506.00333"
] |
1511.05756#31 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Query question What body part has most recently contacted the ball? Is the person feeding the birds? Before ï¬ ne-tuning After ï¬ ne-tuning What shape is the ball? What colors are the ball? What team has the ball? How many times has the girl hit the ball? What number is on the womenâ s Jersey closest to the ball? What is unusual about the ball? What is the speed of the ball? What body part is the boy holding the bear by? What body part is on the right side of this picture? What human body part is on the table? What body parts appear to be touching? What partial body parts are in the foreground? What part of the body does the woman on the left have on the ramp? | 1511.05756#30 | 1511.05756#32 | 1511.05756 | [
"1506.00333"
] |
1511.05756#32 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Is he feeding the birds? Is the reptile ï¬ ghting the birds? Does the elephant want to play with the birds? What is the fence made of behind the birds? Where are the majority of the birds? What colors are the birds? Is this man feeding the pigeons? Is he feeding the birds? Is the person feeding the sheep? Is the man feeding the pigeons? Is she feeding the pigeons? Is that the zookeeper feeding the giraffes? Is the reptile ï¬ ghting the birds? | 1511.05756#31 | 1511.05756#33 | 1511.05756 | [
"1506.00333"
] |
1511.05756#33 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Name a body part that would not be visible if the womanâ s mouth was closed? Does the elephant want to play with the birds? e N Q: How does the woman feel? DPPnet: happy Q: What type of hat is she wearing? DPPnet: cowboy VW bs = Q: Is it raining? DPPnet: no â Q: What is he holding? DPPnet: umbrella Q: What is he doing? DPPnet: skateboarding Q: Is this person dancing? DPPnet: no = 2 Q: | 1511.05756#32 | 1511.05756#34 | 1511.05756 | [
"1506.00333"
] |
1511.05756#34 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | How many cranes are in the image? DPPnet: 2 (3) Q: How many people are on the bench? DPPnet: 2 (1) (a) Result of the proposed algorithm on multiple questions for a single image Q: What is the boy holding? DPPnet: surfboard __Q: What animal is shown? DPPnet: giraffe Q: What is this room? DPPnet: living room DPPnet: kitchen Q: What is the animal doing? DPPnet: resting (relaxing) DPPnet: swimming (fishing) | 1511.05756#33 | 1511.05756#35 | 1511.05756 | [
"1506.00333"
] |
1511.05756#35 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | (b) Results of the proposed algorithm on a single common question for multiple images Figure 4. Sample images and questions in VQA dataset [1]. Each question requires a different type and/or level of understanding of the corresponding input image to ï¬ nd correct answer. Answers in blue are correct while answers in red are incorrect. For the incorrect answers, ground-truth answers are provided within the parentheses. # 7. Conclusion We proposed a novel architecture for image question an- swering based on two subnetworksâ | 1511.05756#34 | 1511.05756#36 | 1511.05756 | [
"1506.00333"
] |
1511.05756#36 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | classiï¬ cation network and parameter prediction network. The classiï¬ cation net- work has a dynamic parameter layer, which enables the classiï¬ cation network to adaptively determine its weights through the parameter prediction network. While predicting all entries of the weight matrix is infeasible due to its large dimensionality, we relieved this limitation using parame- ter hashing and weight sharing. The effectiveness of the proposed architecture is supported by experimental results showing the state-of-the-art performances on three different datasets. Note that the proposed method achieved outstand- ing performance even without more complex recognition processes such as referencing objects. We believe that the proposed algorithm can be extended further by integrating attention model [29] to solve such difï¬ | 1511.05756#35 | 1511.05756#37 | 1511.05756 | [
"1506.00333"
] |
1511.05756#37 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | cult problems. # References [1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: visual question answering. In ICCV, 2015. 1, 2, 5, 6, 7, 8 [2] J. Ba, K. Swersky, S. Fidler, and R. Salakhutdinov. | 1511.05756#36 | 1511.05756#38 | 1511.05756 | [
"1506.00333"
] |
1511.05756#38 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Predict- ing deep zero-shot convolutional neural networks using tex- tual descriptions. In ICCV, 2015. 2 [3] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In ICML, 2015. 2, 4, 5 [4] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop, 2014. 4, 5, 7 I. | 1511.05756#37 | 1511.05756#39 | 1511.05756 | [
"1506.00333"
] |
1511.05756#39 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In CVPR, 2014. 1 [6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 3 [7] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. | 1511.05756#38 | 1511.05756#40 | 1511.05756 | [
"1506.00333"
] |
1511.05756#40 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Predicting parameters in deep learning. In NIPS, 2013. 5 [8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: a deep convolutional acti- vation feature for generic visual recognition. In ICML, 2014. 1 [9] C. Fellbaum. Wordnet: An electronic database, 1998. 6 [10] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. | 1511.05756#39 | 1511.05756#41 | 1511.05756 | [
"1506.00333"
] |
1511.05756#41 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Are you talking to a machine? dataset and methods for mul- tilingual image question answering. In NIPS, 2015. 1, 2 [11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â 1780, 1997. 5 [12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 6 [13] D. Kingma and J. Ba. Adam: | 1511.05756#40 | 1511.05756#42 | 1511.05756 | [
"1506.00333"
] |
1511.05756#42 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | A method for stochastic opti- mization. In ICLR, 2015. 6 [14] R. Kiros, Y. Zhu, R. Salakhutdinov, R. S. Zemel, A. Torralba, R. Urtasun, and S. Fidler. Skip-thought vectors. In NIPS, 2015. 2, 4, 5 [15] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft COCO: com- mon objects in context. | 1511.05756#41 | 1511.05756#43 | 1511.05756 | [
"1506.00333"
] |
1511.05756#43 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | In ECCV, 2014. 6 [16] L. Ma, Z. Lu, and H. Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015. 1, 2, 3, 7 [17] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncer- tain input. | 1511.05756#42 | 1511.05756#44 | 1511.05756 | [
"1506.00333"
] |
1511.05756#44 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | In NIPS, 2014. 1, 2, 6, 7 [18] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neu- rons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 2, 7 [19] T. Mikolov, M. Karaï¬ Â´at, L. Burget, J. Cernock`y, and S. Khu- danpur. | 1511.05756#43 | 1511.05756#45 | 1511.05756 | [
"1506.00333"
] |
1511.05756#45 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Recurrent neural network based language model. In INTERSPEECH, pages 1045â 1048, 2010. 5 [20] P. K. Nathan Silberman, Derek Hoiem and R. Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012. 6 [21] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolu- tional neural networks. In CVPR, 2014. 1 | 1511.05756#44 | 1511.05756#46 | 1511.05756 | [
"1506.00333"
] |
1511.05756#46 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | [22] R. Pascanu, T. Mikolov, and Y. Bengio. On the difï¬ culty of training recurrent neural networks. In ICML, 2013. 6 [23] M. Ren, R. Kiros, and R. S. Zemel. Exploring models and data for image question answering. In NIPS, 2015. 1, 2, 3, 5, 6, 7 | 1511.05756#45 | 1511.05756#47 | 1511.05756 | [
"1506.00333"
] |
1511.05756#47 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | [24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 1, 3 [25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. 5 [26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015. 1 [27] L. Wolf. Deepface: Closing the gap to human-level perfor- mance in face veriï¬ | 1511.05756#46 | 1511.05756#48 | 1511.05756 | [
"1506.00333"
] |
1511.05756#48 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | cation. In CVPR, 2014. 1 [28] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In ACL, pages 133â 138, 1994. 6 [29] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural im- age caption generation with visual attention. In ICML, 2015. 6, 9 [30] J. Yao, S. Fidler, and R. | 1511.05756#47 | 1511.05756#49 | 1511.05756 | [
"1506.00333"
] |
1511.05756#49 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | Urtasun. Describing the scene as a whole: Joint object detection, scene classiï¬ cation and se- mantic segmentation. In CVPR, 2012. 1 [31] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014. 1 | 1511.05756#48 | 1511.05756 | [
"1506.00333"
] |
|
1511.05234#0 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | arXiv:1511.05234v2 [cs.CV] 19 Mar 2016 # Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering Huijuan Xu and Kate Saenko Department of Computer Science, UMass Lowell, USA [email protected], [email protected] Abstract. We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process of which constitutes a single " | 1511.05234#1 | 1511.05234 | [
"1511.03416"
] |
|
1511.05234#1 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | hopâ in the network. We propose a novel spatial attention architecture that aligns words with image patches in the ï¬ rst hop, and obtain improved results by adding a second atten- tion hop which considers the whole question to choose visual evidence based on the results of the ï¬ rst hop. To better understand the inference process learned by the network, we design synthetic questions that specif- ically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3]. | 1511.05234#0 | 1511.05234#2 | 1511.05234 | [
"1511.03416"
] |
1511.05234#2 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Keywords: Visual Question Answering, Spatial Attention, Memory Net- work, Deep Learning # 1 Introduction Visual Question Answering (VQA) is an emerging interdisciplinary research problem at the intersection of computer vision, natural language processing and artiï¬ cial intelligence. It has many real-life applications, such as automatic query- ing of surveillance video [4] or assisting the visually impaired [5]. Compared to the recently popular image captioning task [6,7,8,9], VQA requires a deeper un- derstanding of the image, but is considerably easier to evaluate. It also puts more focus on artiï¬ cial intelligence, namely the inference process needed to produce the answer to the visual question. 2 | 1511.05234#1 | 1511.05234#3 | 1511.05234 | [
"1511.03416"
] |
1511.05234#3 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | What is the child standing on? skateboard What color is the phone booth? blue What color is the phone booth? blue Fig. 1. We propose a Spatial Memory Network for VQA (SMem-VQA) that answers questions about images using spatial inference. The ï¬ gure shows the inference process of our two-hop model on examples from the VQA dataset [2]. In the ï¬ rst hop (middle), the attention process captures the correspondence between individual words in the question and image regions. High attention regions (bright areas) are marked with bounding boxes and the corresponding words are highlighted using the same color. In the second hop (right), the ï¬ ne-grained evidence gathered in the ï¬ rst hop, as well as an embedding of the entire question, are used to collect more exact evidence to predict the answer. (Best viewed in color.) In one of the early works [1], VQA is seen as a Turing test proxy. The authors propose an approach based on handcrafted features using a semantic parse of the question and scene analysis of the image combined in a latent-world Bayesian framework. More recently, several end-to-end deep neural networks that learn features directly from data have been applied to this problem [10,11]. Most of these are directly adapted from captioning models [6,7,8], and utilize a recurrent LSTM network, which takes the question and Convolutional Neural Net (CNN) image features as input, and outputs the answer. Though the deep learning methods in [10,11] have shown great improvement compared to the handcrafted feature method [1], they have their own drawbacks. These models based on the LSTM reading in both the question and the image features do not show a clear improvement compared to an LSTM reading in the question only [10,11]. Fur- thermore, the rather complicated LSTM models obtain similar or worse accuracy to a baseline model which concatenates CNN features and a bag-of-words ques- tion embedding to predict the answer, see the IMG+BOW model in [11] and the iBOWIMG model in [3]. A major drawback of existing models is that they do not have any explicit notion of object position, and do not support the computation of intermedi- ate results based on spatial attention. Our intuition is that answering visual questions often involves looking at diï¬ erent spatial regions and comparing their contents and/or locations. | 1511.05234#2 | 1511.05234#4 | 1511.05234 | [
"1511.03416"
] |
1511.05234#4 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | For example, to answer the questions in Fig. 1, we need to look at a portion of the image, such as the child or the phone booth. Similarly, to answer the question â Is there a cat in the basket?â in Fig. 2, we can ï¬ rst ï¬ nd the basket and the cat objects, and then compare their locations. We propose a new deep learning approach to VQA that incorporates explicit spatial attention, which we call the Spatial Memory Network VQA (SMem- VQA). Our approach is based on memory networks, which have recently been proposed for text Question Answering (QA) [12,13]. Memory networks combine learned text embeddings with an attention mechanism and multi-step inference. The text QA memory network stores textual knowledge in its â memoryâ in the form of sentences, and selects relevant sentences to infer the answer. However, in VQA, the knowledge is in the form of an image, thus the memory and the question come from diï¬ erent modalities. We adapt the end-to-end memory net- work [13] to solve visual question answering by storing the convolutional network outputs obtained from diï¬ erent receptive ï¬ elds into the memory, which explicitly allows spatial attention over the image. We also propose to repeat the process of gathering evidence from attended regions, enabling the model to update the answer based on several attention steps, or â | 1511.05234#3 | 1511.05234#5 | 1511.05234 | [
"1511.03416"
] |
1511.05234#5 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | hopsâ . The entire model is trained end-to-end and the evidence for the computed answer can be visualized using the attention weights. To summarize our contributions, in this paper we â propose a novel multi-hop memory network with spatial attention for the VQA task which allows one to visualize the spatial inference process used by the deep network (a CAFFE [14] implementation will be made available), â design an attention architecture in the ï¬ rst hop which uses each word em- bedding to capture ï¬ ne-grained alignment between the image and question, â create a series of synthetic questions that explicitly require spatial inference to analyze the working principles of the network, and show that it learns logical inference rules by visualizing the attention weights, â provide an extensive evaluation of several existing models and our own model on the same publicly available datasets. Sec. 2 introduces relevant work on memory networks and attention models. Sec. 3 describes our design of the multi-hop memory network architecture for visual question answering (SMem-VQA). Sec. 4 visualizes the inference rules learned by the network for synthetic spatial questions and shows the experimen- tal results on DAQUAR [1] and VQA [2] datasets. Sec. 5 concludes the paper. # 2 Related work | 1511.05234#4 | 1511.05234#6 | 1511.05234 | [
"1511.03416"
] |
1511.05234#6 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Before the popularity of visual question answering (VQA), text question an- swering (QA) had already been established as a mature research problem in the area of natural language processing. Previous QA methods include searching for the key words of the question in a search engine [15]; parsing the question as a knowledge base (KB) query [16]; or embedding the question and using a similarity measurement to ï¬ nd evidence for the answer [17]. Recently, memory networks were proposed for solving the QA problem. [12] ï¬ rst introduces the memory network as a general model that consists of a memory and four compo- nents: input feature map, generalization, output feature map and response. The model is investigated in the context of question answering, where the long-term | 1511.05234#5 | 1511.05234#7 | 1511.05234 | [
"1511.03416"
] |
1511.05234#7 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | 3 4 memory acts as a dynamic knowledge base and the output is a textual response. [13] proposes a competitive memory network model that uses less supervision, called end-to-end memory network, which has a recurrent attention model over a large external memory. The Neural Turing Machine (NTM) [18] couples a neural network to external memory and interacts with it by attentional processes to in- fer simple algorithms such as copying, sorting, and associative recall from input and output examples. In this paper, we solve the VQA problem using a multi- modal memory network architecture that applies a spatial attention mechanism over an input image guided by an input text question. The neural attention mechanism has been widely used in diï¬ erent areas of computer vision and natural language processing, see for example the atten- tion models in image captioning [19], video description generation [20], machine translation [21][22] and machine reading systems [23]. Most methods use the soft attention mechanism ï¬ rst proposed in [21], which adds a layer to the network that predicts soft weights and uses them to compute a weighted combination of the items in memory. The two main types of soft attention mechanisms diï¬ er in the function that aligns the input feature vector and the candidate feature vectors in order to compute the soft attention weights. The ï¬ rst type uses an alignment function based on â concatenationâ of the input and each candidate (we use the term â concatenationâ as described [22]), and the second type uses an alignment function based on the dot product of the input and each candi- date. The â concatenationâ alignment function adds one input vector (e.g. hidden state vector of the LSTM) to each candidate feature vector, embeds the result- ing vectors into scalar values, and then applies the softmax function to generate the attention weight for each candidate. [19][20][21][23] use the â concatenationâ alignment function in their soft attention models and [24] gives a literature review of such models applied to diï¬ erent tasks. On the other hand, the dot product alignment function ï¬ rst projects both inputs to a common vector em- bedding space, then takes the dot product of the two input vectors, and applies a softmax function to the resulting scalar value to produce the attention weight for each candidate. The end-to-end memory network [13] uses the dot product alignment function. | 1511.05234#6 | 1511.05234#8 | 1511.05234 | [
"1511.03416"
] |
1511.05234#8 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | In [22], the authors compare these two alignment functions in an attention model for the neural machine translation task, and ï¬ nd that their implementation of the â concatenationâ alignment function does not yield good performance on their task. Motivated by this, in this paper we use the dot product alignment function in our Spatial Memory Network. VQA is related to image captioning. Several early papers about VQA directly adapt the image captioning models to solve the VQA problem [10][11] by gen- erating the answer using a recurrent LSTM network conditioned on the CNN output. | 1511.05234#7 | 1511.05234#9 | 1511.05234 | [
"1511.03416"
] |
1511.05234#9 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | But these modelsâ performance is still limited [10][11]. [25] proposes a new dataset and uses a similar attention model to that in image captioning [19], but does not give results on the more common VQA benchmark [2], and our own implementation of this model is less accurate on [2] than other baseline models. [3] summarizes several recent papers reporting results on the VQA dataset [2] on arxiv.org and gives a simple but strong baseline model (iBOWIMG) on this dataset. This simple baseline concatenates the image features with the bag of word embedding question representation and feeds them into a softmax classiï¬ er to predict the answer. The iBOWIMG model beats most VQA models consid- ered in the paper. Here, we compare our proposed model to the VQA models (namely, the ACK model [26] and the DPPnet model [27]) which have compa- rable or better results than the iBOWIMG model. The ACK model in [26] is essentially the same as the LSTM model in [11], except that it uses image at- tribute features, the generated image caption and relevant external knowledge from a knowledge base as the input to the LSTMâ | 1511.05234#8 | 1511.05234#10 | 1511.05234 | [
"1511.03416"
] |
1511.05234#10 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | s ï¬ rst time step. The DPPnet model in [27] tackles VQA by learning a convolutional neural network (CNN) with some parameters predicted from a separate parameter prediction network. Their parameter prediction network uses a Gate Recurrent Unit (GRU) to gen- erate a question representation, and based on this question input, maps the predicted weights to CNN via hashing. Neither of these models [26][27] contain a spatial attention mechanism, and they both use external data in addition to the VQA dataset [2], e.g. the knowledge base in [26] and the large-scale text corpus used to pre-train the GRU question representation [27]. In this paper, we explore a complementary approach of spatial attention to both improve perfor- mance and visualize the networkâ s inference process, and obtain improved results without using external data compared to the iBOWIMG model [3] as well as the ACK model [26] and the DPPnet model [27] which use external data. # 3 Spatial Memory Network for VQA | 1511.05234#9 | 1511.05234#11 | 1511.05234 | [
"1511.03416"
] |
1511.05234#11 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We ï¬ rst give an overview of the proposed SMem-VQA network, illustrated in Fig. 2 (a). Sec. 3.1 details the word-guided spatial attention process of the ï¬ rst hop shown in Fig. 2 (b), and Sec. 3.2 describes adding a second hop into SMem- VQA network. The input to our network is a question comprised of a variable-length se- quence of words, and an image of ï¬ xed size. Each word in the question is ï¬ rst represented as a one-hot vector in the size of the vocabulary, with a value of one only in the corresponding word position and zeros in the other posi- tions. Each one-hot vector is then embedded into a real-valued word vector, V = {vj | vj â RN ; j = 1, · · · , T }, where T is the maximum number of words in the question and N is the dimensionality of the embedding space. Sentences with length less than T are padded with special â 1 value, which are embedded to all-zero word vector. The words in questions are used to compute attention over the visual mem- ory, which contains extracted image features. The input image is processed by a convolutional neural network (CNN) to extract high-level M -dimensional vi- sual features on a grid of spatial locations. | 1511.05234#10 | 1511.05234#12 | 1511.05234 | [
"1511.03416"
] |
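To make the question preprocessing described in the record above concrete, here is a minimal NumPy sketch of the word embedding step: each word index is looked up in a learned embedding matrix, and questions shorter than T are padded with a special -1 index that maps to the all-zero vector. The vocabulary size, embedding width N and maximum length T below are illustrative assumptions, not values prescribed at this point of the paper.

```python
import numpy as np

vocab_size, N, T = 7477, 512, 10                   # assumed sizes for illustration
embedding = 0.01 * np.random.randn(vocab_size, N)  # stands in for the learned embedding

def embed_question(word_ids):
    """word_ids: list of vocabulary indices; -1 marks padding."""
    padded = word_ids + [-1] * (T - len(word_ids))
    rows = [np.zeros(N) if w < 0 else embedding[w] for w in padded]
    return np.stack(rows)                          # V with shape (T, N)

V = embed_question([12, 7, 105, 3])                # hypothetical 4-word question
print(V.shape)                                     # (10, 512)
```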
1511.05234#12 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Specifically, we use S = {si | si ∈ R^M; i = 1, ..., L} to represent the spatial CNN features at each of the L grid locations. In this paper, the spatial feature outputs of the last convolutional layer of GoogLeNet (inception 5b/output) [28] are used as the visual features for the image. Fig. 2. Our proposed Spatial Memory Network for Visual Question Answering (SMem-VQA). (a) Overview. First, the CNN activation vectors S = {si} at image locations i are projected into the semantic space of the question word vectors vj using the "attention" visual embedding WA (Sec. 3). The results are then used to infer spatial attention weights Watt using the word-guided attention process shown in (b). (b) Word-guided attention. This process predicts attention determined by the question word that has the maximum correlation with embedded visual features at each location, e.g. choosing the word basket to attend to the location of the basket in the above image (Sec. 3.1). The resulting spatial attention weights Watt are then used to compute a weighted sum over the visual features embedded via a separate " | 1511.05234#11 | 1511.05234#13 | 1511.05234 | [
"1511.03416"
] |
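The spatial memory itself is just the CNN activation volume rearranged into one feature vector per grid location. A small sketch under the assumption of a 1024-channel 7x7 map (roughly what GoogLeNet's inception_5b/output produces for a 224x224 input); the random array stands in for real activations.

```python
import numpy as np

M, H, W = 1024, 7, 7                    # assumed channels and spatial grid
conv_map = np.random.randn(M, H, W)     # placeholder for real CNN activations

L = H * W                               # number of memory slots (grid locations)
S = conv_map.reshape(M, L).T            # S has shape (L, M); row i is the feature s_i
print(S.shape)                          # (49, 1024)
```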
1511.05234#13 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | evidence" transformation WE, e.g., selecting evidence for the cat concept at the basket location. Finally, the weighted evidence vector Satt is combined with the full question embedding Q to predict the answer. An additional hop can repeat the process to gather more evidence (Sec. 3.2). The convolutional image feature vectors at each location are embedded into a common semantic space with the word vectors. Two different embeddings are used: the "attention" embedding WA and the "evidence" embedding WE. The attention embedding projects each visual feature vector such that its combination with the embedded question words generates the attention weight at that location. The evidence embedding detects the presence of semantic concepts or objects, and the embedding results are multiplied with the attention weights and summed over all locations to generate the visual evidence vector Satt. Finally, the visual evidence vector is combined with the question representation and used to predict the answer for the given image and question. In the next section, we describe the one-hop Spatial Memory network model and the specific attention mechanism it uses in more detail. # 3.1 Word Guided Spatial Attention in One-Hop Model Rather than using the bag-of-words question representation to guide attention, the attention architecture in the first hop (Fig. 2(b)) uses each word vector separately to extract correlated visual features in memory. The intuition is that the BOW representation may be too coarse, and letting each word select a related region may provide more fine-grained attention. The correlation matrix C ∈ R^{T×L} between word vectors V and visual features S is computed as C = V · (S · WA + bA)^T (1) | 1511.05234#12 | 1511.05234#14 | 1511.05234 | [
"1511.03416"
] |
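Eq. (1) above is a plain matrix product once the visual features have been projected into the word-embedding space. A minimal NumPy sketch with assumed dimensions; the random arrays are placeholders for learned parameters and real features, not the trained model.

```python
import numpy as np

T, N, L, M = 10, 512, 49, 1024          # assumed sizes for illustration
V   = np.random.randn(T, N)             # embedded question words v_j
S   = np.random.randn(L, M)             # spatial CNN features s_i (the memory)
W_A = 0.01 * np.random.randn(M, N)      # "attention" embedding of the visual features
b_A = np.zeros((L, N))                  # bias term

C = V @ (S @ W_A + b_A).T               # Eq. (1): T x L word/location correlations
print(C.shape)                          # (10, 49)
```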
1511.05234#14 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | where WA ∈ R^{M×N} contains the attention embedding weights of the visual features S, and bA ∈ R^{L×N} is the bias term. This correlation matrix is the dot product of each word embedding with each spatial location's visual feature, so each value in the correlation matrix C measures the similarity between a word and a location's visual feature. The spatial attention weights Watt are calculated by taking the maximum over the word dimension T of the correlation matrix C, selecting the highest correlation value for each spatial location, and then applying the softmax function: Watt = softmax(max_{i=1,...,T}(Ci)), Ci ∈ | 1511.05234#13 | 1511.05234#15 | 1511.05234 | [
"1511.03416"
] |
1511.05234#15 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | R^L (2) The resulting attention weights Watt ∈ R^L are high for selected locations and low for other locations, with the sum of the weights equal to 1. For instance, in the example shown in Fig. 2, the question "Is there a cat in the basket?" produces high attention weights for the location of the basket because of the high correlation of the word vector for basket with the visual features at that location. The evidence embedding WE projects the visual features S to produce high activations for certain semantic concepts. E.g., in Fig. 2, it has high activations in the region containing the cat. The results of this evidence embedding are then multiplied by the generated attention weights Watt, and summed to produce the selected visual "evidence" vector Satt ∈ R^N: Satt = Watt · (S · WE + bE) (3) where WE ∈ R^{M×N} are the evidence embedding weights of the visual features S, and bE ∈ R^{L×N} is the bias term. | 1511.05234#14 | 1511.05234#16 | 1511.05234 | [
"1511.03416"
] |
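The word-guided attention of Eq. (2) and the evidence aggregation of Eq. (3) then reduce to a max over the word dimension, a softmax over locations, and a weighted sum. Again a hedged NumPy sketch with assumed sizes and random placeholders rather than trained weights.

```python
import numpy as np

T, N, L, M = 10, 512, 49, 1024          # assumed sizes for illustration
C   = np.random.randn(T, L)             # correlation matrix from Eq. (1)
S   = np.random.randn(L, M)             # spatial CNN features (the memory)
W_E = 0.01 * np.random.randn(M, N)      # "evidence" embedding
b_E = np.zeros((L, N))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

W_att = softmax(C.max(axis=0))          # Eq. (2): best word per location, softmax over locations
S_att = W_att @ (S @ W_E + b_E)         # Eq. (3): weighted visual evidence
print(W_att.sum(), S_att.shape)         # ~1.0 and (512,)
```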
1511.05234#16 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | In our running example, this step accumulates cat presence features at the basket location. Finally, the sum of this evidence vector Satt and the question embedding Q is used to predict the answer for the given image and question. For the question representation Q, we choose the bag-of-words (BOW). Other question representations, such as an LSTM, can also be used; however, BOW has fewer parameters yet has shown good performance. As noted in [29], the simple BOW model performs roughly as well as, if not better than, the sequence-based LSTM for the VQA task. | 1511.05234#15 | 1511.05234#17 | 1511.05234 | [
"1511.03416"
] |
1511.05234#17 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Specifically, we compute Q = WQ · V + bQ (4) where WQ ∈ R^T represents the BOW weights for the word vectors V, and bQ ∈ R^N is the bias term. The final prediction P is P = softmax(WP · f(Satt + Q) + bP) (5) where WP ∈ R^{K×N}, the bias term bP ∈ R^K, and K represents the number of possible prediction answers. f is the activation function, and we use ReLU here. In our running example, this step adds the evidence gathered for cat near the basket location to the question, and, since the cat was not found, predicts the answer "no". The attention and evidence computation steps can optionally be repeated in another hop, before predicting the final answer, as detailed in the next section. # 3.2 Spatial Attention in Two-Hop Model We can repeat hops to promote deeper inference, gathering additional evidence at each hop. Recall that the visual evidence vector Satt is added to the question representation Q in the first hop to produce an updated question vector, Ohop1 = Satt + Q (6) On the next hop, this vector Ohop1 ∈ R^N is used in place of the individual word vectors V to extract from memory additional visual features correlated with the whole question and to update the visual evidence. The correlation matrix C in the first hop provides fine-grained local evidence from each word vector in V, while the correlation vector Chop2 in the next hop considers the global evidence from the whole question representation Q. The correlation vector Chop2 ∈ R^L in the second hop is calculated by Chop2 = (S · WE + bE) · Ohop1 (7) where WE ∈ R^{M×N} would be the attention embedding weights of the visual features S in the second hop and bE ∈ R^{L×N} would be the bias term. Since the attention embedding weights in the second hop are shared with the evidence embedding in the first hop, we directly use WE and bE from the first hop here. The attention weights in the second hop, Watt2, are obtained by applying the softmax function to the correlation vector Chop2: Watt2 = softmax(Chop2) (8) Then, the correlated visual information in the second hop, Satt2 ∈ R^N, is extracted using the attention weights Watt2: Satt2 = Watt2 · (S · WE2 + bE2) (9) where WE2 ∈ R^{M×N} are the evidence embedding weights of the visual features S in the second hop, and bE2 ∈ R^{L×N} is the bias term. The final answer P is predicted by combining the whole question representation Q, the local visual evidence Satt from each word vector in the first hop, and the global visual evidence Satt2 from the whole question in the second hop: P = softmax(WP · f(Ohop1 + Satt2) + bP) (10) where WP ∈ R^{K× | 1511.05234#16 | 1511.05234#18 | 1511.05234 | [
"1511.03416"
] |
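The one-hop answer prediction at the start of the record above (the BOW question embedding Q and the softmax over K candidate answers, Eqs. (4)-(5)) can be sketched in a few lines; the sizes are assumed, f is ReLU as stated in the text, and the random arrays stand in for learned parameters.

```python
import numpy as np

T, N, K = 10, 512, 1000                  # assumed sizes for illustration
V     = np.random.randn(T, N)            # embedded question words
S_att = np.random.randn(N)               # first-hop visual evidence vector
W_Q, b_Q = np.random.randn(T), np.zeros(N)
W_P, b_P = 0.01 * np.random.randn(K, N), np.zeros(K)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

Q = W_Q @ V + b_Q                                      # Eq. (4): BOW question embedding
P = softmax(W_P @ np.maximum(S_att + Q, 0.0) + b_P)    # Eq. (5): ReLU then softmax
answer_idx = int(P.argmax())                           # predicted answer among K choices
```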
1511.05234#18 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | N}, the bias term bP ∈ R^K, and K represents the number of possible prediction answers. f is the activation function. More hops can be added in this manner. The entire network is differentiable and is trained using stochastic gradient descent via standard backpropagation, allowing image feature extraction, image embedding, word embedding and answer prediction to be jointly optimized on the training image/question/answer triples. [Fig. 3 panels: synthetic images with questions of the form "Is there a red square on the top/bottom/right/left?" together with ground-truth and predicted answers.] | 1511.05234#17 | 1511.05234#19 | 1511.05234 | [
"1511.03416"
] |
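A compact sketch of the second hop, Eqs. (6)-(10): the first-hop output Ohop1 replaces the individual word vectors, attention is scored against the evidence embedding shared with hop one, and a second evidence vector Satt2 is added before the final softmax. All arrays are random stand-ins with assumed sizes, not the trained network.

```python
import numpy as np

L, M, N, K = 49, 1024, 512, 1000         # assumed sizes for illustration
S     = np.random.randn(L, M)            # spatial CNN features (the memory)
Q     = np.random.randn(N)               # whole-question BOW embedding
S_att = np.random.randn(N)               # first-hop evidence vector
W_E,  b_E  = 0.01 * np.random.randn(M, N), np.zeros((L, N))   # shared with hop one
W_E2, b_E2 = 0.01 * np.random.randn(M, N), np.zeros((L, N))   # second-hop evidence
W_P,  b_P  = 0.01 * np.random.randn(K, N), np.zeros(K)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

O_hop1 = S_att + Q                                   # Eq. (6)
C_hop2 = (S @ W_E + b_E) @ O_hop1                    # Eq. (7): one score per location
W_att2 = softmax(C_hop2)                             # Eq. (8)
S_att2 = W_att2 @ (S @ W_E2 + b_E2)                  # Eq. (9)
P = softmax(W_P @ np.maximum(O_hop1 + S_att2, 0.0) + b_P)   # Eq. (10)
```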
1511.05234#19 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Fig. 3. Absolute position experiment: for each image and question pair, we show the original image (left) and the attention weights Watt (right). The attention follows two rules. The first rule (top row) looks at the position specified in the question (top|bottom|right|left); if it contains a square, the answer is "yes", otherwise "no". The second rule (bottom row) looks at the region where there is a square, and answers "yes" if the question asks about that position and "no" for the other three positions. # 4 Experiments In this section, we conduct a series of experiments to evaluate our model. To explore whether the model learns to perform the spatial inference necessary for answering visual questions that explicitly require spatial reasoning, we design a set of experiments using synthetic visual question/answer data in Sec. 4.1. The experimental results of our model on standard datasets (the DAQUAR [1] and VQA [2] datasets) are reported in Sec. 4.2. # 4.1 Exploring Attention on Synthetic Data The questions in the public VQA datasets are quite varied and difficult and often require common sense knowledge to answer (e.g., "Does this man have 20/20 vision?" about a person wearing glasses). Furthermore, past work [10,11] showed that the question text alone (no image) is a very strong predictor of the answer. Therefore, before evaluating on standard datasets, we would first like to understand how the proposed model uses spatial attention to answer simple visual questions where the answer cannot be predicted from the question alone. | 1511.05234#18 | 1511.05234#20 | 1511.05234 | [
"1511.03416"
] |
1511.05234#20 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Our visualization demonstrates that the attention mechanism does learn to attend to objects and gather evidence via certain inference rules. Absolute Position Recognition We investigate whether the model has the ability to recognize the absolute location of the object in the image. We explore this by designing a simple task where an object (a red square) appears in some region of a white-background image, and the question is "Is there a red square on the [top|bottom|left|right]?" For each image, we randomly place the square in one of the four regions, and generate the four questions above, together with three " | 1511.05234#19 | 1511.05234#21 | 1511.05234 | [
"1511.03416"
] |
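A hypothetical generator in the spirit of the absolute-position task described in the record above; the image size, square size and exact placement rule are assumptions for illustration, not the authors' data-generation code.

```python
import random
import numpy as np

def make_example(img_size=64, sq=12):
    """Return a white image with a red square in one region plus four QA pairs."""
    img = np.full((img_size, img_size, 3), 255, dtype=np.uint8)
    region = random.choice(["top", "bottom", "left", "right"])
    mid = (img_size - sq) // 2                      # center the square on the other axis
    y, x = {"top": (0, mid), "bottom": (img_size - sq, mid),
            "left": (mid, 0), "right": (mid, img_size - sq)}[region]
    img[y:y + sq, x:x + sq] = (255, 0, 0)
    qa = [("Is there a red square on the %s?" % r, "yes" if r == region else "no")
          for r in ("top", "bottom", "left", "right")]
    return img, qa

image, questions = make_example()
print(questions)   # three "no" answers and one "yes" answer, as described above
```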
1511.05234#21 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | no" answers and one "yes" answer. The generated data is split into training and testing sets. Due to the simplicity of this synthetic dataset, the SMem-VQA one-hop model achieves 100% test accuracy. However, the baseline model (iBOWIMG) [3] | 1511.05234#20 | 1511.05234#22 | 1511.05234 | [
"1511.03416"
] |
1511.05234#22 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | [Fig. 4 panels: cat images with questions of the form "Is there a red square on the top/bottom/left/right of the cat?" together with ground-truth and predicted answers.] | 1511.05234#21 | 1511.05234#23 | 1511.05234 | [
"1511.03416"
] |
1511.05234#23 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Fig. 4. Relative position experiment: for each image and question pair, we show the original image (left), the evidence embedding WE of the convolutional layer (middle) and the attention weights Watt (right). The evidence embedding WE has high activations on both the cat and the red square. The attention weights follow similar inference rules as in Fig. 3, with the difference that the attention position is around the cat. cannot infer the answer and only obtains an accuracy of around 75%, which is the prior probability of the answer " | 1511.05234#22 | 1511.05234#24 | 1511.05234 | [
"1511.03416"
] |
1511.05234#24 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | no" in the training set. The SMem-VQA one-hop model is equivalent to the iBOWIMG model if the attention weights in our one-hop model are set equally for each location, since the iBOWIMG model uses the mean pool of the convolutional feature (inception 5b/output) in GoogLeNet that we use in the SMem-VQA model. We check the visualization of the attention weights and find that the relationship between the high-attention position and the answer can be expressed by logical expressions. We show the attention weights of several typical examples in Fig. 3, which reflect two logic rules: 1) Look at the position specified in the question (top|bottom|right|left); if it contains a square, then answer "yes"; if it does not contain a square, then answer "no". 2) Look at the region where there is a square, then answer "yes" for the question about that position and "no" for the questions about the other three positions. | 1511.05234#23 | 1511.05234#25 | 1511.05234 | [
"1511.03416"
] |
1511.05234#25 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | In the iBOWIMG model, the mean-pooled GoogLeNet visual features lose spatial information and thus cannot distinguish images with a square in diï¬ er- ent positions. On the contrary, our SMem-VQA model can pay high attention to diï¬ erent regions according to the question, and generate an answer based on the selected region, using some learned inference rules. This experiment demon- strates that the attention mechanism in our model is able to make absolute spatial location inference based on the spatial attention. Relative Position Recognition In order to check whether the model has the ability to infer the position of one object relative to another object, we collect all the cat images from the MS COCO Detection dataset [30], and add a red square on the [top|bottom|left|right] of the bounding box of the cat in the images. For each generated image, we create four questions, â Is there a red square on the [top|bottom|left|right] of the cat?â together with three â noâ answers and one â yesâ answer. | 1511.05234#24 | 1511.05234#26 | 1511.05234 | [
"1511.03416"
] |
1511.05234#26 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We select 2639 training cat images and 1395 testing cat images from the MS COCO Detection dataset. Our SMem-VQA one-hop model achieves 96% test accuracy on this synthetic task, while the baseline model (iBOWIMG) accuracy is around 75%. We also check that another simple baseline that predicts the answer based on the absolute position of the square in the image gets around 70% accuracy. Table 1. Accuracy results on the DAQUAR dataset (in percentage): Multi-World [1] 12.73; Neural-Image-QA [10] 29.27; Question LSTM [10] 32.32; VIS+LSTM [11] 34.41; Question BOW [11] 32.67; IMG+BOW [11] 34.17; SMem-VQA One-Hop 36.03; SMem-VQA Two-Hop 40.07. We visualize the evidence embedding WE features and the attention weights Watt of several typical examples in Fig. 4. The evidence embedding WE has high activations on the cat and the red square, while the attention weights pay high attention to certain locations around the cat. We can analyze the attention in the correctly predicted examples using the same rules as in the absolute position recognition experiment. These rules still work, but the position is relative to the cat object: 1) Check the specified position relative to the cat; if it finds the square, then answer "yes", otherwise "no"; 2) Find the square, then answer "yes" for the specified position, and answer "no" for the other positions around the cat. We also check the images where our model makes mistakes, and find that the mistakes mainly occur in images with more than one cat. The red square appears near only one of the cats in the image, but our model might make mistakes by focusing on the other cats. We conclude that our SMem-VQA model can infer the relative spatial position based on the spatial attention around the specified object, which can also be represented by some logical inference rules. # 4.2 Experiments on Standard Datasets Results on DAQUAR The DAQUAR dataset is a relatively small dataset which builds on the NYU Depth Dataset V2 [31]. | 1511.05234#25 | 1511.05234#27 | 1511.05234 | [
"1511.03416"
] |
1511.05234#27 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | We use the reduced DAQUAR dataset [1]. The evaluation metric for this dataset is 0-1 accuracy. The embedding dimension is 512 for our models running on the DAQUAR dataset. We use several reported models on DAQUAR as baselines, which are listed below: â ¢ Multi-World [1]: an approach based on handcrafted features using a semantic parse of the question and scene analysis of the image combined in a latent-world Bayesian framework. â ¢ Neural-Image-QA [10]: uses an LSTM to encode the question and then decode the hidden information into the answer. The image CNN feature vector is shown at each time step of the encoding phase. â ¢ Question LSTM [10]: only shows the question to the LSTM to predict the answer without any image information. â ¢ VIS+LSTM [11]: similar to Neural-Image-QA, but only shows the image features to the LSTM at the ï¬ rst time step, and the question in the remaining time steps to predict the answer. | 1511.05234#26 | 1511.05234#28 | 1511.05234 | [
"1511.03416"
] |
1511.05234#28 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | [Fig. 5 panels: example questions with ground-truth, One-Hop and Two-Hop answers, e.g. "which way can you not turn?" GT: left, One Hop: right, Two Hop: left; "what is the colour of the object near the bed?" GT: pink, One Hop: bed, Two Hop: pink; "what is beneath the framed picture?" GT: sofa, One Hop: table, Two Hop: sofa.] | 1511.05234#27 | 1511.05234#29 | 1511.05234 | [
"1511.03416"
] |
1511.05234#29 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Fig. 5. Visualization of the spatial attention weights in the SMem-VQA One-Hop and Two-Hop models on the VQA (top row) and DAQUAR (bottom row) datasets. For each image and question pair, we show the original image, the attention weights Watt of the One-Hop model, and the two attention weights Watt and Watt2 of the Two-Hop model, in that order. | 1511.05234#28 | 1511.05234#30 | 1511.05234 | [
"1511.03416"
] |
1511.05234#30 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | â ¢ Question BOW [11]: only uses the BOW question representation and a single hidden layer neural network to predict the answer, without any image features. â ¢ IMG+BOW [11]: concatenates the BOW question representation with image features, and then uses a single hidden layer neural network to predict the answer. This model is similar to the iBOWIMG baseline model in [3]. Results of our SMem-VQA model on the DAQUAR dataset and the base- line model results reported in previous work are shown in Tab. 1. From the DAQUAR result in Tab. 1, we see that models based on deep features signif- icantly outperform the Multi-World approach based on hand-crafted features. Modeling the question only with either the LSTM model or Question BOW model does equally well in comparison, indicating the the question text contains important prior information for predicting the answer. Also, on this dataset, the VIS+LSTM model achieves better accuracy than Neural-Image-QA model; the former shows the image only at the ï¬ rst timestep of the LSTM, while the latter does so at each timestep. In comparison, both our One-Hop model and Two-Hop spatial attention models outperform the IMG+BOW, as well as the other baseline models. A major advantage of our model is the ability to visual- ize the inference process in the deep network. To illustrate this, two attention weights visualization examples in SMem-VQA One-Hop and Two-Hop models on DAQUAR dataset are shown in Fig. 5 (bottom row). Results on VQA The VQA dataset is a recent large dataset based on MS COCO [30]. We use the full release (V1.0) open-ended dataset, which con- tains a train set and a val set. Following standard practice, we choose the top 1000 answers in train and val sets as possible prediction answers, and only keep the examples whose answers belong to these 1000 answers as train- ing data. The question vocabulary size is 7477 with the word frequency of at least three. Because of the larger training size, the embedding dimension is 1000 on the VQA dataset. We report the test-dev and test-standard results from the VQA evaluation server. The server evaluation uses the evaluation met- Table 2. | 1511.05234#29 | 1511.05234#31 | 1511.05234 | [
"1511.03416"
] |
1511.05234#31 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Test-dev and test-standard results on the Open-Ended VQA dataset (in percentage). Models with * use external training data in addition to the VQA dataset. Entries are test-dev (Overall / yes/no / number / others) followed by test-standard (Overall / yes/no / number / others): LSTM Q+I [2]: 53.74 / 78.94 / 35.24 / 36.42; 54.06 / - / - / -. ACK* [26]: 55.72 / 79.23 / 36.13 / 40.08; 55.98 / 79.05 / 36.10 / 40.61. DPPnet* [27]: 57.22 / 80.71 / 37.24 / 41.69; 57.36 / 80.28 / 36.92 / 42.24. iBOWIMG [3]: 55.72 / 76.55 / 35.03 / 42.62; 55.89 / 76.76 / 34.98 / 42.62. SMem-VQA One-Hop: 56.56 / 78.98 / 35.93 / 42.09; - / - / - / -. SMem-VQA Two-Hop: 57.99 / 80.87 / 37.32 / 43.12; 58.24 / 80.8 / 37.53 / 43.48. | 1511.05234#30 | 1511.05234#32 | 1511.05234 | [
"1511.03416"
] |
1511.05234#32 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | ric introduced by [2], which gives partial credit to certain synonym answers: Acc(ans) = min{(# humans that said ans)/3, 1}. For the attention models, we do not mirror the input image when using the CNN to extract convolutional features, since this might cause confusion about the spatial locations of objects in the input image. The optimization algorithm used is stochastic gradient descent (SGD) with a minibatch of size 50 and momentum of 0.9. The base learning rate is set to 0.01 and is halved every six epochs. Regularization, dropout and the L2 norm are cross-validated and used. For the VQA dataset, we use the simple iBOWIMG model in [3] as one baseline model, which beats most existing VQA models currently on arxiv.org. We also compare to two models in [26][27] which have results comparable to or better than the iBOWIMG model. These three baseline models, as well as the best model in the VQA dataset paper [2], are listed in the following: • LSTM Q+I [2]: uses the element-wise multiplication of the LSTM encoding of the question and the image feature vector to predict the answer. This is the best model in the VQA dataset paper. • ACK [26]: shows the image attribute features, the generated image caption and relevant external knowledge from a knowledge base to the LSTM at the first time step, and the question in the remaining time steps to predict the answer. • DPPnet [27]: uses the Gated Recurrent Unit (GRU) representation of the question to predict certain parameters for a CNN classification network. They pre-train the GRU for question representation on a large-scale text corpus to improve the GRU generalization performance. • iBOWIMG [3]: concatenates the BOW question representation with the image feature (GoogLeNet), and uses a softmax classification to predict the answer. The overall accuracy and per-answer category accuracy for our SMem-VQA models and the four baseline models on the VQA dataset are shown in Tab. 2. From the table, we can see that the SMem-VQA One-Hop model obtains slightly better results compared to the iBOWIMG model. | 1511.05234#31 | 1511.05234#33 | 1511.05234 | [
"1511.03416"
] |
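The per-question accuracy formula quoted above, Acc(ans) = min{(# humans that said ans)/3, 1}, can be written directly. This is a sketch of that formula only; the official VQA evaluation additionally averages the score over subsets of the ten human annotators.

```python
def vqa_accuracy(predicted, human_answers):
    """Partial-credit VQA metric: full credit once 3 or more annotators agree."""
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators said "blue" -> accuracy ~0.67
print(round(vqa_accuracy("blue", ["blue", "blue", "green"] + ["teal"] * 7), 2))
```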
1511.05234#33 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | However, the SMem-VQA Two-Hop model achieves an improvement of 2.27% on test-dev and 2.35% on test-standard compared to the iBOWIMG model, demonstrating the value of spatial attention. The SMem-VQA Two-Hop model also shows best performance in the per-answer category accuracy. The SMem-VQA Two-Hop model has slightly better result than the DPPnet model. The DPPnet model uses a large-scale text corpus to pre-train the Gated Recurrent Unit (GRU) network for question representation. Similar pre-training work on extra data to improve model accuracy has been | 1511.05234#32 | 1511.05234#34 | 1511.05234 | [
"1511.03416"
] |
1511.05234#34 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | [Fig. 6 panels: example questions and predicted answers, e.g. "do tourist enjoy a day at the beach?" yes; "what color is the fork?" green; "what game are they playing?" baseball; "what is the woman doing?" eating.] Fig. 6. Visualization of the original image (left), the spatial attention weights Watt in the first hop (middle) and one correlation vector from the correlation matrix C for the location with the highest attention weight, in the SMem-VQA Two-Hop model on the VQA dataset. Higher values in the correlation vector indicate stronger correlation of that word with the chosen location's image features. | 1511.05234#33 | 1511.05234#35 | 1511.05234 | [
"1511.03416"
] |
1511.05234#35 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | done in [32]. Considering the fact that our model does not use extra data to pre- train the word embeddings, its results are very competitive. We also experiment with adding a third hop into our model on the VQA dataset, but the result does not improve further. The attention weights visualization examples for the SMem-VQA One-Hop and Two-Hop models on the VQA dataset are shown in Fig. 5 (top row). From the visualization, we can see that the two-hop model collects supplementary evidence for inferring the answer, which may be necessary to achieve an im- provement on these complicated real-world datasets. We also visualize the ï¬ ne- grained alignment in the ï¬ rst hop of our SMem-VQA Two-Hop model in Fig. 6. The correlation vector values (blue bars) measure the correlation between image regions and each word vector in the question. Higher values indicate stronger correlation of that particular word with the speciï¬ | 1511.05234#34 | 1511.05234#36 | 1511.05234 | [
"1511.03416"
] |
1511.05234#36 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | c locationâ s image features. We observe that the ï¬ ne-grained visual evidence collected using each local word vector, together with the global visual evidence from the whole question, com- plement each other to infer the correct answer for the given image and question, as shown in Fig. 1. 5 Conclusion In this paper, we proposed the Spatial Memory Network for VQA, a memory network architecture with a spatial attention mechanism adapted to the visual question answering task. We proposed a set of synthetic spatial questions and demonstrated that our model learns inference rules based on spatial attention through attention weight visualization. Evaluation on the challenging DAQUAR and VQA datasets showed improved results over previously published models. Our model can be used to visualize the inference steps learned by the deep network, giving some insight into its processing. Future work may include further exploring the inference ability of our SMem-VQA model and exploring other VQA attention models. | 1511.05234#35 | 1511.05234#37 | 1511.05234 | [
"1511.03416"
] |
1511.05234#37 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | # References 1. Malinowski, M., Fritz, M.: A multi-world approach to question answering about real-world scenes based on uncertain input. CoRR abs/1410.0210 (2014) 2. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C.L., Parikh, D.: VQA: visual question answering. CoRR abs/1505.00468 (2015) 3. Zhou, B., Tian, Y., Sukhbaatar, S., Szlam, A., Fergus, R.: Simple baseline for visual question answering. arXiv preprint arXiv:1512.02167 (2015) 4. Tu, K., Meng, M., Lee, M.W., Choe, T.E., Zhu, S.C.: Joint video and text parsing for understanding events and answering queries. MultiMedia, IEEE 21(2) (2014) 42â | 1511.05234#36 | 1511.05234#38 | 1511.05234 | [
"1511.03416"
] |
1511.05234#38 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | 70 5. Lasecki, W.S., Zhong, Y., Bigham, J.P.: Increasing the bandwidth of crowdsourced visual question answering to better support blind users. In: Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility, ACM (2014) 263â 264 6. Donahue, J., Hendricks, L.A., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description. arXiv preprint arXiv:1411.4389 (2014) 7. Vinyals, O., Toshev, A., Bengio, S., Erhan, D.: Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555 (2014) 8. Karpathy, A., Joulin, A., Li, F.F.F.: | 1511.05234#37 | 1511.05234#39 | 1511.05234 | [
"1511.03416"
] |
1511.05234#39 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Deep fragment embeddings for bidirectional image sentence mapping. In: Advances in neural information processing systems. (2014) 1889â 1897 9. Fang, H., Gupta, S., Iandola, F., Srivastava, R., Deng, L., Doll´ar, P., Gao, J., He, X., Mitchell, M., Platt, J., et al.: From captions to visual concepts and back. arXiv preprint arXiv:1411.4952 (2014) 10. Malinowski, M., Rohrbach, M., Fritz, M.: Ask your neurons: A neural-based ap- proach to answering questions about images. arXiv preprint arXiv:1505.01121 (2015) 11. Ren, M., Kiros, R., Zemel, R.S.: Exploring models and data for image question answering. CoRR abs/1505.02074 (2015) 12. Weston, J., Chopra, S., Bordes, A.: Memory networks. CoRR abs/1410.3916 (2014) 13. Sukhbaatar, S., Szlam, A., Weston, J., Fergus, R.: End-to-end memory networks. arXiv preprint arXiv:1503.08895 (2015) | 1511.05234#38 | 1511.05234#40 | 1511.05234 | [
"1511.03416"
] |
1511.05234#40 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | 14. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadar- rama, S., Darrell, T.: Caï¬ e: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014) 15. Yahya, M., Berberich, K., Elbassuoni, S., Ramanath, M., Tresp, V., Weikum, G.: Natural language questions for the web of data. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Compu- tational Natural Language Learning, Association for Computational Linguistics (2012) 379â 390 16. Berant, J., Liang, P.: Semantic parsing via paraphrasing. In: Proceedings of ACL. Volume 7. (2014) 92 17. | 1511.05234#39 | 1511.05234#41 | 1511.05234 | [
"1511.03416"
] |
1511.05234#41 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Bordes, A., Chopra, S., Weston, J.: Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676 (2014) 18. Graves, A., Wayne, G., Danihelka, I.: Neural turing machines. arXiv preprint arXiv:1410.5401 (2014) 15 16 19. Xu, K., Ba, J., Kiros, R., Courville, A., Salakhutdinov, R., Zemel, R., Bengio, Y.: | 1511.05234#40 | 1511.05234#42 | 1511.05234 | [
"1511.03416"
] |
1511.05234#42 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044 (2015) 20. Yao, L., Torabi, A., Cho, K., Ballas, N., Pal, C., Larochelle, H., Courville, A.: Describing videos by exploiting temporal structure. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 4507â 4515 21. | 1511.05234#41 | 1511.05234#43 | 1511.05234 | [
"1511.03416"
] |
1511.05234#43 | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering | Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014) 22. Luong, M.T., Pham, H., Manning, C.D.: Eï¬ ective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015) 23. Hermann, K.M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., Blunsom, P.: | 1511.05234#42 | 1511.05234#44 | 1511.05234 | [
"1511.03416"
] |