Dataset schema: id (string, 12–15 chars), title (string, 8–162 chars), content (string, 1–17.6k chars), prechunk_id (string, 0–15 chars), postchunk_id (string, 0–15 chars), arxiv_id (string, 10 chars), references (sequence of length 1).
1511.05234#44
Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering
Teaching machines to read and comprehend. In: Advances in Neural Information Processing Systems. (2015) 1684–1692 24. Cho, K., Courville, A., Bengio, Y.: Describing multimedia content using attention-based encoder–decoder networks. (2015) 25. Zhu, Y., Groth, O., Bernstein, M., Fei-Fei, L.: Visual7W: Grounded question answering in images. arXiv preprint arXiv:1511.03416 (2015) 26. Wu, Q., Wang, P., Shen, C., Hengel, A.v.d., Dick, A.: Ask me anything: Free-form visual question answering based on knowledge from external sources. arXiv preprint arXiv:1511.06973 (2015) 27.
1511.05234#43
1511.05234#45
1511.05234
[ "1511.03416" ]
1511.05234#45
Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering
Noh, H., Seo, P.H., Han, B.: Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756 (2015) 28. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR 2015. (2015) 29. Shih, K.J., Singh, S., Hoiem, D.: Where to look: Focus regions for visual question answering. arXiv preprint arXiv:1511.07394 (2015)
1511.05234#44
1511.05234#46
1511.05234
[ "1511.03416" ]
1511.05234#46
Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering
30. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: Computer Vision–ECCV 2014. Springer (2014) 740–755 31. Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: ECCV. (2012) 32. Venugopalan, S., Xu, H., Donahue, J., Rohrbach, M., Mooney, R., Saenko, K.: Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729 (2014)
1511.05234#45
1511.05234
[ "1511.03416" ]
1511.04636#0
Deep Reinforcement Learning with a Natural Language Action Space
arXiv:1511.04636v5 [cs.AI] 8 Jun 2016 # Deep Reinforcement Learning with a Natural Language Action Space # Ji He†, Jianshu Chen‡, Xiaodong He‡, Jianfeng Gao‡, Lihong Li‡, Li Deng‡ and Mari Ostendorf† †Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA {jvking, ostendor}@uw.edu ‡Microsoft Research, Redmond, WA 98052, USA {jianshuc, xiaohe, jfgao, lihongli, deng}@microsoft.com # Abstract This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.
1511.04636#1
1511.04636
[ "1511.04636" ]
1511.04636#1
Deep Reinforcement Learning with a Natural Language Action Space
# 1 Introduction This work is concerned with learning strategies for sequential decision-making tasks, where a system takes actions at a particular state with the goal of maximizing a long-term reward. More specifically, we consider tasks where both the states and the actions are characterized by natural language, such as in human-computer dialog systems, tutoring systems, or text-based games. In a text-based game, for example, the player (or system, in this case) is given a text string that describes the current state of the game and several text strings that describe possible actions one could take. After selecting one of the actions, the environment state is updated and revealed in a new textual description. A reward is given either at each transition or in the end. The objective is to understand, at each step, the state text and all the action texts to pick the most relevant action, navigating through the sequence of texts so as to obtain the highest long-term reward. Here the notion of relevance is based on the joint state/action impact on the reward: an action text string is said to be "more relevant" (to a state text string) than the other action texts if taking that action would lead to a higher long-term reward. Because a player's action changes the environment, reinforcement learning (Sutton and Barto, 1998) is appropriate for modeling long-term dependency in text games. There is a large body of work on reinforcement learning. Of most interest here are approaches leveraging neural networks because of their success in handling a large state space. Early work –
1511.04636#0
1511.04636#2
1511.04636
[ "1511.04636" ]
1511.04636#2
Deep Reinforcement Learning with a Natural Language Action Space
TD-gammon – used a neural network to approximate the state value function (Tesauro, 1995). Recently, inspired by advances in deep learning (LeCun et al., 2015; Hinton et al., 2012; Krizhevsky et al., 2012; Dahl et al., 2012), significant progress has been made by combining deep learning with reinforcement learning. Building on the approach of Q-learning (Watkins and Dayan, 1992), the "Deep Q-Network" (DQN) was developed and applied to Atari games (Mnih et al., 2013; Mnih et al., 2015) and shown to achieve human-level performance by applying convolutional neural networks to the raw image pixels. Narasimhan et al. (2015) applied a Long Short-Term Memory network to characterize the state space in a DQN framework for learning control policies for parser-based text games. More recently, Nogueira and Cho (2016) have also proposed a goal-driven web navigation task for language-based sequential decision making study. Another stream of work focuses on continuous control with deep reinforcement learning (Lillicrap et al., 2016), where an actor-critic algorithm operates over a known continuous action
1511.04636#1
1511.04636#3
1511.04636
[ "1511.04636" ]
1511.04636#3
Deep Reinforcement Learning with a Natural Language Action Space
# space. Inspired by these successes and recent work using neural networks to learn phrase- or sentence-level embeddings (Collobert and Weston, 2008; Huang et al., 2013; Le and Mikolov, 2014; Sutskever et al., 2014; Kiros et al., 2015), we propose a novel deep architecture for text understanding, which we call a deep reinforcement relevance network (DRRN). The DRRN uses separate deep neural networks to map state and action text strings into embedding vectors, from which "relevance" is measured numerically by a general interaction function, such as their inner product. The output of this interaction function defines the value of the Q-function for the current state-action pair, which characterizes the optimal long-term reward for pairing these two text strings. The Q-function approximation is learned in an end-to-end manner by Q-learning. The DRRN differs from prior work in that earlier studies mostly considered action spaces that are bounded and known. For actions described by natural language text strings, the action space is inherently discrete and potentially unbounded due to the exponential complexity of language with respect to sentence length. A distinguishing aspect of the DRRN architecture – compared to simple DQN extensions – is that two different types of meaning representations are learned, reflecting the tendency for state texts to describe scenes and action texts to describe potential actions from the user. We show that the DRRN learns a continuous space representation of actions that successfully generalizes to paraphrased descriptions of actions unseen in training. # 2 Deep Reinforcement Relevance Network # 2.1 Text Games and Q-learning We consider the sequential decision making problem for text understanding. At each time step t, the agent will receive a string of text that describes the state s_t (i.e., "state-text") and several strings of text that describe all the potential actions a_t (i.e., "action-text"). The agent attempts to understand the texts from both the state side and the action side, measuring their relevance to the current context s_t for the purpose of maximizing the long-term reward, and then picking the best action. Then, the environment state is updated to s_{t+1} = s' according to the probability p(s'
1511.04636#2
1511.04636#4
1511.04636
[ "1511.04636" ]
1511.04636#4
Deep Reinforcement Learning with a Natural Language Action Space
|s, a), and the agent receives a reward r_t for that particular transition. The policy of the agent is defined to be the probability π(a_t|s_t) of taking action a_t at state s_t. Define the Q-function Q^π(s, a) as the expected return starting from s, taking the action a, and thereafter following policy π(a|s): Q^π(s, a) = E[ ∑_{k=0}^{+∞} γ^k r_{t+k} | s_t = s, a_t = a ], where γ denotes a discount factor. The optimal policy and Q-function can be found by using the Q-learning algorithm (Watkins and Dayan, 1992): Q(s_t, a_t) ← Q(s_t, a_t) + η_t · (r_t + γ · max_a Q(s_{t+1}, a) − Q(s_t, a_t)) (1)
1511.04636#3
1511.04636#5
1511.04636
[ "1511.04636" ]
1511.04636#5
Deep Reinforcement Learning with a Natural Language Action Space
where η_t is the learning rate of the algorithm. In this paper, we use a softmax selection strategy as the exploration policy during the learning stage, which chooses the action a_t^i at state s_t according to the following probability: π(a_t = a_t^i | s_t) = exp(α · Q(s_t, a_t^i)) / ∑_{j=1}^{|A_t|} exp(α · Q(s_t, a_t^j)) (2), where A_t is the set of feasible actions at state s_t, a_t^i is the i-th feasible action in A_t, |·| denotes the cardinality of the set, and α is the scaling factor in the softmax operation. α is kept constant throughout the learning period. All methods are initialized with small random weights, so initial Q-value differences will be small, thus making the Q-learning algorithm more explorative initially. As Q-values better approximate the true values, a reasonable α will make action selection put high probability on the optimal action (exploitation), but still maintain a small exploration probability.
1511.04636#4
1511.04636#6
1511.04636
[ "1511.04636" ]
1511.04636#6
Deep Reinforcement Learning with a Natural Language Action Space
# 2.2 Natural language action space Let S denote the state space, and let A denote the entire action space that includes all the unique actions over time. A vanilla Q-learning recursion (1) needs to maintain a table of size |S| × |A|, which is problematic for a large state/action space. Prior work using a DNN in Q-function approximation has shown high capacity and scalability for handling a large state space, but most studies have used a network that generates |A| outputs, each of which represents the value of Q(s, a) for a particular action a. It is not practical to have a DQN architecture of a size that is explicitly dependent on the large number of natural language actions. Further, in many text games, the feasible action set A_t at each time t is an unknown subset of the unbounded action space A that varies over time. For the case where the maximum number of possible actions at any point in time (max_t |A_t|) is known, the DQN can be modified to simply use that number of outputs (
1511.04636#5
1511.04636#7
1511.04636
[ "1511.04636" ]
1511.04636#7
Deep Reinforcement Learning with a Natural Language Action Space
"Max-action DQN"), as illustrated in Figure 1(a), where the state and action vectors are concatenated (i.e., as an extended state vector) as its input. The network computes the Q-function values for the actions in the current feasible set as its outputs. For a complex game, max_t |A_t| may be difficult to obtain, because A_t is usually unknown beforehand. Nevertheless, we will use this modified DQN as a baseline. An alternative approach is to use a function approximator: a neural network that takes a state-action pair as input and outputs a single Q-value for each possible action ("Per-action DQN"
1511.04636#6
1511.04636#8
1511.04636
[ "1511.04636" ]
1511.04636#8
Deep Reinforcement Learning with a Natural Language Action Space
in Figure 1(b)). This architecture easily handles a varying number of actions and represents a second baseline. We propose an alternative architecture for handling a natural language action space in sequential text understanding: the deep reinforcement relevance network (DRRN). As shown in Figure 1(c), the DRRN consists of a pair of DNNs, one for the state text embedding and the other for action text embeddings, which are combined using a pairwise interaction function. The texts used to describe states and actions could be very different in nature, e.g., a state text could be long, containing sentences with complex linguistic structure, whereas an action text could be very concise or just a verb phrase. Therefore, it is desirable to use two networks with different structures to handle state/action texts, respectively. As we will see in the experimental sections, by using two separate deep neural networks for state and action sides, we obtain much better results. # 2.3 DRRN architecture: Forward activation Given any state/action text pair (s_t, a_t^i), the DRRN estimates the Q-function Q(s_t, a_t^i) in two steps. First, map both s_t and a_t^i to their embedding vectors using the corresponding DNNs, respectively. Second, approximate Q(s_t, a_t^i) using an interaction function such as the inner product of the embedding vectors. Then, given a particular state s_t, we can select the optimal action a_t among the set of actions via a_t = arg max_{a_t^i} Q(s_t, a_t^i). More formally, let h_{l,s} and h_{l,a} denote the l-th hidden layer for the state and action side neural networks, respectively. For the state side, W_{l,s} and b_{l,s} denote the linear transformation weight matrix and bias vector between the (l−1)-th and l-th hidden layers. W_{l,a} and b_{l,a} denote the equivalent parameters for the action side. In this study, the DRRN has L hidden layers on each side. h_{1,s} = f(W_{1,s} s_t + b_{1,s}) (3); h_{1,a}^i = f(W_{1,a} a_t^i + b_{1,a}) (4); h_{l,s} = f(W_{l−1,s} h_{l−1,s} + b_{l−1,s}) (5); h_{l,a}^i = f(W_{l−1,a} h_{l−1,a}^i + b_{l−1,a}) (6)
1511.04636#7
1511.04636#9
1511.04636
[ "1511.04636" ]
1511.04636#9
Deep Reinforcement Learning with a Natural Language Action Space
where f(·) is the nonlinear activation function at the hidden layers, which, for example, could be chosen as tanh(x), and i = 1, 2, 3, ..., |A_t| is the action index. A general interaction function g(·) is used to approximate the Q-function values, Q(s, a), in the following parametric form: Q(s, a^i; Θ) = g(h_{L,s}, h_{L,a}^i) (7), where Θ denotes all the model parameters. The interaction function could be an inner product, a bilinear operation, or a nonlinear function such as a deep neural network. In our experiments, the inner product and bilinear operation gave similar results. For simplicity, we present our experiments mostly using the inner product interaction function. The success of the DRRN in handling a natural language action space A lies in the fact that the state-text and the action-texts are mapped into separate finite-dimensional embedding spaces. The end-to-end learning process (discussed next) makes the embedding vectors in the two spaces more aligned for
1511.04636#8
1511.04636#10
1511.04636
[ "1511.04636" ]
1511.04636#10
Deep Reinforcement Learning with a Natural Language Action Space
"good" (or relevant) action texts compared to "bad" (or irrelevant) choices, resulting in a higher interaction function output (Q-function value). # 2.4 Learning the DRRN: Back propagation To learn the DRRN, we use the "experience-replay" strategy (Lin, 1993), which uses a fixed exploration policy to interact with the environment to obtain a sample trajectory. Then, we randomly sample a transition tuple (s_k, a_k, r_k, s_{k+1}), compute the temporal difference error for sample k: d_k = r_k + γ · max_a Q(s_{k+1}, a; Θ_{k−1}) − Q(s_k, a_k; Θ_{k−1}), and update the model according to the recursions: W_{v,k} = W_{v,k−1} + η_k · d_k · ∂Q(s_k, a_k; Θ_{k−1})/∂W_v (8); b_{v,k} = b_{v,k−1} + η_k · d_k · ∂Q(s_k, a_k; Θ_{k−1})/∂b_v (9), for v ∈ {s, a}. [Figure 1 diagram: three architectures producing Q(s_t, a_t^1), Q(s_t, a_t^2), ... from the state text s_t and action texts a_t^i — (a) Max-action DQN, (b) Per-action DQN, and (c) DRRN, whose panel shows a pairwise interaction function (e.g., inner product) between top-layer state and action embeddings.] Figure 1:
1511.04636#9
1511.04636#11
1511.04636
[ "1511.04636" ]
1511.04636#11
Deep Reinforcement Learning with a Natural Language Action Space
Different deep Q-learning architectures: Max-action DQN and Per-action DQN both treat input text as concatenated vectors and compute output Q-values with a single NN. The DRRN models text embeddings from the state and action sides separately, and uses an interaction function to compute Q-values. [Figure 2 plot: PCA projections of the state embedding and two action embeddings after 200 episodes (action 1: -0.55, action 2: -1.30), 400 episodes (action 1: +0.91, action 2: -17.17), and 600 episodes (action 1: +16.53, action 2: -22.08); horizontal axis roughly -6 to 8.] Figure 2: PCA projections of text embedding vectors for state and associated action vectors after 200, 400 and 600 training episodes. The state is
1511.04636#10
1511.04636#12
1511.04636
[ "1511.04636" ]
1511.04636#12
Deep Reinforcement Learning with a Natural Language Action Space
"As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street." Action 1 (good choice) is "Look up", and action 2 (poor choice) is "Ignore the alarm of others and continue moving forward." Expressions for ∂Q/∂W_v, ∂Q/∂b_v and other algorithm details are given in supplementary materials. Random sampling essentially scrambles the trajectory from experience-replay into a "bag-of-transitions", which has been shown to avoid oscillations or divergence and achieve faster convergence in Q-learning (Mnih et al., 2015). Since the models on the action side share the same parameters, models associated with all actions are effectively updated even though the back propagation is only over one action. We apply back propagation to learn how to pair the text strings from the reward signals in an end-to-end manner. The representation vectors for the state-text and the action-text are automatically learned to be aligned with each other in the text embedding space from the reward signals. A summary of the full learning algorithm is given in Algorithm 1. Figure 2 illustrates learning with an inner product interaction function. We used Principal Component Analysis (PCA) to project the 100-dimension last hidden layer representation (before the inner product) to a 2-D plane. The vector embeddings start with small values, and after 600 episodes of experience-replay training, the embeddings are very close to the converged embedding (4000 episodes). The embedding vector of the optimal action (Action 1) converges to a positive inner product with the state embedding vector, while Action 2 converges to a negative inner product. # 3 Experimental Results # 3.1 Text games Text games, although simple compared to video games, still enjoy high popularity in online communities, with annual competitions held online
1511.04636#11
1511.04636#13
1511.04636
[ "1511.04636" ]
1511.04636#13
Deep Reinforcement Learning with a Natural Language Action Space
Algorithm 1 Learning algorithm for DRRN. 1: Initialize replay memory D to capacity N. 2: Initialize DRRN with small random weights. 3: Initialize game simulator and load dictionary. 4: for episode = 1, ..., M do 5: Restart game simulator. 6: Read raw state text and a list of action texts from the simulator, and convert them to representation s_1 and a_1^1, a_1^2, ..., a_1^{|A_1|}. 7: for t = 1, ..., T do 8: Compute Q(s_t, a_t^i; Θ) for the list of actions using DRRN forward activation (Section 2.3). 9: Select an action a_t based on probability distribution π(a_t = a_t^i | s_t) (Equation 2). 10: Execute action a_t in simulator. 11: Observe reward r_t. Read the next state text and the next list of action texts, and convert them to representation s_{t+1} and a_{t+1}^1, a_{t+1}^2, ..., a_{t+1}^{|A_{t+1}|}. 12: Store transition (s_t, a_t, r_t, s_{t+1}, A_{t+1}) in D. 13: Sample a random mini-batch of transitions (s_k, a_k, r_k, s_{k+1}, A_{k+1}) from D. 14: Set y_k = r_k if s_{k+1} is terminal; otherwise y_k = r_k + γ max_{a' ∈ A_{k+1}} Q(s_{k+1}, a'; Θ). 15: Perform a gradient descent step on (y_k − Q(s_k, a_k; Θ))^2 with respect to the network parameters Θ (Section 2.4). Back-propagation is performed only for a_k even though there are |A_k| actions at time k. 16: end for 17: end for
1511.04636#12
1511.04636#14
1511.04636
[ "1511.04636" ]
1511.04636#14
Deep Reinforcement Learning with a Natural Language Action Space
since 1995. Text games communicate to players in the form of a text display, which players have to understand and respond to by typing or clicking text (Adams, 2014). There are three types of text games: parser-based (Figure 3(a)), choice-based (Figure 3(b)), and hypertext-based (Figure 3(c)). Parser-based games accept typed-in commands from the player, usually in the form of verb phrases, such as
1511.04636#13
1511.04636#15
1511.04636
[ "1511.04636" ]
1511.04636#15
Deep Reinforcement Learning with a Natural Language Action Space
"eat apple", "get key", or "go east". They involve the least complex action language. Choice-based and hypertext-based games present actions after or embedded within the state text. The player chooses an action, and the story continues based on the action taken at this particular state. With the development of web browsing and richer HTML display, choice-based and hypertext-based text games have become more popular, increasing in percentage from 8% in 2010 to 62% in 2014. For parser-based text games, Narasimhan et al. (2015) have defined a fixed set of 222 actions, which is the total number of possible phrases the parser accepts. Thus the parser-based text game is reduced to a problem that is well suited to a fixed-action-set DQN. [Table 1 — Game: Saving John / Machine of Death; Text game type: Choice / Choice & Hypertext; Vocab size: 1762 / 2258; Action vocab size: 171 / 419; Avg. words/description: 76.67 / 67.80; State transitions: Deterministic / Stochastic; # of states (underlying): >70 / >200.] Table 1: Statistics for the games "Saving John" and "Machine of Death". However, for choice-based and hypertext-based text games, the size of the action space could be exponential with the length of the action sentences, which is handled here by using a continuous representation of the action space. In this study, we evaluate the DRRN with two games: a deterministic text game task called
1511.04636#14
1511.04636#16
1511.04636
[ "1511.04636" ]
1511.04636#16
Deep Reinforcement Learning with a Natural Language Action Space
"Saving John" and a larger-scale stochastic text game called "Machine of Death" from a public archive.² The basic text statistics of these tasks are shown in Table 1. The maximum value of feasible actions (i.e., max_t |A_t|) is four in "Saving John", and nine in "Machine of Death". We manually annotate ¹Statistics obtained from http://www.ifarchive.org ²Simulators are available at https://github.com/jvking/text-games
1511.04636#15
1511.04636#17
1511.04636
[ "1511.04636" ]
1511.04636#17
Deep Reinforcement Learning with a Natural Language Action Space
[Figure 3 panels: (a) Parser-based, (b) Choice-based, and (c) Hypertext-based versions of the same example scene ("Well, here we are, back home again. The battered front door leads north into the lobby. The cat is out here with you, parked directly in front of the door and looking up at you expectantly."), with actions given as typed commands, as listed choices (e.g., "Return the cat's stare", "'Howdy, Mittens.'", "Step purposefully over the cat and into the lobby"), or as hyperlinked phrases in the text ("...looking up at you expectantly. You're hungry."), respectively.] Figure 3:
1511.04636#16
1511.04636#18
1511.04636
[ "1511.04636" ]
1511.04636#18
Deep Reinforcement Learning with a Natural Language Action Space
Different types of text games. final rewards for all distinct endings in both games (as shown in supplementary materials). The magnitudes of the reward scores are given to describe the sentiment polarity of good/bad endings. On the other hand, we assign each non-terminating step a small negative reward, to encourage the learner to finish the game as soon as possible. For the text game "Machine of Death", we restrict an episode to be no longer than 500 steps. In "Saving John"
1511.04636#17
1511.04636#19
1511.04636
[ "1511.04636" ]
1511.04636#19
Deep Reinforcement Learning with a Natural Language Action Space
all actions are choice-based, for which the mapping from text strings to a_t is clear. In "Machine of Death", when actions are hypertext, the actions are substrings of the state. In this case s_t is associated with the full state description, and a_t are given by the substrings without any surrounding context. For text input, we use raw bag-of-words as features, with different vocabularies for the state side and action side. 3.2 Experiment setup We apply DRRNs with both 1 and 2 hidden-layer structures. In most experiments, we use the dot product as the interaction function and set the hidden dimension to be the same for each hidden layer. We use DRRNs with 20, 50 and 100-dimension hidden layer(s) and build learning curves during experience-replay training. The learning rate is constant: η_t = 0.001. In testing, as in training, we apply softmax selection. We record average final rewards as the performance of the model. The DRRN is compared to multiple baselines: a linear model, two max-action DQNs (MA DQN) (L = 1 or 2 hidden layers), and two per-action DQNs (PA DQN) (again, L = 1, 2). All baselines use the same Q-learning framework with different function approximators to predict Q(s_t, a_t) given the current state and actions. For the linear and MA DQN baselines, the input is the text-based state and action descriptions, each as a bag of words, with the number of outputs equal to the maximum number of actions. When there are fewer actions than the maximum, the highest scoring available action is used.
1511.04636#18
1511.04636#20
1511.04636
[ "1511.04636" ]
1511.04636#20
Deep Reinforcement Learning with a Natural Language Action Space
[Table 2 — Final average rewards (standard deviations) on "Saving John", by hidden dimension 20 / 50 / 100: Linear 4.4 (0.4); PA DQN (L=1) 2.0 (1.5) / 4.0 (1.4) / 4.4 (2.0); PA DQN (L=2) 1.5 (3.0) / 4.5 (2.5) / 7.9 (3.0); MA DQN (L=1) 2.9 (3.1) / 4.0 (4.2) / 5.9 (2.5); MA DQN (L=2) 4.9 (3.2) / 9.0 (3.2) / 7.1 (3.1); DRRN (L=1) 17.1 (0.6) / 18.3 (0.2) / 18.2 (0.2); DRRN (L=2) 18.4 (0.1) / 18.5 (0.3) / 18.7 (0.4).] Table 2: The final average rewards and standard deviations on
1511.04636#19
1511.04636#21
1511.04636
[ "1511.04636" ]
1511.04636#21
Deep Reinforcement Learning with a Natural Language Action Space
"Saving John". The PA DQN baseline takes each pair of state-action texts as input, and generates a corresponding Q-value. We use softmax selection, which is widely applied in practice, to trade off exploration vs. exploitation. Specifically, for each experience-replay, we first generate 200 episodes of data (about 3K tuples in "Saving John" and 16K tuples in "Machine of Death") using the softmax selection rule in (2), where we set α = 0.2 for the first game and α = 1.0 for the second game. The α is picked according to an estimate of the range of the optimal Q-values. We then shuffle the generated data tuples (s_t, a_t, r_t, s_{t+1}) and update the model as described in Section 2.4. The model is trained with multiple epochs for all configurations, and is evaluated after each experience-replay. The discount factor γ is set to 0.9. For the DRRN and all baselines, network weights are initialized with small random values. To prevent algorithms from
1511.04636#20
1511.04636#22
1511.04636
[ "1511.04636" ]
1511.04636#22
Deep Reinforcement Learning with a Natural Language Action Space
remember- ingâ state-action ordering and make choices based on action wording, each time the algorithm/player reads text from the simulator, we randomly shuffle the list of actions. This will encourage the algo- rithms to make decisions based on the understand- ing of the texts that describe the states and actions. 3.3. Performance In Figure 4, we show the learning curves of dif- ferent models, where the dimension of the hid- 3When in a specific state, the simulator presents the pos- sible set of actions in random order, i.e. they may appear in a different order the next time a player is in this same state. Eval metric Average reward hidden dimension 20 50 100 Linear 44 (0.4) PA DQN (£ = 1) 2.0(1.5) | 4.014) | 44 (2.0) PA DQN (ZL = 2) 1.5.0) | 45(2.5) | [email protected]) MA DQN(L=1) | 2.9.1) | 4.0 (4.2) 5.9 (2.5) MA DQN (LZ = 2) | 4.93.2) | 9.0.2) | 7.1G.1) DRRN (L = 1) 17.1 (0.6) | 18.3 (0.2) | 18.2 (0.2) DRRN (L = 2) 18.4 (0.1) | 18.5 (0.3) | 18.7 (0.4) Average reward t â 2â DRAN (2-hidden) â 4â DRRN (1-hidden) =o PADON (2-hidden) =o MADON (2-hidden FT ty â st |X LHe A$ Average reward ° â E=DRAN (@hiddeny â aâ DRRN (1-hidden) â o- PA DON (2-hidden) =o MADON (2-hidden) 500 1000 1500 2000 Number of episodes 2500 3000 3500 (a) Game 1: â Saving Johnâ
1511.04636#21
1511.04636#23
1511.04636
[ "1511.04636" ]
1511.04636#23
Deep Reinforcement Learning with a Natural Language Action Space
[Figure 4, panel (b), Game 2: "Machine of Death" — average reward vs. number of episodes (roughly 500–4000) for the same four models.] Figure 4: Learning curves of the two text games. [Table 3 — Final average rewards (standard deviations) on "Machine of Death", by hidden dimension 20 / 50 / 100: Linear 3.3 (1.0); PA DQN (L=1) 0.9 (2.4) / 2.3 (0.9) / 3.1 (1.3); PA DQN (L=2) 1.3 (1.2) / 2.3 (1.6) / 3.4 (1.7); MA DQN (L=1) 2.0 (1.2) / 3.7 (1.6) / 4.8 (2.9); MA DQN (L=2) 2.8 (0.9) / 4.3 (0.9) / 5.2 (1.2); DRRN (L=1) 7.2 (1.5) / 8.4 (1.3) / 8.7 (0.9); DRRN (L=2) 9.2 (2.1) / 10.7 (2.7) / 11.2 (0.6).] Table 3: The final average rewards and standard deviations on
1511.04636#22
1511.04636#24
1511.04636
[ "1511.04636" ]
1511.04636#24
Deep Reinforcement Learning with a Natural Language Action Space
"Machine of Death". For Game 2, due to the complexity of the underlying state transition function, we cannot compute the exact optimal policy score. To provide more insight into the performance, we averaged scores of 8 human players for initial trials (novice) and after gaining experience, yielding scores of -5.5 and 16.0, respectively. The experienced players do outperform our algorithm. The converged performance is higher with two hidden layers for all models. However, deep models also converge more slowly than their 1-hidden-layer versions, as shown for the DRRN in Figure 4.
1511.04636#23
1511.04636#25
1511.04636
[ "1511.04636" ]
1511.04636#25
Deep Reinforcement Learning with a Natural Language Action Space
The error bars are obtained by running 5 independent experiments. The proposed methods and baselines all start at about the same performance (roughly -7 average reward for Game 1, and roughly -8 average reward for Game 2), which is the random-guess policy. After around 4000 episodes of experience-replay training, all methods converge. The DRRN converges much faster than the other three baselines and achieves a higher average reward. We hypothesize this is because the DRRN architecture is better at capturing relevance between state text and action text. The faster convergence for
1511.04636#24
1511.04636#26
1511.04636
[ "1511.04636" ]
1511.04636#26
Deep Reinforcement Learning with a Natural Language Action Space
"Saving John" may be due to the smaller observation space and/or the deterministic nature of its state transitions (in contrast to the stochastic transitions in the other game). Besides an inner product, we also experimented with more complex interaction functions: (a) a bilinear operation with different action-side dimensions; and (b) a nonlinear deep neural network using the concatenated state and action space embeddings as input and trained in an end-to-end fashion to predict Q-values. For different configurations, we fix the state-side embedding to be 100 dimensions and vary the action-side embedding dimensions. The bilinear operation gave similar results, but the concatenation input to a DNN degraded performance. Similar behaviors have been observed on a different task (Luong et al., 2015).
1511.04636#25
1511.04636#27
1511.04636
[ "1511.04636" ]
1511.04636#27
Deep Reinforcement Learning with a Natural Language Action Space
# 3.4 Actions with paraphrased descriptions The final performance (at convergence) for both the baselines and the proposed methods is shown in Tables 2 and 3. We test different model sizes with 20, 50, and 100 dimensions in the hidden layers. The DRRN performs consistently better than all baselines, and often with a lower variance. To investigate how our models handle actions with "unseen" natural language descriptions, we had two people paraphrase all actions in the game "Machine of Death" (used in the testing phase), except a few single-word actions whose synonyms are out-of-vocabulary (OOV). The word-level OOV rate of the paraphrased actions is 18.6%, and the standard 4-gram BLEU score between the paraphrased and original actions is 0.325. [Figure 5 scatterplot: Q-values computed with paraphrased actions (y-axis) vs. with the original actions (x-axis), both roughly in the range -40 to 40, with a fitted line and pR² = 0.95.] Figure 5: Scatterplot and strong correlation between Q-values of paraphrased actions versus original actions. The resulting 153 paraphrased action descriptions are associated with 532 unique state-action pairs. We apply a well-trained 2-layer DRRN model (with hidden dimension 100), and predict Q-values for each state-action pair with fixed model parameters. Figure 5 shows the correlation between Q-values associated with paraphrased actions versus original actions. The predictive R-squared is 0.95, showing a strong positive correlation. We also run the Q-value correlation for the NN interaction and obtain pR² = 0.90. For the baseline MA-DQN and PA-DQN, the corresponding pR² is 0.84 and 0.97, indicating they also have some generalization ability. This is confirmed in the paraphrasing-based experiments too, where the test reward on the paraphrased setup is close to the original setup.
1511.04636#26
1511.04636#28
1511.04636
[ "1511.04636" ]
1511.04636#28
Deep Reinforcement Learning with a Natural Language Action Space
This supports the claim that deep learning is useful in general for this language understanding task, and our findings show that a decoupled architecture most effectively leverages that approach. In Table 4 we provide examples with predicted Q-values of original descriptions and paraphrased descriptions. We also include alternative action descriptions with in-vocabulary words that will lead to positive / negative / irrelevant game development at that particular state. Table 4 shows that actions that are more likely to result in good endings are predicted with high Q-values. This indicates that the DRRN has some generalization ability and gains a useful level of language understanding in [Table 5 — Final average rewards (standard deviations) on the paraphrased game "Machine of Death", by hidden dimension 20 / 50 / 100: PA DQN (L=2) 0.2 (1.2) / 2.6 (1.0) / 3.6 (0.3); MA DQN (L=2) 2.5 (1.3) / 4.0 (0.9) / 5.1 (–); DRRN (L=2) 7.3 (0.7) / 8.3 (0.7) / 10.5 (0.9).] Table 5: The final average rewards and standard deviations on the paraphrased game
1511.04636#27
1511.04636#29
1511.04636
[ "1511.04636" ]
1511.04636#29
Deep Reinforcement Learning with a Natural Language Action Space
"Machine of Death". the game scenario. We use the baseline models and the proposed DRRN model trained with the original action descriptions for "Machine of Death", and test on paraphrased action descriptions. For this game, the underlying state transition mechanism has not changed. The only change to the game interface is that during testing, every time the player reads the actions from the game simulator, it reads the paraphrased descriptions and performs selection based on these paraphrases. Since the texts at test time are "unseen" to the player, a good model needs to have some level of language understanding, while a naive model that memorizes all unique action texts in the original game will do poorly.
1511.04636#28
1511.04636#30
1511.04636
[ "1511.04636" ]
1511.04636#30
Deep Reinforcement Learning with a Natural Language Action Space
The results for these models are shown in Table 5. All methods have a slightly lower average reward in this setting (10.5 vs. 11.2 for the original actions), but the DRRN still gives a high reward and significantly outperforms the other methods. This shows that the DRRN can generalize well to "unseen" natural language descriptions of actions. # 4 Related Work There has been increasing interest in applying deep reinforcement learning to a variety of problems, but only a few studies address problems with natural language state or action spaces. In language processing, reinforcement learning has been applied to a dialogue management system that converses with a human user by taking actions that generate natural language (Scheffler and Young, 2002; Young et al., 2013). There has also been interest in extracting textual knowledge to improve game control performance (Branavan et al., 2011), and in mapping text instructions to sequences of executable actions (Branavan et al., 2009). In some applications, it is possible to manually design features for state-action pairs, which are then used in reinforcement learning to learn a near-optimal policy (Li et al., 2009). Designing such features, however, requires substantial domain knowledge.
1511.04636#29
1511.04636#31
1511.04636
[ "1511.04636" ]
1511.04636#31
Deep Reinforcement Learning with a Natural Language Action Space
Text (with predicted Q-values). State: "As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street." Actions in the original game: "Ignore the alarm of others and continue moving forward." (-21.5); "Look up." (16.6). Paraphrased actions (not original): "Disregard the caution of others and keep pushing ahead." (-11.9); "Turn up and look." (17.5). Positive actions (not original): "Stay there." (2.8); "Stay calmly." (2.0). Negative actions (not original): "Screw it.
1511.04636#30
1511.04636#32
1511.04636
[ "1511.04636" ]
1511.04636#32
Deep Reinforcement Learning with a Natural Language Action Space
I'm going carefully." (-17.4); "Yell at everyone." (-13.5). Irrelevant actions (not original): "Insert a coin." (-1.4); "Throw a coin to the ground." (-3.6). Table 4: Predicted Q-value examples. The work most closely related to our study involves the application of deep reinforcement to learning decision policies for parser-based text games. Narasimhan et al. (2015) applied a Long Short-Term Memory DQN framework, which achieves a higher average reward than the random and Bag-of-Words DQN baselines. In that work, actions are constrained to a set of known fixed command structures (one action and one argument object), based on a limited action-side vocabulary size. The overall action space is defined by the action-argument product space. This pre-specified product space is not feasible for the more complex text strings in other forms of text-based games. Our proposed DRRN, on the other hand, can handle the more complex text strings, as well as parser-based games. In preliminary experiments with the parser-based game from (Narasimhan et al., 2015), we find that the DRRN using a bag-of-words (BOW) input achieves results on par with their BOW DQN. The main advantage of the DRRN is that it can also handle actions described with more complex language. reasonably well. # 5 Conclusion
1511.04636#31
1511.04636#33
1511.04636
[ "1511.04636" ]
1511.04636#33
Deep Reinforcement Learning with a Natural Language Action Space
In this paper we develop a deep reinforcement relevance network, a novel DNN architecture for handling actions described by natural language in decision-making tasks such as text games. We show that the DRRN converges faster and to a better solution for Q-learning than alternative architectures that do not use separate embeddings for the state and action spaces. Future work includes: (i) adding an attention model to robustly analyze which parts of the state/action texts correspond to strategic planning, and (ii) applying the proposed methods to more complex text games or other tasks with actions defined through natural language.
1511.04636#32
1511.04636#34
1511.04636
[ "1511.04636" ]
1511.04636#34
Deep Reinforcement Learning with a Natural Language Action Space
# Acknowledgments We thank Karthik Narasimhan and Tejas Kulka- mi for providing instructions on setting up their parser-based games. The DRRN experiments described here lever- age only a simple bag-of-words representa- tion of phrases and sentences. As observed in (Narasimhan et al., 2015), more complex sentence-based models can give further improve- ments. In preliminary experiments with â Machine of Deathâ , we did not find LSTMs to give im- proved performance, but we conjecture that they would be useful in larger-scale tasks, or when the word embeddings are initialized by training on large data sets.
1511.04636#33
1511.04636#35
1511.04636
[ "1511.04636" ]
1511.04636#35
Deep Reinforcement Learning with a Natural Language Action Space
# References [Adams2014] E. Adams. 2014. Fundamentals of game design. Pearson Education. [Branavan et al.2009] S.R.K. Branavan, H. Chen, L. Zettlemoyer, and R. Barzilay. 2009. Reinforce- ment learning for mapping instructions to actions. In Proc. of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pages 82-90, August. As mentioned earlier, other work has applied deep reinforcement learning to a problem with a continuous action space (Lillicrap et al., 2016). In the DRRN, the action space is inherently discrete, but we learn a continuous representation of it. As indicated by the paraphrasing experiment, the con- tinuous space representation seems to generalize [Branavan et al.2011] S.R.K. Branavan, D. Silver, and R. Barzilay. 2011. Learning to win by reading man- uals in a monte-carlo framework. In Proc. of the An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 268-277. Association for Computational Linguistics. [Collobert and Weston2008] R. Collobert and J. We- ston. 2008. A unified architecture for natural lan- guage processing: Deep neural networks with mul- titask learning. In Proc. of the 25th International Conference on Machine learning, pages 160-167. ACM. [Dahl et al.2012] G. E Dahl, D. Yu, L. Deng, and A. Acero. 2012.
1511.04636#34
1511.04636#36
1511.04636
[ "1511.04636" ]
1511.04636#36
Deep Reinforcement Learning with a Natural Language Action Space
Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Process- ing, IEEE Transactions on, 20(1):30-42. [Hinton et al.2012] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Van- houcke, P. Nguyen, T. N. Sainath, and B.
1511.04636#35
1511.04636#37
1511.04636
[ "1511.04636" ]
1511.04636#37
Deep Reinforcement Learning with a Natural Language Action Space
Kings- bury. 2012. Deep neural networks for acoustic mod- eling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82-97. [Huang et al.2013] P-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. 2013. Learning deep struc- tured semantic models for web search using click- through data. In Proc. of the ACM International Conference on Information & Knowledge Manage- ment, pages 2333-2338. ACM. [Kiros et al.2015] R. Kiros, Y. Zhu, R. R Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S.
1511.04636#36
1511.04636#38
1511.04636
[ "1511.04636" ]
1511.04636#38
Deep Reinforcement Learning with a Natural Language Action Space
Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3276-3284. [Krizhevsky et al.2012] A. Krizhevsky, I. Sutskever, and G. E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â 1105. [Le and Mikolov2014] Q. V Le and T. Mikolov. 2014.
1511.04636#37
1511.04636#39
1511.04636
[ "1511.04636" ]
1511.04636#39
Deep Reinforcement Learning with a Natural Language Action Space
Distributed representations of sentences and docu- ments. In International Conference on Machine Learning. [LeCun et al.2015] Y. LeCun, Y. Bengio, and G. Hin- ton. 2015. Deep learning. Nature, 521(7553):436â 444. [Li et al.2009] L. Li, J. D. Williams, and S. Balakr- ishnan. 2009.
1511.04636#38
1511.04636#40
1511.04636
[ "1511.04636" ]
1511.04636#40
Deep Reinforcement Learning with a Natural Language Action Space
Reinforcement learning for spo- ken dialog management using least-squares _pol- icy iteration and fast feature selection. In Pro- ceedings of the Tenth Annual Conference of the International Speech Communication Association (INTERSPEECH-09), page 24752478. [Lillicrap et al.2016] T. P Lillicrap, J. J Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wier- stra. 2016.
1511.04636#39
1511.04636#41
1511.04636
[ "1511.04636" ]
1511.04636#41
Deep Reinforcement Learning with a Natural Language Action Space
Continuous control with deep rein- forcement learning. In International Conference on Learning Representations. [Lin1993] L-J. Lin. 1993. Reinforcement learning for robots using neural networks. Technical report, DTIC Document. [Luong et al.2015] M-T. Luong, H. Pham, and C. D. Manning. 2015. Effective approaches to attention- based neural machine translation. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Septem- ber. [Mnih et al.2013] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Ried- miller. 2013.
1511.04636#40
1511.04636#42
1511.04636
[ "1511.04636" ]
1511.04636#42
Deep Reinforcement Learning with a Natural Language Action Space
Playing Atari with Deep Reinforce- ment Learning. NIPS Deep Learning Workshop, De- cember. [Mnih et al.2015] V. Mnih, K. Kavukcuoglu, D. Silver, A. A Rusu, J. Veness, M. G Bellemare, A. Graves, M. Riedmiller, A. K Fidjeland, G. Ostrovski, et al. 2015. Human-level control through deep reinforce- ment learning. Nature, 518(7540):529-533. [Narasimhan et al.2015] K. Narasimhan, T. Kulkarni, and R. Barzilay. 2015.
1511.04636#41
1511.04636#43
1511.04636
[ "1511.04636" ]
1511.04636#43
Deep Reinforcement Learning with a Natural Language Action Space
Language understanding for text-based games using deep reinforcement learning. In Proc. of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 1-11, September. [Nogueira and Cho2016] R. Nogueira and K. Cho. 2016. Webnav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv: 1602.02261. [Scheffler and Young2002] K. Scheffler and S. Young. 2002.
1511.04636#42
1511.04636#44
1511.04636
[ "1511.04636" ]
1511.04636#44
Deep Reinforcement Learning with a Natural Language Action Space
Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proc. of the second International Conference on Hu- man Language Technology Research, pages 12-19. Sutskever et al.2014] I. Sutskever, O. Vinyals, and Q. V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112. Sutton and Barto1998] R. S Sutton and A. G Barto. 1998.
1511.04636#43
1511.04636#45
1511.04636
[ "1511.04636" ]
1511.04636#45
Deep Reinforcement Learning with a Natural Language Action Space
Reinforcement learning: An introduction, volume 1. MIT press Cambridge. Tesaurol995] G. Tesauro. 1995. Temporal difference learning and td-gammon. Communications of the ACM, 38(3):58-68. Watkins and Dayanl992] C. JCH Watkins and P. Dayan. 1992. Q-learning. Machine learning, 8(3-4):279-292. Young et al.2013] S. Young, M. Gasic, B. Thomson, and J. D Williams. 2013.
1511.04636#44
1511.04636#46
1511.04636
[ "1511.04636" ]
1511.04636#46
Deep Reinforcement Learning with a Natural Language Action Space
POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. arXiv:1511.04636v5 [cs.AI] 8 Jun 2016 # Supplementary Material for "Deep Reinforcement Learning with a Natural Language Action Space" # A Percentage of Choice-based and Hypertext-based Text Games As shown in Table 1.¹ [Table 1 — Year: 2010 / 2011 / 2012 / 2013 / 2014; Percentage: 7.69% / 7.89% / 25.00% / 55.56% / 61.90%.] Table 1: Percentage of choice-based and hypertext-based text games since 2010, in the archive of interactive fictions
1511.04636#45
1511.04636#47
1511.04636
[ "1511.04636" ]
1511.04636#47
Deep Reinforcement Learning with a Natural Language Action Space
# B Back Propagation Formula for Learning DRRN Let h_{l,s} and h_{l,a} denote the l-th hidden layer for the state and action side neural networks, respectively. For the state side, W_{l,s} and b_{l,s} denote the linear transformation weight matrix and bias vector between the (l−1)-th and l-th hidden layers. For the action side, W_{l,a} and b_{l,a} denote the linear transformation weight matrix and bias vector between the (l−1)-th and l-th hidden layers. The DRRN has L hidden layers on each side.
1511.04636#46
1511.04636#48
1511.04636
[ "1511.04636" ]
1511.04636#48
Deep Reinforcement Learning with a Natural Language Action Space
# Forward: h_{1,s} = f(W_{1,s} s_t + b_{1,s}) (1); h_{1,a}^i = f(W_{1,a} a_t^i + b_{1,a}), i = 1, 2, 3, ..., |A_t| (2); h_{l,s} = f(W_{l−1,s} h_{l−1,s} + b_{l−1,s}), l = 2, 3, ..., L (3); h_{l,a}^i = f(W_{l−1,a} h_{l−1,a}^i + b_{l−1,a}), i = 1, 2, 3, ..., |A_t|, l = 2, 3, ..., L (4); Q(s_t, a_t^i) = h_{L,s}^T h_{L,a}^i (5), where f(·) is the nonlinear activation function at the hidden layers, which is chosen as tanh(x) = (1 − exp(−2x))/(1 + exp(−2x)), and A_t denotes the set of all actions at time t. # Backward: Note that we only back-propagate for actions that are actually taken. More formally, let a_t be the action the DRRN takes at time t, and denote Δ = [Q(s_t, a_t) −
1511.04636#47
1511.04636#49
1511.04636
[ "1511.04636" ]
1511.04636#49
Deep Reinforcement Learning with a Natural Language Action Space
¹Statistics are obtained from http://www.ifarchive.org [Table 2 — Reward / Endings (partially shown): -20 | Suspicion fills my heart and I scream. Is she trying to kill me? I don't trust her one bit...; -10 | Submerged under water once more, I lose all focus...; 0 | Even now, she's there for me. And I have done nothing for her...; 10 | Honest to God, I don't know what I see in her. Looking around, the situation's not so bad...; 20 | Suddenly I can see the sky... I focus on the most important thing - that I'm happy to be alive.]
1511.04636#48
1511.04636#50
1511.04636
[ "1511.04636" ]
1511.04636#50
Deep Reinforcement Learning with a Natural Language Action Space
Table 2: Final rewards defined for the text game "Saving John". (r_t + γ max_a Q(s_{t+1}, a))]² / 2. Denote δ_{l,s} = δb_{l,s} = ∂Q/∂b_{l,s} and δ_{l,a} = δb_{l,a} = ∂Q/∂b_{l,a}, and we have (by following the chain rule): δQ = ∂Δ/∂Q = Q(s_t, a_t) − (r_t + γ max_a Q(s_{t+1}, a)) (6); δ_{L,s} = δQ · h_{L,a} ⊙ (1 − h_{L,s}) ⊙ (1 + h_{L,s}), δ_{l−1,s} = W_{l,s}^T δ_{l,s} ⊙ (1 − h_{l−1,s}) ⊙ (1 + h_{l−1,s}), l = 2, 3, ..., L (7); δ_{L,a} = δQ · h_{L,s} ⊙ (1 − h_{L,a}) ⊙ (1 + h_{L,a}), δ_{l−1,a} = W_{l,a}^T δ_{l,a} ⊙ (1 − h_{l−1,a}) ⊙ (1 + h_{l−1,a}), l = 2, 3, ..., L (8); δW_{1,s} = ∂Q/∂W_{1,s} = δ_{1,s} · s_t^T, δW_{l,s} = ∂Q/∂W_{l,s} = δ_{l,s} · h_{l−1,s}^T, l = 2, 3, ..., L (9); δW_{1,a} = ∂Q/∂W_{1,a} = δ_{1,a} · a_t^T, δW_{l,a} = ∂Q/∂W_{l,a} = δ_{l,a} · h_{l−1,a}^T, l = 2, 3, ..., L (10)
1511.04636#49
1511.04636#51
1511.04636
[ "1511.04636" ]
1511.04636#51
Deep Reinforcement Learning with a Natural Language Action Space
where ⊙ denotes the element-wise Hadamard product. # C Final Rewards in the Two Text Games As shown in Table 2 and Table 3. # D Game 2 Learning curve with shared state and action embedding As shown in Figure 1. For the first 1000 episodes, parameter tying gives faster convergence, but the learning curve also has high variance and is unstable. [Table 3 — Reward / Endings (partially shown): -20 | You spend your last few moments on Earth lying there, shot through the heart, by the image of Jon Bon Jovi.; -20 | you hear Bon Jovi say as the world fades around you.; -20 | As the screams you hear around you slowly fade and your vision begins to blur, you look at the words which ended your life.; -10 | You may be locked away for some time.; -10 | Eventually you're escorted into the back of a police car as Rachel looks on in horror.; -10 | Fate can wait.; -10 | Sadly, you're so distracted with looking up the number that you don't notice the large truck speeding down the street.; -10 | All these hiccups lead to one grand disaster.; 10 | Stay the hell away from me! She blurts as she disappears into the crowd emerging from the bar.; 20 | You can't help but smile.; 20 | Hope you have a good life.; 20 | Congratulations!; 20 | Rachel waves goodbye as you begin the long drive home. After a few minutes, you turn the radio on to break the silence.; 30 | After all, it's your life. It's now or never. You ain't gonna live forever. You just want to live while you're alive.]
1511.04636#50
1511.04636#52
1511.04636
[ "1511.04636" ]
1511.04636#52
Deep Reinforcement Learning with a Natural Language Action Space
Table 3: Final rewards for the text game "Machine of Death." Scores are assigned according to whether the character survives, how the friendship develops, and whether he overcomes his fear. # E Examples of State-Action Pairs in the Two Text Games As shown in Table 4 and Table 5. # F Examples of State-Action Pairs that do not exist in the feasible set As shown in Table 6. [Supplementary Figure 1 plot: average reward vs. number of episodes for DRRN (2-hidden) and
1511.04636#51
1511.04636#53
1511.04636
[ "1511.04636" ]
1511.04636#53
Deep Reinforcement Learning with a Natural Language Action Space
DRRN (2-hidden tying), over 0–4000 episodes.] Figure 1: Learning curves of shared state-action embedding vs. proposed DRRN in Game 2. State / Actions (with Q values). State: A wet strand of hair hinders my vision and I'm back in the water. Sharp pain pierces my lungs. How much longer do I have? 30 seconds? Less? I need to focus. A hand comes into view once more. Actions: I still don't know what to do. (-8.981); Reach for it. (18.005). State: *Me:* Hello (Sent: today) *Cherie:* Hey. Can I call you? (Sent: today). Actions: Reply "I'll call you" (14.569); No (-9.498). State: "You don't
1511.04636#52
1511.04636#54
1511.04636
[ "1511.04636" ]
1511.04636#54
Deep Reinforcement Learning with a Natural Language Action Space
hold any power over me. Not anymore." Lucretia raises one eyebrow. The bar is quiet. "I really wish I did my hair today." She twirls a strand. "I'm sorry," "Save it" //Yellow Submarine plays softly in the background.// I really hate her. "Cherie? It's not her fault." "You'll be sorry," "Please stop screaming." Actions: I laugh and she throws a glass of water in my face. (16.214); I look away and she sips her glass quietly. (-7.986). State: My dad left before I could remember.
1511.04636#53
1511.04636#55
1511.04636
[ "1511.04636" ]
1511.04636#55
Deep Reinforcement Learning with a Natural Language Action Space
My mom worked all the time but she had to take care of her father, my grandpa. The routine was that she had an hour between her morning shift and afternoon shift, where she'd make food for me to bring to pops. He lived three blocks away, in a house with red steps leading up to the metal front door. Inside, the stained yellow wallpaper and rotten oranges reeked of mold. I'd walk by myself to my grandfather's and back. It was lonely sometimes, being a kid and all, but it was nothing I couldn't deal with. It's not like he abused me, I mean it hurt but why wouldn't I fight back? I met Adam on one of these walks. He made me feel stronger, like I can face anything. Actions: Repress this memory (-8.102); Why didn't I fight back? (10.601); Face Cherie (14.583). Table 4: Q values (in parentheses) for state-action pairs from "Saving John", using the trained DRRN. High Q-value actions are more cooperative actions and thus more likely to lead to better endings.
1511.04636#54
1511.04636#56
1511.04636
[ "1511.04636" ]
1511.04636#56
Deep Reinforcement Learning with a Natural Language Action Space
I met Adam on one of these walks. He made me feel stronger, like I can face anything. Repress this memory (-8.102) Why didnâ t I fight back? (10.601) Face Cherie (14.583) Table 4: Q values (in pare ntheses) for state-action pair from â Saving Johnâ , using trained DRRN. High Q-va leading to better endings ue actions are more cooperative actions thus more likely State Actions (with Q values) Peak hour ended an hour or so ago, alleviating the feeling of being a tinned sardine that?s commonly associated with shopping malls, though there are still quite a few people busily bumbling about.
To your left is a fast food restaurant. To the right is a UFO catcher, and a poster is hanging on the wall beside it. Behind you is one of the mall's exits. In front of you stands the Machine. You're carrying 4 dollars in change. | fast food restaurant (1.094) the Machine (3.708) mall's exits (0.900) UFO catcher (2.646) poster (1.062)
You lift the warm mug to your lips and take a small sip of hot tea. | Ask what he was looking for. (3.709) Ask about the blood stains. (7.488) Drink tea. (5.526) Wait. (6.557)
As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street. | Ignore the alarm of others and continue moving forward. (-21.464) Look up. (16.593)
Are you happy? Is this what you want to do? If you didn't avoid that sign, would you be satisfied with how your life had turned out? Sure, you're good at your job and it pays well, but is that all you want from work? If not, maybe it's
time for a change. | Screw it. I'm going to find a new life right now. It's not going to be easy, but it's what I want. (23.205) Maybe one day. But I'm satisfied right now, and I have bills to pay. Keep on going. (One minute) (14.491)
You slam your entire weight against the man, making him stumble backwards and drop the chair to the ground as a group of patrons race to restrain him.
You feel someone grab your arm, and look over to see that it's Rachel. "Let's get out of here," she says while motioning towards the exit. You charge out of the bar and leap back into your car, adrenaline still pumping through your veins. As you slam the door, the glove box pops open and reveals your gun. | Grab it and hide it in your jacket before Rachel can see it. (21.885) Leave it. (1.915)

Table 5: Q values (in parentheses) for state-action pairs from
"Machine of Death", using trained DRRN.

Text (with Q-values)
State: As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street.
Actions that are in the feasible set: Ignore the alarm of others and continue moving forward. (-21.5) Look up. (16.6)
Positive actions that are not in the feasible set: Stay there. (2.8) Stay calmly. (2.0)
Negative actions that are not in the feasible set: Screw it.
I'm going carefully. (-17.4) Yell at everyone. (-13.5)
Irrelevant actions that are not in the feasible set: Insert a coin. (-1.4) Throw a coin to the ground. (-3.6)

Table 6: Q values (in parentheses) for state-action pairs from
"Machine of Death", using trained DRRN, with made-up actions that were not in the feasible set.
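The Q values listed in Tables 4-6 come from scoring each candidate action text against the current state text with separate state and action embedding networks. As a rough illustration only (not this paper's exact architecture, featurizer, or hyperparameters), a per-action scorer with an element-wise (Hadamard) product interaction, as mentioned above, can be sketched as follows; the toy vocabulary, bag-of-words featurizer, and layer sizes are assumptions for the sketch, and training is omitted.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's model): a DRRN-style scorer that
# embeds state text and action text with separate small networks and combines
# them with an element-wise (Hadamard) product followed by a sum, giving Q(s, a).
rng = np.random.default_rng(0)
VOCAB = ["reach", "hand", "water", "call", "reply", "no", "look", "up", "forward"]
HIDDEN = 16

def bow(text):
    """Bag-of-words featurizer over a toy vocabulary (an assumption, not the paper's)."""
    tokens = text.lower().split()
    return np.array([tokens.count(w) for w in VOCAB], dtype=float)

# Separate embedding networks for state and action (one hidden layer each).
W_s, b_s = rng.normal(scale=0.1, size=(HIDDEN, len(VOCAB))), np.zeros(HIDDEN)
W_a, b_a = rng.normal(scale=0.1, size=(HIDDEN, len(VOCAB))), np.zeros(HIDDEN)

def q_value(state_text, action_text):
    h_s = np.tanh(W_s @ bow(state_text) + b_s)   # state embedding
    h_a = np.tanh(W_a @ bow(action_text) + b_a)  # action embedding
    return float(np.sum(h_s * h_a))              # Hadamard interaction, then sum

state = "A hand comes into view once more"
for action in ["Reach for it", "I still do not know what to do"]:
    print(action, q_value(state, action))
```

In the parameter-tying variant compared in Figure 1 ("DRRN (2-hidden tying)"), the action network would simply reuse the state network's weights, i.e. W_a = W_s and b_a = b_s in this sketch.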
# Stacked Attention Networks for Image Question Answering

Zichao Yang1, Xiaodong He2, Jianfeng Gao2, Li Deng2, Alex Smola1 1Carnegie Mellon University, 2Microsoft Research, Redmond, WA 98052, USA [email protected], {xiaohe, jfgao, deng}@microsoft.com, [email protected]

# Abstract
This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use the semantic representation of a question as a query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.
# 1. Introduction

With the recent advancement in computer vision and in natural language processing (NLP), image question answering (QA) becomes one of the most active research areas [7, 21, 18, 1, 19]. Unlike pure language based QA systems that have been studied extensively in the NLP community [28, 14, 4, 31, 3, 32], image QA systems are designed to automatically answer natural language questions according to the content of a reference image.
Figure 1: Model architecture and visualization. (a) Stacked Attention Network for Image QA: the question queries feature vectors of different parts of the image and a softmax layer predicts the answer (question: "What are sitting in the basket on a bicycle?"; answer: "dogs"). (b) Visualization of the learned multiple attention layers (Original Image, First Attention Layer, Second Attention Layer): the stacked attention network first focuses on all referred concepts, e.g., bicycle, basket and objects in the basket (dogs) in the first attention layer and then further narrows down the focus in the second layer and finds out the answer dog.

Most of the recently proposed image QA models are based on neural networks [7, 21, 18, 1, 19]. A commonly used approach was to extract a global image feature vector using a convolution neural network (CNN) [15] and encode the corresponding question as a feature vector using a long short-term memory network (LSTM) [9] and then combine them to infer the answer. Though impressive results have been reported, these models often fail to give precise answers when such answers are related to a set of fine-grained regions in an image.

By examining the image QA data sets, we find that it is often the case that answering a question from an image requires multi-step reasoning. Take the question and image in Fig. 1 as an example. There are several objects in the image: bicycles, window, street, baskets and
dogs. To answer the question what are sitting in the basket on a bicycle, we need to first locate those objects (e.g. basket, bicycle) and concepts (e.g., sitting in) referred in the question, then gradually rule out irrelevant objects, and finally pinpoint the region that is most indicative to infer the answer (i.e., dogs in the example).

In this paper, we propose stacked attention networks (SANs) that allow multi-step reasoning for image QA. SANs can be viewed as an extension of the attention mechanism that has been successfully applied in image captioning [30] and machine translation [2]. The overall architecture of SAN is illustrated in Fig. 1a. The SAN consists of three major components: (1) the image model, which uses
The main contributions of our work are three-fold. First, we propose a stacked attention network for image QA tasks. Second, we perform comprehensive evaluations on four image QA benchmarks, demonstrating that the proposed multiple-layer SAN outperforms previous state-of-the-art approaches by a substantial margin. Third, we perform a detailed analysis where we visualize the outputs of differ- ent attention layers of the SAN and demonstrate the process that the SAN takes multiple steps to progressively focus the attention on the relevant visual clues that lead to the answer.
# 2. Related Work Image QA is closely related to image captioning [5, 30, 6, 27, 12, 10, 20]. In [27], the system ï¬ rst extracted a high level image feature vector from GoogleNet and then fed it into a LSTM to generate captions. The method proposed in [30] went one step further to use an attention mechanism in the caption generation process. Different from [30, 27], the approach proposed in [6] ï¬ rst used a CNN to detect words given the images, then used a maximum entropy language model to generate a list of caption candidates, and ï¬ nally used a deep multimodal similarity model (DMSM) to re- rank the candidates. Instead of using a RNN or a LSTM, the DMSM uses a CNN to model the semantics of captions. Unlike image captioning, in image QA, the question is given and the task is to learn the relevant visual and text rep- resentation to infer the answer. In order to facilitate the re- search of image QA, several data sets have been constructed in [19, 21, 7, 1] either through automatic generation based on image caption data or by human labeling of questions and answers given images. Among them, the image QA data set in [21] is generated based on the COCO caption data set. Given a sentence that describes an image, the au- thors ï¬ rst used a parser to parse the sentence, then replaced the key word in the sentence using question words and the key word became the answer. [7] created an image QA data set through human labeling. The initial version was in Chi- nese and then was translated to English. [1] also created an image QA data set through human labeling. They collected questions and answers not only for real images, but also for abstract scenes. Several image QA models were proposed in the litera- ture. [18] used semantic parsers and image segmentation methods to predict answers based on images and questions. [19, 7] both used encoder-decoder framework to generate answers given images and questions.
They ï¬ rst used a LSTM to encoder the images and questions and then used another LSTM to decode the answers. They both fed the image feature to every LSTM cell. [21] proposed sev- eral neural network based models, including the encoder- decoder based models that use single direction LSTMs and bi-direction LSTMs, respectively. However, the authors found the concatenation of image features and bag of words features worked the best. [1] ï¬ rst encoded questions with LSTMs and then combined question vectors with image vectors by element wise multiplication. [17] used a CNN for question modeling and used convolution operations to combine question vectors and image feature vectors.
We compare the SAN with these models in Sec. 4. To the best of our knowledge, the attention mechanism, which has been proved very successful in image captioning, has not been explored for image QA. The SAN adapt the at- tention mechanism to image QA, and can be viewed as a signiï¬ cant extension to previous models [30] in that multi- ple attention layers are used to support multi-step reasoning for the image QA task. # 3. Stacked Attention Networks (SANs) The overall architecture of the SAN is shown in Fig. 1a. We describe the three major components of SAN in this sec- tion: the image model, the question model, and the stacked attention model. 3.1. Image Model The image model uses a CNN [13, 23, 26] to get the representation of images.
Speciï¬ cally, the VGGNet [23] is used to extract the image feature map fI from a raw image I: 148 â __> 14 si2 14 448 feature map Figure 2: CNN based image model fI = CNNvgg(I). (1) Unlike previous studies [21, 17, 7] that use features from the last inner product layer, we choose the features fI from the last pooling layer, which retains spatial information of the original images. We ï¬ rst rescale the images to be 448 à 448 pixels, and then take the features from the last pooling layer, which therefore have a dimension of 512à 14à 14, as shown in Fig. 2. 14 à 14 is the number of regions in the image and 512 is the dimension of the feature vector for each region. Accordingly, each feature vector in fI corresponds to a 32à 32 pixel region of the input images. We denote by fi, i â [0, 195] the feature vector of each image region. Then for modeling convenience, we use a single layer perceptron to transform each feature vector to a new vec- tor that has the same dimension as the question vector (de- scribed in Sec. 3.2): vI = tanh(WI fI + bI ), (2) where vI is a matrix and its i-th column vi is the visual feature vector for the region indexed by i. # 3.2. Question Model As [25, 22, 6] show that LSTMs and CNNs are powerful to capture the semantic meaning of texts, we explore both models for question representations in this study. # 3.2.1 LSTM based question model A LSTM >| LSTM Pee >| LSTM i ft t We We se We . i f . Question: â
what are bicycle Figure 3: LSTM based question model The essential structure of a LSTM unit is a memory cell ct which reserves the state of a sequence. At each step, the LSTM unit takes one input vector (word vector in our case) xt and updates the memory cell ct, then output a hid- den state ht. The update process uses the gate mechanism. A forget gate ft controls how much information from past state ctâ 1 is preserved. An input gate it controls how much the current input xt updates the memory cell. An output gate ot controls how much information of the memory is fed to the output as hidden state.
The detailed update pro- cess is as follows: it =Ï (Wxixt + Whihtâ 1 + bi), ft =Ï (Wxf xt + Whf htâ 1 + bf ), ot =Ï (Wxoxt + Whohtâ 1 + bo), ct =ftctâ 1 + it tanh(Wxcxt + Whchtâ 1 + bc), ht =ot tanh(ct), where i, f, o, c are input gate, forget gate, output gate and memory cell, respectively.
The weight matrix and bias are parameters of the LSTM and are learned on training data. (3) (4) (5) (6) (7) Given the question q = [q1, ...qT ], where qt is the one hot vector representation of word at position t, we ï¬ rst embed the words to a vector space through an embedding matrix xt = Weqt. Then for every time step, we feed the embed- ding vector of words in the question to LSTM: xt =Weqt, t â {1, 2, ...T }, ht =LSTM(xt), t â {1, 2, ...T }. (8) (9)
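As a rough illustration of the LSTM question model above, the following sketch runs a word-embedding lookup and a single-layer LSTM over a question and takes the final hidden state as the question vector. It is a minimal numpy re-statement of the standard LSTM update, not the paper's code; the toy vocabulary, dimensions, and random parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB = {"what": 0, "are": 1, "sitting": 2, "in": 3, "the": 4, "basket": 5, "on": 6, "a": 7, "bicycle": 8}
emb_dim, hid_dim = 6, 8

W_e = rng.normal(scale=0.1, size=(emb_dim, len(VOCAB)))   # word embedding matrix
Wx = rng.normal(scale=0.1, size=(4 * hid_dim, emb_dim))   # input weights for i, f, o, c
Wh = rng.normal(scale=0.1, size=(4 * hid_dim, hid_dim))   # recurrent weights for i, f, o, c
b = np.zeros(4 * hid_dim)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_question_vector(question):
    h = np.zeros(hid_dim)
    c = np.zeros(hid_dim)
    for word in question.lower().split():
        x = W_e[:, VOCAB[word]]                           # embedding lookup (x_t = W_e q_t)
        z = Wx @ x + Wh @ h + b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)      # input, forget, output gates
        c = f * c + i * np.tanh(g)                        # memory cell update
        h = o * np.tanh(c)                                # hidden state
    return h                                              # question vector = final hidden state

v_Q = lstm_question_vector("what are sitting in the basket on a bicycle")
print(v_Q.shape)
```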
As shown in Fig. 3, the question what are sitting in the basket on a bicycle is fed into the LSTM. Then the ï¬ nal hidden layer is taken as the repre- sentation vector for the question, i.e., vQ = hT . # 3.2.2 CNN based question model ] ] = : : . ms max pooling unigram.* . trigram. | over time * f bigram . convolution embedding 7 â Question. 58 = ) Re F 6 @Q © # Figure 4:
CNN based question model In this study, we also explore to use a CNN similar to [11] for question representation. Similar to the LSTM- based question model, we ï¬ rst embed words to vectors xt = Weqt and get the question vector by concatenating the word vectors: x1:T = [x1, x2, ..., xT ]. (10) Then we apply convolution operation on the word embed- ding vectors. We use three convolution ï¬ lters, which have the size of one (unigram), two (bigram) and three (trigram) respectively. The t-th convolution output using window size c is given by: hc,t = tanh(Wcxt:t+câ 1 + bc). (11) The ï¬ lter is applied only to window t : t + c â 1 of size c. Wc is the convolution weight and bc is the bias. The feature map of the ï¬ lter with convolution size c is given by: hc = [hc,1, hc,2, ..., hc,T â c+1]. (12)
Then we apply max-pooling over the feature maps of the convolution size c and denote it as Ë hc = max [hc,1, hc,2, ..., hc,T â c+1]. t (13) The max-pooling over these vectors is a coordinate-wise max operation. For convolution feature maps of different sizes c = 1, 2, 3, we concatenate them to form the feature representation vector of the whole question sentence: h = [Ë h1, Ë h2, Ë h3], (14) hence vQ = h is the CNN based question vector. The diagram of CNN model for question is shown in Fig. 4. The convolutional and pooling layers for unigrams, bigrams and trigrams are drawn in red, blue and orange, re- spectively. # 3.3. Stacked Attention Networks Given the image feature matrix vI and the question fea- ture vector vQ, SAN predicts the answer via multi-step rea- soning. In many cases, an answer only related to a small region of an image. For example, in Fig. 1b, although there are multiple objects in the image: bicycles, baskets, window, street and dogs and the answer to the ques- tion only relates to dogs. Therefore, using the one global image feature vector to predict the answer could lead to sub- optimal results due to the noises introduced from regions that are irrelevant to the potential answer. Instead, reason- ing via multiple attention layers progressively, the SAN are able to gradually ï¬ lter out noises and pinpoint the regions that are highly relevant to the answer. Given the image feature matrix vI and the question vec- tor vQ, we ï¬ rst feed them through a single layer neural net- work and then a softmax function to generate the attention distribution over the regions of the image: hA = tanh(WI,AvI â (WQ,AvQ + bA)), pI =softmax(WP hA + bP ),
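A minimal sketch of this CNN question model: unigram, bigram, and trigram convolutions over the word embeddings, max-pooling over time for each filter size, and concatenation into the question vector. The filter counts and dimensions below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
emb_dim, n_filters = 6, 4          # assumed sizes (the paper uses larger filter banks)
T = 9                              # question length in words

x = rng.normal(size=(emb_dim, T))  # stand-in word embedding vectors x_1..x_T

def conv_max_pool(x, c, W, b):
    """Convolution with window size c followed by max-pooling over time."""
    T = x.shape[1]
    feats = []
    for t in range(T - c + 1):
        window = x[:, t:t + c].reshape(-1)        # x_{t:t+c-1} flattened
        feats.append(np.tanh(W @ window + b))     # h_{c,t}
    return np.max(np.stack(feats), axis=0)        # coordinate-wise max over time

h_parts = []
for c in (1, 2, 3):                               # unigram, bigram, trigram filters
    W = rng.normal(scale=0.1, size=(n_filters, emb_dim * c))
    b = np.zeros(n_filters)
    h_parts.append(conv_max_pool(x, c, W, b))

v_Q = np.concatenate(h_parts)                     # concatenated feature vector for the question
print(v_Q.shape)                                  # 3 * n_filters
```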
(15) (16) where vI â Rdà m, d is the image representation dimen- sion and m is the number of image regions, vQ â Rd is a d dimensional vector. Suppose WI,A, WQ,A â Rkà d and WP â R1à k, then pI â Rm is an m dimensional vector, which corresponds to the attention probability of each im- age region given vQ. Note that we denote by â the addition of a matrix and a vector. Since WI,AvI â Rkà m and both WQ,AvQ, bA â Rk are vectors, the addition between a ma- trix and a vector is performed by adding each column of the matrix by the vector. Based on the attention distribution, we calculate the weighted sum of the image vectors, each from a region, Ë vi as in Eq. 17. We then combine Ë vi with the question vec- tor vQ to form a reï¬ ned query vector u as in Eq. 18. u is regarded as a reï¬ ned query since it encodes both question information and the visual information that is relevant to the potential answer: 1 =o ini, (17) i i u =Ë vI + vQ. (18) Compared to models that simply combine the ques- tion vector and the global image vector, attention mod- els construct a more informative u since higher weights are put on the visual regions that are more relevant to the question. However, for complicated questions, a sin- gle attention layer is not sufï¬ cient to locate the correct region for answer prediction. For example, the question in Fig. 1 what are sitting in the basket on a bicycle refers to some subtle relationships among multiple objects in an image. Therefore, we iterate the above query-attention process using multiple attention lay- ers, each extracting more ï¬ ne-grained visual attention infor- mation for answer prediction. Formally, the SANs take the following formula: for the k-th attention layer, we compute: A)), (19) A = tanh(W k hk I =softmax(W k pk # Q,Aukâ 1 + bk I,AvI â (W k A + bk P ). P hk (20)
where u0 is initialized to be vQ. Then the aggregated image feature vector is added to the previous query vector to form a new query vector: Ë vk I = pk i vi, (21) i I + ukâ 1. uk =Ë vk (22) That is, in every layer, we use the combined question and image vector ukâ 1 as the query for the image. After the image region is picked, we update the new query vector as I + ukâ 1. We repeat this K times and then use the uk = Ë vk ï¬ nal uK to infer the answer:
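Since the extraction garbles these equations, here they are restated in LaTeX from the surrounding definitions (the attention over regions, the attention-weighted image vector, and the refined query); the notation follows the text above.

```latex
\begin{align}
h_A &= \tanh\!\big(W_{I,A} v_I \oplus (W_{Q,A} v_Q + b_A)\big), \tag{15}\\
p_I &= \mathrm{softmax}\!\big(W_P h_A + b_P\big), \tag{16}\\
\tilde{v}_I &= \sum_i p_i v_i, \tag{17}\\
u &= \tilde{v}_I + v_Q. \tag{18}
\end{align}
```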
pans =softmax(WuuK + bu). (23) Fig. 1b illustrates the reasoning process by an exam- ple. In the ï¬ rst attention layer, the model identiï¬ es roughly the area that are relevant to basket, bicycle, and sitting in. In the second attention layer, the model fo- cuses more sharply on the region that corresponds to the answer dogs. More examples can be found in Sec. 4. # 4. Experiments # 4.1. Data sets We evaluate the SAN on four image QA data sets. DAQUAR-ALL is proposed in [18].
There are 6, 795 training questions and 5, 673 test questions. These ques- tions are generated on 795 and 654 images respectively. The images are mainly indoor scenes. The questions are catego- rized into three types including Object, Color and Number. Most of the answers are single words. Following the setting in [21, 17, 19], we exclude data samples that have multiple words answers. The remaining data set covers 90% of the original data set. reduced version of DAQUAR-ALL. There are 3, 876 training samples and 297 test samples. This data set is constrained to 37 object categories and uses only 25 test images. The single word answers data set covers 98% of the original data set. COCO-QA is proposed in [21]. Based on the Microsoft COCO data set, the authors ï¬ rst parse the caption of the im- age with an off-the-shelf parser, then replace the key com- ponents in the caption with question words for form ques- tions. There are 78736 training samples and 38948 test sam- ples in the data set. These questions are based on 8, 000 and 4, 000 images respectively. There are four types of ques- tions including Object, Number, Color, and Location. Each type takes 70%, 7%, 17%, and 6% of the whole data set, respectively. All answers in this data set are single word. VQA is created through human labeling [1]. The data set uses images in the COCO image caption data set [16]. Unlike the other data sets, for each image, there are three questions and for each question, there are ten answers la- beled by human annotators. There are 248, 349 training questions and 121, 512 validation questions in the data set. Following [1], we use the top 1000 most frequent answer as possible outputs and this set of answers covers 82.67% of all answers.
We ï¬ rst studied the performance of the pro- posed model on the validation set. Following [6], we split the validation data set into two halves, val1 and val2. We use training set and val1 to train and validate and val2 to test locally. The results on the val2 set are reported in Ta- ble. 6. We also evaluated the best model, SAN(2, CNN), on the standard test server as provided in [1] and report the results in Table. 5. # 4.2. Baselines and evaluation methods We compare our models with a set of baselines proposed recently [21, 1, 18, 19, 17] on image QA. Since the results of these baselines are reported on different data sets in dif- ferent literature, we present the experimental results on dif- ferent data sets in different tables. For all four data sets, we formulate image QA as a clas- siï¬ cation problem since most of answers are single words. We evaluate the model using classiï¬ cation accuracy as re- ported in [1, 21, 19]. The reference models also report the Wu-Palmer similarity (WUPS) measure [29]. The WUPS measure calculates the similarity between two words based on their longest common subsequence in the taxonomy tree. We can set a threshold for WUPS, if the similarity is less than the threshold, then it is zeroed out.
Following the refer- ence models, we use WUPS0.9 and WUPS0.0 as evaluation metrics besides the classiï¬ cation accuracy. The evaluation on the VQA data set is different from other three data sets, since for each question there are ten answer labels that may or may not be the same. We follow [1] to use the following metric: min(# human labels that match that answer/3, 1), which basically gives full credit to the answer when three or more of the ten human labels match the answer and gives partial credit if there are less matches. # 4.3. Model conï¬ guration and training For the image model, we use the VGGNet to extract fea- tures. When training the SAN, the parameter set of the CNN of the VGGNet is ï¬
xed. We take the output from the last pooling layer as our image feature which has a dimension of 512 à 14 à 14 . For DAQUAR and COCO-QA, we set the word embed- ding dimension and LSTMâ s dimension to be 500 in the question model. For the CNN based question model, we set the unigram, bigram and trigram convolution ï¬ lter size to be 128, 256, 256 respectively. The combination of these ï¬ lters makes the question vector size to be 640. For VQA dataset, since it is larger than other data sets, we double the model size of the LSTM and the CNN to accommodate the large data set and the large number of classes. In evaluation, we experiment with SAN with one and two attention layers.
We ï¬ nd that using three or more attention layers does not further improve the performance. In our experiments, all the models are trained using stochastic gradient descent with momentum 0.9. The batch size is ï¬ xed to be 100. The best learning rate is picked using grid search. Gradient clipping technique [8] and dropout [24] are used. # 4.4. Results and analysis The experimental results on DAQUAR-ALL, DAQUAR- REDUCED, COCO-QA and VQA are presented in Table. 1 to 6 respectively. Our model names explain their settings: SAN is short for the proposed stacked attention networks, the value 1 or 2 in the brackets refer to using one or two attention layers, respectively. The keyword LSTM or CNN refers to the question model that SANs use. The experimental results in Table. 1 to 6 show that the two-layer SAN gives the best results across all data sets and the two kinds of question models in the SAN, LSTM and CNN, give similar performance. For example, on DAQUAR-ALL (Table. 1), both of the proposed two- layer SANs outperform the two best baselines, the IMG- CNN in [17] and the Ask-Your-Neuron in [19], by 5.9% and 7.6% absolute in accuracy, respectively. Similar range of improvements are observed in metrics of WUPS0.9 and WUPS0.0. We also observe signiï¬ cant improvements on DAQUAR-REDUCED (Table. 2), i.e., our SAN(2, LSTM) Methods Accuracy WUPS0.9 WUPS0.0 Multi-World: [18] Multi-World 7.9 11.9 38.8 Ask-Your-Neurons: [19] Language Language + IMG CNN: [17] IMG-CNN 19.1 21.7 23.4 25.2 28.0 29.6 65.1 65.0 63.0 Ours:
SAN(1, LSTM) SAN(1, CNN) SAN(2, LSTM) SAN(2, CNN) 28.9 29.2 29.3 29.3 34.7 35.1 34.9 35.1 68.5 67.8 68.1 68.6 Human :[18] Human 50.2 50.8 67.3 Table 1: DAQUAR-ALL results, in percentage Methods Accuracy WUPS0.9 WUPS0.0 Multi-World: [18] Multi-World 12.7 18.2 51.5 Ask-Your-Neurons: [19] Language Language + IMG 31.7 34.7 38.4 40.8 80.1 79.5 VSE: [21] GUESS BOW LSTM IMG+BOW VIS+LSTM 2-VIS+BLSTM 18.2 32.7 32.7 34.2 34.4 35.8 29.7 43.2 43.5 45.0 46.1 46.8 77.6 81.3 81.6 81.5 82.2 82.2 CNN: [17] IMG-CNN 39.7 44.9 83.1 Ours: SAN(1, LSTM) SAN(1, CNN) SAN(2, LSTM) SAN(2, CNN) 45.2 45.2 46.2 45.5 49.6 49.6 51.2 50.2 84.0 83.7 85.1 83.6 Human :[18] Human 60.3 61.0 79.0 # Table 2: DAQUAR-REDUCED results, in percentage outperforms the IMG-CNN [17], the 2-VIS+BLSTM [21], the Ask-Your-Neurons approach [19] and the Multi-World [18] by 6.5%, 10.4%, 11.5% and 33.5% absolute in accu- racy, respectively. On the larger COCO-QA data set, the proposed two-layer SANs signiï¬
cantly outperform the best baselines from [17] (IMG-CNN) and [21] (IMG+BOW and 2-VIS+BLSTM) by 5.1% and 6.6% in accuracy (Table. 3). Methods VSE: [21] GUESS BOW LSTM IMG IMG+BOW VIS+LSTM 2-VIS+BLSTM 6.7 37.5 36.8 43.0 55.9 53.3 55.1 17.4 48.5 47.6 58.6 66.8 63.9 65.3 73.4 82.8 82.3 85.9 89.0 88.3 88.6 CNN: [17] IMG-CNN CNN 55.0 32.7 65.4 44.3 88.6 80.9 Ours: SAN(1, LSTM) SAN(1, CNN) SAN(2, LSTM) SAN(2, CNN) 59.6 60.7 61.0 61.6 69.6 70.6 71.0 71.6 90.1 90.5 90.7 90.9 Table 3: COCO-QA results, in percentage Methods VSE: [21] GUESS BOW LSTM IMG IMG+BOW VIS+LSTM 2-VIS+BLSTM 2.1 37.3 35.9 40.4 58.7 56.5 58.2 35.8 43.6 45.3 29.3 44.1 46.1 44.8 13.9 34.8 36.3 42.7 52.0 45.9 49.5 8.9 40.8 38.4 44.2 49.4 45.5 47.3 Ours:
SAN(1, LSTM) SAN(1, CNN) SAN(2, LSTM) SAN(2, CNN) 62.5 63.6 63.6 64.5 49.0 48.7 49.8 48.6 54.8 56.7 57.9 57.9 51.6 52.7 52.8 54.0 # Table 4: COCO-QA accuracy per class, in percentage test-dev test-std Methods All Yes/No Number Other All VQA: [1] Question Image Q+I LSTM Q LSTM Q+I 48.1 28.1 52.6 48.8 53.7 75.7 64.0 75.6 78.2 78.9 36.7 0.4 33.7 35.7 35.2 27.1 3.8 37.4 26.6 36.4 - - - - 54.1 SAN(2, CNN) 58.7 79.3 36.6 46.1 58.9 Table 5: VQA results on the ofï¬ cial server, in percentage Table. 5 summarizes the performance of various models on VQA, which is the largest among the four data sets. The overall results show that our best model, SAN(2, CNN), All Yes/No 36% Number 10% Other 54% 56.6 56.9 57.3 57.6 78.1 78.8 78.3 78.6 41.6 42.0 42.2 41.8 44.8 45.0 45.9 46.4 Table 6: VQA results on our partition, in percentage outperforms the LSTM Q+I model, the best baseline from [1], by 4.8% absolute. The superior performance of the SANs across all four benchmarks demonstrate the effective- ness of using multiple layers of attention. In order to study the strength and weakness of the SAN in detail, we report performance at the question-type level on the two large data sets, COCO-QA and VQA, in Ta- ble. 4 and 5, respectively.
We observe that on COCO- QA, compared to the two best baselines, IMG+BOW and 2-VIS+BLSTM, out best model SAN(2, CNN) improves 7.2% in the question type of Color, followed by 6.1% in Objects, 5.7% in Location and 4.2% in Number. We ob- serve similar trend of improvements on VQA. As shown in Table. 5, compared to the best baseline LSTM Q+I, the biggest improvement of SAN(2, CNN) is in the Other type, 9.7%, followed by the 1.4% improvement in Number and 0.4% improvement in Yes/No. Note that the Other type in VQA refers to questions that usually have the form of â what color, what kind, what are, what type, whereâ etc., which are similar to question types of Color, Objects and Loca- tion in COCO-QA. The VQA data set has a special Yes/No type of questions. The SAN only improves the performance of this type of questions slightly. This could due to that the answer for a Yes/No question is very question dependent, so better modeling of the visual information does not provide much additional gains.
This also conï¬ rms the similar ob- servation reported in [1], e.g., using additional image infor- mation only slightly improves the performance in Yes/No, as shown in Table. 5, Q+I vs Question, and LSTM Q+I vs LSTM Q. Our results demonstrate clearly the positive impact of using multiple attention layers. In all four data sets, two- layer SANs always perform better than the one-layer SAN. Speciï¬ cally, on COCO-QA, on average the two-layer SANs outperform the one-layer SANs by 2.2% in the type of Color, followed by 1.3% and 1.0% in the Location and Ob- jects categories, and then 0.4% in Number. This aligns to the order of the improvements of the SAN over baselines. Similar trends are observed on VQA (Table. 6), e.g., the two-layer SAN improve over the one-layer SAN by 1.4% for the Other type of question, followed by 0.2% improve- ment for Number, and ï¬ at for Yes/No. # 4.5.
Visualization of attention layers In this section, we present analysis to demonstrate that using multiple attention layers to perform multi-step rea- soning leads to more ï¬ ne-grained attention layer-by-layer in locating the regions that are relevant to the potential an- swers. We do so by visualizing the outputs of the atten- tion layers of a sample set of images from the COCO-QA test set. Note the attention probability distribution is of size 14 à 14 and the original image is 448 à 448, we up-sample the attention probability distribution and apply a Gaussian ï¬ lter to make it the same size as the original image.
Fig. 5 presents six examples. More examples are pre- sented in the appendix. They cover types as broad as Object, Numbers, Color and Location. For each example, the three images from left to right are the original image, the output of the ï¬ rst attention layer and the output of the second at- tention layer, respectively. The bright part of the image is the detected attention. Across all those examples, we see that in the ï¬ rst attention layer, the attention is scattered on many objects in the image, largely corresponds to the ob- jects and concepts referred in the question, whereas in the second layer, the attention is far more focused on the re- gions that lead to the correct answer. For example, consider the question what is the color of the horns, which asks the color of the horn on the womanâ s head in Fig. 5(f). In the output of the ï¬ rst attention layer, the model ï¬ rst recognizes a woman in the image. In the output of the second attention layer, the attention is focused on the head of the woman, which leads to the answer of the question: the color of the horn is red.
# 4.6. Errors analysis We randomly sample 100 images from the COCO-QA test set that the SAN make mistakes. We group the errors into four categories: (i) the SANs focus the attention on the wrong regions (22%), e.g., the example in Fig. 6(a); (ii) the SANs focus on the right region but predict a wrong answer (42%), e.g., the examples in Fig. 6(b)(c)(d); (iii) the answer is ambiguous, the SANs give answers that are different from labels, but might be acceptable (31%). E.g., in Fig. 6(e), the answer label is pot, but out model predicts vase, which is also visually reasonable; (iv) the labels are clearly wrong (5%). E.g., in Fig. 6(f), our model gives the correct answer trains while the label cars is wrong. 5.
Conclusion In this paper, we propose a new stacked attention net- work (SAN) for image QA. SAN uses a multiple-layer at- tention mechanism that queries an image multiple times to locate the relevant visual region and to infer the answer pro- gressively. Experimental results demonstrate that the pro- posed SAN signiï¬ cantly outperforms previous state-of-the- art approaches by a substantial margin on all four image QA (a) What are pulling aman on a wagon down on dirt road? (b) What is the color of the box 2 Answer: horses Prediction: horses Answer: red Prediction: red What next to the large umbrella attached to a table? (d ) How many people are going up the mountain with walking sticks? (c) Answer: trees Prediction: tree Answer: four Prediction: four ee" (e) What is sitting on the handle bar of a bicycle? (f) What is the color of the horns? Answer: bird Prediction: bird Answer: red Prediction: red Original Image First Attention Layer Second Attention Layer Original Image First Attention Layer Second Attention Layer Figure 5: Visualization of two attention layers
data sets.

Figure 6: Examples of mistakes. (a) What swim in the ocean near two large ferries? Answer: ducks. Prediction: boats. (b) What is the color of the shirt? Answer: purple. Prediction: green. (c) What is the young woman eating? Answer: banana. Prediction: donut. (d) How many umbrellas with various patterns? Answer: three. Prediction: two. (e) The very old looking what is on display? Answer: pot. Prediction: vase. (f) What are passing underneath the walkway bridge? Answer: cars. Prediction: trains. (Each example shows the Original Image, First Attention Layer, and Second Attention Layer.)