Dataset columns (one row per chunk):

| Column | Type |
|---|---|
| doi | string (length 10) |
| chunk-id | int64 (0 to 936) |
| chunk | string (401 to 2.02k chars) |
| id | string (12 to 14 chars) |
| title | string (8 to 162 chars) |
| summary | string (228 to 1.92k chars) |
| source | string (length 31) |
| authors | string (7 to 6.97k chars) |
| categories | string (5 to 107 chars) |
| comment | string (4 to 398 chars), nullable |
| journal_ref | string (8 to 194 chars), nullable |
| primary_category | string (5 to 17 chars) |
| published | string (length 8) |
| updated | string (length 8) |
| references | list |
1511.04636 | 46 |
[Lin1993] L.-J. Lin. 1993. Reinforcement learning for robots using neural networks. Technical report, DTIC Document.
[Luong et al.2015] M.-T. Luong, H. Pham, and C. D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, September.
[Mnih et al.2013] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning. NIPS Deep Learning Workshop, December.
[Mnih et al.2015] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.

id: 1511.04636#46
title: Deep Reinforcement Learning with a Natural Language Action Space
summary: This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Q-learning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.
source: http://arxiv.org/pdf/1511.04636
authors: Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf
categories: cs.AI, cs.CL, cs.LG
comment: accepted by ACL 2016
journal_ref: null
primary_category: cs.AI
published: 20151114
updated: 20160608
references: [{"id": "1511.04636"}]
1511.04636 | 47 |
[Narasimhan et al.2015] K. Narasimhan, T. Kulkarni, and R. Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. In Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1-11, September.
[Nogueira and Cho2016] R. Nogueira and K. Cho. 2016. WebNav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261.
[Scheffler and Young2002] K. Scheffler and S. Young. 2002. Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proc. of the second International Conference on Human Language Technology Research, pages 12-19.
[Sutskever et al.2014] I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
[Sutton and Barto1998] R. S. Sutton and A. G. Barto. 1998. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge.
1511.04636 | 48 |
[Tesauro1995] G. Tesauro. 1995. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58-68.
[Watkins and Dayan1992] C. J. C. H. Watkins and P. Dayan. 1992. Q-learning. Machine Learning, 8(3-4):279-292.
[Young et al.2013] S. Young, M. Gasic, B. Thomson, and J. D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179.
arXiv:1511.04636v5 [cs.AI] 8 Jun 2016
# Supplementary Material for "Deep Reinforcement Learning with a Natural Language Action Space"
# A Percentage of Choice-based and Hypertext-based Text Games
As shown in Table 1.¹
| Year | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|
| Percentage | 7.69% | 7.89% | 25.00% | 55.56% | 61.90% |
Table 1: Percentage of choice-based and hypertext-based text games since 2010, in the archive of interactive fiction
# B Back Propagation Formula for Learning DRRN
1511.04636 | 49 |
Let $h_{l,s}$ and $h_{l,a}$ denote the $l$-th hidden layer of the state-side and action-side neural networks, respectively. For the state side, $W_{l,s}$ and $b_{l,s}$ denote the linear transformation weight matrix and bias vector between the $(l-1)$-th and $l$-th hidden layers; for the action side, $W_{l,a}$ and $b_{l,a}$ denote the corresponding weight matrix and bias vector. The DRRN has $L$ hidden layers on each side.
# Forward:
$$h_{1,s} = f(W_{1,s}\, s_t + b_{1,s}) \tag{1}$$

$$h_{1,a_i} = f(W_{1,a}\, a_i + b_{1,a}), \quad i = 1, 2, 3, \ldots, |\mathcal{A}_t| \tag{2}$$

$$h_{l,s} = f(W_{l,s}\, h_{l-1,s} + b_{l,s}), \quad l = 2, 3, \ldots, L \tag{3}$$

$$h_{l,a_i} = f(W_{l,a}\, h_{l-1,a_i} + b_{l,a}), \quad i = 1, 2, 3, \ldots, |\mathcal{A}_t|, \; l = 2, 3, \ldots, L \tag{4}$$

$$Q(s_t, a_i) = h_{L,s}^\top\, h_{L,a_i} \tag{5}$$
1511.04636 | 50 |
where $f(\cdot)$ is the nonlinear activation function at the hidden layers, chosen as $\tanh(x) = (1 - \exp(-2x))/(1 + \exp(-2x))$, and $\mathcal{A}_t$ denotes the set of all actions at time $t$.
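A minimal NumPy sketch of this forward pass (Eqs. 1-5). The layer sizes, random parameters, and input features below are illustrative assumptions, not the paper's configuration:

```python
# Forward pass of the DRRN: two separate tanh networks for the state and the
# feasible actions, combined by an inner product to produce Q values.
import numpy as np

def drrn_forward(s_t, actions, Ws, bs, Wa, ba):
    """s_t: state feature vector; actions: list of action feature vectors.
    Ws/bs and Wa/ba: per-layer weights and biases of the state-side and
    action-side networks (lists of length L). Returns Q(s_t, a_i) per action."""
    h_s = s_t
    for W, b in zip(Ws, bs):                 # state side, Eqs. (1) and (3)
        h_s = np.tanh(W @ h_s + b)
    q_values = []
    for a in actions:                        # one pass per feasible action
        h_a = a
        for W, b in zip(Wa, ba):             # action side, Eqs. (2) and (4)
            h_a = np.tanh(W @ h_a + b)
        q_values.append(h_s @ h_a)           # inner-product interaction, Eq. (5)
    return np.array(q_values)

# Toy usage: 2 hidden layers per side, 20-dimensional bag-of-words-like inputs.
rng = np.random.default_rng(0)
dims = [20, 16, 8]
Ws = [rng.normal(0, 0.1, (dims[i + 1], dims[i])) for i in range(2)]
bs = [np.zeros(dims[i + 1]) for i in range(2)]
Wa = [rng.normal(0, 0.1, (dims[i + 1], dims[i])) for i in range(2)]
ba = [np.zeros(dims[i + 1]) for i in range(2)]
q = drrn_forward(rng.normal(size=20),
                 [rng.normal(size=20) for _ in range(3)], Ws, bs, Wa, ba)
print(q)  # one Q value per feasible action
```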
# Backward:
Note that we only back-propagate for actions that are actually taken. More formally, let $a_t$ be the action the DRRN takes at time $t$, and denote $\Delta = \left[Q(s_t, a_t) - \left(r_t + \gamma \max_a Q(s_{t+1}, a)\right)\right]^2 / 2$.
¹ Statistics are obtained from http://www.ifarchive.org
| Reward | Endings (partially shown) |
|---|---|
| -20 | Suspicion fills my heart and I scream. Is she trying to kill me? I don't trust her one bit... |
| -10 | Submerged under water once more, I lose all focus... |
| 0 | Even now, she's there for me. And I have done nothing for her... |
| 10 | Honest to God, I don't know what I see in her. Looking around, the situation's not so bad... |
| 20 | Suddenly I can see the sky... I focus on the most important thing - that I'm happy to be alive. |

Table 2: Final rewards defined for the text game "Saving John"
1511.04636 | 51 |
Denote $\delta_{l,s} = \delta b_{l,s} = \partial \Delta / \partial b_{l,s}$ and $\delta_{l,a} = \delta b_{l,a} = \partial \Delta / \partial b_{l,a}$; then, by the chain rule:

$$\delta_Q = \frac{\partial \Delta}{\partial Q} = Q(s_t, a_t) - \left(r_t + \gamma \max_a Q(s_{t+1}, a)\right) \tag{6}$$

$$\delta_{L,s} = \delta_Q \cdot h_{L,a} \odot (1 - h_{L,s}) \odot (1 + h_{L,s}), \qquad \delta_{l-1,s} = W_{l,s}^\top \delta_{l,s} \odot (1 - h_{l-1,s}) \odot (1 + h_{l-1,s}), \quad l = 2, 3, \ldots, L \tag{7}$$

$$\delta_{L,a} = \delta_Q \cdot h_{L,s} \odot (1 - h_{L,a}) \odot (1 + h_{L,a}), \qquad \delta_{l-1,a} = W_{l,a}^\top \delta_{l,a} \odot (1 - h_{l-1,a}) \odot (1 + h_{l-1,a}), \quad l = 2, 3, \ldots, L \tag{8}$$

$$\delta W_{1,s} = \partial \Delta / \partial W_{1,s} = \delta_{1,s} \cdot s_t^\top, \qquad \delta W_{l,s} = \partial \Delta / \partial W_{l,s} = \delta_{l,s} \cdot h_{l-1,s}^\top, \quad l = 2, 3, \ldots, L \tag{9}$$
$$\delta W_{1,a} = \partial \Delta / \partial W_{1,a} = \delta_{1,a} \cdot a_t^\top, \qquad \delta W_{l,a} = \partial \Delta / \partial W_{l,a} = \delta_{l,a} \cdot h_{l-1,a}^\top, \quad l = 2, 3, \ldots, L \tag{10}$$
1511.04636 | 52 |
where $\odot$ denotes the element-wise Hadamard product.
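These updates are ordinary backpropagation through two tanh networks coupled by an inner product. A minimal NumPy sketch, assuming the forward activations were cached and using zero-based weight indexing (Ws[l] maps layer l to layer l+1), as in the forward sketch above:

```python
# Backward pass (Eqs. 6-10) for one state-action pair; the bias gradients
# are the deltas themselves. An illustrative sketch, not the authors' code.
import numpy as np

def drrn_backward(h_s, h_a, Ws, Wa, q_sa, target):
    """h_s/h_a: lists [input, h_1, ..., h_L] for the taken action only.
    target = r_t + gamma * max_a Q(s_{t+1}, a). Returns weight gradients."""
    L = len(Ws)
    delta_q = q_sa - target                                   # Eq. (6)
    # Top-layer deltas; (1 - h) * (1 + h) is the tanh derivative 1 - h^2.
    d_s = delta_q * h_a[L] * (1 - h_s[L]) * (1 + h_s[L])      # Eq. (7)
    d_a = delta_q * h_s[L] * (1 - h_a[L]) * (1 + h_a[L])      # Eq. (8)
    grads_Ws, grads_Wa = [None] * L, [None] * L
    for l in range(L, 0, -1):
        grads_Ws[l - 1] = np.outer(d_s, h_s[l - 1])           # Eq. (9)
        grads_Wa[l - 1] = np.outer(d_a, h_a[l - 1])           # Eq. (10)
        if l > 1:                                             # propagate deltas down
            d_s = (Ws[l - 1].T @ d_s) * (1 - h_s[l - 1]) * (1 + h_s[l - 1])
            d_a = (Wa[l - 1].T @ d_a) * (1 - h_a[l - 1]) * (1 + h_a[l - 1])
    return grads_Ws, grads_Wa
```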
# C Final Rewards in the Two Text Games
As shown in Table 2 and Table 3.
# D Game 2 Learning curve with shared state and action embedding
As shown in Figure 1. For the first 1000 episodes, parameter tying gives faster convergence, but the learning curve also has higher variance and is less stable.
1511.04636 | 53 |
| Reward | Endings (partially shown) |
|---|---|
| -20 | You spend your last few moments on Earth lying there, shot through the heart, by the image of Jon Bon Jovi. |
| -20 | you hear Bon Jovi say as the world fades around you. |
| -20 | As the screams you hear around you slowly fade and your vision begins to blur, you look at the words which ended your life. |
| -10 | You may be locked away for some time. |
| -10 | Eventually you're escorted into the back of a police car as Rachel looks on in horror. |
| -10 | Fate can wait. |
| -10 | Sadly, you're so distracted with looking up the number that you don't notice the large truck speeding down the street. |
| -10 | All these hiccups lead to one grand disaster. |
| 10 | "Stay the hell away from me!" she blurts as she disappears into the crowd emerging from the bar. |
| 20 | You can't help but smile. |
| 20 | Hope you have a good life. |
| 20 | Congratulations! |
| 20 | Rachel waves goodbye as you begin the long drive home. After a few minutes, you turn the radio on to break the silence. |
| 30 | After all, it's your life. It's now or never. You ain't gonna live forever. You just want to live while you're alive. |
1511.04636 | 54 |
Table 3: Final rewards for the text game "Machine of Death." Scores are assigned according to whether the character survives, how the friendship develops, and whether he overcomes his fear.
# E Examples of State-Action Pairs in the Two Text Games
As shown in Table 4 and Table 5.
# F Examples of State-Action Pairs that do not exist in the feasible set
As shown in Table 6.
[Figure 1: plot of average reward versus number of episodes (0-4000) for DRRN (2-hidden) and DRRN (2-hidden tying)]
Figure 1: Learning curves of shared state-action embedding vs. proposed DRRN in Game 2
1511.04636 | 55 |

| State | Actions (with Q values) |
|---|---|
| A wet strand of hair hinders my vision and I'm back in the water. Sharp pain pierces my lungs. How much longer do I have? 30 seconds? Less? I need to focus. A hand comes into view once more. | I still don't know what to do. (-8.981) Reach for it. (18.005) |
| *Me:* Hello Sent: today *Cherie:* Hey. Can I call you? Sent: today | Reply "I'll call you" (14.569) No (-9.498) |
| "You don't hold any power over me. Not anymore." Lucretia raises one eyebrow. The bar is quiet. "I really wish I did my hair today." She twirls a strand. "I'm sorry," "Save it" //Yellow Submarine plays softly in the background.// "I really hate her." "Cherie? It's not her fault." "You'll be sorry," "Please stop screaming." | I laugh and she throws a glass of water in my face. (16.214) I look away and she sips her glass quietly. (-7.986) |
1511.04636 | 56 |

| State | Actions (with Q values) |
|---|---|
| My dad left before I could remember. My mom worked all the time but she had to take care of her father, my grandpa. The routine was that she had an hour between her morning shift and afternoon shift, where she'd make food for me to bring to pops. He lived three blocks away, in a house with red steps leading up to the metal front door. Inside, the stained yellow wallpaper and rotten oranges reeked of mold. I'd walk by myself to my grandfather's and back. It was lonely sometimes, being a kid and all, but it was nothing I couldn't deal with. It's not like he abused me, I mean it hurt but why wouldn't I fight back? I met Adam on one of these walks. He made me feel stronger, like I can face anything. | Repress this memory (-8.102) Why didn't I fight back? (10.601) Face Cherie (14.583) |
1511.04636 | 58 |

| State | Actions (with Q values) |
|---|---|
| Peak hour ended an hour or so ago, alleviating the feeling of being a tinned sardine that's commonly associated with shopping malls, though there are still quite a few people busily bumbling about. To your left is a fast food restaurant. To the right is a UFO catcher, and a poster is hanging on the wall beside it. Behind you is one of the mall's exits. In front of you stands the Machine. You're carrying 4 dollars in change. | fast food restaurant (1.094) the Machine (3.708) mall's exits (0.900) UFO catcher (2.646) poster (1.062) |
| You lift the warm mug to your lips and take a small sip of hot tea. | Ask what he was looking for. (3.709) Ask about the blood stains. (7.488) Drink tea. (5.526) Wait. (6.557) |
| As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street. | Ignore the alarm of others and continue moving forward. (-21.464) Look up. (16.593) |
1511.04636 | 59 |

| State | Actions (with Q values) |
|---|---|
| Are you happy? Is this what you want to do? If you didn't avoid that sign, would you be satisfied with how your life had turned out? Sure, you're good at your job and it pays well, but is that all you want from work? If not, maybe it's time for a change. | Screw it. I'm going to find a new life right now. It's not going to be easy, but it's what I want. (23.205) Maybe one day. But I'm satisfied right now, and I have bills to pay. Keep on going. (One minute) (14.491) |
| You slam your entire weight against the man, making him stumble backwards and drop the chair to the ground as a group of patrons race to restrain him. You feel someone grab your arm, and look over to see that it's Rachel. "Let's get out of here," she says while motioning towards the exit. You charge out of the bar and leap back into your car, adrenaline still pumping through your veins. As you slam the door, the glove box | |
1511.04636 | 61 |
Table 5: Q values (in parentheses) for state-action pairs from "Machine of Death", using trained DRRN
| | Text (with Q-values) |
|---|---|
| State | As you move forward, the people surrounding you suddenly look up with terror in their faces, and flee the street. |
| Actions that are in the feasible set | Ignore the alarm of others and continue moving forward. (-21.5) Look up. (16.6) |
| Positive actions that are not in the feasible set | Stay there. (2.8) Stay calmly. (2.0) |
| Negative actions that are not in the feasible set | Screw it. I'm going carefully. (-17.4) Yell at everyone. (-13.5) |
| Irrelevant actions that are not in the feasible set | Insert a coin. (-1.4) Throw a coin to the ground. (-3.6) |
Table 6: Q values (in parentheses) for state-action pairs from "Machine of Death", using trained DRRN, with made-up actions that were not in the feasible set
1511.02274 | 0 |
arXiv:1511.02274v2 [cs.LG] 26 Jan 2016
# Stacked Attention Networks for Image Question Answering
Zichao Yang¹, Xiaodong He², Jianfeng Gao², Li Deng², Alex Smola¹
¹Carnegie Mellon University, ²Microsoft Research, Redmond, WA 98052, USA
[email protected], {xiaohe, jfgao, deng}@microsoft.com, [email protected]
# Abstract
This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use the semantic representation of a question as a query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the process by which the SAN locates, layer by layer, the relevant visual clues that lead to the answer of the question.
id: 1511.02274#0
title: Stacked Attention Networks for Image Question Answering
summary: This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.
source: http://arxiv.org/pdf/1511.02274
authors: Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola
categories: cs.LG, cs.CL, cs.CV, cs.NE
comment: test-dev/standard results added
journal_ref: null
primary_category: cs.LG
published: 20151107
updated: 20160126
references: [{"id": "1506.00333"}, {"id": "1505.05612"}, {"id": "1506.07285"}, {"id": "1505.00468"}, {"id": "1505.01121"}, {"id": "1505.02074"}, {"id": "1502.03044"}]
1511.02274 | 1 |
# 1. Introduction
With the recent advancement in computer vision and in natural language processing (NLP), image question answering (QA) has become one of the most active research areas [7, 21, 18, 1, 19]. Unlike pure language based QA systems that have been studied extensively in the NLP community [28, 14, 4, 31, 3, 32], image QA systems are designed to automatically answer natural language questions according to the content of a reference image.
[Figure 1(a): diagram of the stacked attention network; feature vectors of different parts of the image are combined with the question "What are sitting in the basket on a bicycle?" through attention layers and a softmax to produce the answer "dogs"]

(a) Stacked Attention Network for Image QA

[Figure 1(b): original image, first attention layer, second attention layer]
(b) Visualization of the learned multiple attention layers. The stacked attention network first focuses on all referred concepts, e.g., bicycle, basket and objects in the basket (dogs) in the first attention layer, and then further narrows down the focus in the second layer and finds out the answer dog.
1511.02274 | 2 |
Most of the recently proposed image QA models are based on neural networks [7, 21, 18, 1, 19]. A commonly used approach was to extract a global image feature vector using a convolution neural network (CNN) [15] and encode the corresponding question as a feature vector using a long short-term memory network (LSTM) [9] and then combine them to infer the answer. Though impressive results have been reported, these models often fail to give precise answers when such answers are related to a set of fine-grained regions in an image.
By examining the image QA data sets, we find that it is often the case that answering a question from an image requires multi-step reasoning. Take the question and image in Fig. 1 as an example. There are several objects in the image: bicycles, window, street, baskets and
Figure 1: Model architecture and visualization
dogs. To answer the question what are sitting in the basket on a bicycle, we need to first locate those objects (e.g. basket, bicycle) and concepts (e.g., sitting in) referred to in the question, then gradually rule out irrelevant objects, and finally pinpoint the region that is most indicative for inferring the answer (i.e., dogs in the example).
1511.02274 | 3 |
In this paper, we propose stacked attention networks (SANs) that allow multi-step reasoning for image QA. SANs can be viewed as an extension of the attention mechanism that has been successfully applied in image captioning [30] and machine translation [2]. The overall architecture of the SAN is illustrated in Fig. 1a. The SAN consists of three major components: (1) the image model, which uses
a CNN to extract high level image representations, e.g. one vector for each region of the image; (2) the question model, which uses a CNN or a LSTM to extract a semantic vector of the question; and (3) the stacked attention model, which locates, via multi-step reasoning, the image regions that are relevant to the question for answer prediction. As illustrated in Fig. 1a, the SAN first uses the question vector to query the image vectors in the first visual attention layer, then combines the question vector and the retrieved image vectors to form a refined query vector to query the image vectors again in the second attention layer. The higher-level attention layer gives a sharper attention distribution focusing on the regions that are more relevant to the answer. Finally, we combine the image features from the highest attention layer with the last query vector to predict the answer.
1511.02274 | 4 |
The main contributions of our work are three-fold. First, we propose a stacked attention network for image QA tasks. Second, we perform comprehensive evaluations on four image QA benchmarks, demonstrating that the proposed multiple-layer SAN outperforms previous state-of-the-art approaches by a substantial margin. Third, we perform a detailed analysis where we visualize the outputs of different attention layers of the SAN and demonstrate the process in which the SAN takes multiple steps to progressively focus its attention on the relevant visual clues that lead to the answer.
# 2. Related Work
1511.02274 | 5 |
Image QA is closely related to image captioning [5, 30, 6, 27, 12, 10, 20]. In [27], the system first extracted a high level image feature vector from GoogleNet and then fed it into a LSTM to generate captions. The method proposed in [30] went one step further to use an attention mechanism in the caption generation process. Different from [30, 27], the approach proposed in [6] first used a CNN to detect words given the images, then used a maximum entropy language model to generate a list of caption candidates, and finally used a deep multimodal similarity model (DMSM) to re-rank the candidates. Instead of using a RNN or a LSTM, the DMSM uses a CNN to model the semantics of captions. Unlike image captioning, in image QA, the question is given and the task is to learn the relevant visual and text representation to infer the answer. In order to facilitate the research of image QA, several data sets have been constructed in [19, 21, 7, 1] either through automatic generation based on image caption data or by human labeling of questions and answers given images. Among them, the image
1511.02274 | 6 |
QA data set in [21] is generated based on the COCO caption data set. Given a sentence that describes an image, the authors first used a parser to parse the sentence, then replaced the key word in the sentence using question words and the key word became the answer. [7] created an image QA data set through human labeling. The initial version was in Chinese and then was translated to English. [1] also created an
1511.02274 | 7 |
image QA data set through human labeling. They collected questions and answers not only for real images, but also for abstract scenes.
Several image QA models were proposed in the literature. [18] used semantic parsers and image segmentation methods to predict answers based on images and questions. [19, 7] both used an encoder-decoder framework to generate answers given images and questions. They first used a LSTM to encode the images and questions and then used another LSTM to decode the answers. They both fed the image feature to every LSTM cell. [21] proposed several neural network based models, including the encoder-decoder based models that use single direction LSTMs and bi-direction LSTMs, respectively. However, the authors found the concatenation of image features and bag of words features worked the best. [1] first encoded questions with LSTMs and then combined question vectors with image vectors by element wise multiplication. [17] used a CNN for question modeling and used convolution operations to combine question vectors and image feature vectors. We compare the SAN with these models in Sec. 4.
1511.02274 | 8 |
To the best of our knowledge, the attention mechanism, which has been proved very successful in image captioning, has not been explored for image QA. The SAN adapts the attention mechanism to image QA, and can be viewed as a significant extension to previous models [30] in that multiple attention layers are used to support multi-step reasoning for the image QA task.
# 3. Stacked Attention Networks (SANs)
The overall architecture of the SAN is shown in Fig. 1a. We describe the three major components of the SAN in this section: the image model, the question model, and the stacked attention model.

# 3.1. Image Model
The image model uses a CNN [13, 23, 26] to get the representation of images. Specifically, the VGGNet [23] is used to extract the image feature map $f_I$ from a raw image $I$:
[Figure 2: a 448 × 448 image is passed through the CNN; the last pooling layer yields a 14 × 14 × 512 feature map]
Figure 2: CNN based image model
$$f_I = \text{CNN}_{\text{vgg}}(I). \tag{1}$$
Unlike previous studies [21, 17, 7] that use features from the last inner product layer, we choose the features $f_I$ from the last pooling layer, which retains spatial information of the original images. We first rescale the images to be 448 × 448
1511.02274 | 9 |
pixels, and then take the features from the last pooling layer, which therefore have a dimension of 512 × 14 × 14, as shown in Fig. 2. 14 × 14 is the number of regions in the image and 512 is the dimension of the feature vector for each region. Accordingly, each feature vector in $f_I$ corresponds to a 32 × 32 pixel region of the input images. We denote by $f_i$, $i \in [0, 195]$, the feature vector of each image region.
Then for modeling convenience, we use a single layer perceptron to transform each feature vector to a new vector that has the same dimension as the question vector (described in Sec. 3.2):
$$v_I = \tanh(W_I f_I + b_I), \tag{2}$$
where $v_I$ is a matrix and its $i$-th column $v_i$ is the visual feature vector for the region indexed by $i$.
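A minimal PyTorch sketch of the image model (Eqs. 1-2). torchvision's vgg16 stands in for the paper's VGGNet, and loading pretrained weights is left out for brevity; both are assumptions rather than the authors' exact setup:

```python
# Image model: 512 x 14 x 14 pool5 features on a 448 x 448 input (Eq. 1),
# then a single-layer perceptron mapping each region vector into the
# question embedding space (Eq. 2).
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ImageModel(nn.Module):
    def __init__(self, d=512):
        super().__init__()
        # vgg16().features ends with the last max-pooling layer, so a
        # 448 x 448 input yields a 512 x 14 x 14 feature map. In practice
        # one would pass pretrained ImageNet weights here.
        self.cnn = vgg16(weights=None).features
        self.proj = nn.Linear(512, d)   # W_I, b_I in Eq. (2)

    def forward(self, images):                     # (batch, 3, 448, 448)
        f_i = self.cnn(images)                     # (batch, 512, 14, 14)
        f_i = f_i.flatten(2).transpose(1, 2)       # (batch, 196, 512): one vector per region
        return torch.tanh(self.proj(f_i))          # v_I: (batch, 196, d)

v_i = ImageModel()(torch.randn(1, 3, 448, 448))
print(v_i.shape)  # torch.Size([1, 196, 512])
```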
# 3.2. Question Model
As [25, 22, 6] show, LSTMs and CNNs are powerful in capturing the semantic meaning of texts, so we explore both models for question representation in this study.
# 3.2.1 LSTM based question model
[Figure 3: a chain of LSTM units consuming the question word embeddings ("what", "are", ..., "bicycle") one word at a time]
Figure 3: LSTM based question model
1511.02274 | 10 |
The essential structure of a LSTM unit is a memory cell $c_t$ which reserves the state of a sequence. At each step, the LSTM unit takes one input vector (a word vector in our case) $x_t$ and updates the memory cell $c_t$, then outputs a hidden state $h_t$. The update process uses the gate mechanism. A forget gate $f_t$ controls how much information from the past state $c_{t-1}$ is preserved. An input gate $i_t$ controls how much the current input $x_t$ updates the memory cell. An output gate $o_t$ controls how much information of the memory is fed to the output as the hidden state. The detailed update process is as follows:
$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \tag{3}$$

$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \tag{4}$$

$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o), \tag{5}$$

$$c_t = f_t c_{t-1} + i_t \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c), \tag{6}$$

$$h_t = o_t \tanh(c_t), \tag{7}$$
where $i$, $f$, $o$, $c$ are the input gate, forget gate, output gate and memory cell, respectively. The weight matrices and biases are parameters of the LSTM and are learned on training data.
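A literal NumPy transcription of one update step (Eqs. 3-7); the weight dictionary layout and dimensions are illustrative assumptions:

```python
# One LSTM cell update: gated combination of the current input and the
# previous hidden state and memory cell.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """p holds the weights W_x*, W_h* and biases b_* for the four gates."""
    i_t = sigmoid(p["Wxi"] @ x_t + p["Whi"] @ h_prev + p["bi"])   # input gate, Eq. (3)
    f_t = sigmoid(p["Wxf"] @ x_t + p["Whf"] @ h_prev + p["bf"])   # forget gate, Eq. (4)
    o_t = sigmoid(p["Wxo"] @ x_t + p["Who"] @ h_prev + p["bo"])   # output gate, Eq. (5)
    c_t = f_t * c_prev + i_t * np.tanh(p["Wxc"] @ x_t + p["Whc"] @ h_prev + p["bc"])  # Eq. (6)
    h_t = o_t * np.tanh(c_t)                                      # hidden state, Eq. (7)
    return h_t, c_t
```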
1511.02274 | 11 |
Given the question $q = [q_1, \ldots, q_T]$, where $q_t$ is the one-hot vector representation of the word at position $t$, we first embed the words into a vector space through an embedding matrix $x_t = W_e q_t$. Then for every time step, we feed the embedding vector of words in the question to the LSTM:
$$x_t = W_e q_t, \quad t \in \{1, 2, \ldots, T\}, \tag{8}$$

$$h_t = \text{LSTM}(x_t), \quad t \in \{1, 2, \ldots, T\}. \tag{9}$$
As shown in Fig. 3, the question what are sitting in the basket on a bicycle is fed into the LSTM. Then the final hidden layer is taken as the representation vector for the question, i.e., $v_Q = h_T$.
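Putting Eqs. (8)-(9) together, a sketch of the full question encoder, reusing the lstm_step sketch above; the embedding matrix and hidden size are assumed, not taken from the paper:

```python
# LSTM question model: embed each one-hot word, run the LSTM over the
# sequence, and take the final hidden state as the question vector v_Q.
import numpy as np

def encode_question(q_onehots, W_e, lstm_params, hidden_dim):
    """q_onehots: (T, vocab) array of one-hot rows q_t. Returns v_Q = h_T."""
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    for q_t in q_onehots:
        x_t = W_e @ q_t                            # word embedding, Eq. (8)
        h, c = lstm_step(x_t, h, c, lstm_params)   # Eq. (9)
    return h                                       # v_Q = h_T
```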
# 3.2.2 CNN based question model
[Figure 4: word embeddings of the question are processed by unigram, bigram and trigram convolutions, followed by max pooling over time and concatenation]
Figure 4: CNN based question model
1511.02274 | 12 |
In this study, we also explore using a CNN similar to [11] for question representation. Similar to the LSTM-based question model, we first embed words to vectors $x_t = W_e q_t$ and get the question vector by concatenating the word vectors:
$$x_{1:T} = [x_1, x_2, \ldots, x_T]. \tag{10}$$
Then we apply convolution operations on the word embedding vectors. We use three convolution filters, which have the size of one (unigram), two (bigram) and three (trigram), respectively. The $t$-th convolution output using window size $c$ is given by:
$$h_{c,t} = \tanh(W_c x_{t:t+c-1} + b_c). \tag{11}$$
The filter is applied only to the window $t : t+c-1$ of size $c$. $W_c$ is the convolution weight and $b_c$ is the bias. The feature map of the filter with convolution size $c$ is given by:
$$h_c = [h_{c,1}, h_{c,2}, \ldots, h_{c,T-c+1}]. \tag{12}$$
Then we apply max-pooling over the feature maps of the convolution size $c$ and denote it as
1511.02274 | 13 |
$$\tilde{h}_c = \max_t [h_{c,1}, h_{c,2}, \ldots, h_{c,T-c+1}]. \tag{13}$$
The max-pooling over these vectors is a coordinate-wise max operation. For convolution feature maps of different sizes $c = 1, 2, 3$, we concatenate them to form the feature representation vector of the whole question sentence:
$$h = [\tilde{h}_1, \tilde{h}_2, \tilde{h}_3], \tag{14}$$
hence $v_Q = h$ is the CNN based question vector.
The diagram of the CNN model for the question is shown in Fig. 4. The convolutional and pooling layers for unigrams, bigrams and trigrams are drawn in red, blue and orange, respectively.
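A compact PyTorch sketch of the same pipeline (Eqs. 10-14), where nn.Conv1d plays the role of the windowed filters; the vocabulary size, embedding width, and filter count are illustrative assumptions:

```python
# CNN question model: unigram/bigram/trigram convolutions over word
# embeddings, max-pooling over time, then concatenation into v_Q.
import torch
import torch.nn as nn

class CNNQuestionModel(nn.Module):
    def __init__(self, vocab, emb_dim=500, n_filters=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb_dim)        # x_t = W_e q_t
        # One Conv1d per window size c = 1, 2, 3 (Eq. 11).
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=c) for c in (1, 2, 3))

    def forward(self, tokens):                           # (batch, T) word indices
        x = self.embed(tokens).transpose(1, 2)           # (batch, emb_dim, T), Eq. (10)
        pooled = [torch.tanh(conv(x)).max(dim=2).values  # Eqs. (12)-(13)
                  for conv in self.convs]
        return torch.cat(pooled, dim=1)                  # h = [h~_1, h~_2, h~_3], Eq. (14)

v_q = CNNQuestionModel(vocab=10000)(torch.randint(0, 10000, (1, 8)))
print(v_q.shape)  # torch.Size([1, 768])
```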
# 3.3. Stacked Attention Networks
Given the image feature matrix $v_I$ and the question feature vector $v_Q$, the SAN predicts the answer via multi-step reasoning.
1511.02274 | 14 | Given the image feature matrix vI and the question fea- ture vector vQ, SAN predicts the answer via multi-step rea- soning.
In many cases, an answer is only related to a small region of an image. For example, in Fig. 1b, although there are multiple objects in the image (bicycles, baskets, a window, a street and dogs), the answer to the question only relates to the dogs. Therefore, using one global image feature vector to predict the answer could lead to sub-optimal results due to the noise introduced from regions that are irrelevant to the potential answer. Instead, by reasoning via multiple attention layers progressively, the SAN is able to gradually filter out noise and pinpoint the regions that are highly relevant to the answer.
Given the image feature matrix v_I and the question vector v_Q, we first feed them through a single-layer neural network, followed by a softmax function, to generate the attention distribution over the regions of the image:
h_A = \tanh(W_{I,A} v_I \oplus (W_{Q,A} v_Q + b_A)),    (15)

p_I = \mathrm{softmax}(W_P h_A + b_P),    (16)
where v_I \in R^{d \times m}, d is the image representation dimension and m is the number of image regions, and v_Q \in R^d is a d-dimensional vector. Suppose W_{I,A}, W_{Q,A} \in R^{k \times d} and W_P \in R^{1 \times k}; then p_I \in R^m is an m-dimensional vector, which corresponds to the attention probability of each image region given v_Q. Note that we denote by \oplus the addition of a matrix and a vector. Since W_{I,A} v_I \in R^{k \times m} and both W_{Q,A} v_Q and b_A \in R^k are vectors, the addition between a matrix and a vector is performed by adding the vector to each column of the matrix.
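For illustration, this matrix-plus-vector addition is exactly a column-wise broadcast; a small numpy example:

```python
# The ⊕ of Eq. 15: add a k-vector to every column of a k x m matrix.
import numpy as np

M = np.arange(6.0).reshape(2, 3)   # a k x m matrix (k = 2, m = 3)
v = np.array([10.0, 20.0])         # a k-vector
print(M + v[:, None])              # [[10. 11. 12.], [23. 24. 25.]]
```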
Based on the attention distribution, we calculate the weighted sum of the image vectors, one from each region, \tilde{v}_I, as in Eq. 17. We then combine \tilde{v}_I with the question vector v_Q to form a refined query vector u, as in Eq. 18. u is regarded as a refined query since it encodes both the question information and the visual information that is relevant to the potential answer:

\tilde{v}_I = \sum_i p_i v_i,    (17)

u = \tilde{v}_I + v_Q.    (18)
Compared to models that simply combine the question vector and the global image vector, attention models construct a more informative u, since higher weights are put on the visual regions that are more relevant to the question. However, for complicated questions, a single attention layer is not sufficient to locate the correct region for answer prediction. For example, the question in Fig. 1, what are sitting in the basket on a bicycle, refers to some subtle relationships among multiple objects in the image. Therefore, we iterate the above query-attention process using multiple attention layers, each extracting more fine-grained visual attention information for answer prediction. Formally, the SAN takes the following form: for the k-th attention layer, we compute:
h_A^k = \tanh(W_{I,A}^k v_I \oplus (W_{Q,A}^k u^{k-1} + b_A^k)),    (19)

p_I^k = \mathrm{softmax}(W_P^k h_A^k + b_P^k),    (20)
where u^0 is initialized to be v_Q. Then the aggregated image feature vector is added to the previous query vector to form a new query vector:
\tilde{v}_I^k = \sum_i p_i^k v_i,    (21)
u^k = \tilde{v}_I^k + u^{k-1}.    (22)
That is, in every layer, we use the combined question and image vector u^{k-1} as the query for the image. After the image regions are picked, we update the query vector as u^k = \tilde{v}_I^k + u^{k-1}. We repeat this K times and then use the final u^K to infer the answer:
p_{ans} = \mathrm{softmax}(W_u u^K + b_u).    (23)
Fig. 1b illustrates the reasoning process with an example. In the first attention layer, the model roughly identifies the areas that are relevant to basket, bicycle, and sitting in. In the second attention layer, the model focuses more sharply on the region that corresponds to the answer dogs. More examples can be found in Sec. 4.
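The whole multi-step reasoning procedure of Eqs. 15–23 fits in a few lines; the following is a minimal numpy sketch (not the paper's implementation): d = 512 image features over m = 14 × 14 = 196 regions and the 1,000-answer output follow the text, while the hidden size k and the random weights are illustrative assumptions.

```python
# A minimal numpy sketch of K = 2 stacked attention layers (Eqs. 15-23).
import numpy as np

rng = np.random.default_rng(0)
d, m, k, n_ans, K = 512, 196, 256, 1000, 2   # k (hidden size) is an assumption

v_I = rng.standard_normal((d, m))   # image feature matrix, one column per region
v_Q = rng.standard_normal(d)        # question vector
u = v_Q                             # u^0 is initialized to v_Q

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(K):                  # one pass per attention layer
    W_IA = rng.standard_normal((k, d)) * 0.01   # random stand-ins for learned weights
    W_QA = rng.standard_normal((k, d)) * 0.01
    W_P = rng.standard_normal((1, k)) * 0.01
    b_A, b_P = np.zeros(k), 0.0
    h_A = np.tanh(W_IA @ v_I + (W_QA @ u + b_A)[:, None])  # Eq. 19; the broadcast is ⊕
    p_I = softmax((W_P @ h_A).ravel() + b_P)               # Eq. 20: attention over regions
    v_tilde = v_I @ p_I                                    # Eq. 21: weighted sum of regions
    u = v_tilde + u                                        # Eq. 22: refined query

W_u = rng.standard_normal((n_ans, d)) * 0.01
b_u = np.zeros(n_ans)
p_ans = softmax(W_u @ u + b_u)                             # Eq. 23: answer distribution
print(p_I.shape, p_ans.shape)                              # (196,) (1000,)
```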
# 4. Experiments
# 4.1. Data sets
We evaluate the SAN on four image QA data sets. DAQUAR-ALL is proposed in [18]. There are 6,795 training questions and 5,673 test questions. These questions are generated on 795 and 654 images, respectively. The
images are mainly indoor scenes. The questions are categorized into three types: Object, Color and Number. Most of the answers are single words. Following the setting in [21, 17, 19], we exclude data samples that have multiple-word answers. The remaining data set covers 90% of the original data set.
DAQUAR-REDUCED is a reduced version of DAQUAR-ALL. There are 3,876 training samples and 297 test samples. This data set is constrained to 37 object categories and uses only 25 test images. The single-word-answer subset covers 98% of the original data set.
COCO-QA is proposed in [21]. Based on the Microsoft COCO data set, the authors first parse the caption of an image with an off-the-shelf parser, then replace the key components in the caption with question words to form questions. There are 78,736 training samples and 38,948 test samples in the data set. These questions are based on 8,000 and 4,000 images, respectively. There are four types of questions, Object, Number, Color, and Location, which take up 70%, 7%, 17%, and 6% of the whole data set, respectively. All answers in this data set are single words.
VQA is created through human labeling [1]. The data set uses images from the COCO image caption data set [16]. Unlike the other data sets, each image has three questions, and each question has ten answers labeled by human annotators. There are 248,349 training questions and 121,512 validation questions in the data set. Following [1], we use the top 1,000 most frequent answers as possible outputs; this set of answers covers 82.67% of all answers. We first studied the performance of the proposed model on the validation set. Following [6], we split the validation data set into two halves, val1 and val2. We use the training set and val1 to train and validate, and val2 to test locally. The results on the val2 set are reported in Table 6. We also evaluated the best model, SAN(2, CNN), on the standard test server provided in [1] and report the results in Table 5.
# 4.2. Baselines and evaluation methods
We compare our models with a set of baselines recently proposed for image QA [21, 1, 18, 19, 17]. Since the results of these baselines are reported on different data sets in different papers, we present the experimental results on different data sets in different tables.
For all four data sets, we formulate image QA as a classification problem since most answers are single words. We evaluate the model using classification accuracy, as reported in [1, 21, 19]. The reference models also report the Wu-Palmer similarity (WUPS) measure [29]. The WUPS measure calculates the similarity between two words based on their longest common subsequence in the taxonomy tree. We can set a threshold for WUPS: if the similarity is less than the threshold, it is zeroed out. Following the reference models, we use WUPS0.9 and WUPS0.0 as evaluation metrics besides the classification accuracy. The evaluation on the VQA data set differs from the other three data sets, since for each question there are ten answer labels that may or may not be the same. We follow [1] and use the following metric: min(# human labels that match the answer / 3, 1), which gives full credit to the answer when three or more of the ten human labels match the answer and gives partial credit if there are fewer matches.
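As a sketch, this metric can be computed as below (the official evaluation script additionally normalizes answer strings, which is omitted here):

```python
# VQA metric: min(# human labels that match the answer / 3, 1).
def vqa_accuracy(prediction, human_labels):
    """human_labels is the list of ten annotator answers for one question."""
    matches = sum(label == prediction for label in human_labels)
    return min(matches / 3.0, 1.0)

labels = ["dogs"] * 7 + ["dog", "puppies", "cats"]   # hypothetical ten labels
print(vqa_accuracy("dogs", labels))   # 1.0  (>= 3 matches: full credit)
print(vqa_accuracy("dog", labels))    # ~0.33 (1 match: partial credit)
```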
# 4.3. Model configuration and training
For the image model, we use the VGGNet to extract features. When training the SAN, the parameters of the VGGNet CNN are fixed. We take the output from the last pooling layer as our image feature, which has a dimension of 512 × 14 × 14.
For DAQUAR and COCO-QA, we set the word embedding dimension and the LSTM's dimension to 500 in the question model. For the CNN-based question model, we set the unigram, bigram and trigram convolution filter sizes to 128, 256 and 256, respectively. The combination of these filters makes the question vector size 640. For the VQA data set, since it is larger than the other data sets, we double the model size of the LSTM and the CNN to accommodate the larger data set and the larger number of classes. In evaluation, we experiment with SANs with one and two attention layers. We find that using three or more attention layers does not further improve the performance.
In our experiments, all the models are trained using stochastic gradient descent with momentum 0.9. The batch size is fixed to 100. The best learning rate is picked using grid search. The gradient clipping technique [8] and dropout [24] are used.
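A minimal sketch of one update step under these settings; the element-wise clipping threshold and the learning-rate grid below are illustrative assumptions, not values reported in the text:

```python
# SGD with momentum 0.9 and element-wise gradient clipping (a sketch).
import numpy as np

def sgd_momentum_step(params, grads, velocity, lr, mu=0.9, clip=5.0):
    for name in params:
        g = np.clip(grads[name], -clip, clip)     # gradient clipping [8]
        velocity[name] = mu * velocity[name] - lr * g
        params[name] += velocity[name]            # apply the momentum update

# The learning rate is picked by grid search on a validation split,
# e.g. over an assumed grid such as:
for lr in (0.1, 0.05, 0.01):
    pass  # train with mini-batches of 100 and keep the best-performing lr
```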
# 4.4. Results and analysis
The experimental results on DAQUAR-ALL, DAQUAR-REDUCED, COCO-QA and VQA are presented in Tables 1 to 6, respectively. Our model names explain their settings: SAN is short for the proposed stacked attention networks; the value 1 or 2 in the brackets refers to using one or two attention layers, respectively; and the keyword LSTM or CNN refers to the question model that the SAN uses.
The experimental results in Tables 1 to 6 show that the two-layer SAN gives the best results across all data sets, and that the two kinds of question models in the SAN, LSTM and CNN, give similar performance. For example, on DAQUAR-ALL (Table 1), both of the proposed two-layer SANs outperform the two best baselines, the IMG-CNN in [17] and the Ask-Your-Neurons approach in [19], by 5.9% and 7.6% absolute in accuracy, respectively. Similar ranges of improvement are observed in the WUPS0.9 and WUPS0.0 metrics. We also observe significant improvements on DAQUAR-REDUCED (Table 2): our SAN(2, LSTM) outperforms the IMG-CNN [17], the 2-VIS+BLSTM [21], the Ask-Your-Neurons approach [19] and the Multi-World approach [18] by 6.5%, 10.4%, 11.5% and 33.5% absolute in accuracy, respectively.
| Methods | Accuracy | WUPS0.9 | WUPS0.0 |
|---|---|---|---|
| Multi-World [18]: Multi-World | 12.7 | 18.2 | 51.5 |
| Ask-Your-Neurons [19]: Language | 31.7 | 38.4 | 80.1 |
| Ask-Your-Neurons [19]: Language + IMG | 34.7 | 40.8 | 79.5 |
| VSE [21]: GUESS | 18.2 | 29.7 | 77.6 |
| VSE [21]: BOW | 32.7 | 43.2 | 81.3 |
| VSE [21]: LSTM | 32.7 | 43.5 | 81.6 |
| VSE [21]: IMG+BOW | 34.2 | 45.0 | 81.5 |
| VSE [21]: VIS+LSTM | 34.4 | 46.1 | 82.2 |
| VSE [21]: 2-VIS+BLSTM | 35.8 | 46.8 | 82.2 |
| CNN [17]: IMG-CNN | 39.7 | 44.9 | 83.1 |
| Ours: SAN(1, LSTM) | 45.2 | 49.6 | 84.0 |
| Ours: SAN(1, CNN) | 45.2 | 49.6 | 83.7 |
| Ours: SAN(2, LSTM) | 46.2 | 51.2 | 85.1 |
| Ours: SAN(2, CNN) | 45.5 | 50.2 | 83.6 |
| Human [18]: Human | 60.3 | 61.0 | 79.0 |

Table 1: DAQUAR-ALL results, in percentage

Table 2: DAQUAR-REDUCED results, in percentage
On the larger COCO-QA data set, the proposed two-layer SANs significantly outperform the best baselines from [17] (IMG-CNN) and [21] (IMG+BOW and 2-VIS+BLSTM) by 5.1% and 6.6% in accuracy (Table 3).
Table 3: COCO-QA results, in percentage

| Methods | Objects | Number | Color | Location |
|---|---|---|---|---|
| VSE [21]: GUESS | 2.1 | 35.8 | 13.9 | 8.9 |
| VSE [21]: BOW | 37.3 | 43.6 | 34.8 | 40.8 |
| VSE [21]: LSTM | 35.9 | 45.3 | 36.3 | 38.4 |
| VSE [21]: IMG | 40.4 | 29.3 | 42.7 | 44.2 |
| VSE [21]: IMG+BOW | 58.7 | 44.1 | 52.0 | 49.4 |
| VSE [21]: VIS+LSTM | 56.5 | 46.1 | 45.9 | 45.5 |
| VSE [21]: 2-VIS+BLSTM | 58.2 | 44.8 | 49.5 | 47.3 |
| Ours: SAN(1, LSTM) | 62.5 | 49.0 | 54.8 | 51.6 |
| Ours: SAN(1, CNN) | 63.6 | 48.7 | 56.7 | 52.7 |
| Ours: SAN(2, LSTM) | 63.6 | 49.8 | 57.9 | 52.8 |
| Ours: SAN(2, CNN) | 64.5 | 48.6 | 57.9 | 54.0 |

Table 4: COCO-QA accuracy per class, in percentage
| Methods | All (test-dev) | Yes/No (test-dev) | Number (test-dev) | Other (test-dev) | All (test-std) |
|---|---|---|---|---|---|
| VQA [1]: Question | 48.1 | 75.7 | 36.7 | 27.1 | - |
| VQA [1]: Image | 28.1 | 64.0 | 0.4 | 3.8 | - |
| VQA [1]: Q+I | 52.6 | 75.6 | 33.7 | 37.4 | - |
| VQA [1]: LSTM Q | 48.8 | 78.2 | 35.7 | 26.6 | - |
| VQA [1]: LSTM Q+I | 53.7 | 78.9 | 35.2 | 36.4 | 54.1 |
| SAN(2, CNN) | 58.7 | 79.3 | 36.6 | 46.1 | 58.9 |

Table 5: VQA results on the official server, in percentage

Table 5 summarizes the performance of various models on VQA, which is the largest among the four data sets. The overall results show that our best model, SAN(2, CNN), outperforms the best baseline, LSTM Q+I, by 4.8% absolute on the test-standard set (58.9% vs. 54.1%).

| Methods | All | Yes/No (36%) | Number (10%) | Other (54%) |
|---|---|---|---|---|
| SAN(1, LSTM) | 56.6 | 78.1 | 41.6 | 44.8 |
| SAN(1, CNN) | 56.9 | 78.8 | 42.0 | 45.0 |
| SAN(2, LSTM) | 57.3 | 78.3 | 42.2 | 45.9 |
| SAN(2, CNN) | 57.6 | 78.6 | 41.8 | 46.4 |

Table 6: VQA results on our partition, in percentage
In order to study the strengths and weaknesses of the SAN in detail, we report performance at the question-type level on the two large data sets, COCO-QA and VQA, in Tables 4 and 5, respectively. We observe that on COCO-QA, compared to the two best baselines, IMG+BOW and 2-VIS+BLSTM, our best model SAN(2, CNN) improves 7.2% in the question type of Color, followed by 6.1% in Objects, 5.7% in Location and 4.2% in Number. We observe a similar trend of improvements on VQA. As shown in Table 5, compared to the best baseline LSTM Q+I, the biggest improvement of SAN(2, CNN) is in the Other type, 9.7%, followed by the 1.4% improvement in Number and the 0.4% improvement in Yes/No. Note that the Other type in VQA refers to questions that usually have the form of "what color, what kind, what are, what type, where",
etc., which are similar to the question types of Color, Objects and Location in COCO-QA. The VQA data set has a special Yes/No type of questions, for which the SAN only improves the performance slightly. This could be because the answer to a Yes/No question is highly question-dependent, so better modeling of the visual information does not provide much additional gain. This also confirms the similar observation reported in [1]: using additional image information only slightly improves the performance on Yes/No, as shown in Table 5 (Q+I vs. Question, and LSTM Q+I vs. LSTM Q).
Our results clearly demonstrate the positive impact of using multiple attention layers. On all four data sets, the two-layer SANs always perform better than the one-layer SAN. Specifically, on COCO-QA, on average the two-layer SANs outperform the one-layer SANs by 2.2% in the type of Color, followed by 1.3% and 1.0% in the Location and Objects categories, and then 0.4% in Number. This aligns with the order of the improvements of the SAN over the baselines. Similar trends are observed on VQA (Table 6): e.g., the two-layer SAN improves over the one-layer SAN by 1.4% for the Other type of question, followed by a 0.2% improvement for Number, and is flat for Yes/No.
# 4.5. Visualization of attention layers
In this section, we present an analysis to demonstrate that using multiple attention layers to perform multi-step reasoning leads to more fine-grained attention, layer by layer, in locating the regions that are relevant to the potential answers. We do so by visualizing the outputs of the attention layers for a sample set of images from the COCO-QA test set. Note that the attention probability distribution is of size 14 × 14 while the original image is 448 × 448, so we up-sample the attention probability distribution and apply a Gaussian filter to make it the same size as the original image.
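A sketch of this visualization step, assuming scipy is available; the Gaussian width sigma is an illustrative choice:

```python
# Up-sample a 14x14 attention map to 448x448 and smooth it for display.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

p_I = np.random.rand(14 * 14)          # attention distribution from one layer (Eq. 20)
att = p_I.reshape(14, 14)
att = zoom(att, 448 / 14, order=1)     # bilinear up-sampling to 448 x 448
att = gaussian_filter(att, sigma=8)    # Gaussian filter; sigma is assumed
att = att / att.max()                  # normalize before overlaying on the image
print(att.shape)                       # (448, 448)
```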
Fig. 5 presents six examples; more are presented in the appendix. They cover question types as broad as Object, Number, Color and Location. For each example, the three images from left to right are the original image, the output of the first attention layer and the output of the second attention layer, respectively. The bright part of each image is the detected attention. Across all these examples, we see that in the first attention layer, the attention is scattered over many objects in the image, largely corresponding to the objects and concepts referred to in the question, whereas in the second layer, the attention is far more focused on the regions that lead to the correct answer. For example, consider the question what is the color of the horns, which asks about the color of the horn on the woman's head in Fig. 5(f). In the output of the first attention layer, the model first recognizes a woman in the image. In the output of the second attention layer, the attention is focused on the head of the woman, which leads to the answer of the question: the color of the horn is red.
# 4.6. Error analysis
We randomly sample 100 images from the COCO-QA test set on which the SAN makes mistakes. We group the errors into four categories: (i) the SAN focuses the attention on the wrong regions (22%), e.g., the example in Fig. 6(a); (ii) the SAN focuses on the right region but predicts a wrong answer (42%), e.g., the examples in Fig. 6(b)(c)(d); (iii) the answer is ambiguous, and the SAN gives an answer that differs from the label but might be acceptable (31%), e.g., in Fig. 6(e), the answer label is pot, but our model predicts vase, which is also visually reasonable; (iv) the label is clearly wrong (5%), e.g., in Fig. 6(f), our model gives the correct answer trains while the label cars is wrong.

# 5. Conclusion
In this paper, we propose a new stacked attention network (SAN) for image QA. The SAN uses a multiple-layer attention mechanism that queries an image multiple times to locate the relevant visual regions and to infer the answer progressively. Experimental results demonstrate that the proposed SAN significantly outperforms previous state-of-the-art approaches by a substantial margin on all four image QA data sets. The visualization of the attention layers further illustrates the process by which the SAN focuses its attention on the relevant visual clues that lead to the answer of the question, layer by layer.
[Six examples, each showing the original image, the output of the first attention layer and the output of the second attention layer. Questions and answers: "What are pulling a man on a wagon down a dirt road?" answer: horses, prediction: horses; "What is the color of the box?" answer: red, prediction: red; "What is next to the large umbrella attached to a table?" answer: trees, prediction: tree; "How many people are going up the mountain with walking sticks?" answer: four, prediction: four; "What is sitting on the handle bar of a bicycle?" answer: bird, prediction: bird; "What is the color of the horns?" answer: red, prediction: red.]

Figure 5: Visualization of two attention layers
[Six examples of mistakes: (a) "What swim in the ocean near two large ferries?" answer: ducks, prediction: boats; (b) "What is the color of the shirt?" answer: purple, prediction: green; (c) "What is the young woman eating?" answer: banana, prediction: donut; (d) "How many umbrellas with various patterns?" answer: three, prediction: two; (e) "The very old looking what is on display?" answer: pot, prediction: vase; (f) "What are passing underneath the walkway bridge?" answer: cars, prediction: trains.]

Figure 6: Examples of mistakes
# References
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual question answering. arXiv preprint arXiv:1505.00468, 2015.

[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

[3] J. Berant and P. Liang. Semantic parsing via paraphrasing. In Proceedings of ACL, volume 7, page 92, 2014.

[4] A. Bordes, S. Chopra, and J. Weston. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676, 2014.

[5] X. Chen and C. L. Zitnick. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654, 2014.
[6] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. Platt, et al. From captions to visual concepts and back. arXiv preprint arXiv:1411.4952, 2014.

[7] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? Dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612, 2015.

[8] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

[9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

[10] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. arXiv preprint arXiv:1412.2306, 2014.
[11] Y. Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.

[12] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.

[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.

[14] A. Kumar, O. Irsoy, J. Su, J. Bradbury, R. English, B. Pierce, P. Ondruska, I. Gulrajani, and R. Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.

[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014, pages 740–755. Springer, 2014.

[17] L. Ma, Z. Lu, and H. Li. Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015.

[18] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in Neural Information Processing Systems, pages 1682–1690, 2014.

[19] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. arXiv preprint arXiv:1505.01121, 2015.
[20] J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-RNN). arXiv preprint arXiv:1412.6632, 2014.

[21] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. arXiv preprint arXiv:1505.02074, 2015.

[22] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 101–110. ACM, 2014.

[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.

[25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.

[26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

[27] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
1511.02274 | 44 | [28] J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. 1
[29] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics, pages 133–138. Association for Computational Linguistics, 1994. 5
[30] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015. 1, 2
[31] W.-t. Yih, M.-W. Chang, X. He, and J. Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the Joint Conference of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing of the AFNLP, 2015. 1
[32] W.-t. Yih, X. He, and C. Meek. Semantic parsing for single-relation question answering. In Proceedings of ACL, 2014. 1 | 1511.02274#44 | Stacked Attention Networks for Image Question Answering | This paper presents stacked attention networks (SANs) that learn to answer
natural language questions from images. SANs use semantic representation of a
question as query to search for the regions in an image that are related to the
answer. We argue that image question answering (QA) often requires multiple
steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an
image multiple times to infer the answer progressively. Experiments conducted
on four image QA data sets demonstrate that the proposed SANs significantly
outperform previous state-of-the-art approaches. The visualization of the
attention layers illustrates the progress that the SAN locates the relevant
visual clues that lead to the answer of the question layer-by-layer. | http://arxiv.org/pdf/1511.02274 | Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola | cs.LG, cs.CL, cs.CV, cs.NE | test-dev/standard results added | null | cs.LG | 20151107 | 20160126 | [
{
"id": "1506.00333"
},
{
"id": "1505.05612"
},
{
"id": "1506.07285"
},
{
"id": "1505.00468"
},
{
"id": "1505.01121"
},
{
"id": "1505.02074"
},
{
"id": "1502.03044"
}
] |
1511.02274 | 45 | [32] W.-t. Yih, X. He, and C. Meek. Semantic parsing for single-relation question answering. In Proceedings of ACL, 2014. 1
[Figure: twelve example images with question/answer/prediction triples: "What take the nap with a blanket?" (answer: dogs, prediction: dogs); "What is the color of the cake?" (answer: brown, prediction: white); "What stands between two blue lounge chairs on an empty beach?" (answer: unbrella, prediction: unbrella); "What is the color of the motorcycle?" (answer: blue, prediction: blue); "What is sitting in the luggage bag?" (answer: cat, prediction: cat); "What is the color of the design?" (answer: red, prediction: red); "What is the color of the trucks?" (answer: green, prediction: green); "What is in front of the clear sky?" (answer: tower, prediction: tower); "What is next to the desk with a computer and laptop?" (answer: chair, prediction: chair); "What is the color of the surface?" (answer: white, prediction: white); "What are flying against the cloudy sky?" (answer: kites, prediction: kites); "Where do the young adult make us standing?" (answer: room, prediction: room)]
Figure 7: More examples | 1511.02274#45 | Stacked Attention Networks for Image Question Answering | This paper presents stacked attention networks (SANs) that learn to answer
natural language questions from images. SANs use semantic representation of a
question as query to search for the regions in an image that are related to the
answer. We argue that image question answering (QA) often requires multiple
steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an
image multiple times to infer the answer progressively. Experiments conducted
on four image QA data sets demonstrate that the proposed SANs significantly
outperform previous state-of-the-art approaches. The visualization of the
attention layers illustrates the progress that the SAN locates the relevant
visual clues that lead to the answer of the question layer-by-layer. | http://arxiv.org/pdf/1511.02274 | Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola | cs.LG, cs.CL, cs.CV, cs.NE | test-dev/standard results added | null | cs.LG | 20151107 | 20160126 | [
{
"id": "1506.00333"
},
{
"id": "1505.05612"
},
{
"id": "1506.07285"
},
{
"id": "1505.00468"
},
{
"id": "1505.01121"
},
{
"id": "1505.02074"
},
{
"id": "1502.03044"
}
] |
1510.07211 | 0 |
# On End-to-End Program Generation from User Intention by Deep Neural Networks
Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin Software Institute, School of EECS, Peking University, Beijing 100871, P. R. China {doublepower.mou,menruimr}@gmail.com, {lige,zhanglu,zhijin}@sei.pku.edu.cn
ABSTRACT This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): users can express their intention in natural language; an RNN then automatically generates the corresponding code in a character-by-character fashion. We demonstrate its feasibility through a case study and empirical analysis. To make such techniques fully useful in practice, we also point out several cross-disciplinary challenges, including modeling user intention, providing datasets, improving model architectures, etc. Although much long-term research remains to be addressed in this new field, we believe end-to-end program generation will become a reality in future decades, and we look forward to its practice. | 1510.07211#0 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 1 | A more compelling feature is that the above process works in an "end-to-end" manner, which requires little, if any, human knowledge, and is completely language independent: the only thing needed is to represent sentences and programs as characters. The learning machine automatically reads a natural language sentence character-by-character to capture user intention, and then generates code in a similar fashion. As learning machines differ from code retrieval systems, the generated code is different from any existing code, being more flexible but potentially also more error-prone. However, the code should be (almost) correct: it satisfies the syntax and implements the desired functionality. The code is usable with a little post-editing.
# Categories and Subject Descriptors I.2.2 [Artificial Intelligence]: Automatic Programming: Program synthesis
# General Terms Algorithms
# Keywords Deep learning, Recurrent network, Program generation | 1510.07211#1 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 2 | # General Terms Algorithms
# Keywords Deep learning, Recurrent network, Program generation
Such a scenario of automatic program generation has long been the dream of software engineering (SE), and is closely related to a variety of SE tasks, e.g., algorithm discovery and programming assistance [6]. However, traditional approaches are typically weak in terms of automation and abstraction. For example, Manna et al. propose deductive approaches [15] and Flener et al. inductive approaches [5]; these methods require human-designed specifications. Program generation by genetic programming [20, 7] can automatically (if inefficiently) search the space of candidate programs, but carefully chosen mutation or crossover operations must also be provided. Natural language programming, which emerged in the past decade, is much like "pseudo-compiling," where the natural language is of low-level abstraction [12, 4].
# 1. INTRODUCTION
Imagine the following scenario in software engineering: There exists abundant high-quality source code, well commented and documented, in large software repositories. A very powerful machine (e.g., a deep neural network) learns the mapping from natural language problem descriptions to source code. During development, users express their intention in natural language (similar to some in the repository); the learning machine automatically outputs the desired code as the solution. | 1510.07211#2 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 3 |
Nowadays, software artifacts, including code and documentation, have become "big data" (e.g., GitHub, SourceForge). Provided sufficient training data of code with corresponding comments and documents, it is possible in principle to train a generative model of programs based on natural language. In the meantime, the natural language processing (NLP) community is witnessing significant breakthroughs and amazing results in various tasks, including question answering [13], machine translation [19], and image-caption generation [21]. These cross-disciplinary advances bring new opportunities for automatic program generation. | 1510.07211#3 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 4 | In this paper, we investigate, through a case study in Section 2, the feasibility of generating executable, functionally coherent code with recurrent neural networks (RNNs); empirical analysis reveals the mechanism by which RNNs could accomplish this goal. We also envision several scenarios where such techniques may help real-world SE tasks, and address long-term research challenges (Section 3). Although we concede that a long way remains before end-to-end program generation can be used in SE practice, we believe it will become a reality in future decades.
# 2. A CASE STUDY
# 2.1 The Model of Recurrent Networks
Among a variety of machine learning methods, the deep neural network (also known as deep learning) is among the recent groundbreaking advances, featured by its ability to learn highly complicated features automatically [14].
For end-to-end program generation, we prefer recurrent neural networks (RNNs), which are suitable for modeling time-series data (e.g., a sequence of characters) owing to their iterative nature. An RNN typically keeps one or a few hidden layers, which change over each (discrete) time step according to the input data. This process is delineated in Figure 1. | 1510.07211#4 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 5 | Theoretical analysis shows that recurrent neural networks are equivalent to Turing machines [9]. However, training RNNs was difficult in early years because of the exploding or vanishing gradient problem [18]. Long short-term memory (LSTM) units [8] or gated units [2] are designed to balance retaining the previous state against memorizing new information at the current time step, making RNNs much easier to train.
On this basis, Sutskever et al. design an RNN model for sequence-to-sequence generation [19]. The idea is to first read an input sequence ending with a special symbol, <eos> (end of sequence), as depicted in Figure 1a. For output, the RNN applies a softmax layer at each time step, predicting the probability that each symbol¹ may occur at the current step; the symbol with the highest probability is chosen and fed to the network as input at the next time step. This process is repeated until the special symbol <eos> is generated by the network (Figure 1b). | 1510.07211#5 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 6 | Such an RNN architecture can be applied to sequences of different granularities, e.g., word level, sub-word level, etc. In particular, character-level RNN generative models have unexpectedly achieved remarkable and somewhat amazing performance. Successful applications include generating text, music, or even Linux-like C code [10]. Empirical studies show that RNNs are particularly good at modeling syntactic aspects, e.g., parenthesis pairing, indentation, etc. [11]. Such a model works much like a push-down automaton, but seems less capable of capturing semantics: the Linux-like code generated, for example, is plausible, but cannot be compiled and lacks coherence in functionality.
We are therefore curious whether RNNs can generate executable, functionally coherent source code, which is essential if they are to benefit real-world software engineering tasks. | 1510.07211#6 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 7 | We are therefore curious whether RNNs can generate executable, functionally coherent source code, which is essential if they are to benefit real-world software engineering tasks.
To accomplish this goal, we leverage a dataset from a pedagogical programming online judge (OJ) system,² intended for the undergraduate course Introduction to Computing. The OJ system comprises different programming problems. Students submit their source code for a specific problem, and the OJ system judges its validity automatically (by running it). We notice that programs corresponding to a specific problem have exactly the same functionality, which makes the dataset particularly suitable, as a first trial, for training neural networks to generate functionally coherent programs. We fed the network with 4 different programming problems, each containing more than 500 different source code samples. After preprocessing, a program was preceded by a
¹ A symbol may refer to a word or a character, according to the granularity in a certain application. ² http://programming.grids.cn
[Figure 1 diagram: an output softmax layer over symbols, hidden layer(s) with LSTM units, and one-hot input, unrolled over time; panel (a) shows the input sequence and panel (b) the output sequence.] | 1510.07211#7 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 8 | Figure 1: A sequence to sequence recurrent neural network, adapted from [19]. (a) Input sequence; (b) Output sequence.
brief comment, e.g., "find the maximum and second maximum numbers," serving as the input sequence (Figure 1a).³ What follows is a program that solves the particular problem, serving as the output sequence (Figure 1b). Figures 2b and 2c further illustrate two training samples of the aforementioned programming problem.
# 2.2 Result and Analysis
Figure 2a shows sample code generated by the RNN. Through a quick analysis, we find that the code is almost executable: with a little post-correction of 4 characters among ∼280, the program is compilable and functionally correct.
We would answer a very fundamental question: Is RNN generating code by simply memorizing a particular training sample? If this were the case, RNN would just work in a copy-and-paste fashion, which degrades the problem to a trivial case. | 1510.07211#8 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 9 | By examining the training data, we observe that no identical program exists in the training set, which rules out the possibility that the RNN works by exact memorization. We further use ccfinder⁴ to detect the most similar code in the training set. Two samples are shown in Figure 2, and the results are particularly interesting. We provide our explanation regarding several aspects of a program as follows.
• Structure. Figure 2b shows the most similar code in structure. The generated code implements the same algorithm: scanning the array twice to find the maximum and second maximum numbers, respectively. Notice, however, that the two structures (abstract syntax trees, say) are not exactly the same, as there are differences in variable definitions. A more interesting detail is that the RNN has recognized that "i<n" and "i<=n-1" are equivalent in the for loop: it does not exactly follow the sample code (b) in the training set, yet remains correct. | 1510.07211#9 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 10 | • Variable IDs. The training sample with the most similar variable IDs is shown in Figure 2c. Our generated code uses the same ID, a, for the array, and max1, max2 to cache the two wanted numbers; but later, the structure diverges. Nevertheless, our network is aware of the variable IDs it has generated, and remains coherent until the very end of the program.
³ The entire dataset and configurations are available on our website: http://sites.google.com/site/rnngenprogram ⁴ http://www.ccfinder.net/ | 1510.07211#10 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 11 | Figure 2: (a) Code generated by the RNN. The code is almost correct except for 4 wrong characters (among ∼280 characters in total), highlighted in the figure. (b) Code with the most similar structure in the training set, detected by ccfinder. (c) Code with the most similar identifiers in the training set, also detected by ccfinder. Note that we preserve all indents, spaces, and line feeds. The 4 errors are: (1) the identifier "x" should be "n"; (2) "max" should be "max2"; (3) "==" should be "<"; (4) the return type should be void.
• Style. We find no particular training samples having the same code style in terms of indents, line feeds, etc. This makes sense because the training programs are written by junior programmers, who may not follow standard style conventions, and thus the network has no notion of the "right" style. However, as all training samples are "correct" programs, our network has little difficulty in learning the syntax of C programs, as the generated code can almost be compiled. | 1510.07211#11 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 12 | Through the above analysis, we gain a basic idea of how the RNN is able to generate programs. The RNN first recognizes the brief comment, "find the maximum and second maximum numbers," which precedes the code as input. We would like to point out that, in this experiment, the RNN does not understand the meaning of this sentence; rather, upon reading the brief comment, the RNN switches its hidden states to generate code with the desired functionality. For each functionality, the RNN is aware of different aspects of a possible program, including structures, IDs, etc. When generating, it chooses the most likely character conditioned on the previous characters, and also conditioned on the input. In particular, the RNN does have the ability to mix different structures and IDs while remaining (almost) coherent.
# 3. PROSPECTS & ROAD MAP
While simple and preliminary, our case study and analysis provide an illuminating picture of end-to-end program generation with deep neural networks. We point out several scenarios where deep learning can benefit real-world SE practice, which are also research topics for long-term studies.
• Understanding changeable user intention. The current case study shows the RNN's ability to recognize certain | 1510.07211#12 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 13 | • Understanding changeable user intention. The current case study shows the RNN's ability to recognize certain
intentions and generate the corresponding code. In SE practice, however, we often face changing requirements from users. To address this problem, a direct extension is to train a parametric code generator with arguments (e.g., file names, protocols) implicitly expressed in natural language. To tackle a more challenging prospect, we might first train a network to generate different "primitive" code snippets, and then "glue" them together, as sketched below. For instance, if a network has learned to write code for finding the maximum number, and also for finding the minimum number, then it should be possible to generate these two snippets in sequence when it reads an instruction such as "find the maximum and minimum numbers." | 1510.07211#13 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 14 | • Incorporating multiple sources of user intention. When developing software, programmers usually find their code dependent on context (e.g., previously defined variables, existing API call sequences) in addition to the desired functionality. In such scenarios, we might train a network to fill in missing blocks of code. While we admit that code completion in the fully general case is hardly sensible, we think this problem is realistic in certain task-specific scenarios. For example, a typical way of reading a text file in Java involves creating a FileReader and a BufferedReader, reading the lines in the file, closing the file, and also catching exceptions. Such standard pipelines might be generated automatically by neural networks, provided the context code.
Despite the promising future of using RNNs to generate source code, efforts are needed from multiple disciplines, including the SE, NLP, and machine learning communities. The most important questions for the SE community are defining user intention and providing datasets for training. How can we | 1510.07211#14 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 15 | specify the functionality that we want to generate? How can we specify the arguments of a function? How can we collect datasets that are not only large and informative enough for training, but also clean enough to avoid including too much noise? These are among the open questions. The NLP and machine learning communities, on the other hand, are continuously improving neural architectures. Attention-based networks [3, 21], for example, have recently been proposed to mitigate the problem that long input sequences cannot be compressed into a fixed-size vector. More studies are still needed on understanding the memory capacity of RNNs, generating data with more coherent semantics, or even revising generated data.
We concede that using RNNs to generate programs differs significantly from how humans write programs. It currently appears unrealistic to train any learning machine, including deep neural networks, to fully understand either natural languages or programming languages. However, supported by existing evidence in the literature and the case study in this paper, we believe end-to-end program generation will be possible in the future.
# 4. RELATED WORK IN DEEP LEARNING FOR PROGRAM ANALYSIS | 1510.07211#15 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 16 | # 4. RELATED WORK IN DEEP LEARNING FOR PROGRAM ANALYSIS
Recent years have witnessed the birth of program analysis based on deep neural networks. Our previous work learns programs' vector representations, serving as a pretraining phase in deep learning [17]; we also propose tree-based convolutional neural networks to classify programs by functionality and to detect source code of certain patterns [16]. Zaremba et al. use RNNs to estimate the output of a restricted Python program [22]. Allamanis et al. leverage vector representations to suggest method names [1]. All the above models are discriminative, by which we mean the tasks can be viewed as classification problems. Karpathy et al. train an RNN-based language model on C code, which maximizes the joint probability of a program [11]. Different from the above studies, this paper investigates whether neural models can synthesize executable, functionally coherent programs, which places greater demands on matching users' intention and capturing the internal structure of source code.
# 5. CONCLUDING REMARKS | 1510.07211#16 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 17 | # 5. CONCLUDING REMARKS
In this paper, we trained a recurrent neural network (RNN) to generate (almost) executable, functionally coherent source code. Our initial work has demonstrated the possibility of automatic end-to-end program generation. By analyzing the RNN's mechanism, we envisioned several scenarios where such techniques could be applied to software engineering tasks in future decades. We call for studies from multiple disciplines to further address this new research direction.
# 6. REFERENCES
[1] M. Allamanis, E. Barr, C. Bird, and C. Sutton. Suggesting accurate method and class names. In ESEC/FSE, 2015.
[2] K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint, 1409.1259, 2014.
[3] J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech recognition. arXiv preprint, 1506.07503, 2015.
[4] A. Cozzie and S. T. King. Macho: Writing programs | 1510.07211#17 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 18 | [4] A. Cozzie and S. T. King. Macho: Writing programs
with natural language and examples. Technical report, University of Illinois at Urbana-Champaign, 2012.
[5] P. Flener and D. Partridge. Inductive programming. Automated Softw. Engineering, 8(2):131–137, 2001.
[6] S. Gulwani. Dimensions in program synthesis. In Proc. ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming, 2010.
[7] T. Helmuth and L. Spector. General program synthesis benchmark suite. In Proc. Genetic and Evol. Comput. Conf. ACM, 2015.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, 1997. [9] H. Hyötyniemi. Turing machines are recurrent neural
networks. Proc. STeP, 1996. | 1510.07211#18 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 19 | networks. Proc. STeP, 1996.
[10] A. Karpathy. The unreasonable effectiveness of recurrent neural networks. http://karpathy.github.io/2015/05/21/rnn-effectiveness/, 2015.
[11] A. Karpathy, J. Johnson, and F. Li. Visualizing and understanding recurrent networks. arXiv preprint, 1506.02078, 2015.
[12] R. Knöll and M. Mezini. Pegasus: First steps toward a naturalistic programming language. In OOPSLA, 2006.
[13] A. Kumar, O. Irsoy, J. Su, et al. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint, 1506.07285, 2015.
[14] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[15] Z. Manna and R. Waldinger. A deductive approach to program synthesis. ACM Trans. Programming Languages and Syst., 2(1):90–121, 1980. | 1510.07211#19 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.07211 | 20 | [16] L. Mou, G. Li, Z. Jin, L. Zhang, and T. Wang. TBCNN: A tree-based convolutional neural network for programming language processing. AAAI Workshop, 2015.
[17] L. Mou, G. Li, Y. Liu, H. Peng, Z. Jin, Y. Xu, and
L. Zhang. Building program vector representations for deep learning. arXiv preprint, 1409.3358, 2014. [18] R. Pascanu, T. Mikolov, and Y. Bengio. On the
difficulty of training recurrent neural networks. arXiv preprint, 1211.5063, 2012.
[19] I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[20] W. Weimer, T. Nguyen, C. Le Goues, and S. Forrest. Automatically finding patches using genetic programming. In ICSE, 2009.
[21] K. Xu et al. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint, 1502.03044, 2015.
[22] W. Zaremba and I. Sutskever. Learning to execute. arXiv preprint, 1410.4615, 2014. | 1510.07211#20 | On End-to-End Program Generation from User Intention by Deep Neural Networks | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
characterby-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice. | http://arxiv.org/pdf/1510.07211 | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | cs.SE, cs.LG | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | cs.SE | 20151025 | 20151025 | [] |
1510.03009 | 0 |
Published as a conference paper at ICLR 2016
# NEURAL NETWORKS WITH FEW MULTIPLICATIONS
Zhouhan Lin Université de Montréal Canada [email protected]
Matthieu Courbariaux Université de Montréal Canada [email protected]
Roland Memisevic Université de Montréal Canada [email protected]
Yoshua Bengio Université de Montréal Canada
# ABSTRACT | 1510.03009#0 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 0 |
# A Diversity-Promoting Objective Function for Neural Conversation Models
# Jiwei Li 1∗ Michel Galley 2
Chris Brockett 2 1Stanford University, Stanford, CA, USA [email protected] 2Microsoft Research, Redmond, WA, USA {mgalley,chrisbkt,jfgao,billdol}@microsoft.com
# Jianfeng Gao 2
# Bill Dolan 2
# Abstract
Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., I don't know) regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message), is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations. | 1510.03055#0 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
# 1 Introduction | 1510.03055#0 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 1 | Yoshua Bengio Universit´e de Montr´eal Canada
# ABSTRACT
For most deep learning algorithms training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on ï¬oating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: First we stochastically binarize weights to convert multiplications involved in computing hidden states to sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across 3 pop- ular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classiï¬cation performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware- friendly training of neural networks.
# INTRODUCTION | 1510.03009#1 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 1 | # 1 Introduction
Conversational agents are of growing importance in facilitating smooth interaction between humans and their electronic devices, yet conventional dialog systems continue to face major challenges in the form of robustness, scalability and domain adaptation. Attention has thus turned to learning conversational patterns from data: researchers have begun to explore data-driven generation of conversational responses within the framework of statistical machine translation (SMT), either phrase-based (Ritter et al., 2011), or using neural networks to rerank, or directly in the form of sequence-to-sequence (SEQ2SEQ) models (Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Wen et al., 2015; Serban et al., 2016). SEQ2SEQ models offer the promise of scalability and language-independence, together with the
capacity to implicitly learn semantic and syntactic relations between pairs, and to capture contextual dependencies (Sordoni et al., 2015) in a way not possible with conventional SMT approaches (Ritter et al., 2011). | 1510.03055#1 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 2 | # INTRODUCTION
Training deep neural networks has long been computationally demanding and time consuming. For some state-of-the-art architectures, it can take weeks to train a model (Krizhevsky et al., 2012). Another problem is that the demand for memory can be huge. For example, many common models in speech recognition or machine translation need 12 Gigabytes or more of storage (Gulcehre et al., 2015). To deal with these issues it is common to train deep neural networks by resorting to GPU or CPU clusters and to well-designed parallelization strategies (Le, 2013).
Most of the computation performed in training a neural network consists of floating point multiplications. In this paper, we focus on eliminating most of these multiplications to reduce computation. Based on our previous work (Courbariaux et al., 2015), which eliminates multiplications in computing hidden representations by binarizing weights, our method deals with both hidden state computations and backward weight updates. Our approach has two components. In the forward pass, weights are stochastically binarized using an approach we call binary connect or ternary connect, and for back-propagation of errors, we propose a new approach which we call quantized back propagation that converts multiplications into bit-shifts.¹
# 2 RELATED WORK | 1510.03009#2 |
1510.03055 | 2 | An engaging response generation system should be able to output grammatical, coherent responses that are diverse and interesting. In practice, however, neural conversation models tend to generate trivial or non-committal responses, often involving high-frequency phrases along the lines of I don't know or I'm OK (Sordoni et al., 2015; Vinyals and Le, 2015; Serban et al., 2016). Table 1 illustrates this phenomenon, showing top outputs from SEQ2SEQ models. All the top-ranked responses are generic. Responses that seem more meaningful or specific can also be found in the N-best lists, but rank much lower. In part at least, this behavior can be ascribed to the relative frequency of generic responses like I don't know in conversational datasets, in contrast with the relative sparsity of more contentful alternative responses.¹ It appears that by optimizing for the likelihood of outputs given inputs, neural models assign high probability to "safe" responses. This objective function, common in related tasks such as machine translation, may be unsuited to generation tasks involving intrinsically diverse outputs. Intuitively, it seems desirable to take into account not only the dependency of responses on messages, but also the inverse, the likelihood that a message will be provided to a given response.
We propose to capture this intuition by using Max | 1510.03055#2 |
1510.03009 | 3 | # 2 RELATED WORK
Several approaches have been proposed in the past to simplify computations in neural networks. Some of them try to restrict weight values to be an integer power of two, thus reducing all the multiplications to binary shifts (Kwan & Tang, 1993; Marchesi et al., 1993). In this way, multiplications are eliminated in both training and testing time. The disadvantage is that model performance can be severely reduced, and convergence of training can no longer be guaranteed.
¹The code for these approaches is available online at https://github.com/hantek/BinaryConnect
| 1510.03009#3 |
1510.03055 | 3 | We propose to capture this intuition by using Max
* The entirety of this work was conducted at Microsoft.
¹In our training dataset from the OpenSubtitles database (OSDb), 0.45% of sentences contain the sequence I don't know, a high rate considering the huge diversity of this dataset.
−1.09 Get out of here. −1.09 I'm going home. −1.09 Oh my god! ... −1.55 My name is Robert. ... −1.64 Twenty-five. −1.66 Five. −1.71 Eight. | 1510.03055#3 |
1510.03009 | 4 | ¹The code for these approaches is available online at https://github.com/hantek/BinaryConnect
Kim & Paris (2015) introduces a completely Boolean network, which simplifies the test time computation at an acceptable performance hit. The approach still requires a real-valued, full precision training phase, however, so the benefits of reducing computations do not apply to training. Similarly, Machado et al. (2015) manage to get acceptable accuracy on sparse representation classification by replacing all floating-point multiplications by integer shifts. Bit-stream networks (Burge et al., 1999) also provide a way of binarizing neural network connections, by substituting weight connections with logical gates. Similar to that, Cheng et al. (2015) prove that deep neural networks with binary weights can be trained to distinguish between multiple classes with expectation back propagation.
There are some other techniques that focus on reducing the training complexity. For instance, instead of reducing the precision of weights, Simard & Graf (1994) quantize states, learning rates, and gradients to powers of two. This approach manages to eliminate multiplications with negligible performance reduction.
# 3 BINARY AND TERNARY CONNECT
3.1 BINARY CONNECT REVISITED | 1510.03009#4 |
1510.03055 | 4 |
Input: What are you doing?
−0.86 I don't know.
−1.03 I don't know!
−1.06 Nothing.
−1.09 Get out of the way.
−1.10 I'm talking to you.
Input: what is your name?
−0.91 I don't know.
−0.92 I don't know!
−0.92 I don't know, sir.
−0.97 Oh, my god!
−1.58 My name is John.
−1.59 My name's John.
Input: How old are you?
−0.79 I don't know.
−1.06 I'm fine.
−1.17 I'm all right.
−1.17 I'm not sure.
Table 1: Responses generated by a 4-layer SEQ2SEQ neural model trained on 20 million conversation pairs taken from the OpenSubtitles dataset. Decoding is implemented with beam size set to 200. The top examples are the responses with the highest average probability log-likelihoods in the N-best list. Lower-ranked, less-generic responses were manually chosen. | 1510.03055#4 |
1510.03009 | 5 | # 3 BINARY AND TERNARY CONNECT
3.1 BINARY CONNECT REVISITED
In Courbariaux et al. (2015), we introduced a weight binarization technique which removes multiplications in the forward pass. We summarize this approach in this subsection, and introduce an extension to it in the next.
Consider a neural network layer with N input and M output units. The forward computation is y = h(W x + b) where W and b are weights and biases, respectively, h is the activation function, and x and y are the layer's inputs and outputs. If we choose ReLU as h, there will be no multiplications in computing the activation function, thus all multiplications reside in the matrix product W x. For each input vector x, N × M floating point multiplications are needed.
Binary connect eliminates these multiplications by stochastically sampling weights to be −1 or 1. Full precision weights ¯w are kept in memory as reference, and each time y is needed, we sample a stochastic weight matrix W according to ¯w. For each element of the sampled matrix W, the probability of getting a 1 is proportional to how "close" its corresponding entry in ¯w is to 1, i.e., | 1510.03009#5 |
1510.03055 | 5 | imum Mutual Information (MMI), first introduced in speech recognition (Bahl et al., 1986; Brown, 1987), as an optimization objective that measures the mutual dependence between inputs and outputs. Below, we present practical strategies for neural generation models that use MMI as an objective function. We show that use of MMI results in a clear decrease in the proportion of generic response sequences, generating correspondingly more varied and interesting outputs.
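To make the objective concrete: the mutual information between a message S and a response T is, up to terms that do not depend on T, log p(T|S) − λ log p(T), so an MMI-style scorer discounts responses that a language model already finds likely regardless of the input. Below is a minimal reranking sketch; the function, the weight lam, and the toy numbers are our illustrative assumptions, not necessarily the paper's exact formulation.

```python
def mmi_rerank(candidates, lam=0.5):
    """Rerank candidate responses by an MMI-style score.

    candidates: list of (response, log_p_t_given_s, log_p_t) tuples, where
    log_p_t_given_s would come from a trained SEQ2SEQ model and log_p_t
    from a language model; lam weights the anti-language-model penalty.
    """
    scored = [(r, lp_ts - lam * lp_t) for r, lp_ts, lp_t in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy example: the generic response wins on likelihood alone,
# but loses once its unconditional probability is discounted.
cands = [("I don't know.", -0.86, -0.70), ("Get out of here.", -1.09, -2.30)]
print(mmi_rerank(cands)[0][0])  # -> "Get out of here."
```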
# 2 Related work
The approach we take here is data-driven and end-to-end. This stands in contrast to conventional dialog systems, which typically are template- or heuristic-driven even where there is a statistical component (Levin et al., 2000; Oh and Rudnicky, 2000; Ratnaparkhi, 2002; Walker et al., 2003; Pieraccini et al., 2009; Young et al., 2010; Wang et al., 2011; Banchs and Li, 2012; Chen et al., 2013; Ameixa et al., 2014; Nio et al., 2014).
We follow a newer line of investigation, originally introduced by Ritter et al. (2011), which frames response generation as a statistical machine translation (SMT) problem. Recent progress in SMT stemming from the use of neural language models (Sutskever | 1510.03055#5 |
1510.03009 | 6 | P(Wij = 1) = (¯wij + 1)/2; P(Wij = −1) = 1 − P(Wij = 1) (1)
It is necessary to add some edge constraints to ¯w. To ensure that P(Wij = 1) lies in a reasonable range, values in ¯w are forced to be real values in the interval [-1, 1]. If during the updates any of its values grows beyond that interval, we set it to the corresponding edge value −1 or 1. That way floating point multiplications become sign changes. | 1510.03009#6 |
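As a concrete illustration, here is a minimal NumPy sketch of the sampling rule of Eq. (1) together with the clipping constraint; the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w_bar):
    """Sample a binary-connect weight matrix W in {-1, +1} from the
    full-precision reference w_bar, following Eq. (1)."""
    w_bar = np.clip(w_bar, -1.0, 1.0)   # edge constraint on the reference
    p_one = (w_bar + 1.0) / 2.0         # P(W_ij = +1)
    return np.where(rng.random(w_bar.shape) < p_one, 1.0, -1.0)

# Forward pass of one layer with sampled weights: products with W reduce to
# sign changes, and ReLU keeps the activation itself multiplication-free.
w_bar = rng.uniform(-1.0, 1.0, size=(4, 3))
x = rng.uniform(0.0, 1.0, size=3)
y = np.maximum(binarize(w_bar) @ x, 0.0)
```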
et al., 2014; Gao et al., 2014; Bahdanau et al., 2015; Luong et al., 2015) has inspired attempts to extend these neural techniques to response generation. Sordoni et al. (2015) improved upon Ritter et al. (2011) by rescoring the output of a phrasal SMT-based conversation system with a SEQ2SEQ model that incorporates prior context. Other researchers have subsequently sought to apply direct end-to-end SEQ2SEQ models (Shang et al., 2015; Vinyals and Le, 2015; Wen et al., 2015; Yao et al., 2015; Serban et al., 2016). These SEQ2SEQ models are Long Short-Term Memory (LSTM) neural networks (Hochreiter and Schmidhuber, 1997) that can implicitly capture compositionality and long-span dependencies. Wen et al. (2015) attempt to learn response templates from crowd-sourced data, whereas we seek to develop methods that can learn conversational patterns from naturally-occurring data. | 1510.03055#6 |
1510.03009 | 7 | A remaining question concerns the use of multiplications in the random number generator involved in the sampling process. Sampling an integer has to be faster than a multiplication for the algorithm to be worth it. To be precise, in most cases we are doing mini-batch learning and the sampling process is performed only once for the whole mini-batch. Normally the batch size B varies up to several hundreds. So, as long as one sampling process is significantly faster than B multiplications, it is still worth it. Fortunately, efficiently generating random numbers has been studied in Jeavons et al. (1994); van Daalen et al. (1993). Also, it is possible to get random numbers from real random processes, like CPU temperatures, etc. We are not going into the details of random number generation as this is not the focus of this paper.
3.2 TERNARY CONNECT
The binary connect introduced in the former subsection allows weights to be −1 or 1. However, in a trained neural network, it is common to observe that many learned weights are zero or close to zero. Although the stochastic sampling process would allow the mean value of sampled weights to be zero, this suggests that it may be beneficial to explicitly allow weights to be zero. | 1510.03009#7 |
1510.03055 | 7 | Prior work in generation has sought to increase diversity, but with different goals and techniques. Carbonell and Goldstein (1998) and Gimpel et al. (2013) produce multiple outputs that are mutually diverse, either non-redundant summary sentences or N-best lists. Our goal, however, is to produce a single non-trivial output, and our method does not require identifying lexical overlap to foster diversity.² On a somewhat different task, Mao et al. (2015, Section 6) utilize a mutual information objective in image caption retrieval. Below, we focus on the challenge of using MMI in response generation, comparing the performance of MMI models against maximum likelihood.
# 3 Sequence-to-Sequence Models | 1510.03055#7 |
1510.03009 | 8 | To allow weights to be zero, some adjustments are needed for Eq. 1. We split the interval of [-1, 1], within which the full precision weight value ¯wij lies, into two sub-intervals: [−1, 0] and (0, 1]. If a
weight value ¯wij drops into one of them, we sample Wij to be one of the two edge values of that interval, according to its distance from ¯wij, i.e., if ¯wij > 0:
P(Wij = 1) = ¯wij; P(Wij = 0) = 1 − ¯wij (2)
and if ¯wij <= 0:
P(Wij = −1) = −¯wij; P(Wij = 0) = 1 + ¯wij (3)
Like binary connect, ternary connect also eliminates all multiplications in the forward pass.
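A matching sketch of the ternary sampling rule of Eqs. (2)-(3), under the same assumptions (and names) as the binary-connect sketch above:

```python
import numpy as np

rng = np.random.default_rng(0)

def ternarize(w_bar):
    """Sample a ternary-connect weight matrix W in {-1, 0, +1}:
    Eq. (2) for entries with w_bar > 0, Eq. (3) otherwise."""
    w_bar = np.clip(w_bar, -1.0, 1.0)
    u = rng.random(w_bar.shape)
    plus = (w_bar > 0) & (u < w_bar)      # P(W = +1) = w_bar,  so P(W = 0) = 1 - w_bar
    minus = (w_bar <= 0) & (u < -w_bar)   # P(W = -1) = -w_bar, so P(W = 0) = 1 + w_bar
    return plus.astype(np.float64) - minus.astype(np.float64)
```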
# 4 QUANTIZED BACK PROPAGATION
In the former section we described how multiplications can be eliminated from the forward pass. In this section, we propose a way to eliminate multiplications from the backward pass. | 1510.03009#8 |
1510.03055 | 8 | # 3 Sequence-to-Sequence Models
Given a sequence of inputs X = {x1, x2, ..., xNx}, an LSTM associates each time step with an input gate, a memory gate and an output gate, respectively denoted as ik, fk and ok. We distinguish e and h where ek denotes the vector for an individual text unit (for example, a word or sentence) at time step k while hk denotes the vector computed by the LSTM model at time k by combining ek and hk−1. ck is the cell state vector at time k, and σ denotes the sigmoid function. Then, the vector representation hk
²Augmenting our technique with MMR-based (Carbonell and Goldstein, 1998) diversity helped increase lexical but not semantic diversity (e.g., I don't know vs. I haven't a clue), and with no gain in performance.
for each time step k is given by: | 1510.03055#8 |
1510.03009 | 9 | In the former section we described how multiplications can be eliminated from the forward pass. In this section, we propose a way to eliminate multiplications from the backward pass.
Suppose the i-th layer of the network has N input and M output units, and consider an error signal δ propagating downward from its output. The updates for weights and biases would be the outer product of the layer's input and the error signal:
ΔW = η [δ ⊙ h′(W x + b)] x^T (4)

Δb = η [δ ⊙ h′(W x + b)] (5)

where η is the learning rate, and x the input to the layer. The operator ⊙ stands for element-wise multiplication. While propagating through the layers, the error signal δ needs to be updated, too. Its update taking into account the next layer below takes the form:

δ = [W^T δ] ⊙ h′(W x + b) (6) | 1510.03009#9 |
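Read as code, the backward pass of Eqs. (4)-(6) for one dense ReLU layer might look as follows (a sketch with our own names; we return W^T g and let the layer below apply its own h′, which is the usual way of realizing Eq. (6)):

```python
import numpy as np

def relu_prime(z):
    return (z > 0).astype(np.float64)

def backprop_layer(delta, x, W, b, lr):
    """Backward pass of one layer per Eqs. (4)-(6).

    delta: error signal at this layer's output (length M)
    x:     this layer's input (length N); W: (M, N); b: (M,)
    """
    g = delta * relu_prime(W @ x + b)  # δ ⊙ h'(Wx + b), shared by all three updates
    dW = lr * np.outer(g, x)           # Eq. (4): the outer product with x
    db = lr * g                        # Eq. (5)
    delta_down = W.T @ g               # Eq. (6), before the next layer's h'
    return dW, db, delta_down
```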
1510.03055 | 9 | for each time step k is given by:
ik = σ(Wi · [hk−1, ek]) (1)
fk = σ(Wf · [hk−1, ek]) (2)
ok = σ(Wo · [hk−1, ek]) (3)
lk = tanh(Wl · [hk−1, ek]) (4)
ck = fk · ck−1 + ik · lk (5)
hk = ok · tanh(ck) (6)
where Wi, Wf, Wo, Wl ∈ R^(D×2D). In SEQ2SEQ generation tasks, each input X is paired with a sequence of outputs to predict: Y = {y1, y2, ..., yNy}. The LSTM defines a distribution over outputs and sequentially predicts tokens using a softmax function: | 1510.03055#9 |
1510.03009 | 10 | δ = [W^T δ] ⊙ h′(W x + b) (6)
There are 3 terms that appear repeatedly in Eqs. 4 to 6: δ, h′(W x + b) and x. The latter two terms introduce matrix outer products. To eliminate multiplications, we can quantize one of them to be an integer power of 2, so that multiplications involving that term become binary shifts. The expression δ ⊙ h′(W x + b) contains downflowing gradients, which are largely determined by the cost function and network parameters, thus it is hard to bound its values. However, bounding the values is essential for quantization because we need to supply a fixed number of bits for each sampled value, and if that value varies too much, we will need too many bits for the exponent. This, in turn, will result in the need for more bits to store the sampled value and unnecessarily increase the required amount of computation.
While δ ⊙ h′(W x + b) is not a good choice for quantization, x is a better choice, because it is the hidden representation at each layer, and we know roughly the distribution of each layer's activation. | 1510.03009#10 |
1510.03009 | 11 | Our approach is therefore to eliminate multiplications in Eq. 4 by quantizing each entry in x to an integer power of 2. That way the outer product in Eq. 4 becomes a series of bit shifts. Experimentally, we find that allowing a maximum of 3 to 4 bits of shift is sufficient to make the network work well. This means that 3 bits are already enough to quantize x. As the float32 format has 24 bits of mantissa, shifting (to the left or right) by 3 to 4 bits is completely tolerable. We refer to this approach of back propagation as "quantized back propagation."
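A sketch of the quantization step itself; the names, the zero handling, and the clipping range are our assumptions, with max_shift playing the role of the 3-4 bit shift budget.

```python
import numpy as np

def quantize_pow2(x, max_shift=4, eps=1e-12):
    """Round each entry of x to the nearest signed power of two, clipping the
    exponent so at most `max_shift` bits of shift are ever needed; entries too
    small to represent are flushed to zero."""
    sign = np.sign(x)
    mag = np.abs(x)
    exp = np.clip(np.round(np.log2(np.maximum(mag, eps))), -max_shift, max_shift)
    q = sign * np.exp2(exp)
    return np.where(mag < 2.0 ** (-max_shift - 1), 0.0, q)
```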
If we choose ReLU as the activation function, and since we are reusing the (W x + b) that was computed during the forward pass, computing the term h′(W x + b) involves no additional sampling or multiplications. In addition, quantized back propagation eliminates the multiplications in the outer product in Eq. 4. The only places where multiplications remain are the element-wise products. In Eq. 5, multiplying by η and applying ⊙ requires 2 × M multiplications, while in Eq. 4 we can reuse the result of Eq. 5. To update δ would need another M multiplications, thus 3 × M multiplications
| 1510.03009#11 |