Dataset schema (per-row fields and value ranges):
- doi: string (length 10)
- chunk-id: int64 (0 to 936)
- chunk: string (length 401 to 2.02k)
- id: string (length 12 to 14)
- title: string (length 8 to 162)
- summary: string (length 228 to 1.92k)
- source: string (length 31)
- authors: string (length 7 to 6.97k)
- categories: string (length 5 to 107)
- comment: string (length 4 to 398)
- journal_ref: string (length 8 to 194)
- primary_category: string (length 5 to 17)
- published: string (length 8)
- updated: string (length 8)
- references: list
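The rows below follow this schema, with each arXiv paper split into overlapping text chunks keyed by doi and chunk-id. A minimal sketch for working with the dump, assuming the rows are exported as JSON Lines with exactly these field names (the file name "chunks.jsonl" is hypothetical, not the actual artifact name):

```python
import json
from collections import defaultdict

# Minimal sketch: iterate the dump assuming it is exported as JSON Lines with
# one record per line and the fields listed in the schema above.
def iter_records(path="chunks.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Group the chunk rows back into their source papers by arXiv identifier.
papers = defaultdict(list)
for rec in iter_records():
    papers[rec["doi"]].append((rec["chunk-id"], rec["chunk"]))

for arxiv_id, chunks in papers.items():
    ordered = [text for _, text in sorted(chunks)]
    print(arxiv_id, f"{len(ordered)} chunks")
```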
1611.09823
44
We refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix.
# B INSTRUCTIONS GIVEN TO TURKERS
These are the instructions given for the textual feedback Mechanical Turk task (we also constructed a separate task to collect the initial questions, not described here):
Title: Write brief responses to given dialogue exchanges (about 15 min)
Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer.
Instructions: Each task consists of the following triplets: 1. a question by the teacher; 2. the correct answer(s) to the question (separated by "OR"); 3. a proposed answer in reply to the question from the student. Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not.
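Each HIT is thus a (question, gold answers, student reply) triplet, with the gold answers given as a single string whose alternatives are separated by "OR". The snippet below is a small illustrative check of whether a reply matches one of the alternatives; it is a sketch written for this dump, not code from the paper, and the helper name and string normalization are assumptions.

```python
# Illustrative sketch (not from the paper): decide whether a student's reply
# matches any of the gold answers, which are given as one string with
# alternatives separated by "OR".
def is_correct(student_reply: str, gold_answers: str) -> bool:
    candidates = [a.strip().lower() for a in gold_answers.split("OR")]
    return student_reply.strip().lower() in candidates

# Example based on the task description:
print(is_correct("red", "white OR blue OR red"))    # True
print(is_correct("green", "white OR blue OR red"))  # False
```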
1611.09823#44
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09830
44
[Answer-validation interface: candidate answers shown include "Tucson, Arizona" and "10:30 a.m.", alongside the options "All answers are very bad" and "The question doesn't make sense."] Story (for your convenience): (CNN) -- U.S. Air Force officials called off their response late Friday afternoon at a Tucson, Arizona, base after reports that an armed man had entered an office building, the U.S. military branch said in a statement. Earlier in the day, a U.S. military official told CNN that a gunman was believed to be holed up in a building at the Davis-Monthan Air Force Base. This precipitated the Air Force
1611.09830#44
NewsQA: A Machine Comprehension Dataset
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
http://arxiv.org/pdf/1611.09830
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
cs.CL, cs.AI
null
null
cs.CL
20161129
20170207
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1606.05250" }, { "id": "1610.00956" }, { "id": "1612.04211" }, { "id": "1603.01547" }, { "id": "1603.08023" } ]
1611.09823
45
For example, given 1) question: "what is a color in the united states flag?"; 2) correct answer: "white, blue, red"; 3) student reply: "red", your response could be something like "that's right!"; for 3) reply: "green", you might say "no that's not right" or "nope, a correct answer is actually white". Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT. Avoid naming the student or addressing "the class" directly. We will consider bonuses for higher quality responses during review.
1611.09823#45
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09830
45
Question: What is the age of Patrick McGoohan? [Checkbox: Click here if the question does not make sense or is not a question.] (CNN) -- Emmy-winning Patrick McGoohan, the actor who created one of British television's most surreal thrillers, has died aged 80, according to British media reports. Fans holding placards of Patrick McGoohan recreate a scene from 'The Prisoner' to celebrate the 40th anniversary of the show in 2007. The Press Association, quoting his son-in-law Cleve Landsberg, reported he died in Los Angeles after a short illness. McGoohan, star of the 1960s show 'The Danger Man', is best remembered for writing and starring in 'The Prisoner', about a former spy locked away in an isolated village who tries to escape each episode.
1611.09830#45
NewsQA: A Machine Comprehension Dataset
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
http://arxiv.org/pdf/1611.09830
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
cs.CL, cs.AI
null
null
cs.CL
20161129
20170207
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1606.05250" }, { "id": "1610.00956" }, { "id": "1612.04211" }, { "id": "1603.01547" }, { "id": "1603.08023" } ]
1611.09823
46
T: Which movie did Tom Hanks star in? S: Forrest Gump (the student answers correctly; left column of Figure 5) / T: Which movie did Tom Hanks star in? S: Brad Pitt (the student answers incorrectly; right column)
Task 1: Imitating an Expert Student. Correct: S: Forrest Gump T: (no response) Incorrect: S: Forrest Gump T: (no response)
Task 2: Positive and Negative Feedback. Correct: T: Yes, that's right! (+) Incorrect: T: No, that's incorrect!
Task 3: Answers Supplied by Teacher. Correct: T: Yes, that is correct. (+) Incorrect: T: No, the answer is Forrest Gump!
Task 4: Hints Supplied by Teacher. Correct: T: Correct! (+) Incorrect: T: No, it's a movie!
Task 5: Supporting Facts Supplied by Teacher. Correct: T: That's right. (+) Incorrect: T: No, because Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!
Task 6: Partial Feedback. Correct: if random(0,1)<0.5 then T: That's correct. (+) else T: That's correct.
1611.09823#46
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09830
46
[Answer-validation interface: Question: When was the lockdown initiated? Select the best answer: candidate answers include "Tucson, Arizona" and "10:30 a.m.", with options "All answers are very bad" and "The question doesn't make sense."] Story (for your convenience): (CNN) -- U.S. Air Force officials called off their response late Friday afternoon at a Tucson, Arizona, base after reports that an armed man had entered an office building, the U.S. military branch said in a statement. Earlier in the day, a U.S. military official told CNN that a gunman was believed to be holed up in a building at the Davis-Monthan Air Force Base. This precipitated the Air Force to call for a lock-down -- which began at 10:30 a.m. following the unconfirmed sighting of such a man. No shots were ever fired and law enforcement teams are on site, said the official, who had direct knowledge of the situation from conversations with base officials but did not want to be identified. In fact, at 6 p.m., Col. John Cherrey -- who commands the Air Force's 355th Fighter Wing -- told reporters that no gunman or weapon was ever found. He added that the building, where the gunman was once thought to
Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.
1611.09830#46
NewsQA: A Machine Comprehension Dataset
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
http://arxiv.org/pdf/1611.09830
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
cs.CL, cs.AI
null
null
cs.CL
20161129
20170207
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1606.05250" }, { "id": "1610.00956" }, { "id": "1612.04211" }, { "id": "1603.01547" }, { "id": "1603.08023" } ]
1611.09823
47
Task 6: Partial Feedback. Correct: if random(0,1)<0.5 then T: That's correct. (+) else T: That's correct. Incorrect: T: Sorry, wrong.
Task 7: No Feedback. Correct: T: Yes. Incorrect: T: No.
Task 8: Imitation and Feedback Mixture if random(0,1)<0.5 then T: Yes, that's right! (+) Task 8: Imitation and Feedback Mixture if random(0,1)<0.5 then T: Wrong. else T: (no response) else S: Forrest Gump
Task 9: Asking For Corrections. Correct: T: Correct! (+) Incorrect: T: No, that's wrong. S: Can you help me? T: Forrest Gump!
Task 10: Asking For Supporting Facts. Correct: T: Yes, that's right! (+)
1611.09823#47
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09830
47
Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.
Write Questions From A Summary
Instructions + Overview: Write questions about the highlights of a story.
Steps: 1. Read the highlights. 2. Write questions about the highlights.
Example Highlights:
* Sarah Palin from Alaska meets with McCain
* Fareed Zakaria says John McCain did not put country first with his choice
* Zakaria: This is "hell of a time" for Palin to start thinking about national, global issues
Questions: The questions can refer directly to the highlights, for example:
* Where is Palin from?
* What did Fareed say about John McCain's choice?
* Who is thinking about global issues?
Questions must always be related to the highlights but their answers don't have to be in the highlights. You can assume that the highlights summarize a document which can answer other questions, for example:
* What was the meeting about?
* What was McCain's choice?
* What issues is Palin thinking about?
Other Rules:
* Do not re-use the same or very similar questions.
* Questions should be written to have short answers.
* Do not write "how" nor "why" type questions since their answers are not short. "How far/long/many/much" are okay.
Figure 3: Question sourcing instructions for the crowdworkers.
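These sourcing rules are concrete enough to screen automatically; the sketch below (not part of the NewsQA pipeline) illustrates flagging questions that break the "how"/"why" rule while allowing the listed exceptions.

```python
import re

# Illustrative sketch: flag crowdsourced questions that break the
# "no 'how'/'why' questions" rule, allowing the stated exceptions
# ("How far/long/many/much ...").
ALLOWED_HOW = re.compile(r"^how\s+(far|long|many|much)\b", re.IGNORECASE)

def violates_question_rules(question: str) -> bool:
    words = question.strip().lower().split()
    if not words:
        return True  # empty question
    if words[0] == "why":
        return True
    if words[0] == "how" and not ALLOWED_HOW.match(question.strip()):
        return True
    return False

# Examples drawn from the instructions above:
print(violates_question_rules("Where is Palin from?"))                # False
print(violates_question_rules("How many issues is Palin weighing?"))  # False (allowed exception)
print(violates_question_rules("Why did McCain choose Palin?"))        # True
```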
1611.09830#47
NewsQA: A Machine Comprehension Dataset
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
http://arxiv.org/pdf/1611.09830
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
cs.CL, cs.AI
null
null
cs.CL
20161129
20170207
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1606.05250" }, { "id": "1610.00956" }, { "id": "1612.04211" }, { "id": "1603.01547" }, { "id": "1603.08023" } ]
1611.09823
48
Task 10: Asking For Supporting Facts. Incorrect: T: Sorry, that's not it. S: Can you help me? T: A relevant fact is that Forrest Gump starred actors Tom Hanks, Robin Wright, Gary Sinise!
Figure 5: The ten tasks our simulator implements, which evaluate different forms of teacher response and binary feedback. In each case the same example from WikiMovies is given for simplicity, where the student answered correctly for all tasks (left) or incorrectly (right). Red text denotes responses by the bot, with S denoting the bot. Blue text is spoken by the teacher, with T denoting the teacher's response. For imitation learning the teacher provides the response the student should say, denoted with S, in Tasks 1 and 8. A (+) denotes a positive reward.
C ADDITIONAL EXPERIMENTS
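Figure 5's task definitions amount to a feedback policy conditioned on whether the learner's answer is correct. The sketch below illustrates a few of those policies (Tasks 2, 3, 4 and 6) in code; it is an illustration of the described behaviour, not the authors' simulator, and the function and task identifiers are invented here for readability.

```python
import random

# Illustrative sketch of the teacher-response policy described in Figure 5
# for a few of the tasks (not the authors' actual simulator code).
def teacher_response(task: str, correct: bool, gold_answer: str, answer_type: str):
    """Return (feedback_text, reward) for the learner's answer."""
    if task == "task2_pos_neg_feedback":
        return ("Yes, that's right!", 1) if correct else ("No, that's incorrect!", 0)
    if task == "task3_answers_supplied":
        return ("Yes, that is correct.", 1) if correct else (f"No, the answer is {gold_answer}!", 0)
    if task == "task4_hints_supplied":
        return ("Correct!", 1) if correct else (f"No, it's a {answer_type}!", 0)
    if task == "task6_partial_feedback":
        if correct:
            # Textual feedback is always given, but the positive reward is
            # only observed about half of the time.
            reward = 1 if random.random() < 0.5 else 0
            return ("That's correct.", reward)
        return ("Sorry, wrong.", 0)
    raise ValueError(f"unknown task: {task}")

print(teacher_response("task3_answers_supplied", False, "Forrest Gump", "movie"))
```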
1611.09823#48
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
49
C ADDITIONAL EXPERIMENTS

Iteration                        | 1    | 2    | 3    | 4    | 5    | 6
Imitation Learning               | 0.24 | 0.23 | 0.23 | 0.23 | 0.25 | 0.25
Reward Based Imitation (RBI)     | 0.95 | 0.99 | 0.99 | 0.99 | 1.00 | 1.00
Forward Pred. (FP)               | 1.00 | 0.19 | 0.86 | 0.30 | 0.99 | 0.22
RBI+FP                           | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
FP (balanced)                    | 0.99 | 0.97 | 0.98 | 0.98 | 0.96 | 0.97
FP (rand. exploration ε = 0.25)  | 0.99 | 0.91 | 0.93 | 0.88 | 0.94 | 0.94
FP (rand. exploration ε = 0.5)   | 0.98 | 0.93 | 0.97 | 0.96 | 0.95 | 0.97

Table 3: Test accuracy of various models in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 3. Results > 0.95 are in bold.
[Plot panel: "Random Exploration for RBI"; x-axis Epoch.]
1611.09823#49
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
50
[Figure 6 panels: "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.6) Varying Batch Size", "FP (eps=0.6) Varying Batch Size", "Comparing RBI, FP and REINFORCE"; x-axis Epoch, y-axis test accuracy; legend: REINFORCE, RBI, FP; batch sizes 20, 80, 320, 1000.]
Figure 6: Training epoch vs. test accuracy for bAbI (Task 2) varying exploration ε and batch size.
1611.09823#50
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
51
[Figure 7 panels: "Random Exploration for RBI", "Random Exploration for FP", "Random Exploration for FP with Balancing", "Comparing RBI, FP and REINFORCE", "RBI (eps=0.6) Varying Batch Size", "FP (eps=0.6) Varying Batch Size"; x-axis Epoch, y-axis test accuracy; legend: REINFORCE, RBI, FP; batch sizes 20, 80, 320, 1000.]
1611.09823#51
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
52
Figure 7: Training epoch vs. test accuracy for bAbI (Task 3) varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP).
[Figure 8 panels: "Random Exploration for RBI", "Random Exploration for FP", "Random Exploration for FP with Balancing", "Comparing RBI, FP and REINFORCE", "RBI (eps=0.6) Varying Batch Size", "FP (eps=0.6) Varying Batch Size"; x-axis Epoch, y-axis test accuracy; legend: REINFORCE, RBI, FP; batch sizes 20, 80, 320, 1000.]
Figure 8: Training epoch vs. test accuracy for bAbI (Task 4) varying exploration ε and batch size. Random exploration is important for both reward-based (RBI) and forward prediction (FP).
1611.09823#52
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
53
[Figure 9 panels: "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", "Comparing RBI, FP and REINFORCE"; x-axis Epoch, y-axis test accuracy; legend: REINFORCE, RBI, FP; batch sizes 32, 320, 3200, 32000, full dataset.]
Figure 9: WikiMovies: Training epoch vs. test accuracy on Task 2 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.
1611.09823#53
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
54
[Figure 10 panels: "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", "Comparing RBI, FP and REINFORCE"; x-axis Epoch, y-axis test accuracy; legend: REINFORCE, RBI, FP; batch sizes 32, 320, 3200, 32000, full dataset.]
1611.09823#54
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
55
Figure 10: WikiMovies: Training epoch vs. test accuracy on Task 3 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.
1611.09823#55
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
56
[Figure 11 panels: "Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", "Comparing RBI, FP and REINFORCE"; x-axis Epoch, y-axis test accuracy; legend: REINFORCE, RBI, FP; batch sizes 32, 320, 3200, 32000, full dataset.]
1611.09823#56
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
57
Figure 11: WikiMovies: Training epoch vs. test accuracy on Task 4 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.
1611.09823#57
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
58
[Figure panels: four "FP (eps=0.5) Varying Batch Size" plots; x-axis Epoch, y-axis test accuracy; legend: batch sizes 32, 320, 3200, 32000, full dataset.]
1611.09823#58
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
60
C.1 ADDITIONAL EXPERIMENTS FOR MECHANICAL TURK SETUP

In the experiments of Section 5.2 we used real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by teacher) or a mix (Task 2+3) where we use one or the other for each example (50% chance of each). The latter makes the synthetic data have a mixed setup of responses, which more closely mimics the real data case. The results are given in Table 4. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data.
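The synthetic-feedback construction is mechanical: Task 2 yields positive or negative feedback, Task 3 yields the answer, and the Task 2+3 mix picks one of the two per example with 50% probability. Below is a small illustrative sketch of that construction; it is not the authors' code, and the templates are taken from the task examples in Figure 5.

```python
import random

# Illustrative sketch of constructing synthetic textual feedback for model
# predictions, following the Task 2 / Task 3 / Task 2+3 setups described above.
def synthetic_feedback(prediction: str, gold_answer: str, setup: str = "task2+3") -> str:
    correct = prediction.strip().lower() == gold_answer.strip().lower()
    if setup == "task2+3":
        setup = random.choice(["task2", "task3"])  # 50% chance of each setup
    if setup == "task2":  # positive or negative feedback only
        return "Yes, that's right!" if correct else "No, that's incorrect!"
    if setup == "task3":  # teacher supplies the answer when wrong
        return "Yes, that is correct." if correct else f"No, the answer is {gold_answer}!"
    raise ValueError(f"unknown setup: {setup}")

print(synthetic_feedback("Brad Pitt", "Forrest Gump", setup="task3"))
```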
1611.09823#60
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
61
For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of turker authored questions with gold annotated labels (so, there are no numerical rewards or textual feedback, this is a pure supervised setting). The results are given in Table 5. They indicate that RBI+FP and even FP alone get close to the performance of fully supervised learning.

Model                                        | r = 0 | r = 0.1 | r = 0.5 | r = 1
Reward Based Imitation (RBI)                 | 0.333 | 0.340   | 0.365   | 0.375
Forward Prediction (FP) [real]               | 0.358 | 0.358   | 0.358   | 0.358
RBI+FP [real]                                | 0.431 | 0.438   | 0.443   | 0.441
Forward Prediction (FP) [synthetic Task 2]   | 0.188 | 0.188   | 0.188   | 0.188
Forward Prediction (FP) [synthetic Task 2+3] | 0.328 | 0.328   | 0.328   | 0.328
Forward Prediction (FP) [synthetic Task 3]   | 0.361 | 0.361   | 0.361   | 0.361
RBI+FP [synthetic Task 2]                    | 0.382 | 0.383   | 0.407   | 0.408
RBI+FP [synthetic Task 2+3]                  | 0.459 | 0.465   | 0.464   | 0.478
RBI+FP [synthetic Task 3]                    | 0.473 | 0.486   | 0.490   | 0.494
1611.09823#61
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
62
Table 4: Incorporating Feedback From Humans via Mechanical Turk: comparing real human feedback to synthetic feedback. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). We compare real feedback (rows 2 and 3) to synthetic feedback when using FP or RBI+FP (rows 4 and 5).

Train data size    | 1k    | 5k    | 10k   | 20k   | 60k
Supervised MemN2N  | 0.333 | 0.429 | 0.476 | 0.526 | 0.599

Table 5: Fully Supervised (Imitation Learning) Results on Human Questions

         | r = 0 | r = 0.1 | r = 0.5 | r = 1
ε = 0    | 0.499 | 0.502   | 0.501   | 0.502
ε = 0.1  | 0.494 | 0.496   | 0.501   | 0.502
ε = 0.25 | 0.493 | 0.495   | 0.496   | 0.499
ε = 0.5  | 0.501 | 0.499   | 0.501   | 0.504
ε = 1    | 0.497 | 0.497   | 0.498   | 0.497
1611.09823#62
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09823
63
Table 6: Second Iteration of Feedback. Using synthetic textual feedback from the synthetic Task 2+3 mix with the RBI+FP method, an additional iteration of data collection of 10k examples, varying sparse binary reward fraction r and exploration ε. The performance of the first iteration model was 0.478.

C.2 SECOND ITERATION OF FEEDBACK

We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with r = 1, which previously gave a test accuracy of 0.478 (see Table 4). Using that model as a predictor, we collected an additional 10,000 training examples. We then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying r on the additional collected set. We also report the performance from varying ε, the proportion of random exploration of predictions on the new set. The results are reported in Table 6. Overall, performance is improved in the second iteration, with slightly better performance for large r and ε = 0.5. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any case.
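The collection step in this second iteration depends on ε, the proportion of predictions replaced by random exploratory answers before feedback is gathered, and on the reward fraction r. The sketch below illustrates that ε-greedy collection loop; it is illustrative only, and model_predict, teacher and candidates stand in for the trained policy, the feedback source and the per-question answer candidates.

```python
import random

# Illustrative sketch of the epsilon-greedy collection step: with probability
# epsilon the deployed model tries a random candidate answer instead of its own
# top prediction, and the (question, answer, feedback, reward) tuple is stored
# for the next training iteration. Rewards are kept only for a fraction r.
def collect_examples(model_predict, teacher, questions, candidates,
                     epsilon=0.5, reward_fraction=1.0):
    dataset = []
    for q in questions:
        answer = random.choice(candidates[q]) if random.random() < epsilon else model_predict(q)
        feedback, reward = teacher(q, answer)   # textual feedback + binary reward
        if random.random() > reward_fraction:   # drop the reward for (1 - r) of the examples
            reward = None
        dataset.append((q, answer, feedback, reward))
    return dataset
```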
1611.09823#63
Dialogue Learning With Human-In-The-Loop
An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.
http://arxiv.org/pdf/1611.09823
Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20161129
20170113
[ { "id": "1511.06931" }, { "id": "1505.00521" }, { "id": "1606.05250" }, { "id": "1511.06732" }, { "id": "1502.05698" }, { "id": "1604.06045" }, { "id": "1606.02689" }, { "id": "1506.02075" }, { "id": "1606.03126" } ]
1611.09268
1
# Abstract We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions— sampled from Bing’s search query logs—each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages—extracted from 3,563,535 web documents retrieved by Bing—that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models. # Introduction
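All three proposed tasks operate over the same kind of record: a query, a set of retrieved passages (some marked as supporting the answer), zero or more answers, and optionally a well-formed rewritten answer. A sketch of one way to represent such a record follows; the class and field names are illustrative choices for this sketch, not an authoritative description of the released file format.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative record structure for an MS MARCO-style example; field names are
# chosen for this sketch and may not match the released files exactly.
@dataclass
class Passage:
    text: str
    url: str
    is_selected: bool = False  # marked by the editor as supporting the answer

@dataclass
class MarcoExample:
    query: str
    passages: List[Passage]
    answers: List[str] = field(default_factory=list)             # empty => unanswerable
    well_formed_answers: List[str] = field(default_factory=list)

    def is_answerable(self) -> bool:
        # Task (i): predict whether the question can be answered from the passages.
        return len(self.answers) > 0
```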
1611.09268#1
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
2
# Introduction Building intelligent agents with machine reading comprehension (MRC) or open-domain question answering (QA) capabilities using real world data is an important goal of artificial intelligence. Progress in developing these capabilities can be of significant consumer value if employed in automated assistants—e.g., Cortana [Cortana], Siri [Siri], Alexa [Amazon Alexa], or Google Assistant [Google Assistant]—on mobile devices and smart speakers, such as Amazon Echo [Amazon Echo]. Many of these devices rely heavily on recent advances in speech recognition technology powered by neural models with deep architectures [Hinton et al., 2012, Dahl et al., 2012]. The rising popularity of spoken interfaces makes it more attractive for users to use natural language dialog for question-answering and information retrieval from the web as opposed to viewing traditional search result pages on a web browser [Gao et al., 2018]. Chatbots and other messenger based intelligent agents are also becoming popular in automating business processes—e.g., answering customer service requests. All of these scenarios can benefit from fundamental improvements in MRC models. However, MRC in the wild is extremely challenging. Successful MRC systems should be able to learn good representations from raw text, infer and reason over learned representations, and finally generate a summarized response that is correct in both form and content.
1611.09268#2
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
3
The public availability of large datasets has been instrumental in many AI research breakthroughs [Wissner-Gross, 2016]. For example, ImageNet’s [Deng et al., 2009] release of 1.5 million labeled examples with 1000 object categories led to the development of object classification models that perform better than humans on the ImageNet task [He et al., 2015]. Similarly, the large speech database collected over 20 years by DARPA enabled new breakthroughs in speech recognition performance from deep learning models Deng and Huang [2004]. Several MRC and QA datasets have also recently emerged. However, many of these existing datasets are not sufficiently large to train deep neural models with large number of parameters. Large scale existing MRC datasets, when available, are often synthetic. Furthermore, a common characteristic, shared by many of these datasets, is that the questions are usually generated by crowd workers based on provided text spans or documents. In MS MARCO, in contrast, the questions correspond to actual search queries that users submitted to Bing, and therefore may be more representative of a “natural” distribution of information need that users may want to satisfy using, say, an intelligent assistant.
1611.09268#3
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
4
Real-world text is messy: it may include typos or abbreviations—and transcription errors in case of spoken interfaces. The text from different documents may also often contain conflicting information. Most existing datasets, in contrast, often contain high-quality stories or text spans from sources such as Wikipedia. Real-world MRC systems should be benchmarked on realistic datasets where they need to be robust to noisy and problematic inputs. Finally, another potential limitation of existing MRC tasks is that they often require the model to operate on a single entity or a text span. Under many real-world application settings, the information necessary to answer a question may be spread across different parts of the same document, or even across multiple documents. It is, therefore, important to test an MRC model on its ability to extract information and support for the final answer from multiple passages and documents.
1611.09268#4
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
5
In this paper, we introduce Microsoft MAchine Reading Comprehension (MS MARCO)—a large-scale real-world reading comprehension dataset—with the goal of addressing many of the above-mentioned shortcomings of existing MRC and QA datasets. The dataset comprises anonymized search queries issued through Bing or Cortana. We annotate each question with segment information, as described in Section 3. Corresponding to each question, we provide a set of extracted passages from documents retrieved by Bing in response to the question. The passages and the documents may or may not actually contain the necessary information to answer the question. For each question, we ask crowd-sourced editors to generate answers based on the information contained in the retrieved passages. In addition to generating the answer, the editors are also instructed to mark the passages containing the supporting information—although we do not require these annotations to be exhaustive. The editors are allowed to mark a question as unanswerable based on the passages provided. We include these unanswerable questions in our dataset because we believe that the ability to recognize insufficient (or conflicting) information that makes a question unanswerable is important to develop for an MRC
1611.09268#5
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
6
we believe that the ability to recognize insufficient (or conflicting) information that makes a question unanswerable is important to develop for an MRC model. The editors are strongly encouraged to form answers in complete sentences. In total, the MS MARCO dataset contains 1,010,916 questions, 8,841,823 companion passages extracted from 3,563,535 web documents, and 182,669 editorially generated answers. Using this dataset, we propose three different tasks with varying levels of difficulty:
1611.09268#6
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
7
(i) Predict if a question is answerable given a set of context passages, and extract relevant information and synthesize the answer. (ii) Generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context. (iii) Rank a set of retrieved passages given a question. We describe the dataset and the proposed tasks in more detail in the rest of this paper and present some preliminary benchmarking results on these tasks. # 2 Related work Machine reading comprehension and open-domain question-answering are challenging tasks [Weston et al., 2015]. To encourage more rapid progress, the community has made several different datasets and tasks publicly available for benchmarking. We summarize some of them in this section. The Stanford Question Answering Dataset (SQuAD) [Rajpurkar et al., 2016] consists of 107,785 question-answer pairs from 536 articles, where each answer is a text span. The key distinctions between SQuAD and MS MARCO are: [Table 1: Comparison of MS MARCO and some of the other MRC datasets, listing for each dataset the number of questions, the number of documents or passages, and the answer type (span of words vs. human generated).]
1611.09268#7
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
8
1. The MS MARCO dataset is more than ten times larger than SQuAD—which is an important consideration if we want to benchmark large deep learning models [Frank, 2017]. 2. The questions in SQuAD are editorially generated based on selected answer spans, while in MS MARCO they are sampled from Bing’s query logs. 3. The answers in SQuAD consist of spans of text from the provided passages, while the answers in MS MARCO are editorially generated. 4. Originally SQuAD contained only answerable questions, although this changed in the more recent edition of the task [Rajpurkar et al., 2018].
1611.09268#8
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
9
4. Originally SQuAD contained only answerable questions, although this changed in the more recent edition of the task [Rajpurkar et al., 2018]. NewsQA [Trischler et al., 2017] is an MRC dataset with over 100,000 question and span-answer pairs based on roughly 10,000 CNN news articles. The goal of the NewsQA task is to test MRC models on reasoning skills—beyond word matching and paraphrasing. Crowd-sourced editors created the questions from the titles of the articles and the summary points (provided by CNN) without access to the article itself. A 4-stage collection methodology was employed to generate a more challenging MRC task. More than 44% of the NewsQA questions require inference and synthesis, compared to SQuAD’s 20%. DuReader [He et al., 2017] is a Chinese MRC dataset built with real application data from Baidu search and Baidu Zhidao—a community question answering website. It contains 200,000 questions and 420,000 answers from 1,000,000 documents. In addition, DuReader provides additional annotations of the answers—labelling them as either fact-based or opinion-based. Within each category, they are further divided into entity, yes/no, and descriptive answers.
1611.09268#9
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
10
The NarrativeQA [Kociský et al., 2017] dataset contains questions created by editors based on summaries of movie scripts and books. The dataset contains about 45,000 question-answer pairs over 1,567 stories, evenly split between books and movie scripts. Compared to the news corpus used in NewsQA, the collection of movie scripts and books is more complex and diverse—allowing the editors to create questions that may require more complex reasoning. The movie scripts and books are also longer documents than the news and Wikipedia articles used in NewsQA and SQuAD, respectively. SearchQA [Dunn et al., 2017] takes questions from the American TV quiz show Jeopardy!1 and submits them as queries to Google to extract snippets from the top 40 retrieved documents that may contain the answers to the questions. Document snippets not containing answers are filtered out, leaving more than 140K question-answer pairs and 6.9M snippets. The answers are short exact spans of text averaging between 1-2 tokens. MS MARCO, in contrast, focuses more on longer natural language answer generation, and the questions correspond to Bing search queries instead of trivia questions.
1611.09268#10
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
11
RACE [Lai et al., 2017] contains roughly 100,000 multiple-choice questions and 27,000 passages from standardized tests for Chinese students learning English as a foreign language. The dataset is split into RACE-M, which has approximately 30,000 questions targeted at middle school students aged 12-15, and RACE-H, which has approximately 70,000 questions targeted at high school students aged 15 to 18. Lai et al. [2017] report that the state-of-the-art neural models at the time of publication achieved 44% accuracy, while the ceiling human performance was 95%. The AI2 Reasoning Challenge (ARC) [Clark et al., 2018] by the Allen Institute for Artificial Intelligence consists of 7,787 grade-school multiple-choice science questions—typically with 4 possible answers. The answers generally require external knowledge or complex reasoning. In addition, # 1https://www.jeopardy.com/
1611.09268#11
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
12
# 1https://www.jeopardy.com/ [Figure 1: Simplified passage selection and answer summarization UI for human editors, showing the question “will i qualify for osap if i’m new in canada”, the candidate passages retrieved for it, and the selected passages used to compose the answer.] ARC provides a corpus of 14M science-related sentences with knowledge relevant to the challenge. However, the training of the models does not have to include, nor be limited to, this corpus.
1611.09268#12
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
13
ARC provides a corpus of 14M science-related sentences with knowledge relevant to the challenge. However, the training of the models does not have to include, nor be limited to, this corpus. ReCoRD [Zhang et al., 2018] contains 12,000 Cloze-style question-passage pairs extracted from CNN/Daily Mail news articles. For each pair in this dataset, the question and the passage are selected from the same news article such that they have minimal text overlap—making them unlikely to be paraphrases of each other—but refer to at least one common named entity. The focus of this dataset is on evaluating MRC models on their common-sense reasoning capabilities. # 3 The MS Marco dataset
1611.09268#13
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
14
# 3 The MS MARCO dataset To generate the 1,010,916 questions with 1,026,758 unique answers, we begin by sampling queries from Bing’s search logs. We filter out any non-question queries from this set. We then retrieve relevant documents for each question from Bing’s large-scale web index and automatically extract relevant passages from these documents. Finally, human editors annotate passages that contain useful and necessary information for answering the questions—and compose well-formed natural language answers summarizing that information. Figure 1 shows the user interface for a web-based tool that the editors use for completing these annotation and answer composition tasks. During the editorial annotation and answer generation process, we continuously audit the data being generated to ensure accuracy and quality of answers—and verify that the guidelines are appropriately followed.
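The construction pipeline above (sample queries, keep only question-like queries, retrieve documents, extract passages, hand the result to editors) can be illustrated with a minimal Python sketch. The paper states that question filtering is done by a machine-learned classifier; the keyword heuristic and the `retrieve_passages` callback below are hypothetical stand-ins for those components, not the actual system.

```python
import re

# Hypothetical stand-in for the machine-learned question/non-question
# classifier described in the paper: a crude keyword heuristic.
QUESTION_CUES = {"what", "how", "where", "when", "why", "who", "which",
                 "will", "can", "does", "is", "are"}

def looks_like_question(query: str) -> bool:
    """Return True if a query looks like a natural-language question."""
    q = query.strip().lower()
    if not q:
        return False
    return q.endswith("?") or re.split(r"\s+", q, maxsplit=1)[0] in QUESTION_CUES

def build_examples(sampled_queries, retrieve_passages):
    """Filter non-question queries and attach ~10 retrieved passages each,
    producing raw examples to be sent to human editors for annotation.
    retrieve_passages is an assumed callback, e.g. wrapping a search API."""
    examples = []
    for query in sampled_queries:
        if not looks_like_question(query):
            continue  # drop navigational and other non-question intents
        examples.append({
            "query": query,
            "passages": retrieve_passages(query, k=10),
            "answers": [],  # filled in later by the human editors
        })
    return examples
```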
1611.09268#14
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
15
As previously mentioned, the questions in MS MARCO correspond to user-submitted queries from Bing’s query logs. The question formulations, therefore, are often complex, ambiguous, and may even contain typographical and other errors. An example of such a question issued to Bing is: “in what type of circulation does the oxygenated blood flow between the heart and the cells of the body?”. We believe that these questions, while sometimes not well-formatted, are more representative of human information-seeking behaviour. Another example of a question from our dataset is: “will I qualify for osap if i’m new in Canada”. As shown in Figure 1, one of the relevant passages includes: “You must be a 1. Canadian citizen, 2. Permanent Resident or 3. Protected person”. When auditing our editorial process, we observe that even the human editors sometimes find the task of answering these questions difficult—especially when the question is in a domain the editor is unfamiliar with. We, therefore, believe that MS MARCO presents a challenging dataset for benchmarking MRC models. The MS MARCO dataset that we are publishing consists of six major components:
1611.09268#15
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
16
The MS MARCO dataset that we are publishing consists of six major components: 1. Questions: These are a set of anonymized question queries from Bing’s search logs, where the user is looking for a specific answer. Queries with navigational and other intents are excluded from our dataset. This filtering of question queries is performed automatically by a machine-learning-based classifier trained previously on human-annotated data. Selected questions are further annotated by editors based on whether they are answerable using the passages provided. [Table 2: Distribution of questions (percentage of questions) based on the answer-type classifier. Question contains: YesNo 7.46%, What 34.96%, How 16.8%, Where 3.46%, When 2.71%, Why 1.67%, Who 3.33%, Which 1.79%, Other 27.83%. Question classification: Description 53.12%, Numeric 26.12%, Entity 8.81%, Location 6.17%, Person 5.78%.]
1611.09268#16
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
17
2. Passages: For each question, we include on average a set of 10 passages which may contain the answer to the question. These passages are extracted from relevant web documents. They are selected by a state-of-the-art passage retrieval system at Bing. The editors are instructed to annotate the passages they use to compose the final answer as is_selected: 1. For questions where no answer was present in any of the passages, all of the passages should be annotated by setting is_selected to 0. 3. Answers: For each question, the dataset contains zero or more answers composed manually by the human editors. The editors are instructed to read and understand the questions, inspect the retrieved passages, and then synthesize a natural language answer with the correct information extracted strictly from the passages provided.
1611.09268#17
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
18
4. Well-formed Answers: For some question-answer pairs, the data also contains one or more answers that are generated by a post-hoc review-and-rewrite process. This process involves a separate editor reviewing the provided answer and rewriting it if: (i) it does not have proper grammar, (ii) there is a high overlap between the answer and one of the provided passages, indicating that the original editor may have copied the passage directly (a rough lexical check of this kind is sketched below), or (iii) the answer cannot be understood without the question and the passage context. E.g., given the question “tablespoon in cup” and the answer “16”, the well-formed answer should be “There are 16 tablespoons in a cup.”. 5. Document: For each of the documents from which the passages were originally extracted, we include: (i) the URL, (ii) the body text, and (iii) the title. We extracted these documents from Bing’s index as a separate post-processing step. Roughly 300,000 documents could not be retrieved because they were no longer in the index, and for the remaining documents it is possible—even likely—that the content may have changed since the passages were originally extracted.
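As noted in criterion (ii) above, answers copied near-verbatim from a passage can be flagged with a simple lexical-overlap check. The sketch below is a hypothetical illustration only; the actual procedure and threshold used in the review-and-rewrite process are not specified here, and the 0.9 cutoff is an assumption.

```python
def token_overlap(answer: str, passage: str) -> float:
    """Fraction of answer tokens that also appear in the passage."""
    answer_tokens = answer.lower().split()
    passage_tokens = set(passage.lower().split())
    if not answer_tokens:
        return 0.0
    return sum(tok in passage_tokens for tok in answer_tokens) / len(answer_tokens)

def likely_copied(answer: str, passages, threshold: float = 0.9) -> bool:
    """Flag answers that appear to be copied near-verbatim from some passage.
    The 0.9 threshold is an assumption for illustration."""
    return any(token_overlap(answer, p) >= threshold for p in passages)
```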
1611.09268#18
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
19
6. Question type: Each question is further automatically annotated using a machine-learned classifier with one of the following segment labels: (i) NUMERIC, (ii) ENTITY, (iii) LOCATION, (iv) PERSON, or (v) DESCRIPTION (phrase). Table 2 lists the relative size of the different question segments and compares it with the proportion of questions that explicitly contain words like “what” and “where”. Note that because the questions in our dataset are based on web search queries, we may observe a question like “what is the age of barack obama” be expressed simply as “barack obama age” in our dataset. [Table 3: The MS MARCO dataset format, listing each field and its description.] Query: A question query issued to Bing. Passages: Top 10 passages from web documents as retrieved by Bing. The passages are presented in ranked order to human editors. The passage that the editor uses to compose the answer is annotated as is_selected: 1. Document URLs: URLs of the top-ranked documents for the question from Bing. The passages are extracted from these documents. Answer(s): Answers composed by human editors for the question, the automatically extracted passages, and their corresponding documents.
1611.09268#19
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
20
Answer(s): Answers composed by human editors for the question, the automatically extracted passages, and their corresponding documents. Well Formed Answer(s): Well-formed answer rewritten by human editors, along with the original answer. Segment: QA classification; e.g., “tallest mountain in south america” belongs to the ENTITY segment because the answer is an entity (Aconcagua). Table 3 describes the final dataset format for MS MARCO. Inspired by [Gebru et al., 2018], we also release our dataset’s datasheet on our website. Finally, we summarize the key distinguishing features of the MS MARCO dataset as follows: 1. The questions are anonymized user queries issued to Bing. 2. All questions are annotated with segment information. 3. The context passages—from which the answers are derived—are extracted from real web documents. 4. The answers are composed by human editors. 5. A subset of the questions have multiple answers. 6. A subset of the questions have no answers. # 3.1 The passage ranking dataset
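Before turning to the passage ranking dataset, here is a minimal sketch of reading records laid out according to Table 3. It assumes a JSON-lines style file with field names `passages`, `passage_text`, `is_selected`, `answers`, and `query`; these names and the file path are assumptions for illustration, and the released files should be consulted for the exact layout.

```python
import json

def load_records(path):
    """Yield one MS MARCO-style record per line of a JSON-lines file.
    The field names used below follow Table 3 but are assumptions of this sketch."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def selected_passages(record):
    """Return the passages the editor marked as used for composing the answer."""
    return [p["passage_text"] for p in record.get("passages", [])
            if p.get("is_selected") == 1]

# Example usage (path and field names are illustrative):
# for rec in load_records("msmarco_train.jsonl"):
#     print(rec["query"], rec.get("answers"), selected_passages(rec))
```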
1611.09268#20
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
21
4. The answers are composed by human editors. 5. A subset of the questions have multiple answers. 6. A subset of the questions have no answers. # 3.1 The passage ranking dataset To facilitate the benchmarking of ML-based retrieval models that benefit from supervised training on large datasets, we are releasing a passage collection—constructed by taking the union of all the passages in the MS MARCO dataset—and a set of relevant question and passage identifier pairs. To identify the relevant passages, we use the is_selected annotation provided by the editors. As the editors were not required to annotate every passage that was retrieved for a question, this annotation should be considered incomplete—i.e., there are likely passages in the collection that contain the answer to a question but have not been annotated as is_selected: 1. We use this dataset to propose a re-ranking challenge as described in Section 4. Additionally, we are organizing a “Deep Learning” track at the 2019 edition of TREC2 where we use these passage and question collections to set up an ad-hoc retrieval task. # 4 The challenges Using the MS MARCO dataset, we propose three machine learning tasks of diverse difficulty levels:
1611.09268#21
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
22
# 4 The challenges Using the MS MARCO dataset, we propose three machine learning tasks of diverse difficulty levels: The novice task requires the system to first predict whether a question can be answered based only on the information contained in the provided passages. If the question cannot be answered, then the system should return “No Answer Present” as the response. If the question can be answered, then the system should generate the correct answer. The intermediate task is similar to the novice task, except that the generated answer should be well-formed—such that, if the answer is read aloud, it should make sense even without the context of the question and retrieved passages. The passage re-ranking task is an information retrieval (IR) challenge. Given a question and a set of 1000 passages retrieved using BM25 [Robertson et al., 2009], the system must produce a ranking of the said passages based on how likely they are to contain information relevant to answering the question. This task is targeted at providing a large-scale dataset for benchmarking emerging neural IR methods [Mitra and Craswell, 2018]. # 2https://trec.nist.gov/ # 5 The benchmarking results
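Before the benchmarking results, the passage re-ranking setup can be made concrete with a minimal BM25 scorer over the candidate passages, loosely following the formulation in Robertson et al. [2009]. This is a sketch rather than the retrieval system used to build the dataset; the parameter values k1 = 1.2 and b = 0.75 are common defaults assumed here.

```python
import math
from collections import Counter

def bm25_rank(query, passages, k1=1.2, b=0.75):
    """Re-rank candidate passages for one query with a minimal BM25 scorer.
    passages: list of strings. Returns passage indices sorted by score."""
    docs = [p.lower().split() for p in passages]
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / max(n_docs, 1) or 1.0
    q_terms = query.lower().split()
    # document frequency and a smoothed idf for each query term
    df = {t: sum(1 for d in docs if t in d) for t in set(q_terms)}
    idf = {t: math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5)) for t in df}

    def score(doc):
        tf = Counter(doc)
        total = 0.0
        for t in q_terms:
            if tf[t] == 0:
                continue
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            total += idf[t] * tf[t] * (k1 + 1) / denom
        return total

    return sorted(range(n_docs), key=lambda i: score(docs[i]), reverse=True)
```

A learned re-ranker for this task would replace `score()` with a neural relevance model while keeping the same candidate set and output format.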
1611.09268#22
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
23
# 5 The benchmarking results We continue to develop and refine the MS MARCO dataset iteratively. The v1.0 dataset was presented at NIPS 2016 and was received with enthusiasm. In January 2017, we publicly released the v1.1 version of the dataset. In Section 5.1, we present our initial benchmarking results based on this dataset. Subsequently, we released the v2.0 and v2.1 versions of the MS MARCO dataset in March 2018 and April 2018, respectively. Section 5.2 covers the experimental results on the updated dataset. Finally, in October 2018, we released additional data files for the passage ranking task. # 5.1 Experimental results on v1.1 dataset
1611.09268#23
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
24
We group the questions in MS MARCO by the segment annotation, as described in Section 3. The complexity of the answers varies significantly between categories. For example, the answers to Yes/No questions are binary. The answers to entity questions can be a single entity name or phrase—e.g., the answer “Rome” for the question “what is the capital of Italy”. However, for descriptive questions, a longer textual answer is often necessary—e.g., for “What is the agenda for Hollande’s state visit to Washington?”. The evaluation strategy that is appropriate for Yes/No questions may not be appropriate for benchmarking on questions that require longer answer generation. Therefore, in our experiments we employ different evaluation metrics for different categories, building on metrics proposed initially by [Mitra et al., 2016]. We use accuracy and precision-recall measures for numeric answers and apply metrics like ROUGE-L [Lin, 2004] and the phrasing-aware evaluation framework [Mitra et al., 2016] for long textual answers. The phrasing-aware evaluation framework aims to deal with the diversity of natural language in evaluating long textual answers. The evaluation requires several
1611.09268#24
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
25
answers. The phrasing-aware evaluation framework aims to deal with the diversity of natural language in evaluating long textual answers. The evaluation requires several reference answers per question that are each curated by a different human editor, thus providing a natural way to estimate how diversely a group of individuals may phrase the answer to the same question. A family of pairwise similarity-based metrics can be used to incorporate consensus between different reference answers for evaluation. These metrics are simple modifications to metrics like BLEU [Papineni et al., 2002] and METEOR [Banerjee and Lavie, 2005] and are shown to achieve better correlation with human judgments. Accordingly, as part of our experiments, a subset of MS MARCO where each question has multiple answers is used to evaluate model performance with both BLEU and pa-BLEU as metrics.
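To illustrate the consensus idea behind such multi-reference metrics, the sketch below scores a candidate against every reference answer and keeps the best pairwise score. The use of NLTK's sentence-level BLEU and the max aggregation are assumptions for illustration only; this is not the pa-BLEU definition of [Mitra et al., 2016].

```python
# Illustrative multi-reference scoring: take the best pairwise BLEU against any reference.
# Assumption: NLTK's sentence_bleu as the pairwise similarity; not the official pa-BLEU code.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def best_pairwise_bleu(candidate, references):
    smooth = SmoothingFunction().method1
    cand_tokens = candidate.lower().split()
    scores = [
        sentence_bleu([ref.lower().split()], cand_tokens, smoothing_function=smooth)
        for ref in references
    ]
    return max(scores) if scores else 0.0

refs = ["the capital of italy is rome", "rome is italy's capital"]
print(best_pairwise_bleu("rome is the capital of italy", refs))
```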
1611.09268#25
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
26
# 5.1.1 Generative Model Experiments The following experiments were run on the v1.1 dataset. Recurrent Neural Networks (RNNs) are capable of predicting future elements of a sequence given the elements observed so far. They are often used as generative language models for various NLP tasks, such as machine translation [Bahdanau et al., 2014] and question-answering [Hermann et al., 2015a]. In this QA experiment setup, we target training and evaluation of such generative models, which predict the human-generated answers given questions and/or contextual passages as model input. Sequence-to-Sequence (Seq2Seq) Model. We train a vanilla Seq2Seq [Sutskever et al., 2014] model with the question-answer pair as source-target sequences. Memory Networks Model. We adapt end-to-end memory networks [Sukhbaatar et al., 2015]—which have previously demonstrated good performance on other QA tasks—by using the summed memory representation as the initial state of the RNN decoder. Discriminative Model. For comparison, we also train a discriminative model to rank provided passages as a baseline. This is a variant of [Huang et al., 2013] where we use an LSTM [Hochreiter and Schmidhuber, 1997] in place of a multi-layer perceptron (MLP).
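As a rough picture of the vanilla Seq2Seq setup described above, the PyTorch sketch below encodes the question with a GRU and uses its final hidden state to initialize a GRU decoder over the answer. The vocabulary size, hidden size, and teacher forcing are assumed details for illustration, not the configuration actually used in these experiments.

```python
# Minimal vanilla Seq2Seq sketch (assumed sizes; not the exact experimental configuration).
import torch
import torch.nn as nn

class Seq2SeqQA(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, question_ids, answer_ids):
        # Encode the question; the final hidden state summarizes it.
        _, h = self.encoder(self.embed(question_ids))
        # Teacher forcing: feed the gold answer tokens to the decoder
        # (target shifting is omitted here for brevity).
        dec_out, _ = self.decoder(self.embed(answer_ids), h)
        return self.out(dec_out)  # logits over the vocabulary at each step

model = Seq2SeqQA()
question = torch.randint(0, 30000, (2, 12))   # batch of 2 questions, 12 tokens each
answer = torch.randint(0, 30000, (2, 8))      # batch of 2 answers, 8 tokens each
logits = model(question, answer)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 30000), answer.reshape(-1))
```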
1611.09268#26
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
27
Table 4 shows the performance of these models using the ROUGE-L metric. Additionally, we evaluate the memory networks model on an MS MARCO subset where questions have multiple answers. Table 5 shows the performance of the model as measured by BLEU and its pairwise variant pa-BLEU [Mitra et al., 2016]. Table 4: ROUGE-L of Different QA Models Tested against a Subset of MS MARCO. Best Passage: best ROUGE-L of any passage; Passage Ranking: a DSSM-alike passage ranking model; Sequence to Sequence: vanilla seq2seq model predicting answers from questions; Memory Network: seq2seq model with MemNN for passages. Table 5: BLEU and pa-BLEU on a Multi-Answer Subset of MS MARCO. Best Passage: 0.359; Memory Network: 0.340. # 5.1.2 Cloze-Style Model Experiments In cloze-style tests, a model is required to predict missing words in a text sequence by considering contextual information in textual format. The CNN and Daily Mail dataset [Hermann et al., 2015b] is an example of such a cloze-style QA dataset. In this section, we present the performance of two MRC models using both the CNN test dataset and an MS MARCO subset. The subset is filtered to the numeric answer type category, to which the cloze-style test is applicable.
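Since cloze-style readers select an answer from candidates appearing in the context, the numeric subset construction can be pictured as in the sketch below: keep pairs whose numeric answer occurs in the passage and treat every number in the passage as a candidate. The regular expression and filtering rule are assumptions for illustration, not the actual subset construction.

```python
# Illustrative filter for a numeric, cloze-style subset (assumed rules, not the real pipeline).
import re

NUM = re.compile(r"\d+(?:\.\d+)?")

def to_cloze_example(question, passage, answer):
    candidates = NUM.findall(passage)
    # Keep only pairs where the numeric answer can be picked directly from the passage.
    if NUM.fullmatch(answer) and answer in candidates:
        return {"query": question, "context": passage, "candidates": candidates, "answer": answer}
    return None

example = to_cloze_example(
    "how many hours is the flight from boston to denver",
    "The flight from Boston to Denver takes 4 hours, not 6 as listed earlier.",
    "4",
)
print(example["candidates"])   # ['4', '6']
```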
1611.09268#27
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
28
• Attention Sum Reader (AS Reader): AS Reader [Kadlec et al., 2016] is a simple model that uses attention to directly pick the answer from the context. • ReasoNet: ReasoNet [Shen et al., 2016] also relies on attention, but is a dynamic multi-turn model that attempts to exploit and reason over the relation among questions, contexts, and answers. We show model accuracy numbers on both datasets in Table 6, and precision-recall curves on the MS MARCO subset in Figure 2. # 5.2 Experimental results on v2.1 dataset The human baseline on our v1.1 benchmark was surpassed by competing machine-learned models in approximately 15 months. For the v2.1 dataset, we revisit our approach to generating the human baseline. We select five top-performing editors—based on their performance on a set of auditing questions—to create a human baseline task group. We randomly sample 1,427 questions from our evaluation set and ask each of these editors to produce a new assessment. Then, we compare all our editorial answers to the ground truth and select the answer with the best ROUGE-L score as the candidate answer. Table 7 shows the results. We evaluate the answer set on both the novice and the intermediate tasks, and we include questions that have no answer.
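To make the attention-based answer picking of AS Reader concrete, the sketch below sums attention weights over every occurrence of each candidate token in the context and selects the candidate with the largest total mass; the context, candidates, and scores are fabricated, and this is only a schematic of the idea, not the published implementation.

```python
# Schematic of attention-sum answer selection (illustrative scores, not the published model).
import numpy as np

context = "the flight from boston to denver takes 4 hours not 6".split()
candidates = ["4", "6"]
token_scores = np.array([0.1, 0.2, 0.1, 0.5, 0.1, 0.9, 0.3, 3.0, 0.4, 0.6, 1.2])  # one score per token

attention = np.exp(token_scores) / np.exp(token_scores).sum()   # softmax over context positions

def candidate_mass(candidate):
    # Sum attention over every position where the candidate appears.
    return sum(attention[i] for i, tok in enumerate(context) if tok == candidate)

best = max(candidates, key=candidate_mass)
print({c: round(candidate_mass(c), 3) for c in candidates}, "->", best)
```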
1611.09268#28
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
29
To provide a competitive experimental baseline for our dataset, we trained the model introduced in [Clark and Gardner, 2017]. This model uses recent ideas in reading comprehension research, like self-attention [Cheng et al., 2016] and bi-directional attention [Seo et al., 2016]. Our goal is to train this model such that, given a question and a passage that contains an answer to the question, the model identifies the answer (or span) in the passage. This is similar to the task in SQuAD [Rajpurkar et al., 2016]. First, we select the question-passage pairs where the passage contains an answer to the question and the answer is a contiguous set of words from the passage. Then, we train the model to predict a span for each question-passage pair and output a confidence score. To evaluate the model, Table 6: Accuracy of MRC Models on Numeric Segment of MS MARCO. AS Reader: 55.0 (MS MARCO), 69.5 (CNN test); ReasoNet: 58.9 (MS MARCO), 74.7 (CNN test). [Figure 2: precision-recall curves for AS Reader and ReasoNet on the MS MARCO numeric subset]
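The pair-selection step described above, keeping only question-passage pairs where the answer appears as a contiguous token span, can be pictured with a small helper like the one below; the whitespace tokenization and exact matching are simplifying assumptions rather than the actual preprocessing pipeline.

```python
# Sketch of locating a gold answer as a contiguous token span inside a passage.
# Whitespace tokenization and exact matching are simplifying assumptions.
def find_answer_span(passage, answer):
    passage_tokens = passage.lower().split()
    answer_tokens = answer.lower().split()
    n = len(answer_tokens)
    for start in range(len(passage_tokens) - n + 1):
        if passage_tokens[start:start + n] == answer_tokens:
            return start, start + n - 1          # inclusive token indices
    return None                                   # pair is dropped from training

passage = "The Davis-Monthan Air Force Base is located in Tucson Arizona"
print(find_answer_span(passage, "Tucson Arizona"))   # -> (8, 9)
```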
1611.09268#29
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
30
Figure 2: Precision-Recall of Machine Reading Comprehension Models on MS MARCO Subset of Numeric Category. Table 7: Performance of MRC Span Model and Human Baseline on MS MARCO Tasks. BiDaF on Original: ROUGE-L 0.268, BLEU-1 0.094; Human Ensemble on Novice: ROUGE-L 0.73703, BLEU-1 0.46771; Human Ensemble on Intermediate: ROUGE-L 0.63044, BLEU-1 0.45439; BiDaF on V2 Novice: ROUGE-L 0.094; BiDaF on V2 Intermediate: ROUGE-L 0.070. For each question we chose our model-generated answer that has the highest confidence score among all passages available for that question. To compare model performance across datasets, we run this exact setup (training and evaluation) on the original dataset and the new V2 Tasks. Table 7 shows the results. The results indicate that the new v2.1 dataset is more difficult than the previous v1.1 version. On the novice task, BiDaF cannot determine when the question is not answerable and thus performs substantially worse compared to on the v1.1 dataset. On the intermediate task, BiDaF performance once again drops because the model only uses vocabulary present in the passage, whereas the well-formed answers may include words from the general vocabulary. # 6 Future Work and Conclusions
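The per-question selection rule used in this evaluation, keeping the span prediction with the highest confidence across a question's passages, is simple to express in code; the record layout below is an assumed structure for illustration, not the actual evaluation harness.

```python
# Sketch of per-question answer selection by confidence across passages.
# The prediction records below use an assumed structure, not the real pipeline's format.
from collections import defaultdict

predictions = [
    {"qid": "q1", "passage": 0, "span": "about 4 hours", "confidence": 0.42},
    {"qid": "q1", "passage": 1, "span": "4 hours", "confidence": 0.87},
    {"qid": "q2", "passage": 0, "span": "tucson arizona", "confidence": 0.65},
]

by_question = defaultdict(list)
for pred in predictions:
    by_question[pred["qid"]].append(pred)

# Keep the most confident span per question, then score it (e.g., with ROUGE-L) elsewhere.
best_answers = {
    qid: max(preds, key=lambda p: p["confidence"])["span"]
    for qid, preds in by_question.items()
}
print(best_answers)   # {'q1': '4 hours', 'q2': 'tucson arizona'}
```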
1611.09268#30
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
31
The process of developing the MS MARCO dataset and making it publicly available has been a tremendous learning experience. Between the first version of the dataset and the most recent edition, we have significantly modified how we collect and annotate the data, the definition of our tasks, and even broadened our scope to cater to the neural IR community. The future of this dataset will depend largely on how the broader academic community makes use of it. For example, we believe that the size and the underlying use of Bing’s search queries and web documents in the construction of the dataset make it particularly attractive for benchmarking new machine learning models for MRC and neural IR. But in addition to improving these ML models, the dataset may also prove to be useful for exploring new metrics—e.g., ROUGE-2 [Ganesan, 2018] and ROUGE-AR [Maples, 2017]—and robust evaluation strategies. Similarly, combining MS MARCO with other existing MRC datasets may also be interesting in the context of multi-task and cross-domain learning. We want to engage with the community to get their feedback and
1611.09268#31
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
32
MARCO with other existing MRC datasets may also be interesting in the context of multi-task and cross-domain learning. We want to engage with the community to get their feedback and guidance on how we can make it easier to enable such new explorations using the MS MARCO data. If there is enough interest, we may also consider generating similar datasets in other languages in the future—or augmenting the existing dataset with other information from the web.
1611.09268#32
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
33
# References Amazon Alexa. Amazon alexa. http://alexa.amazon.com/, 2018. Amazon Echo. Amazon echo. https://en.wikipedia.org/wiki/Amazon_Echo, 2018. D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. S. Banerjee and A. Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, volume 29, pages 65–72, 2005. J. Cheng, L. Dong, and M. Lapata. Long short-term memory-networks for machine reading. CoRR, abs/1601.06733, 2016. URL http://arxiv.org/abs/1601.06733. C. Clark and M. Gardner. Simple and effective multi-paragraph reading comprehension. CoRR, abs/1710.10723, 2017. URL http://arxiv.org/abs/1710.10723.
1611.09268#33
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
34
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. 2018. Cortana. Cortana personal assistant. http://www.microsoft.com/en-us/mobile/experiences/cortana/, 2018. G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30–42, 2012. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. CVPR, 2009. URL http://www.image-net.org/papers/imagenet_cvpr09.pdf. L. Deng and X. Huang. Challenges in adopting speech recognition. Communications of the ACM, 47(1):69–75, 2004.
1611.09268#34
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
35
L. Deng and X. Huang. Challenges in adopting speech recognition. Communications of the ACM, 47(1):69–75, 2004. M. Dunn, L. Sagun, M. Higgins, V. U. Güney, V. Cirik, and K. Cho. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR, abs/1704.05179, 2017. B. H. Frank. Google brain chief: Deep learning takes at least 100,000 examples. https://venturebeat.com/2017/10/23/google-brain-chief-says-100000-examples-is-enough-data-for-deep-learning/, 2017. K. Ganesan. Rouge 2.0: Updated and improved measures for evaluation of summarization tasks. 2018. J. Gao, M. Galley, and L. Li. Neural approaches to conversational ai. arXiv preprint arXiv:1809.08267, 2018. T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. III, and K. Crawford. Datasheets for datasets. 2018. Google Assistant. Google assistant. https://assistant.google.com/, 2018.
1611.09268#35
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
36
Google Assistant. Google assistant. https://assistant.google.com/, 2018. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2015. URL https://arxiv.org/abs/1512.03385. W. He, K. Liu, Y. Lyu, S. Zhao, X. Xiao, Y. Liu, Y. Wang, H. Wu, Q. She, X. Liu, T. Wu, and H. Wang. Dureader: a chinese machine reading comprehension dataset from real-world applications. CoRR, abs/1711.05073, 2017. K. M. Hermann, T. Kociský, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. 2015a. URL https://arxiv.org/abs/1506.03340. K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693–1701, 2015b.
1611.09268#36
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
37
G. Hinton, L. Deng, D. Yu, G. Dahl, and A. Mohamed. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2333–2338. ACM, 2013. R. Kadlec, M. Schmid, O. Bajgar, and J. Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016. T. Kociský, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette. The narrativeqa reading comprehension challenge. CoRR, abs/1712.07040, 2017.
1611.09268#37
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
38
G. Lai, Q. Xie, H. Liu, Y. Yang, and E. H. Hovy. Race: Large-scale reading comprehension dataset from examinations. In EMNLP, 2017. C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004. S. Maples. The rouge-ar: A proposed extension to the rouge evaluation metric for abstractive text summarization. 2017. B. Mitra and N. Craswell. An introduction to neural information retrieval. Foundations and Trends® in Information Retrieval (to appear), 2018. B. Mitra, G. Simon, J. Gao, N. Craswell, and L. J. Deng. A proposal for evaluating answer distillation from web data. 2016. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
1611.09268#38
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
39
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. Squad: 100,000+ questions for machine comprehension of text. 2016. URL https://arxiv.org/abs/1606.05250. P. Rajpurkar, R. Jia, and P. Liang. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018. S. Robertson, H. Zaragoza, et al. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389, 2009. M. J. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603, 2016. Y. Shen, P.-S. Huang, J. Gao, and W. Chen. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284, 2016. Siri. Siri personal assistant. http://www.apple.com/ios/siri/, 2018.
1611.09268#39
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.09268
40
Siri. Siri personal assistant. http://www.apple.com/ios/siri/, 2018. S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448, 2015. I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215, 2014. URL http://arxiv.org/abs/1409.3215. A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. Newsqa: A machine comprehension dataset. In Rep4NLP@ACL, 2017. J. Weston, A. Bordes, S. Chopra, A. M. Rush, B. van Merrienboer, A. Joulin, and T. Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. 2015. URL https://arxiv.org/abs/1502.05698. A. Wissner-Gross. Datasets over algorithms. Edge.com. Retrieved, 8, 2016.
1611.09268#40
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
http://arxiv.org/pdf/1611.09268
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
cs.CL, cs.IR
null
null
cs.CL
20161128
20181031
[ { "id": "1810.12885" }, { "id": "1609.05284" }, { "id": "1809.08267" }, { "id": "1806.03822" }, { "id": "1603.01547" } ]
1611.08669
0
# Visual Dialog Abhishek Das1, Satwik Kottur2, Khushi Gupta2*, Avi Singh3*, Deshraj Yadav4, José M.F. Moura2, Devi Parikh1, Dhruv Batra1 1Georgia Institute of Technology, 2Carnegie Mellon University, 3UC Berkeley, 4Virginia Tech 2{skottur, khushig, moura}@andrew.cmu.edu 1{abhshkdz, parikh, dbatra}@gatech.edu [email protected] [email protected] visualdialog.org # Abstract
1611.08669#0
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
1
[email protected] [email protected] visualdialog.org # Abstract We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ∼120k images from COCO, with a total of ∼1.2M dialog question-answer pairs.
1611.08669#1
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
2
‘Cat drinking water out of a coffee mug.’ What color is the mug? White and red. Are there any pictures on it? No, something is there, can't tell what it is. Is the mug and cat on a table? Yes, they are. Are there other items on the table? Yes, magazines, books, toaster and basket, and a plate. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders – Late Fusion, Hierarchical Recurrent Encoder and Memory Network – and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first ‘visual chatbot’! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org.
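The retrieval-based evaluation mentioned above asks the agent to sort candidate answers and reports metrics such as mean reciprocal rank of the human response. A minimal sketch of that computation follows; the rankings and ground-truth indices are fabricated, and this is not the VisDial evaluation code.

```python
# Sketch of mean reciprocal rank (MRR) and recall@k for ranked candidate answers.
# Rankings and ground-truth indices are fabricated; not the VisDial evaluation code.
def mrr_and_recall(ranked_lists, gt_indices, k=5):
    reciprocal_ranks, hits = [], 0
    for ranking, gt in zip(ranked_lists, gt_indices):
        rank = ranking.index(gt) + 1          # 1-based rank of the human response
        reciprocal_ranks.append(1.0 / rank)
        hits += int(rank <= k)
    n = len(ranked_lists)
    return sum(reciprocal_ranks) / n, hits / n

# Each inner list holds candidate ids sorted by the model's score (best first).
ranked = [[3, 0, 7, 2, 9], [5, 1, 3, 8, 0]]
ground_truth = [0, 3]                          # id of the human response per round
print(mrr_and_recall(ranked, ground_truth, k=2))   # -> (0.41666..., 0.5)
```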
1611.08669#2
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
3
Figure 1: We introduce a new AI task – Visual Dialog, where an AI agent must hold a dialog with a human about visual content. We introduce a large-scale dataset (VisDial), an evaluation protocol, and novel encoder-decoder models for this task. scene recognition [63], object detection [34] – to ‘high-level’ AI tasks such as learning to play Atari video games [42] and Go [55], answering reading comprehension questions by understanding short stories [21, 65], and even answering questions about images [6, 39, 49, 71] and videos [57, 58]! What lies next for AI? We believe that the next generation of visual intelligence systems will need to possess the ability to hold a meaningful dialog with humans in natural language about visual content. Applications include: # 1. Introduction
1611.08669#3
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
4
# 1. Introduction We are witnessing unprecedented advances in computer vision (CV) and artificial intelligence (AI) – from ‘low-level’ AI tasks such as image classification [20], scene recognition [63] ... • Aiding visually impaired users in understanding their surroundings [7] or social media content [66] (AI: ‘John just uploaded a picture from his vacation in Hawaii’, Human: ‘Great, is he at the beach?’, AI: ‘No, on a mountain’). • Aiding analysts in making decisions based on large quantities of surveillance data (Human: ‘Did anyone enter this room last week?’, AI: ‘Yes, 27 instances logged on camera’, Human: ‘Were any of them carrying a black bag?’), *Work done while KG and AS were interns at Virginia Tech.
1611.08669#4
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
5
*Work done while KG and AS were interns at Virginia Tech. [Figure 2 content – Captioning: ‘Two people are in a wheelchair and one is holding a racket.’ VQA: Q: How many people are on wheelchairs? A: Two. Visual Dialog (first dialog): Q: How many people are on wheelchairs? A: Two. Q: What are their genders? A: One male and one female. Q: Which one is holding a racket? A: The woman. Visual Dialog (second dialog): Q: What is the gender of the one in the white shirt? A: She is a woman. Q: What is she doing? A: Playing a Wii game. Q: Is that a man to her right? A: No, it’s a woman. Q: How many wheelchairs? A: One.] Figure 2: Differences between image captioning, Visual Question Answering (VQA) and Visual Dialog. Two (partial) dialogs are shown from our VisDial dataset, which is curated from a live chat between two Amazon Mechanical Turk workers (Sec. 3). • Interacting with an AI assistant (Human: ‘Alexa – can you see the baby in the baby monitor?’, AI: ‘Yes, I can’, Human: ‘Is he sleeping or playing?’).
1611.08669#5
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
7
Despite rapid progress at the intersection of vision and language – in particular, in image captioning and visual question answering (VQA) – it is clear that we are far from this grand goal of an AI agent that can ‘see’ and ‘communicate’. In captioning, the human-machine interaction consists of the machine simply talking at the human (‘Two people are in a wheelchair and one is holding a racket’), with no dialog or input from the human. While VQA takes a significant step towards human-machine interaction, it still represents only a single round of a dialog – unlike in human conversations, there is no scope for follow-up questions, no memory in the system of previous questions asked by the user nor consistency with respect to previous answers provided by the system (Q: ‘How many people on wheelchairs?’, A: ‘Two’; Q: ‘How many wheelchairs?’, A: ‘One’). As a step towards conversational visual AI, we introduce a novel task – Visual Dialog – along with a large-scale dataset, an evaluation protocol, and novel
1611.08669#7
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
8
step towards conversational visual AI, we introduce a novel task – Visual Dialog – along with a large-scale dataset, an evaluation protocol, and novel deep models. Task Definition. The concrete task in Visual Dialog is the following – given an image I, a history of a dialog consisting of a sequence of question-answer pairs (Q1: ‘How many people are in wheelchairs?’, A1: ‘Two’, Q2: ‘What are their genders?’, A2: ‘One male and one female’), and a natural language follow-up question (Q3: ‘Which one is holding a racket?’), the task for the machine is to answer the question in free-form natural language (A3: ‘The woman’). This task is the visual analogue of the Turing Test.
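To make the task interface concrete, here is a minimal Python sketch of one Visual Dialog example and the prediction interface a model must implement. Field names and paths are assumptions for illustration, not the released VisDial schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QAPair:
    question: str
    answer: str

@dataclass
class VisDialExample:
    """One round of the task: image I, caption, dialog history, follow-up question."""
    image_path: str                                       # image I
    caption: str                                          # caption seeds the dialog history
    history: List[QAPair] = field(default_factory=list)   # (Q1, A1), ..., (Q_{t-1}, A_{t-1})
    question: str = ""                                    # follow-up question Q_t

def answer_question(example: VisDialExample) -> str:
    """A Visual Dialog agent maps an example to a free-form natural-language answer A_t."""
    raise NotImplementedError  # e.g. one of the encoder-decoder models described later

example = VisDialExample(
    image_path="coco/000000123456.jpg",  # hypothetical path
    caption="Two people are in a wheelchair and one is holding a racket.",
    history=[QAPair("How many people are in wheelchairs?", "Two"),
             QAPair("What are their genders?", "One male and one female")],
    question="Which one is holding a racket?",
)
# A good agent would return something like "The woman".
```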
1611.08669#8
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
10
requires tion to a relevant region. co-reference resolution (whom does the pronoun ‘she’ refer to?), ‘Is that a man to her right?’ further requires the machine to have visual memory (which object in the image were we talking about?). Such systems also need to be consistent with their outputs – ‘How many people are in wheelchairs?’, ‘Two’, ‘What are their genders?’, ‘One male and one female’ – note that the number of genders being specified should add up to two. Such difficulties make the problem a highly interesting and challenging one. Why do we talk to machines? Prior work in language-only (non-visual) dialog can be arranged on a spectrum with the following two end-points: goal-driven dialog (e.g. booking a flight for a user) ←→ goal-free dialog (or casual ‘chit-chat’ with chatbots). The two ends have vastly differing purposes and conflicting evaluation criteria. Goal-driven dialog is typically evaluated on task-completion rate (how frequently was the user able to book their flight) or
1611.08669#10
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
11
evaluation criteria. Goal-driven dialog is typically evaluated on task-completion rate (how frequently was the user able to book their flight) or time to task completion [14, 44] – clearly, the shorter the dialog the better. In contrast, for chit-chat, the longer the user engagement and interaction, the better. For instance, the goal of the 2017 $2.5 Million Amazon Alexa Prize is to “create a socialbot that converses coherently and engagingly with humans on popular topics for 20 minutes.”
1611.08669#11
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
12
We believe our instantiation of Visual Dialog hits a sweet spot on this spectrum. It is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded enough in vision to allow objective evaluation of individual responses and benchmark progress. The former discourages task-engineered bots for ‘slot filling’ [30] and the latter discourages bots that put on a personality to avoid answering questions while keeping the user engaged [64]. Contributions. We make the following contributions: • We propose a new AI task: Visual Dialog, where a machine must hold dialog with a human about visual content. • We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). Upon completion1, VisDial will contain 1 dialog each (with 10 question-answer pairs) on ∼140k images from the COCO dataset [32], for a total of ∼1.4M dialog question-answer pairs. When compared to VQA [6], VisDial studies a significantly richer task (dialog), overcomes a ‘visual
1611.08669#12
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
14
1 VisDial data on COCO-train (∼83k images) and COCO-val (∼40k images) is already available for download at https://visualdialog.org. Since dialog history contains the ground-truth caption, we will not be collecting dialog data on COCO-test. Instead, we will collect dialog data on 20k extra images from the COCO distribution (which will be provided to us by the COCO team) for our test set. • We introduce a family of neural encoder-decoder models for Visual Dialog with 3 novel encoders – Late Fusion: that embeds the image, history, and question into vector spaces separately and performs a ‘late fusion’ of these into a joint embedding. – Hierarchical Recurrent Encoder: that contains a dialog-level Recurrent Neural Network (RNN) sitting on top of a question-answer (QA)-level recurrent block. In each QA-level recurrent block, we also include an attention-over-history mechanism to choose and attend to the round of the history relevant to the current question.
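As an illustration of the Late Fusion idea only (layer sizes, vocabulary size and other details below are assumptions, not the authors' exact architecture), the encoder can be sketched in PyTorch as separate encodings of image, history and question that are concatenated and projected into a joint embedding:

```python
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    """Sketch: encode image, dialog history, and question separately,
    then fuse the three encodings into a single joint embedding."""
    def __init__(self, img_feat_dim=4096, vocab_size=10000,
                 emb_dim=300, hidden_dim=512, joint_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.question_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.history_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fusion = nn.Linear(img_feat_dim + 2 * hidden_dim, joint_dim)

    def forward(self, img_feat, history_tokens, question_tokens):
        # img_feat: (B, img_feat_dim) pre-extracted CNN features of the image
        # history_tokens: (B, Lh) token ids of the caption plus all previous QA rounds
        # question_tokens: (B, Lq) token ids of the current question
        _, (q_state, _) = self.question_rnn(self.embed(question_tokens))
        _, (h_state, _) = self.history_rnn(self.embed(history_tokens))
        joint = torch.cat([img_feat, q_state[-1], h_state[-1]], dim=1)  # 'late fusion'
        return torch.tanh(self.fusion(joint))  # joint embedding, consumed by a decoder
```

The resulting joint embedding would then be fed to either a generative or a discriminative decoder, as described in the contributions above.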
1611.08669#14
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
15
– Memory Network: that treats each previous QA pair as a ‘fact’ in its memory bank and learns to ‘poll’ the stored facts and the image to develop a context vector. We train all these encoders with 2 decoders (generative and discriminative) – all settings outperform a number of sophisticated baselines, including our adaptation of state-of-the-art VQA models to VisDial. • We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a list of candidate answers and evaluated on metrics such as mean-reciprocal-rank of the human response. We conduct studies to quantify human performance. • Putting it all together, on the project page we demonstrate the first visual chatbot! # 2. Related Work
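To make the retrieval-based evaluation protocol above concrete, here is a small self-contained sketch (illustrative only, not the official evaluation code) of ranking metrics such as mean reciprocal rank and recall@k, computed from the rank a model assigns to the human response among the candidate answers:

```python
from typing import List, Sequence

def mean_reciprocal_rank(gt_ranks: Sequence[int]) -> float:
    """gt_ranks[i] is the 1-based rank the model assigned to the human
    (ground-truth) answer among the candidates for question i."""
    return sum(1.0 / r for r in gt_ranks) / len(gt_ranks)

def recall_at_k(gt_ranks: Sequence[int], k: int) -> float:
    """Fraction of questions where the human answer was ranked in the top k."""
    return sum(r <= k for r in gt_ranks) / len(gt_ranks)

def rank_of_gt(scores: List[float], gt_index: int) -> int:
    """1-based rank of the ground-truth candidate given model scores
    (higher score = better); ties are broken pessimistically here."""
    gt_score = scores[gt_index]
    return 1 + sum(s >= gt_score for i, s in enumerate(scores) if i != gt_index)

# Example: 3 questions whose ground-truth answers were ranked 1st, 4th and 2nd.
ranks = [1, 4, 2]
print(mean_reciprocal_rank(ranks))  # (1 + 0.25 + 0.5) / 3 ≈ 0.583
print(recall_at_k(ranks, k=5))      # 1.0
```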
1611.08669#15
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
16
Vision and Language. A number of problems at the intersection of vision and language have recently gained prominence – image captioning [15, 16, 27, 62], video/movie description [51, 59, 60], text-to-image coreference/grounding [10, 22, 29, 45, 47, 50], visual storytelling [4, 23], and of course, visual question answering (VQA) [3, 6, 12, 17, 19, 37–39, 49, 69]. However, all of these involve (at most) a single-shot natural language interaction – there is no dialog. Concurrent with our work, two recent works [13, 43] have also begun studying visually-grounded dialog. Visual Turing Test. Closely related to our work is that of Geman et al. [18], who proposed a fairly restrictive ‘Visual Turing Test’ – a system that asks templated, binary questions. In comparison, 1) our dataset has free-form, open-ended natural language questions collected via two subjects chatting on Amazon Mechanical Turk (AMT), resulting in a more realistic and diverse dataset (see Fig. 5). 2) The dataset in [18] only contains street
1611.08669#16
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
17
Turk (AMT), resulting in a more realistic and diverse dataset (see Fig. 5). 2) The dataset in [18] only contains street scenes, while our dataset has considerably more variety since it uses images from COCO [32]. Moreover, our dataset is two orders of magnitude larger – 2,591 images in [18] vs ∼140k images, 10 question-answer pairs per image, total of ∼1.4M QA pairs. Text-based Question Answering. Our work is related to text-based question answering or ‘reading comprehension’ tasks studied in the NLP community. Some recent
1611.08669#17
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
19
large-scale datasets in this domain include the 30M Factoid Question-Answer corpus [52], 100K SimpleQuestions dataset [8], DeepMind Q&A dataset [21], the 20 artificial tasks in the bAbI dataset [65], and the SQuAD dataset for reading comprehension [46]. VisDial can be viewed as a fusion of reading comprehension and VQA. In VisDial, the machine must comprehend the history of the past dialog and then understand the image to answer the question. By design, the answer to any question in VisDial is not present in the past dialog – if it were, the question would not be asked. The history of the dialog contextualizes the question – the question ‘what else is she holding?’ requires a machine to comprehend the history to realize who the question is talking about and what has been excluded, and then understand the image to answer the question. Conversational Modeling and Chatbots. Visual Dialog is the visual analogue of text-based dialog and conversation modeling. While some of the earliest developed chatbots were rule-based [64], end-to-end learning based approaches are now being actively explored [9, 14, 26, 31,
1611.08669#19
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
20
While some of the earliest developed chatbots were rule-based [64], end-to-end learning based approaches are now being actively explored [9, 14, 26, 31, 53, 54, 61]. A recent large-scale conversation dataset is the Ubuntu Dialogue Corpus [35], which contains about 500K dialogs extracted from the Ubuntu channel on Internet Relay Chat (IRC). Liu et al. [33] perform a study of problems in existing evaluation protocols for free-form dialog. One important difference between free-form textual dialog and VisDial is that in VisDial, the two participants are not symmetric – one person (the ‘questioner’) asks questions about an image that they do not see; the other person (the ‘answerer’) sees the image and only answers the questions (in otherwise unconstrained text, but no counter-questions allowed). This role assignment gives a sense of purpose to the interaction (why are we talking? To help the questioner build a mental model of the image), and allows objective evaluation of individual responses.
1611.08669#20
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
21
# 3. The Visual Dialog Dataset (VisDial) We now describe our VisDial dataset. We begin by describing the chat interface and data-collection process on AMT, analyze the dataset, then discuss the evaluation protocol. Consistent with previous data collection efforts, we collect visual dialog data on images from the Common Objects in Context (COCO) [32] dataset, which contains multiple objects in everyday scenes. The visual complexity of these images allows for engaging and diverse conversations. Live Chat Interface. Good data for this task should include dialogs that have (1) temporal continuity, (2) grounding in the image, and (3) mimic natural ‘conversational’ exchanges. To elicit such responses, we paired 2 workers on AMT to chat with each other in real-time (Fig. 3). Each worker was assigned a specific role. One worker (the ‘questioner’) sees only a single line of text describing an image. [Fig. 3a screenshot – questioner’s view: caption ‘A sink and toilet in a small room.’, the instruction to ASK questions about the image, and the first two exchanged messages (‘Is this a bathroom?’ – ‘yes, it’s a bathroom’; ‘what color is the room?’).]
1611.08669#21
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
22
[Fig. 3b screenshot – answerer’s view: the image, the caption ‘A sink and toilet in a small room.’, the instruction to ANSWER questions about the image, and the same first two messages.] Example dialog (Fig. 3c): Q: Is this a bathroom? A: Yes, it’s a bathroom. Q: What color is the room? A: It looks cream colored. Q: Can you see anything else? A: There is a shelf with items on it. Q: Is anyone in the room? A: Nobody is in the room. Q: Can you see on the outside? A: No, it is only inside. Q: What color is the sink? A: The sink is white. Q: Is the room clean? A: It is very clean. Q: Is the toilet facing the sink? A: Yes, the toilet is facing the sink. Q: Can you see a door? A: Yes, I can see the door. Q: What color is the door? A: The door is tan colored.
1611.08669#22
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
23
[Fig. 3 screenshots repeated – the questioner’s and answerer’s chat interfaces for the image captioned ‘A sink and toilet in a small room.’, along with the example dialog of Fig. 3c.] (a) What the ‘questioner’ sees. (b) What the ‘answerer’ sees.
1611.08669#23
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
24
(a) What the ‘questioner’ sees. (b) What the ‘answerer’ sees. (c) Example dialog from our VisDial dataset. Figure 3: Collecting visually-grounded dialog data on Amazon Mechanical Turk via a live chat interface where one person is assigned the role of ‘questioner’ and the second person is the ‘answerer’. We show the first two questions being collected via the interface as Turkers interact with each other in Fig. 3a and Fig. 3b. Remaining questions are shown in Fig. 3c. The questioner sees only a single line of text describing an image (caption from COCO); the image remains hidden to the questioner. Their task is to ask questions about this hidden image to ‘imagine the scene better’. The second worker (the ‘answerer’) sees the image and caption. Their task is to answer questions asked by their chat partner. Unlike VQA [6], answers are not restricted to be short or concise; instead workers are encouraged to reply as naturally and ‘conversationally’ as possible. Fig. 3c shows an example dialog.
1611.08669#24
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
25
This process is an unconstrained ‘live’ chat, with the only exception that the questioner must wait to receive an answer before posting the next question. The workers are allowed to end the conversation after 20 messages are exchanged (10 pairs of questions and answers). Further details about our final interface can be found in the supplement. If one of the workers abandoned a HIT (or was disconnected) midway, automatic conditions in the code kicked in asking the remaining worker to either continue asking questions or providing facts (captions) about the image (depending on their role) till 10 messages were sent by them. Workers who completed the task in this way were fully compensated, but our backend discarded this data and automatically launched a new HIT on this image so a real two-person conversation could be recorded. Our entire data-collection infrastructure (front-end UI, chat interface, backend storage and messaging system, error handling protocols) is publicly available2. # 4. VisDial Dataset Analysis
1611.08669#25
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
26
# 4. VisDial Dataset Analysis We also piloted a different setup where the questioner saw a highly blurred version of the image, instead of the caption. The conversations seeded with blurred images resulted in questions that were essentially ‘blob recognition’ – ‘What is the pink patch at the bottom right?’. For our full-scale data-collection, we decided to seed with just the captions since it resulted in more ‘natural’ questions and more closely modeled the real-world applications discussed in Section 1 where no visual signal is available to the human.
1611.08669#26
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
27
Building a 2-person chat on AMT. Despite the popularity of AMT as a data collection platform in computer vision, our setup had to design for and overcome some unique challenges – the key issue being that AMT is simply not designed for multi-user Human Intelligence Tasks (HITs). Hosting a live two-person chat on AMT meant that none of the Amazon tools could be used and we developed our own backend messaging and data-storage infrastructure based on Redis messaging queues and Node.js. To support data quality, we ensured that a worker could not chat with themselves (using, say, two different browser tabs) by maintaining a pool of paired worker IDs. To minimize wait time for one worker while the second was being searched for, we ensured that there was always a significant pool of available HITs. If We now analyze the v0.9 subset of our VisDial dataset – it contains 1 dialog (10 QA pairs) on ∼123k images from COCO-train/val, a total of 1,232,870 QA pairs. # 4.1. Analyzing VisDial Questions
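The chunk above describes the pairing infrastructure only at a high level. Purely as an illustration (shown in Python rather than the authors' Node.js backend; the queue names, client library, and pairing rule are all assumptions), a minimal worker-pairing scheme over a Redis list might look like this:

```python
import redis

# Hypothetical sketch of pairing two distinct AMT workers for a live chat.
# Key names and the pairing rule are assumptions, not the authors' code.
r = redis.Redis()
WAITING = "visdial:waiting_workers"

def request_partner(worker_id: str, timeout: int = 60):
    """Return (room_id, partner_id), or None if no partner arrives in time."""
    # Pop a waiting worker who is not ourselves (prevents a worker chatting
    # with themselves from two different browser tabs).
    partner = r.lpop(WAITING)
    while partner is not None and partner.decode() == worker_id:
        partner = r.lpop(WAITING)  # skip our own stale entry, keep looking
    if partner is None:
        # Nobody available: enqueue ourselves and block until assigned a room.
        r.rpush(WAITING, worker_id)
        msg = r.blpop(f"visdial:assigned:{worker_id}", timeout=timeout)
        if msg is None:
            return None
        room_id, partner_id = msg[1].decode().split("|")
        return room_id, partner_id
    partner_id = partner.decode()
    room_id = f"room:{min(worker_id, partner_id)}:{max(worker_id, partner_id)}"
    # Notify the waiting partner which room to join.
    r.rpush(f"visdial:assigned:{partner_id}", f"{room_id}|{worker_id}")
    return room_id, partner_id
```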
1611.08669#27
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
28
# 4.1. Analyzing VisDial Questions Visual Priming Bias. One key difference between VisDial and previous image question-answering datasets (VQA [6], Visual 7W [70], Baidu mQA [17]) is the lack of a ‘visual priming bias’ in VisDial. Specifically, in all previous datasets, subjects saw an image while asking questions about it. As analyzed in [3, 19, 69], this leads to a particular bias in the questions – people only ask ‘Is there a clocktower in the picture?’ on pictures actually containing clock towers. This allows language-only models to perform remarkably well on VQA and results in an inflated sense of progress [19, 69]. As one particularly perverse example – for questions in the VQA dataset starting with ‘Do you see a . . . ’, blindly answering ‘yes’ without reading the rest of the question or looking at the associated image results in an average VQA accuracy of 87%! In VisDial, questioners do not see the image. As a result, this bias is reduced. (Footnote 2: https://github.com/batra-mlp-lab/visdial-amt-chat)
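To make the bias measurement concrete, a sketch of the ‘blind yes’ baseline could look like the following. The data structure is an assumption and VQA's consensus-based accuracy is not reproduced; this only illustrates the kind of check described above.

```python
# Hypothetical sketch: measure the 'blind yes' baseline for a priming bias.
# `qa_pairs` is an assumed list of (question, answer) strings.
def blind_yes_accuracy(qa_pairs, prefix="do you see a"):
    hits, total = 0, 0
    for question, answer in qa_pairs:
        if question.lower().startswith(prefix):
            total += 1
            hits += answer.strip().lower() == "yes"
    return hits / total if total else 0.0
```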
1611.08669#28
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
29
[Figure 4: Distribution of lengths for questions and answers (left); and percent coverage of unique answers over all answers from the train dataset (right), compared to VQA. For a given coverage, VisDial has more unique answers, indicating greater answer diversity.] Distributions. Fig. 4a shows the distribution of question lengths in VisDial – we see that most questions range from four to ten words. Fig. 5 shows ‘sunbursts’ visualizing the distribution of questions (based on the first four words) in VisDial vs. VQA. While there are a lot of similarities, some differences immediately jump out. There are more binary questions3 in VisDial as compared to VQA – the most frequent first question-word in VisDial is ‘is’ vs. ‘what’ in VQA. A detailed comparison of the statistics of VisDial vs. other datasets is available in Table 1 in the supplement.
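The coverage curve in Fig. 4b is a simple cumulative statistic. As an illustration (not the authors' analysis code; `answers` is an assumed list of answer strings from the training set), it could be computed as:

```python
from collections import Counter

# Hypothetical sketch of the coverage curve in Fig. 4b: what fraction of all
# answer occurrences is covered by the k most frequent unique answers.
def coverage_curve(answers, max_k=20000):
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    covered, curve = 0, []
    for k, (_, c) in enumerate(counts.most_common(max_k), start=1):
        covered += c
        curve.append((k, covered / total))
    return curve  # e.g. curve[999] gives the coverage of the top-1000 answers
```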
1611.08669#29
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
30
Finally, there is a stylistic difference in the questions that is difficult to capture with the simple statistics above. In VQA, subjects saw the image and were asked to stump a smart robot. Thus, most queries involve specific details, often about the background (‘What program is being utilized in the background on the computer?’). In VisDial, questioners did not see the original image and were asking questions to build a mental model of the scene. Thus, the questions tend to be open-ended, and often follow a pattern: • Generally starting with the entities in the caption: ‘An elephant walking away from a pool in an exhibit’, ‘Is there only 1 elephant?’, • digging deeper into their parts or attributes: ‘Is it full grown?’, ‘Is it facing the camera?’, • asking about the scene category or the picture setting: ‘Is this indoors or outdoors?’, ‘Is this a zoo?’, • the weather: ‘Is it snowing?’, ‘Is it sunny?’, • simply exploring the scene: ‘Are there people?’, ‘Is there shelter for elephant?’,
1611.08669#30
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
31
• simply exploring the scene: ‘Are there people?’, ‘Is there shelter for elephant?’, • and asking follow-up questions about the new visual entities discovered from these explorations: ‘There’s a blue fence in background, like an enclosure’, ‘Is the enclosure inside or outside?’. (Footnote 3: Questions starting in ‘Do’, ‘Did’, ‘Have’, ‘Has’, ‘Is’, ‘Are’, ‘Was’, ‘Were’, ‘Can’, ‘Could’.) # 4.2. Analyzing VisDial Answers Answer Lengths. Fig. 4a shows the distribution of answer lengths. Unlike previous datasets, answers in VisDial are longer and more descriptive – mean length 2.9 words (VisDial) vs 1.1 (VQA), 2.0 (Visual 7W), 2.8 (Visual Madlibs).
1611.08669#31
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
32
Fig. 4b shows the cumulative coverage of all answers (y-axis) by the most frequent answers (x-axis). The difference between VisDial and VQA is stark – the top-1000 answers in VQA cover ∼83% of all answers, while in VisDial that figure is only ∼63%. There is a significant heavy tail in VisDial – most long strings are unique, and thus the coverage curve in Fig. 4b becomes a straight line with slope 1. In total, there are 337,527 unique answers in VisDial v0.9. Answer Types. Since the answers in VisDial are longer strings, we can visualize their distribution based on the starting few words (Fig. 5c). An interesting category of answers emerges – ‘I think so’, ‘I can’t tell’, or ‘I can’t see’ – expressing doubt, uncertainty, or lack of information. This is a consequence of the questioner not being able to see the image – they are asking contextually relevant questions, but not all questions may be answerable with certainty from that image. We believe this is rich data for building more human-like AI that
1611.08669#32
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
33
they are asking contextually relevant questions, but not all questions may be answerable with certainty from that image. We believe this is rich data for building more human-like AI that refuses to answer questions it doesn’t have enough information to answer. See [48] for a related, but complementary effort on question relevance in VQA. Binary Questions vs Binary Answers. In VQA, binary questions are simply those with ‘yes’, ‘no’, ‘maybe’ as answers [6]. In VisDial, we must distinguish between binary questions and binary answers. Binary questions are those starting in ‘Do’, ‘Did’, ‘Have’, ‘Has’, ‘Is’, ‘Are’, ‘Was’, ‘Were’, ‘Can’, ‘Could’. Answers to such questions can (1) contain only ‘yes’ or ‘no’, (2) begin with ‘yes’, ‘no’, and contain additional information or clarification, (3) involve ambiguity (‘It’s hard to see’, ‘Maybe’), or
1611.08669#33
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
34
additional information or clarification, (3) involve ambiguity (‘It’s hard to see’, ‘Maybe’), or (4) answer the question without explicitly saying ‘yes’ or ‘no’ (Q: ‘Is there any type of design or pattern on the cloth?’, A: ‘There are circles and lines on the cloth’). We call answers that contain ‘yes’ or ‘no’ as binary answers – 149,367 and 76,346 answers in subsets (1) and (2) from above respectively. Binary answers in VQA are biased towards ‘yes’ [6, 69] – 61.40% of yes/no answers are ‘yes’. In VisDial, the trend is reversed. Only 46.96% are ‘yes’ for all yes/no responses. This is understandable since workers did not see the image, and were more likely to end up with negative responses.
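A rough sketch of the binary question/answer bookkeeping described above might look like the following. The tokenization and the exact criteria for "contains yes/no" are assumptions, not the paper's counting script.

```python
# Hypothetical sketch: tally the yes/no bias over binary questions.
# `dialogs` is an assumed iterable of (question, answer) string pairs.
BINARY_STARTS = ("do", "did", "have", "has", "is", "are", "was", "were", "can", "could")

def yes_no_bias(dialogs):
    yes = no = 0
    for question, answer in dialogs:
        if not question.lower().startswith(BINARY_STARTS):
            continue  # not a binary question
        tokens = answer.lower().split()
        if "yes" in tokens:
            yes += 1
        elif "no" in tokens:
            no += 1
        # answers with neither token (categories 3 and 4) are not binary answers
    return yes / (yes + no) if (yes + no) else 0.0  # fraction of 'yes' responses
```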
1611.08669#34
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
37
Coreference in dialog. Since language in VisDial is the result of a sequential conversation, it naturally contains pronouns – ‘he’, ‘she’, ‘his’, ‘her’, ‘it’, ‘their’, ‘they’, ‘this’, ‘that’, ‘those’, etc. In total, 38% of questions, 19% of answers, and nearly all (98%) dialogs contain at least one pronoun, thus confirming that a machine will need to overcome coreference ambiguities to be successful on this task. We find that pronoun usage is low in the first round (as expected) and then picks up in frequency. A fine-grained per-round analysis is available in the supplement. Temporal Continuity in Dialog Topics. It is natural for conversational dialog data to have continuity in the ‘topics’ being discussed. We have already discussed qualitative differences in VisDial questions vs. VQA. In order to quantify the differences, we performed a human study where we manually annotated question
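The pronoun statistics above (38% of questions, 19% of answers, 98% of dialogs) amount to a simple token-matching count. As an illustration (the pronoun list is taken from the text; the data structure is an assumption):

```python
import re

# Hypothetical sketch of the pronoun statistics described above.
# `dialogs` is an assumed list of dialogs, each a list of (question, answer) pairs.
PRONOUNS = {"he", "she", "his", "her", "it", "their", "they", "this", "that", "those"}

def has_pronoun(sentence: str) -> bool:
    return any(tok in PRONOUNS for tok in re.findall(r"[a-z]+", sentence.lower()))

def pronoun_stats(dialogs):
    q_hits = a_hits = n_qa = d_hits = 0
    for dialog in dialogs:
        dialog_has = False
        for question, answer in dialog:
            n_qa += 1
            q_hits += has_pronoun(question)
            a_hits += has_pronoun(answer)
            dialog_has |= has_pronoun(question) or has_pronoun(answer)
        d_hits += dialog_has
    # fraction of questions, answers, and dialogs containing at least one pronoun
    return q_hits / n_qa, a_hits / n_qa, d_hits / len(dialogs)
```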
1611.08669#37
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
38
qualitative differences in VisDial questions vs. VQA. In order to quantify the differences, we performed a human study where we manually annotated question ‘topics’ for 40 images (a total of 400 questions), chosen randomly from the val set. The topic annotations were based on human judgement with a consensus of 4 annotators, with topics such as: asking about a particular object (‘What is the man doing?’), scene (‘Is it outdoors or indoors?’), weather (‘Is the weather sunny?’), the image (‘Is it a color image?’), and exploration (‘Is there anything else?’). We performed similar topic annotation for questions from VQA for the same set of 40 images, and compared topic continuity in questions. Across 10 rounds, VisDial questions have 4.55 ± 0.17 topics on average, confirming that these are not independent questions. Recall that VisDial has 10 questions per image as opposed to 3 for VQA. Therefore, for a fair comparison, we compute the average number of topics in VisDial over all subsets of 3 successive questions. For 500 bootstrap samples
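For concreteness, the windowed comparison (average number of distinct topics over subsets of 3 successive questions, aggregated over bootstrap samples) could be computed roughly as follows. The per-question topic labels are the human annotations described above, assumed here to be given as strings per image; this is a sketch, not the authors' analysis code.

```python
import random
import statistics

# Hypothetical sketch: mean number of distinct topics in windows of `window`
# successive questions, with a simple bootstrap over images.
def mean_topics_in_windows(topic_lists, window=3, n_boot=500, seed=0):
    rng = random.Random(seed)

    def image_score(topics):
        # assumes each image has at least `window` annotated questions (10 in VisDial)
        windows = [topics[i:i + window] for i in range(len(topics) - window + 1)]
        return statistics.mean(len(set(w)) for w in windows)

    scores = [image_score(t) for t in topic_lists]
    boots = [statistics.mean(rng.choices(scores, k=len(scores))) for _ in range(n_boot)]
    return statistics.mean(boots), statistics.stdev(boots)  # mean ± std over bootstraps
```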
1611.08669#38
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
40
# 4.4. VisDial Evaluation Protocol One fundamental challenge in dialog systems is evaluation. Similar to the state of affairs in captioning and machine translation, it is an open problem to automatically evaluate the quality of free-form answers. Existing metrics such as BLEU, METEOR, ROUGE are known to correlate poorly with human judgement in evaluating dialog responses [33]. Instead of evaluating on a downstream task [9] or holistically evaluating the entire conversation (as in goal-free chit-chat [5]), we evaluate individual responses at each round (t = 1, 2, . . . , 10) in a retrieval or multiple-choice setup. Specifically, at test time, a VisDial system is given an image I, the ‘ground-truth’ dialog history (including the image caption) C, (Q1, A1), . . . , (Qt−1, At−1), the question Qt, and a list of N = 100 candidate answers, and asked to return a sorting of the candidate answers. The model is evaluated on retrieval metrics – (1) rank of human response (lower is better), (2) recall@k, i.e. existence of the human response in top-k ranked responses, and (3) mean reciprocal rank (MRR) of the human response (higher is better).
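The three retrieval metrics are standard and easy to compute once each model has sorted its 100 candidates. A sketch (not the official evaluation script; `human_ranks` is an assumed list holding the 1-indexed rank of the human response per test question):

```python
# Hypothetical sketch of the retrieval metrics used in the VisDial protocol.
def retrieval_metrics(human_ranks, ks=(1, 5, 10)):
    n = len(human_ranks)
    mean_rank = sum(human_ranks) / n                    # (1) rank: lower is better
    mrr = sum(1.0 / r for r in human_ranks) / n         # (3) MRR: higher is better
    recall = {k: sum(r <= k for r in human_ranks) / n   # (2) recall@k
              for k in ks}
    return {"mean_rank": mean_rank, "mrr": mrr, "recall@k": recall}
```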
1611.08669#40
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
41
The evaluation protocol is compatible with both discriminative models (that simply score the input candidates, e.g. via a softmax over the options, and cannot generate new answers), and generative models (that generate an answer string, e.g. via Recurrent Neural Networks) by ranking the candidates by the model’s log-likelihood scores. Candidate Answers. We generate a candidate set of correct and incorrect answers from four sets: Correct: The ground-truth human response to the question. Plausible: Answers to the 50 most similar questions. Similar questions are those that start with similar tri-grams and mention similar semantic concepts in the rest of the question. To capture this, all questions are embedded into a vector space by concatenating the GloVe embeddings of the first three words with the averaged GloVe embeddings of the remaining words in the questions. Euclidean distances
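The question embedding used to find ‘plausible’ candidates is described fully in the text above; a sketch of it might look like the following. The GloVe table is assumed to be available as a dict from word to vector, and the nearest-neighbor step assumes the query is itself contained in the embedded set; this is not the released preprocessing code.

```python
import numpy as np

# Hypothetical sketch: embed a question as the GloVe vectors of its first
# three words concatenated with the mean GloVe vector of the remaining words.
def embed_question(question: str, glove: dict, d: int = 300) -> np.ndarray:
    words = question.lower().split()
    vec = lambda w: glove.get(w, np.zeros(d))
    head = [vec(w) for w in words[:3]] + [np.zeros(d)] * (3 - min(3, len(words)))
    tail = [vec(w) for w in words[3:]]
    tail_mean = np.mean(tail, axis=0) if tail else np.zeros(d)
    return np.concatenate(head + [tail_mean])        # shape: (4 * d,)

def nearest_questions(query_vec, all_vecs, k=50):
    dists = np.linalg.norm(all_vecs - query_vec, axis=1)  # Euclidean distances
    # skip the first index, assumed to be the query question itself
    return np.argsort(dists)[1:k + 1]
```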
1611.08669#41
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
42
are used to compute neighbors. Since these neighboring questions were asked on different images, their answers serve as ‘hard negatives’. Popular: The 30 most popular answers from the dataset – e.g. ‘yes’, ‘no’, ‘2’, ‘1’, ‘white’, ‘3’, ‘grey’, ‘gray’, ‘4’, ‘yes it is’. The inclusion of popular answers forces the machine to pick between likely a priori responses and plausible responses for the question, thus increasing the task difficulty. Random: The remaining are answers to random questions in the dataset. To generate 100 candidates, we first find the union of the correct, plausible, and popular answers, and include random answers until a unique set of 100 is found. # 5. Neural Visual Dialog Models
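The assembly of the 100-answer candidate pool follows directly from the description above. A sketch (all inputs are assumed lists of answer strings; this mirrors the described procedure, not the released dataset code):

```python
import random

# Hypothetical sketch: build the 100-candidate set from correct, plausible,
# popular, and random answers, keeping only unique strings.
def build_candidates(correct, plausible, popular, answer_pool, n=100, seed=0):
    rng = random.Random(seed)
    candidates, seen = [], set()
    for ans in [correct] + plausible + popular:
        if ans not in seen:
            seen.add(ans)
            candidates.append(ans)
    # pad with random answers until a unique set of n is found
    # (assumes answer_pool contains enough distinct strings)
    while len(candidates) < n:
        ans = rng.choice(answer_pool)
        if ans not in seen:
            seen.add(ans)
            candidates.append(ans)
    return candidates[:n]
```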
1611.08669#42
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]
1611.08669
43
In this section, we develop a number of neural Visual Dialog answerer models. Recall that the model is given as input – an image $I$, the ‘ground-truth’ dialog history (including the image caption) $H = (\underbrace{C}_{H_0}, \underbrace{(Q_1, A_1)}_{H_1}, \ldots, \underbrace{(Q_{t-1}, A_{t-1})}_{H_{t-1}})$, the question $Q_t$, and a list of 100 candidate answers $\mathcal{A}_t = \{A_t^{(1)}, \ldots, A_t^{(100)}\}$ – and asked to return a sorting of $\mathcal{A}_t$. At a high level, all our models follow the encoder-decoder framework, i.e. factorize into two parts – (1) an encoder that converts the input $(I, H, Q_t)$ into a vector space, and (2) a decoder that converts the embedded vector into an output. We describe choices for each component next and present experiments with all encoder-decoder combinations. Decoders: We use two types of decoders: • Generative (LSTM) decoder: where the encoded vector is set as the initial state of the Long Short-Term Memory (LSTM) RNN language model. During training, we maximize the log-likelihood of the ground truth answer sequence given its corresponding encoded representation (trained
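The generative decoder described above can be sketched as follows. This is a minimal illustration in PyTorch with the encoder treated as a black box producing a fixed-size vector; it is not the authors' released implementation, and all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the generative LSTM decoder: the encoder output
# initializes the LSTM state, and each candidate answer is scored by its
# log-likelihood under the decoder (used both for training and for ranking).
class GenerativeDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def answer_logprob(self, enc_vec, answer_tokens):
        # enc_vec: (batch, hidden_dim) encoding of (I, H, Q_t)
        # answer_tokens: (batch, T) token ids of one candidate answer
        h0 = enc_vec.unsqueeze(0)                    # (1, batch, hidden_dim)
        c0 = torch.zeros_like(h0)
        inputs = self.embed(answer_tokens[:, :-1])   # teacher forcing
        hidden, _ = self.lstm(inputs, (h0, c0))
        logits = self.out(hidden)                    # (batch, T-1, vocab)
        logp = torch.log_softmax(logits, dim=-1)
        targets = answer_tokens[:, 1:]
        token_lp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        mask = (targets != 0).float()                # ignore padding positions
        return (token_lp * mask).sum(dim=1)          # sequence log-likelihood

# At test time: score all 100 candidates with answer_logprob and sort descending.
```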
1611.08669#43
Visual Dialog
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org
http://arxiv.org/pdf/1611.08669
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG
23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
null
cs.CV
20161126
20170801
[ { "id": "1605.06069" }, { "id": "1701.08251" }, { "id": "1506.02075" }, { "id": "1605.07683" }, { "id": "1610.01119" }, { "id": "1506.05869" } ]