id
stringlengths 12
15
| title
stringlengths 8
162
| content
stringlengths 1
17.6k
| prechunk_id
stringlengths 0
15
| postchunk_id
stringlengths 0
15
| arxiv_id
stringlengths 10
10
| references
listlengths 1
1
|
---|---|---|---|---|---|---|
1611.09268#3 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | naturalâ distribution of information need that users may want to satisfy using, say, an intelligent assistant. Real-world text is messy: they may include typos or abbreviationsâ and transcription errors in case of spoken interfaces. The text from different documents may also often contain conï¬ icting information. Most existing datasets, in contrast, often contain high-quality stories or text spans from sources such as Wikipedia. Real-world MRC systems should be benchmarked on realistic datasets where they need to be robust to noisy and problematic inputs. Finally, another potential limitation of existing MRC tasks is that they often require the model to operate on a single entity or a text span. Under many real-world application settings, the information necessary to answer a question may be spread across different parts of the same document, or even across multiple documents. It is, therefore, important to test an MRC model on its ability to extract information and support for the ï¬ nal answer from multiple passages and documents. In this paper, we introduce Microsoft MAchine Reading Comprehension (MS MARCO)â a large scale real-world reading comprehension datasetâ with the goal of addressing many of the above mentioned shortcomings of existing MRC and QA datasets. The dataset comprises of anonymized search queries issued through Bing or Cortana. We annotate each question with segment information as we describe in Section 3. Corresponding to each question, we provide a set of extracted passages from documents retrieved by Bing in response to the question. The passages and the documents may or may not actually contain the necessary information to answer the question. For each question, we ask crowd-sourced editors to generate answers based on the information contained in the retrieved passages. In addition to generating the answer, the editors are also instructed to mark the passages containing the supporting informationâ although we do not enforce these annotations to be exhaustive. The editors are allowed to mark a question as unanswerable based on the passages provided. We include these unanswerable questions in our dataset because we believe that the ability to recognize insufï¬ cient (or conï¬ icting) information that makes a question unanswerable is important to develop for an MRC model. The editors are strongly encouraged to form answers in complete sentences. In total, the MS MARCO dataset contains 1,010,916 questions, 8,841,823 companion passages extracted from 3,563,535 web documents, and 182,669 editorially generated answers. | 1611.09268#2 | 1611.09268#4 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#4 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Using this dataset, we propose three different tasks with varying levels of difï¬ culty: (i) Predict if a question is answerable given a set of context passages, and extract relevant information and synthesize the answer. (ii) Generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context. (iii) Rank a set of retrieved passages given a question. We describe the dataset and the proposed tasks in more details in the rest of this paper and present some preliminary benchmarking results on these tasks. # 2 Related work Machine reading comprehension and open domain question-answering are challenging tasks [Weston et al., 2015]. To encourage more rapid progress, the community has made several different datasets and tasks publicly available for benchmarking. We summarize some of them in this section. The Stanford Question Answering Dataset (SQuAD) Rajpurkar et al. [2016] consists of 107,785 question-answer pairs from 536 articles, where each answer is a text span. The key distinction between SQUAD and MS MARCO are: | 1611.09268#3 | 1611.09268#5 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#5 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | 2 Table 1: Comparison of MS MARCO and some of the other MRC datasets. # Questions # Documents Span of words 100k Human generated 200k Human generated 46,765 Span of words 140k 97k 7,787 100K 10k 1M 1,572 stories 6.9M passages 28k 14M sentences 536 8.8M passages, 3.2m docs. 1. The MS MARCO dataset is more than ten times larger than SQuADâ which is an important consideration if we want to benchmark large deep learning models [Frank, 2017]. 2. The questions in SQuAD are editorially generated based on selected answer spans, while in MS MARCO they are sampled from Bingâ s query logs. 3. The answers in SQuAD consists of spans of texts from the provided passages while the answers in MS MARCO are editorially generated. 4. Originally SQuAD contained only answerable questions, although this changed in the more recent edition of the task [Rajpurkar et al., 2018]. NewsQA [Trischler et al., 2017] is a MRC dataset with over 100,000 question and span-answer pairs based off roughly 10,000 CNN news articles. The goal of the NewsQA task is to test MRC models on reasoning skillsâ beyond word matching and paraphrasing. Crowd-sourced editors created the questions from the title of the articles and the summary points (provided by CNN) without access to the article itself. A 4-stage collection methodology was employed to generate a more challenging MRC task. More than 44% of the NewsQA questions require inference and synthesis, compared to SQuADâ s 20%. DuReader [He et al., 2017] is a Chinese MRC dataset built with real application data from Baidu search and Baidu Zhidaoâ a community question answering website. It contains 200,000 questions and 420,000 answers from 1,000,000 documents. In addition, DuReader provides additional annotations of the answersâ | 1611.09268#4 | 1611.09268#6 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#6 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | labelling them as either fact based or opinionative. Within each category, they are further divided into entity, yes/no, and descriptive answers. NarrativeQA [Kociský et al., 2017] dataset contains questions created by editors based on sum- maries of movie scripts and books. The dataset contains about 45,000 question-answer pairs over 1,567 stories, evenly split between books and movie scripts. Compared to the news corpus used in NewsQA, the collection of movie scripts and books are more complex and diverseâ allowing the editors to create questions that may require more complex reasoning. The movie scripts and books are also longer documents than the news or wikipedia article, as is the case with NewsQA and SQuAD, respectively. SearchQA [Dunn et al., 2017] takes questions from the American TV quiz show, Jeopardy1 and submits them as queries to Google to extract snippets from top 40 retrieved documents that may contain the answers to the questions. Document snippets not containing answers are ï¬ ltered out, leaving more than 140K questions-answer pairs and 6.9M snippets. The answers are short exact spans of text averaging between 1-2 tokens. MS MARCO, in contrast, focuses more on longer natural language answer generation, and the questions correspond to Bing search queries instead of trivia questions. RACE [Lai et al., 2017] contains roughly 100,000 multiple choice questions and 27,000 passages from standardized tests for Chinese students learning English as a foreign language. | 1611.09268#5 | 1611.09268#7 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#7 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | The dataset is split up into: RACE-M, which has approximately 30,000 questions targeted at middle school students aged 12-15, and RACE-H, which has approximately 70,000 questions targeted at high school students aged 15 to 18. Lai et al. [2017] claim that current state of the art neural models at the time of their publishing were performing at 44% accuracy while the ceiling human performance was 95%. AI2 Reasoning Challenge (ARC) [Clark et al., 2018] by Allen Institute for Artiï¬ cial Intelligence consists of 7,787 grade-school multiple choice science questionsâ typically with 4 possible answers. The answers generally require external knowledge or complex reasoning. | 1611.09268#6 | 1611.09268#8 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#8 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | In addition, # 1https://www.jeopardy.com/ 3 @ will i quality for osap ifm new in canada Candidate passages hc passage 2 acto alata, Selected passages is in order to apply online for funding consideration from The Ontario Student Assistance (PROGRAM), ofap you must first register as a new use to this. website ce: hitpsJ/osap.gov.on.ca/OSAPSecuttyWeb/publiciagreementahtm) Visit the OSAP website for application deadlines To get OSAP, you have to be eligible. You can apply using an online form. oF you can print off the application forms. you submit a paper application. you â must pay an application fee. assstance-for-post-secondary-education/how-do-i-apply-for-the-onfario-shudent-assistance- program. sand fie. You ven Figure 1: Simpliï¬ ed passage selection and answer summarization UI for human editors. ARC provides a corpus of 14M science-related sentences with knowledge relevant to the challenge. However, the training of the models does not have to include, nor be limited to, this corpus. ReCoRD [Zhang et al., 2018] contains 12,000 Cloze-style question-passage pairs extracted from CNN/Daily Mail news articles. For each pair in this dataset, the question and the passage are selected from the same news article such that they have minimal text overlapâ making them unlikely to be paraphrases of each otherâ but refer to at least one common named entity. The focus of this dataset is on evaluating MRC models on their common-sense reasoning capabilities. # 3 The MS Marco dataset To generate the 1,010,916 questions with 1,026,758 unique answers we begin by sampling queries from Bingâ s search logs. | 1611.09268#7 | 1611.09268#9 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#9 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | We ï¬ lter out any non-question queries from this set. We retrieve relevant documents for each question using Bing from its large-scale web index. Then we automatically extract relevant passages from these documents. Finally, human editors annotate passages that contain useful and necessary information for answering the questionsâ and compose a well-formed natural language answers summarizing the said information. Figure 1 shows the user interface for a web-based tool that the editors use for completing these annotation and answer composition tasks. During the editorial annotation and answer generation process, we continuously audit the data being generated to ensure accuracy and quality of answersâ and verify that the guidelines are appropriately followed. As previously mentioned, the questions in MS MARCO correspond to user submitted queries from Bingâ s query logs. The question formulations, therefore, are often complex, ambiguous, and may even contain typographical and other errors. An example of such a question issued to Bing is: â | 1611.09268#8 | 1611.09268#10 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#10 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | in what type of circulation does the oxygenated blood ï¬ ow between the heart and the cells of the body?â . We believe that these questions, while sometimes not well-formatted, are more representative of human information seeking behaviour. Another example of a question from our dataset includes: â will I qualify for osap if iâ m new in Canadaâ . As shown in ï¬ gure 1, one of the relevant passages include: â You must be a 1. Canadian citizen, 2. Permanent Resident or 3. Protected personâ . When auditing our editorial process, we observe that even the human editors ï¬ nd the task of answering these questions to be sometimes difï¬ cultâ especially when the question is in a domain the editor is unfamiliar with. We, therefore, believe that the MS MARCO presents a challenging dataset for benchmarking MRC models. The MS MARCO dataset that we are publishing consists of six major components: | 1611.09268#9 | 1611.09268#11 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#11 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | 1. Questions: These are a set of anonymized question queries from Bingâ s search logs, where the user is looking for a speciï¬ c answer. Queries with navigational and other intents are 4 Table 2: Distribution of questions based on answer-type classiï¬ er Question segment Question contains YesNo What How Where When Why Who Which Other Question classiï¬ cation Description Numeric Entity Location Person 7.46% 34.96% 16.8% 3.46% 2.71% 1.67% 3.33% 1.79% 27.83% 53.12% 26.12% 8.81% 6.17% 5.78% # Percentage of question excluded from our dataset. This ï¬ ltering of question queries is performed automatically by a machine learning based classiï¬ er trained previously on human annotated data. Selected questions are further annotated by editors based on whether they are answerable using the passages provided. 2. Passages: For each question, on average we include a set of 10 passages which may contain the answer to the question. These passages are extracted from relevant web documents. They are selected by a state-of-the-art passage retrieval system at Bing. The editors are instructed to annotate the passages they use to compose the ï¬ nal answer as is_selected. For questions, where no answer was present in any of the passages, they should all be annotated by setting is_selected to 0. 3. Answers: For each question, the dataset contains zero, or more answers composed manually by the human editors. The editors are instructed to read and understand the questions, inspect the retrieved passages, and then synthesize a natural language answer with the correct information extracted strictly from the passages provided. 4. Well-formed Answers: For some question-answer pairs, the data also contains one or more answers that are generated by a post-hoc review-and-rewrite process. This process involves a separate editor reviewing the provided answer and rewriting it if: (i) it does not have proper grammar, (ii) there is a high overlap in the answer and one of the provided passages (indicating that the original editor may have copied the passage directly), or (iii) the answer can not be understood without the question and the passage context. e.g., given the question â | 1611.09268#10 | 1611.09268#12 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#12 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | tablespoon in cupâ and the answer â 16â , the well-formed answer should be â There are 16 tablespoons in a cup.â . 5. Document: For each of the documents from which the passages were originally extracted from, we include: (i) the URL, (ii) the body text, and (iii) the title. We extracted these documents from Bingâ s index as a separate post-processing step. Roughly 300,000 docu- ments could not be retrieved because they were no longer in the index and for the remaining it is possibleâ | 1611.09268#11 | 1611.09268#13 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#13 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | even likelyâ that the content may have changed since the passages were originally extracted. 6. Question type: Each question is further automatically annotated using a machine learned classiï¬ er with one of the following segment labels: (i) NUMERIC, (ii) ENTITY, (iii) LOCA- TION, (iv) PERSON, or (v) DESCRIPTION (phrase). Table 2 lists the relative size of the different question segments and compares it with the proportion of questions that explicitly contain words like â whatâ and â "whereâ . Note that because the questions in our dataset are based on web search queries, we are may observe a question like â | 1611.09268#12 | 1611.09268#14 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#14 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | what is the age of barack obamaâ be expressed simply as â barack obama ageâ in our dataset. 5 Table 3: The MS MARCO dataset format. # Field Description Query A question query issued to Bing. Passages Top 10 passages from Web documents as retrieved by Bing. The passages are presented in ranked order to human editors. The passage that the editor uses to compose the answer is annotated as is_selected: 1. Document URLs URLs of the top ranked documents for the question from Bing. The passages are extracted from these documents. Answer(s) Answers composed by human editors for the question, automatically ex- tracted passages and their corresponding documents. Well Formed Answer(s) Well-formed answer rewritten by human editors, and the original answer. Segment QA classiï¬ cation. E.g., tallest mountain in south america belongs to the ENTITY segment because the answer is an entity (Aconcagua). Table 3 describes the ï¬ nal dataset format for MS MARCO. Inspired by [Gebru et al., 2018] we also release our datasetâ s datasheet on our website. Finally, we summarize the key distinguishing features of the MS MARCO dataset as follows: 1. The questions are anonymized user queries issued to the Bing. 2. All questions are annotated with segment information. 3. The context passagesâ from which the answers are derivedâ are extracted from real web documents. 4. The answers are composed by human editors. 5. A subset of the questions have multiple answers. 6. A subset of the questions have no answers. | 1611.09268#13 | 1611.09268#15 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#15 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | # 3.1 The passage ranking dataset To facilitate the benchmarking of ML based retrieval models that beneï¬ t from supervised training on large datasets, we are releasing a passage collectionâ constructed by taking the union of all the passages in the MS MARCO datasetâ and a set of relevant question and passage identiï¬ er pairs. To identify the relevant passages, we use the is_selected annotation provided by the editors. As the editors were not required to annotate every passage that were retrieved for the question, this annotation should be considered as incompleteâ i.e., there are likely passages in the collection that contain the answer to a question but have not been annotated as is_selected: 1. | 1611.09268#14 | 1611.09268#16 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#16 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | We use this dataset to propose a re-ranking challenge as described in Section 4. Additionally, we are organizing a â Deep Learningâ track at the 2019 edition of TREC2 where we use these passage and question collections to setup an ad-hoc retrieval task. # 4 The challenges Using the MS MARCO dataset, we propose three machine learning tasks of diverse difï¬ culty levels: The novice task requires the system to ï¬ rst predict whether a question can be answered based only on the information contained in the provided passages. If the question cannot be answered, then the system should return â No Answer Presentâ as response. If the question can be answered, then the system should generate the correct answer. The intermediate task is similar to the novice task, except that the generated answer should be well-formedâ such that, if the answer is read-aloud then it should make sense even without the context of the question and retrieved passages. The passage re-ranking task is an information retrieval (IR) challenge. Given a question and a set of 1000 retrieved passages using BM25 [Robertson et al., 2009], the system must produce a 2https://trec.nist.gov/ 6 ranking of the said passages based on how likely they are to contain information relevant to answer the question. This task is targeted to provide a large scale dataset for benchmarking emerging neural IR methods [Mitra and Craswell, 2018]. # 5 The benchmarking results We continue to develop and reï¬ ne the MS MARCO dataset iteratively. Presented at NIPS 2016 the V1.0 dataset was released and recieved with enthusiasm In January 2017, we publicly released the 1.1 version of the dataset. In Section 5.1, we present our initial benchmarking results based on this dataset. Subsequently, we release 2.0 the v2.1 version of the MS MARCO dataset in March 2018 and April 2018 respectively. Section 5.2 covers the experimental results on the update dataset. Finally, in October 2018, we released additional data ï¬ les for the passage ranking task. # 5.1 Experimental results on v1.1 dataset We group the questions in MS MARCO by the segment annotation, as described in Section 3. The complexity of the answers varies signiï¬ cantly between categories. | 1611.09268#15 | 1611.09268#17 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#17 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | For example, the answers to Yes/No questions are binary. The answers to entity questions can be a single entity name or phraseâ e.g., the answer "Rome" for the question what is the capital of Italy". However, for descriptive questions, a longer textual answer is often necessaryâ e.g., "What is the agenda for Hollandeâ s state visit to Washington?". The evaluation strategy that is appropriate for Yes/No answer questions may not be appropriate for benchmarking on questions that require longer answer generation. | 1611.09268#16 | 1611.09268#18 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#18 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Therefore, in our experiments we employ different evaluation metrics for different categories, building on metrics proposed initially by [Mitra et al., 2016]. We use accuracy and precision-recall measures for numeric answers and apply metrics like ROUGE-L [Lin, 2004] and phrasing-aware evaluation framework [Mitra et al., 2016] for long textual answers. The phrasing-aware evaluation framework aims to deal with the diversity of natural language in evaluating long textual answers. The evaluation requires several reference answers per question that are each curated by a different human editor, thus providing a natural way to estimate how diversely a group of individuals may phrase the answer to the same question. A family of pairwise similarity-based metrics can used to incorporate consensus between different reference answers for evaluation. These metrics are simple modiï¬ cations to metrics like BLEU [Papineni et al., 2002] and METEOR [Banerjee and Lavie, 2005] and are shown to achieve better correlation with human judgments. Accordingly, as part of our experiments, a subset of MS MARCO where each question has multiple answers is used to evaluate model performance with both BLEU and pa-BLEU as metrics. # 5.1.1 Generative Model Experiments The following experiments were run on the V1.1 dataset Recurrent Neural Networks (RNNs) are capable of predicting future elements from sequence prior. It is often used as a generative language model for various NLP tasks, such as machine translation [Bahdanau et al., 2014] and question-answering [Hermann et al., 2015a]. In this QA experiment setup, we target training and evaluation of such generative models which predict the human-generated answers given questions and/or contextual passages as model input. Sequence-to-Sequence (Seq2Seq) Model. We train a vanilla Seq2Seq [Sutskever et al., 2014] model with the question-answer pair as source-target sequences. Memory Networks Model. We adapt end-to-end memory networks [Sukhbaatar et al., 2015]â | 1611.09268#17 | 1611.09268#19 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#19 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | that has previously demonstrated good performance on other QA tasksâ by using summed memory representation as the initial state of the RNN decoder. Discriminative Model. For comparison, we also train a discriminative model to rank provided passages as a baseline. This is a variant of [Huang et al., 2013] where we use LSTM [Hochreiter and Schmidhuber, 1997] in place of multi-layer perceptron (MLP). Table 4 shows the preformance of these models using ROUGE-L metric. Additionally, we evaluate memory networks model on an MS MARCO subset where questions have multiple answers. Table 5 shows the performance of the model as measured by BLEU and its pairwise variant pa-BLEU [Mitra et al., 2016]. | 1611.09268#18 | 1611.09268#20 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#20 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | 7 Table 4: ROUGE-L of Different QA Models Tested against a Subset of MS MARCO Description Best ROUGE-L of any passage A DSSM-alike passage ranking model Best Passage Passage Ranking Sequence to Sequence Vanilla seq2seq model predicting answers from questions Memory Network Seq2seq model with MemNN for passages Table 5: BLEU and pa-BLEU on a Multi-Answer Subset of MS MARCO BLEU pa-BLEU Best Passage 0.359 Memory Network 0.340 # 5.1.2 Cloze-Style Model Experiments In Cloze-style tests, a model is required to predict missing words in a text sequence by considering contextual information in textual format. CNN and Daily Mail dataset [Hermann et al., 2015b] is an example of such a cloze-style QA dataset. In this section, we present the performance of two MRC models using both CNN test dataset and a MS MARCO subset. The subset is ï¬ ltered to numeric answer type category, to which cloze-style test is applicable. â ¢ Attention Sum Reader (AS Reader): AS Reader [Kadlec et al., 2016] is a simple model that uses attention to directly pick the answer from the context. â ¢ ReasoNet: ReasoNet [Shen et al., 2016] also relies on attention, but is also a dynamic multi-turn model that attempts to exploit and reason over the relation among questions, contexts, and answers. We show model accuracy numbers on both datasets in table 6, and precision-recall curves on MS MARCO subset in ï¬ gure 2. # 5.2 Experimental results on v2.1 dataset The human baseline on our v1.1 benchmark was surpassed by competing machine learned models in approximately 15 months. For the v2.1 dataset, we revisit our approach to generating the human baseline. | 1611.09268#19 | 1611.09268#21 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#21 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | We select ï¬ ve top performing editorsâ based on their performance on a set of auditing questionsâ to create a human baseline task group. We randomly sample 1,427 questions from our evaluation set and ask each of these editors to produce a new assessment. Then, we compare all our editorial answers to the ground truth and select the answer with the best ROUGE-L score as the candidate answer. Table 7 shows the results. We evaluate the answer set on both the novice and the intermediate task and we include questions that have no answer. To provide a competitive experimental baseline for our dataset, we trained the model introduced in [Clark and Gardner, 2017]. This model uses recent ideas in reading comprehension research, like self-attention [Cheng et al., 2016] and bi-directional attention [Seo et al., 2016]. Our goal is to train this model such that, given a question and a passage that contains an answer to the question, the model identiï¬ es the answer (or span) in the passage. This is similar to the task in SQuAD [Rajpurkar et al., 2016]. First, we select the question-passage pairs where the passage contains an answer to the question and the answer is a contiguous set of words from the passage. Then, we train the model to predict a span for each question-passage pair and output a conï¬ | 1611.09268#20 | 1611.09268#22 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#22 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | dence score. To evaluate the model, Table 6: Accuracy of MRC Models on Numeric Segment of MS MARCO Accuracy MS MARCO CNN (test) AS Reader ReasoNet 55.0 58.9 69.5 74.7 8 1 AS Reader ReasoNet 0.9 n o i s i c e r P 0.8 0.7 0.6 0 0.2 0.4 0.6 0.8 1 # Recall Figure 2: Precision-Recall of Machine Reading Comprehension Models on MS MARCO Subset of Numeric Category Table 7: Performance of MRC Span Model and Human Baseline on MS Marco Tasks ROUGE-L BLEU-1 BLEU-2 BLEU-3 BLEU-4 Task 0.094 0.268 BiDaF on Original 0.46771 Human Ensemble on Novice 0.73703 0.45439 Human Ensemble on Intermediate 0.63044 0.094 BiDaF on V2 Novice 0.070 BiDaF on V2 Intermediate for each question we chose our model generated answer that has the highest conï¬ dence score among all passages available for that question. To compare model performance across datasets we run this exact setup (training and evaluation) on the original dataset and the new V2 Tasks. Table 7 shows the results. The results indicate that the new v2.1 dataset is more difï¬ cult than the previous v1.1 version. On the novice task BiDaF cannot determine when the question is not answerable and thus performs substantially worse compared to on the v1.1 dataset. On the intermediate task, BiDaF performance once again drops because the model only uses vocabulary present in the passage whereas the well-formed answers may include words from the general vocabulary. # 6 Future Work and Conclusions The process of developing the MS MARCO dataset and making it publicly available has been a tremendous learning experience. | 1611.09268#21 | 1611.09268#23 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#23 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Between the ï¬ rst version of the dataset and the most recent edition, we have signiï¬ cantly modiï¬ ed how we collect and annotate the data, the deï¬ nition of our tasks, and even broadened our scope to cater to the neural IR community. The future of this dataset will depend largely on how the broader academic community makes use of this dataset. For example, we believe that the size and the underlying use of Bingâ s search queries and web documents in the construction of the dataset makes it particularly attractive for benchmarking new machine learning models for MRC and neural IR. But in addition to improving these ML models, the dataset may also prove to be useful for exploring new metricsâ e.g., ROUGE-2 [Ganesan, 2018] and ROUGE-AR[Maples, 2017]â and robust evaluation strategies. Similarly, combining MS MARCO with other existing MRC datasets may also be interesting in the context of multi-task and cross domain learning. We want to engage with the community to get their feedback and guidance on how we can make it easier to enable such new explorations using the MS MARCO data. If there is enough interest, we may also consider generating similar datasets in other languages in the futureâ or augment the existing dataset with other information from the web. | 1611.09268#22 | 1611.09268#24 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#24 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | 9 # References Amazon Alexa. Amazon alexa. http://alexa.amazon.com/, 2018. Amazon Echo. Amazon echo. https://en.wikipedia.org/wiki/Amazon_Echo, 2018. D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. S. Banerjee and A. Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, volume 29, pages 65â 72, 2005. J. Cheng, L. Dong, and M. Lapata. | 1611.09268#23 | 1611.09268#25 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#25 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Long short-term memory-networks for machine reading. CoRR, abs/1601.06733, 2016. URL http://arxiv.org/abs/1601.06733. C. Clark and M. Gardner. Simple and effective multi-paragraph reading comprehension. CoRR, abs/1710.10723, 2017. URL http://arxiv.org/abs/1710.10723. P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. | 1611.09268#24 | 1611.09268#26 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#26 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. 2018. Cortana. Cortana personal assistant. http://www.microsoft.com/en-us/mobile/experiences/ cortana/, 2018. G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30â | 1611.09268#25 | 1611.09268#27 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#27 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | 42, 2012. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fe. Imagenet: Alarge-scalehierarchicalimagedatabas. CVPR, 2009. URL http://www.image-net.org/papers/imagenet_cvpr09.pdf. L. Deng and X. Huang. Challenges in adopting speech recognition. Communications of the ACM, 47(1):69â 75, 2004. M. Dunn, L. Sagun, M. Higgins, V. U. Güney, V. Cirik, and K. | 1611.09268#26 | 1611.09268#28 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#28 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Cho. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR, abs/1704.05179, 2017. B. H. Frank. Google brain chief: Deep learning takes at least 100,000 examples. https://venturebeat.com/ 2017/10/23/google-brain-chief-says-100000-examples-is-enough-data-for-deep-learning/, 2017. K. Ganesan. Rouge 2.0: Updated and improved measures for evaluation of summarization tasks. 2018. J. Gao, M. Galley, and L. Li. | 1611.09268#27 | 1611.09268#29 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#29 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Neural approaches to conversational ai. arXiv preprint arXiv:1809.08267, 2018. T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. III, and K. Crawford. Datasheets for datasets. 2018. Google Assistant. Google assistant. https://assistant.google.com/, 2018. K. He, X. Zhang, S. Ren, and J. Sun. | 1611.09268#28 | 1611.09268#30 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#30 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Deep residual learning for image recognition. 2015. URL https: //arxiv.org/abs/1512.03385. W. He, K. Liu, Y. Lyu, S. Zhao, X. Xiao, Y. Liu, Y. Wang, H. Wu, Q. She, X. Liu, T. Wu, and H. Wang. Dureader: a chinese machine reading comprehension dataset from real-world applications. CoRR, abs/1711.05073, 2017. K. M. Hermann, T. Kociský, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. | 1611.09268#29 | 1611.09268#31 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#31 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Teaching machines to read and comprehend. 2015a. URL https://arxiv.org/abs/1506.03340. K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693â 1701, 2015b. G. Hinton, L. Deng, D. Yu, G. Dalh, and A. Mohamed. | 1611.09268#30 | 1611.09268#32 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#32 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82â 97, 2012. S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â 1780, 1997. 10 P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. | 1611.09268#31 | 1611.09268#33 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#33 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2333â 2338. ACM, 2013. R. Kadlec, M. Schmid, O. Bajgar, and J. Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016. T. Kociský, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette. | 1611.09268#32 | 1611.09268#34 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#34 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | The narrativeqa reading comprehension challenge. CoRR, abs/1712.07040, 2017. G. Lai, Q. Xie, H. Liu, Y. Yang, and E. H. Hovy. Race: Large-scale reading comprehension dataset from examinations. In EMNLP, 2017. C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004. | 1611.09268#33 | 1611.09268#35 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#35 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | S. Maples. The rouge-ar: A proposed extension to the rouge evaluation metric for abstractive text summarization. 2017. B. Mitra and N. Craswell. An introduction to neural information retrieval. Foundations and Trends® in Information Retrieval (to appear), 2018. B. Mitra, G. Simon, J. Gao, N. Craswell, and L. J. Deng. A proposal for evaluating answer distillation from web data. 2016. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. | 1611.09268#34 | 1611.09268#36 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#36 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311â 318. Association for Computational Linguistics, 2002. P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. Squad: 100,000+ questions for machine comprehension of text. 2016. URL https://arxiv.org/abs/1606.05250. | 1611.09268#35 | 1611.09268#37 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#37 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | P. Rajpurkar, R. Jia, and P. Liang. Know what you donâ t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018. S. Robertson, H. Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends®) in Information Retrieval, 3(4):333-389, 2009. M. J. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi. Bidirectional attention ï¬ | 1611.09268#36 | 1611.09268#38 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#38 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | ow for machine comprehension. CoRR, abs/1611.01603, 2016. Y. Shen, P.-S. Huang, J. Gao, and W. Chen. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284, 2016. Siri. Siri personal assistant. http://www.apple.com/ios/siri/, 2018. S. Sukhbaatar, J. Weston, R. Fergus, et al. | 1611.09268#37 | 1611.09268#39 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#39 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | End-to-end memory networks. In Advances in neural information processing systems, pages 2440â 2448, 2015. I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215, 2014. URL http://arxiv.org/abs/1409.3215. A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman. Newsqa: | 1611.09268#38 | 1611.09268#40 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#40 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | A machine comprehension dataset. In Rep4NLP@ACL, 2017. J. Weston, A. Bordes, S. Chopra, A. M. Rush, B. van Merrienboer, A. Joulin, and T. Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. 2015. URL https://arxiv.org/abs/ 1502.05698. A. Wissner-Gross. Datasets over algorithms. Edge. com. Retrieved, 8, 2016. S. Zhang, X. Liu, J. Liu, J. Gao, K. Duh, and B. Van Durmeâ . Record: | 1611.09268#39 | 1611.09268#41 | 1611.09268 | [
"1810.12885"
]
|
1611.09268#41 | MS MARCO: A Human Generated MAchine Reading COmprehension Dataset | Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885, 2018. 11 | 1611.09268#40 | 1611.09268 | [
"1810.12885"
]
|
|
1611.08669#0 | Visual Dialog | 7 1 0 2 g u A 1 ] V C . s c [ 5 v 9 6 6 8 0 . 1 1 6 1 : v i X r a # Visual Dialog Abhishek Das1, Satwik Kottur2, Khushi Gupta2*, Avi Singh3*, Deshraj Yadav4, José M.F. Moura2, Devi Parikh1, Dhruv Batra1 1Georgia Institute of Technology, 2Carnegie Mellon University, 3UC Berkeley, 4Virginia Tech 2{skottur, khushig, moura}@andrew.cmu.edu 1{abhshkdz, parikh, dbatra}@gatech.edu [email protected] [email protected] visualdialog.org # Abstract We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natu- ral, conversational language about visual content. Speciï¬ - cally, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accu- rately. Visual Dialog is disentangled enough from a speciï¬ c downstream task so as to serve as a general test of ma- chine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Di- alog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on â ¼120k images from COCO, with a total of â ¼1.2M dialog question- answer pairs. â | 1611.08669#1 | 1611.08669 | [
"1605.06069"
]
|
|
1611.08669#1 | Visual Dialog | cat drinking water out of a coffee mug What color is the mug? White and red Are there any pictures on it? No, something is there can't tell what itis Is the mug and cat on a table? Yes, they are Are there other items on the tableâ (ea) eo) ca] (co) Yes, magazines, books, toaster and basket, and a plate We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders â Late Fusion, Hierarchi- cal Recurrent Encoder and Memory Network â and 2 de- coders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval- based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and eval- uated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. | 1611.08669#0 | 1611.08669#2 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#2 | Visual Dialog | Putting it all together, we demonstrate the ï¬ rst â visual chat- botâ ! Our dataset, code, trained models and visual chatbot are available on https://visualdialog.org. Figure 1: We introduce a new AI task â Visual Dialog, where an AI agent must hold a dialog with a human about visual content. We introduce a large-scale dataset (VisDial), an evaluation protocol, and novel encoder-decoder models for this task. tion [63], object detection [34] â to â high-levelâ AI tasks such as learning to play Atari video games [42] and Go [55], answering reading comprehension questions by understand- ing short stories [21, 65], and even answering questions about images [6, 39, 49, 71] and videos [57, 58]! What lies next for AI? We believe that the next genera- tion of visual intelligence systems will need to posses the ability to hold a meaningful dialog with humans in natural language about visual content. Applications include: # 1. Introduction We are witnessing unprecedented advances in computer vi- sion (CV) and artiï¬ cial intelligence (AI) â from â low-levelâ AI tasks such as image classiï¬ cation [20], scene recogni- Aiding visually impaired users in understanding their sur- roundings [7] or social media content [66] (AI: â | 1611.08669#1 | 1611.08669#3 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#3 | Visual Dialog | John just uploaded a picture from his vacation in Hawaiiâ , Human: â Great, is he at the beach?â , AI: â No, on a mountainâ ). â ¢ Aiding analysts in making decisions based on large quan- tities of surveillance data (Human: â Did anyone enter this room last week?â , AI: â Yes, 27 instances logged on cam- eraâ , Human: â Were any of them carrying a black bag?â ), *Work done while KG and AS were interns at Virginia Tech. 1 Captioning â | 1611.08669#2 | 1611.08669#4 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#4 | Visual Dialog | Two people are ina wheelchair and one is holding a racket. Visual Dialog Q: How many people are on Visual Dialog VQa wheelchairs ? Q: What is the gender of the Q: How many people Two â one in the white shirt ? â on wheelchairs ? What are their genders ? She is a woman A: Two A Q: A A: One male and one female | Q:; What is she doing ? Q: Which one is holding a A: Playing a Wii game racket ? Q: Is that a man to her right The woman A: No, it's a woman Q: | 1611.08669#3 | 1611.08669#5 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#5 | Visual Dialog | How many wheelchairs ? A: One A Figure 2: Differences between image captioning, Visual Question Answering (VQA) and Visual Dialog. Two (partial) dialogs are shown from our VisDial dataset, which is curated from a live chat between two Amazon Mechanical Turk workers (Sec. 3). â ¢ Interacting with an AI assistant (Human: â Alexa â can you see the baby in the baby monitor?â , AI: â Yes, I canâ , Human: â Is he sleeping or playing?â ). â ¢ Robotics applications (e.g. search and rescue missions) where the operator may be â situationally blindâ and oper- ating via language [40] (Human: â | 1611.08669#4 | 1611.08669#6 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#6 | Visual Dialog | Is there smoke in any room around you?â , AI: â Yes, in one roomâ , Human: â Go there and look for peopleâ ). Despite rapid progress at the intersection of vision and lan- guage â in particular, in image captioning and visual ques- tion answering (VQA) â it is clear that we are far from this grand goal of an AI agent that can â seeâ and â communicateâ . In captioning, the human-machine interaction consists of the machine simply talking at the human (â Two people are in a wheelchair and one is holding a racketâ ), with no dia- log or input from the human. While VQA takes a signiï¬ cant step towards human-machine interaction, it still represents only a single round of a dialog â unlike in human conver- sations, there is no scope for follow-up questions, no mem- ory in the system of previous questions asked by the user nor consistency with respect to previous answers provided by the system (Q: â | 1611.08669#5 | 1611.08669#7 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#7 | Visual Dialog | How many people on wheelchairs?â , A: â Twoâ ; Q: â How many wheelchairs?â , A: â Oneâ ). As a step towards conversational visual AI, we introduce a novel task â Visual Dialog â along with a large-scale dataset, an evaluation protocol, and novel deep models. Task Deï¬ nition. The concrete task in Visual Dialog is the following â given an image I, a history of a dialog con- sisting of a sequence of question-answer pairs (Q1: â | 1611.08669#6 | 1611.08669#8 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#8 | Visual Dialog | How many people are in wheelchairs?â , A1: â Twoâ , Q2: â What are their genders?â , A2: â One male and one femaleâ ), and a natural language follow-up question (Q3: â Which one is holding a racket?â ), the task for the machine is to answer the question in free-form natural language (A3: â The womanâ ). This task is the visual analogue of the Turing Test. Consider the Visual Dialog examples in Fig. 2. The ques- tion â What is the gender of the one in the white shirt?â | 1611.08669#7 | 1611.08669#9 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#9 | Visual Dialog | requires the machine to selectively focus and direct atten- 2 requires tion to a relevant region. co-reference resolution (whom does the pronoun â sheâ re- fer to?), â Is that a man to her right?â further requires the machine to have visual memory (which object in the im- age were we talking about?). Such systems also need to be consistent with their outputs â â How many people are in wheelchairs?â , â Twoâ , â What are their genders?â , â One male and one femaleâ â note that the number of genders be- ing speciï¬ ed should add up to two. Such difï¬ culties make the problem a highly interesting and challenging one. | 1611.08669#8 | 1611.08669#10 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#10 | Visual Dialog | Why do we talk to machines? Prior work in language-only (non-visual) dialog can be arranged on a spectrum with the following two end-points: goal-driven dialog (e.g. booking a ï¬ ight for a user) â â goal-free dialog (or casual â chit-chatâ with chatbots). The two ends have vastly differing purposes and conï¬ icting evaluation criteria. Goal-driven dialog is typically evalu- ated on task-completion rate (how frequently was the user able to book their ï¬ ight) or time to task completion [14, 44] â clearly, the shorter the dialog the better. In contrast, for chit-chat, the longer the user engagement and interaction, the better. | 1611.08669#9 | 1611.08669#11 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#11 | Visual Dialog | For instance, the goal of the 2017 $2.5 Million Amazon Alexa Prize is to â create a socialbot that converses coherently and engagingly with humans on popular topics for 20 minutes.â We believe our instantiation of Visual Dialog hits a sweet It is disentangled enough from a spot on this spectrum. speciï¬ c downstream task so as to serve as a general test of machine intelligence, while being grounded enough in vi- sion to allow objective evaluation of individual responses and benchmark progress. The former discourages task- engineered bots for â slot ï¬ llingâ [30] and the latter discour- ages bots that put on a personality to avoid answering ques- tions while keeping the user engaged [64]. Contributions. We make the following contributions: â ¢ We propose a new AI task: Visual Dialog, where a ma- chine must hold dialog with a human about visual content. â ¢ We develop a novel two-person chat data-collection pro- tocol to curate a large-scale Visual Dialog dataset (Vis- Dial). Upon completion1, VisDial will contain 1 dialog each (with 10 question-answer pairs) on â ¼140k images from the COCO dataset [32], for a total of â ¼1.4M dialog question-answer pairs. When compared to VQA [6], Vis- Dial studies a signiï¬ cantly richer task (dialog), overcomes a â visual priming biasâ in VQA (in VisDial, the questioner does not see the image), contains free-form longer an- swers, and is an order of magnitude larger. 1VisDial data on COCO-train (â ¼83k images) and COCO- val (â ¼40k images) is already available for download at https:// visualdialog.org. Since dialog history contains the ground-truth cap- tion, we will not be collecting dialog data on COCO-test. Instead, we will collect dialog data on 20k extra images from COCO distribution (which will be provided to us by the COCO team) for our test set. | 1611.08669#10 | 1611.08669#12 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#12 | Visual Dialog | â ¢ We introduce a family of neural encoder-decoder models for Visual Dialog with 3 novel encoders â Late Fusion: that embeds the image, history, and ques- tion into vector spaces separately and performs a â late fusionâ of these into a joint embedding. â Hierarchical Recurrent Encoder: that contains a dialog- level Recurrent Neural Network (RNN) sitting on top of a question-answer (QA)-level recurrent block. In each QA-level recurrent block, we also include an attention- over-history mechanism to choose and attend to the round of the history relevant to the current question. | 1611.08669#11 | 1611.08669#13 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#13 | Visual Dialog | â Memory Network: that treats each previous QA pair as a â factâ in its memory bank and learns to â pollâ the stored facts and the image to develop a context vector. We train all these encoders with 2 decoders (generative and discriminative) â all settings outperform a number of sophisticated baselines, including our adaption of state-of- the-art VQA models to VisDial. â ¢ We propose a retrieval-based evaluation protocol for Vi- sual Dialog where the AI agent is asked to sort a list of candidate answers and evaluated on metrics such as mean- reciprocal-rank of the human response. We conduct studies to quantify human performance. â | 1611.08669#12 | 1611.08669#14 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#14 | Visual Dialog | ¢ Putting it all together, on the project page we demonstrate the ï¬ rst visual chatbot! # 2. Related Work Vision and Language. A number of problems at the inter- section of vision and language have recently gained promi- nence â image captioning [15, 16, 27, 62], video/movie description [51, 59, 60], text-to-image coreference/ground- ing [10, 22, 29, 45, 47, 50], visual storytelling [4, 23], and of course, visual question answering (VQA) [3, 6, 12, 17, 19, 37â 39, 49, 69]. However, all of these involve (at most) a single-shot natural language interaction â there is no dialog. Concurrent with our work, two recent works [13, 43] have also begun studying visually-grounded dialog. Visual Turing Test. Closely related to our work is that of Geman et al. [18], who proposed a fairly restrictive â Visual Turing Testâ â a system that asks templated, binary ques- tions. In comparison, 1) our dataset has free-form, open- ended natural language questions collected via two subjects chatting on Amazon Mechanical Turk (AMT), resulting in a more realistic and diverse dataset (see Fig. 5). 2) The dataset in [18] only contains street scenes, while our dataset has considerably more variety since it uses images from COCO [32]. Moreover, our dataset is two orders of mag- nitude larger â 2,591 images in [18] vs â ¼140k images, 10 question-answer pairs per image, total of â ¼1.4M QA pairs. Text-based Question Answering. Our work is related to text-based question answering or â reading comprehen- sionâ tasks studied in the NLP community. Some recent 3 large-scale datasets in this domain include the 30M Fac- toid Question-Answer corpus [52], 100K SimpleQuestions dataset [8], DeepMind Q&A dataset [21], the 20 artiï¬ cial tasks in the bAbI dataset [65], and the SQuAD dataset for reading comprehension [46]. VisDial can be viewed as a fusion of reading comprehension and VQA. | 1611.08669#13 | 1611.08669#15 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#15 | Visual Dialog | In VisDial, the machine must comprehend the history of the past dialog and then understand the image to answer the question. By de- sign, the answer to any question in VisDial is not present in the past dialog â if it were, the question would not be asked. The history of the dialog contextualizes the question â the question â what else is she holding?â requires a machine to comprehend the history to realize who the question is talk- ing about and what has been excluded, and then understand the image to answer the question. | 1611.08669#14 | 1611.08669#16 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#16 | Visual Dialog | Conversational Modeling and Chatbots. Visual Dialog is the visual analogue of text-based dialog and conversation modeling. While some of the earliest developed chatbots were rule-based [64], end-to-end learning based approaches are now being actively explored [9, 14, 26, 31, 53, 54, 61]. A recent large-scale conversation dataset is the Ubuntu Dia- logue Corpus [35], which contains about 500K dialogs ex- tracted from the Ubuntu channel on Internet Relay Chat (IRC). Liu et al. [33] perform a study of problems in exist- ing evaluation protocols for free-form dialog. One impor- tant difference between free-form textual dialog and Vis- Dial is that in VisDial, the two participants are not symmet- ric â | 1611.08669#15 | 1611.08669#17 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#17 | Visual Dialog | one person (the â questionerâ ) asks questions about an image that they do not see; the other person (the â answererâ ) sees the image and only answers the questions (in otherwise unconstrained text, but no counter-questions allowed). This role assignment gives a sense of purpose to the interaction (why are we talking? To help the questioner build a men- tal model of the image), and allows objective evaluation of individual responses. # 3. The Visual Dialog Dataset (VisDial) We now describe our VisDial dataset. We begin by describ- ing the chat interface and data-collection process on AMT, analyze the dataset, then discuss the evaluation protocol. Consistent with previous data collection efforts, we collect visual dialog data on images from the Common Objects in Context (COCO) [32] dataset, which contains multiple ob- jects in everyday scenes. The visual complexity of these images allows for engaging and diverse conversations. Live Chat Interface. Good data for this task should in- clude dialogs that have (1) temporal continuity, (2) ground- ing in the image, and (3) mimic natural â conversationalâ exchanges. To elicit such responses, we paired 2 work- ers on AMT to chat with each other in real-time (Fig. 3). Each worker was assigned a speciï¬ c role. One worker (the â questionerâ ) sees only a single line of text describing an im- Caption: A sink and toilet in a small room. You have to ASK questions about the image. fp oe U Is this a bathroom ? ; Fellow Turker connected |_Now you can send messages. | jrionTe | Fes. it a athroom 2.You l wit color isthe room 2 > 2. = "2 questions about the image. Caption: | 1611.08669#16 | 1611.08669#18 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#18 | Visual Dialog | [Figure 3 screenshots of the live chat interface. Caption: 'A sink and toilet in a small room.' Example dialog: Is this a bathroom? Yes, it's a bathroom. What color is the room? It looks cream colored. Can you see anything else? There is a shelf with items on it. Is anyone in the room? Nobody is in the room. Can you see on the outside? No, it is only inside. What color is the sink? The sink is white. Is the room clean? It is very clean. Is the toilet facing the sink? Yes, the toilet is facing the sink. Can you see a door? Yes, I can see the door. What color is the door? The door is tan colored.] (a) What the 'questioner' sees. (b) What the 'answerer' sees. (c) Example dialog from our VisDial dataset. Figure 3: | 1611.08669#17 | 1611.08669#19 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#19 | Visual Dialog | Collecting visually-grounded dialog data on Amazon Mechanical Turk via a live chat interface where one person is assigned the role of â questionerâ and the second person is the â answererâ . We show the ï¬ rst two questions being collected via the interface as Turkers interact with each other in Fig. 3a and Fig. 3b. Remaining questions are shown in Fig. 3c. age (caption from COCO); the image remains hidden to the questioner. Their task is to ask questions about this hidden image to â imagine the scene betterâ . The second worker (the â answererâ ) sees the image and caption. Their task is to an- swer questions asked by their chat partner. Unlike VQA [6], answers are not restricted to be short or concise, instead workers are encouraged to reply as naturally and â conversa- tionallyâ as possible. Fig. 3c shows an example dialog. This process is an unconstrained â liveâ chat, with the only exception that the questioner must wait to receive an answer before posting the next question. The workers are allowed to end the conversation after 20 messages are exchanged (10 pairs of questions and answers). | 1611.08669#18 | 1611.08669#20 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#20 | Visual Dialog | Further details about our ï¬ nal interface can be found in the supplement. one of the workers abandoned a HIT (or was disconnected) midway, automatic conditions in the code kicked in asking the remaining worker to either continue asking questions or providing facts (captions) about the image (depending on their role) till 10 messages were sent by them. Workers who completed the task in this way were fully compensated, but our backend discarded this data and automatically launched a new HIT on this image so a real two-person conversation could be recorded. Our entire data-collection infrastructure (front-end UI, chat interface, backend storage and messag- ing system, error handling protocols) is publicly available2. | 1611.08669#19 | 1611.08669#21 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#21 | Visual Dialog | # 4. VisDial Dataset Analysis We also piloted a different setup where the questioner saw a highly blurred version of the image, instead of the caption. The conversations seeded with blurred images resulted in questions that were essentially â blob recognitionâ â â What is the pink patch at the bottom right?â . For our full-scale data-collection, we decided to seed with just the captions since it resulted in more â naturalâ questions and more closely modeled the real-world applications discussed in Section 1 where no visual signal is available to the human. | 1611.08669#20 | 1611.08669#22 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#22 | Visual Dialog | Building a 2-person chat on AMT. Despite the popular- ity of AMT as a data collection platform in computer vi- sion, our setup had to design for and overcome some unique challenges â the key issue being that AMT is simply not designed for multi-user Human Intelligence Tasks (HITs). Hosting a live two-person chat on AMT meant that none of the Amazon tools could be used and we developed our own backend messaging and data-storage infrastructure based on Redis messaging queues and Node.js. To support data qual- ity, we ensured that a worker could not chat with themselves (using say, two different browser tabs) by maintaining a pool of worker IDs paired. To minimize wait time for one worker while the second was being searched for, we ensured that there was always a signiï¬ cant pool of available HITs. If We now analyze the v0.9 subset of our VisDial dataset â it contains 1 dialog (10 QA pairs) on â ¼123k images from COCO-train/val, a total of 1,232,870 QA pairs. # 4.1. Analyzing VisDial Questions Visual Priming Bias. One key difference between VisDial and previous image question-answering datasets (VQA [6], Visual 7W [70], Baidu mQA [17]) is the lack of a â vi- sual priming biasâ in VisDial. Speciï¬ cally, in all previ- ous datasets, subjects saw an image while asking questions about it. As analyzed in [3, 19, 69], this leads to a particular bias in the questions â people only ask â Is there a clock- tower in the picture?â on pictures actually containing clock towers. | 1611.08669#21 | 1611.08669#23 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#23 | Visual Dialog | 2 https://github.com/batra-mlp-lab/visdial-amt-chat [Figure 4 plots: (a) distribution of question and answer lengths; (b) percentage coverage of unique answers, VQA vs. Visual Dialog.] Figure 4: Distribution of lengths for questions and answers (left); and percent coverage of unique answers over all answers from the train dataset (right), compared to VQA. For a given coverage, VisDial has more unique answers indicating greater answer diversity. Distributions. Fig. 4a shows the distribution of question lengths in VisDial; we see that most questions range from four to ten words. Fig. 5 shows 'sunbursts' visualizing the distribution of questions (based on the first four words) in VisDial vs. VQA. | 1611.08669#22 | 1611.08669#24 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#24 | Visual Dialog | # 2https://github.com/batra-mlp-lab/ visdial-amt-chat 4 ) â Questions 50% â Answers Percentage coverage ee? 3 4 5 6 7 8 9 10 # Words in sentence (a) (b) â VOA â Visual Dialog Percentage coverage oly Gy iy ae # Unique answers (x 10000) 20 Figure 4: Distribution of lengths for questions and answers (left); and percent coverage of unique answers over all answers from the train dataset (right), compared to VQA. For a given coverage, Vis- Dial has more unique answers indicating greater answer diversity. Distributions. Fig. 4a shows the distribution of question lengths in VisDial â we see that most questions range from four to ten words. Fig. 5 shows â sunburstsâ visualizing the distribution of questions (based on the ï¬ rst four words) in VisDial vs. VQA. | 1611.08669#23 | 1611.08669#25 | 1611.08669 | [
"1605.06069"
]
|
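The cumulative-coverage curve in Fig. 4b can be reproduced with a few lines of analysis code. The sketch below is not from the paper; it assumes the input is a flat list of answer strings and reports the fraction of all answer occurrences covered by the top-k most frequent answers (the paper reports roughly 63% at k=1000 for VisDial vs. roughly 83% for VQA).

```python
from collections import Counter

def cumulative_coverage(answers, top_k=1000):
    """Fraction of all answer occurrences covered by the top_k most frequent answers."""
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    covered = sum(c for _, c in counts.most_common(top_k))
    return covered / total

# toy usage with a hypothetical list of answers
toy_answers = ["yes", "no", "yes", "it looks cream colored", "2", "yes it is"]
print(cumulative_coverage(toy_answers, top_k=3))
```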
1611.08669#25 | Visual Dialog | While there are a lot of similarities, some differences immediately jump out. There are more binary questions3 in VisDial as compared to VQA â the most fre- quent ï¬ rst question-word in VisDial is â isâ vs. â whatâ in VQA. A detailed comparison of the statistics of VisDial vs. other datasets is available in Table 1 in the supplement. Finally, there is a stylistic difference in the questions that is difï¬ cult to capture with the simple statistics above. In VQA, subjects saw the image and were asked to stump a smart robot. Thus, most queries involve speciï¬ c details, of- ten about the background (â What program is being utilized in the background on the computer?â ). In VisDial, question- ers did not see the original image and were asking questions to build a mental model of the scene. Thus, the questions tend to be open-ended, and often follow a pattern: | 1611.08669#24 | 1611.08669#26 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#26 | Visual Dialog | â ¢ Generally starting with the entities in the caption: â An elephant walking away from a pool in an exhibitâ , â Is there only 1 elephant?â , â ¢ digging deeper into their parts or attributes: â Is it full grown?â , â Is it facing the camera?â , â ¢ asking about the scene category or the picture setting: â Is this indoors or outdoors?â , â Is this a zoo?â , â ¢ the weather: â Is it snowing?â , â Is it sunny?â , â ¢ simply exploring the scene: â Are there people?â , â Is there shelter for elephant?â , 3 Questions starting in â Doâ , â Didâ , â Haveâ , â Hasâ , â Isâ , â Areâ , â Wasâ , â Wereâ , â Canâ , â Couldâ . 5 â ¢ and asking follow-up questions about the new visual en- tities discovered from these explorations: â Thereâ s a blue fence in background, like an enclosureâ , â Is the enclosure inside or outside?â . # 4.2. Analyzing VisDial Answers Answer Lengths. Fig. 4a shows the distribution of answer lengths. Unlike previous datasets, answers in VisDial are longer and more descriptive â mean-length 2.9 words (Vis- Dial) vs 1.1 (VQA), 2.0 (Visual 7W), 2.8 (Visual Madlibs). Fig. 4b shows the cumulative coverage of all answers (y- axis) by the most frequent answers (x-axis). The difference between VisDial and VQA is stark â the top-1000 answers in VQA cover â ¼83% of all answers, while in VisDial that ï¬ gure is only â ¼63%. There is a signiï¬ cant heavy tail in Vis- Dial â most long strings are unique, and thus the coverage curve in Fig. 4b becomes a straight line with slope 1. In total, there are 337,527 unique answers in VisDial v0.9. Answer Types. Since the answers in VisDial are longer strings, we can visualize their distribution based on the starting few words (Fig. 5c). An interesting category of answers emerges â â I think soâ , â I canâ t tellâ , or â I canâ | 1611.08669#25 | 1611.08669#27 | 1611.08669 | [
"1605.06069"
]
|
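The coreference statistics quoted above (38% of questions, 19% of answers, and 98% of dialogs containing at least one pronoun) come from a simple token-level count. The sketch below illustrates one way to compute such counts; the pronoun list and the dialog format (a list of (question, answer) pairs per image) are assumptions, not the paper's exact tooling.

```python
import re

# the pronouns listed in the paper, plus a few common inflected variants (assumption)
PRONOUNS = {"he", "she", "his", "her", "it", "its", "their", "they",
            "them", "this", "that", "those", "these", "him"}

def has_pronoun(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(t in PRONOUNS for t in tokens)

def pronoun_stats(dialogs):
    """dialogs: list of dialogs, each a list of (question, answer) string pairs."""
    q_hits = a_hits = q_total = a_total = d_hits = 0
    for dialog in dialogs:
        any_in_dialog = False
        for q, a in dialog:
            q_total += 1
            a_total += 1
            if has_pronoun(q):
                q_hits += 1
                any_in_dialog = True
            if has_pronoun(a):
                a_hits += 1
                any_in_dialog = True
        d_hits += any_in_dialog
    return q_hits / q_total, a_hits / a_total, d_hits / len(dialogs)
```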
1611.08669#27 | Visual Dialog | t seeâ â expressing doubt, uncertainty, or lack of information. This is a consequence of the questioner not being able to see the image â they are asking contextually relevant questions, but not all questions may be answerable with certainty from that image. We believe this is rich data for building more human-like AI that refuses to answer questions it doesnâ t have enough information to answer. See [48] for a related, but complementary effort on question relevance in VQA. Binary Questions vs Binary Answers. In VQA, binary questions are simply those with â yesâ , â noâ , â maybeâ as an- swers [6]. In VisDial, we must distinguish between binary questions and binary answers. Binary questions are those starting in â Doâ , â Didâ , â Haveâ , â Hasâ , â Isâ , â Areâ , â Wasâ , â Wereâ , â Canâ , â Couldâ . Answers to such questions can (1) contain only â yesâ or â noâ , (2) begin with â yesâ , â noâ , and contain additional information or clariï¬ cation, (3) involve ambiguity (â Itâ s hard to seeâ , â Maybeâ ), or (4) answer the question without explicitly saying â yesâ or â noâ (Q: â Is there any type of design or pattern on the cloth?â , A: â There are circles and lines on the clothâ ). We call answers that con- tain â yesâ or â noâ as binary answers â 149,367 and 76,346 answers in subsets (1) and (2) from above respectively. Bi- nary answers in VQA are biased towards â yesâ [6, 69] â 61.40% of yes/no answers are â yesâ . In VisDial, the trend is reversed. Only 46.96% are â yesâ for all yes/no responses. This is understandable since workers did not see the image, and were more likely to end up with negative responses. # 4.3. Analyzing VisDial Dialog In Section 4.1, we discussed a typical ï¬ ow of dialog in Vis- Dial. We analyze two quantitative statistics here. (a) VisDial Questions (b) VQA Questions (c) VisDial Answers Figure 5: Distribution of ï¬ | 1611.08669#26 | 1611.08669#28 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#28 | Visual Dialog | rst n-grams for (left to right) VisDial questions, VQA questions and VisDial answers. Word ordering starts towards the center and radiates outwards, and arc length is proportional to number of questions containing the word. Coreference in dialog. Since language in VisDial is the re- sult of a sequential conversation, it naturally contains pro- nouns â â heâ , â sheâ , â hisâ , â herâ , â itâ , â theirâ , â theyâ , â thisâ , â thatâ , â thoseâ , etc. In total, 38% of questions, 19% of an- swers, and nearly all (98%) dialogs contain at least one pronoun, thus conï¬ rming that a machine will need to over- come coreference ambiguities to be successful on this task. We ï¬ nd that pronoun usage is low in the ï¬ | 1611.08669#27 | 1611.08669#29 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#29 | Visual Dialog | rst round (as ex- pected) and then picks up in frequency. A ï¬ ne-grained per- round analysis is available in the supplement. Temporal Continuity in Dialog Topics. It is natural for conversational dialog data to have continuity in the â top- icsâ being discussed. We have already discussed qualitative differences in VisDial questions vs. VQA. In order to quan- tify the differences, we performed a human study where we manually annotated question â topicsâ for 40 images (a total of 400 questions), chosen randomly from the val set. The topic annotations were based on human judgement with a consensus of 4 annotators, with topics such as: asking about a particular object (â What is the man doing?â ) , scene (â Is it outdoors or indoors?â ), weather (â Is the weather sunny?â ), the image (â Is it a color image?â ), and exploration (â Is there anything else?â ). | 1611.08669#28 | 1611.08669#30 | 1611.08669 | [
"1605.06069"
]
|
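The sliding-window topic statistic above (2.14 ± 0.05 topics per window of 3 successive questions, over 500 bootstrap samples of batch size 40) can be estimated as in the sketch below. This is illustrative only: the per-round topic labels are assumed to be the human annotations described in the text.

```python
import random

def mean_topics_in_windows(dialog_topics, window=3):
    """dialog_topics: per-round topic labels for one dialog (e.g. strings).
    Returns the average number of distinct topics over all sliding windows."""
    counts = [len(set(dialog_topics[i:i + window]))
              for i in range(len(dialog_topics) - window + 1)]
    return sum(counts) / len(counts)

def bootstrap_mean(all_dialog_topics, n_samples=500, batch_size=40, window=3):
    """Bootstrap estimate (mean, std) of the average topic count per window."""
    means = []
    for _ in range(n_samples):
        batch = [random.choice(all_dialog_topics) for _ in range(batch_size)]
        per_dialog = [mean_topics_in_windows(d, window) for d in batch]
        means.append(sum(per_dialog) / len(per_dialog))
    mu = sum(means) / len(means)
    sd = (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5
    return mu, sd
```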
1611.08669#30 | Visual Dialog | We performed similar topic annotation for questions from VQA for the same set of 40 images, and compared topic continuity in questions. Across 10 rounds, VisDial question have 4.55 ± 0.17 topics on average, con- ï¬ rming that these are not independent questions. Recall that VisDial has 10 questions per image as opposed to 3 for VQA. Therefore, for a fair comparison, we compute aver- age number of topics in VisDial over all subsets of 3 succes- sive questions. For 500 bootstrap samples of batch size 40, VisDial has 2.14 ± 0.05 topics while VQA has 2.53 ± 0.09. Lower mean suggests there is more continuity in VisDial because questions do not change topics as often. # 4.4. VisDial Evaluation Protocol One fundamental challenge in dialog systems is evaluation. Similar to the state of affairs in captioning and machine translation, it is an open problem to automatically evaluate the quality of free-form answers. Existing metrics such as BLEU, METEOR, ROUGE are known to correlate poorly with human judgement in evaluating dialog responses [33]. Instead of evaluating on a downstream task [9] or holisti- cally evaluating the entire conversation (as in goal-free chit- chat [5]), we evaluate individual responses at each round (t = 1, 2, . . . , 10) in a retrieval or multiple-choice setup. | 1611.08669#29 | 1611.08669#31 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#31 | Visual Dialog | Speciï¬ cally, at test time, a VisDial system is given an im- age I, the â ground-truthâ dialog history (including the im- age caption) C, (Q1, A1), . . . , (Qtâ 1, Atâ 1), the question Qt, and a list of N = 100 candidate answers, and asked to return a sorting of the candidate answers. The model is evaluated on retrieval metrics â (1) rank of human response (lower is better), (2) recall@k, i.e. existence of the human response in top-k ranked responses, and (3) mean reciprocal rank (MRR) of the human response (higher is better). The evaluation protocol is compatible with both discrimi- native models (that simply score the input candidates, e.g. via a softmax over the options, and cannot generate new answers), and generative models (that generate an answer string, e.g. via Recurrent Neural Networks) by ranking the candidates by the modelâ s log-likelihood scores. Candidate Answers. We generate a candidate set of cor- rect and incorrect answers from four sets: Correct: The ground-truth human response to the question. Plausible: Answers to 50 most similar questions. Simi- lar questions are those that start with similar tri-grams and mention similar semantic concepts in the rest of the ques- tion. To capture this, all questions are embedded into a vec- tor space by concatenating the GloVe embeddings of the ï¬ rst three words with the averaged GloVe embeddings of the remaining words in the questions. Euclidean distances | 1611.08669#30 | 1611.08669#32 | 1611.08669 | [
"1605.06069"
]
|
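Given the rank assigned to the human response in each evaluated round, the three retrieval metrics are straightforward to compute. A minimal sketch (assuming 1-indexed ranks over the 100 candidates):

```python
def retrieval_metrics(ranks, ks=(1, 5, 10)):
    """ranks: list of 1-indexed ranks of the human response, one per evaluated round.
    Returns mean reciprocal rank, recall@k, and mean rank."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    recall = {k: sum(r <= k for r in ranks) / n for k in ks}
    mean_rank = sum(ranks) / n
    return mrr, recall, mean_rank

# toy usage: the human answer was ranked 1st, 3rd and 20th in three rounds
print(retrieval_metrics([1, 3, 20]))
```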
1611.08669#32 | Visual Dialog | 6 are used to compute neighbors. Since these neighboring questions were asked on different images, their answers serve as â hard negativesâ . Popular: The 30 most popular answers from the dataset â e.g. â yesâ , â noâ , â 2â , â 1â , â whiteâ , â 3â , â greyâ , â grayâ , â 4â , â yes it isâ . The inclusion of popular answers forces the machine to pick between likely a priori responses and plausible re- sponses for the question, thus increasing the task difï¬ culty. Random: The remaining are answers to random questions in the dataset. To generate 100 candidates, we ï¬ rst ï¬ nd the union of the correct, plausible, and popular answers, and include random answers until a unique set of 100 is found. # 5. | 1611.08669#31 | 1611.08669#33 | 1611.08669 | [
"1605.06069"
]
|
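A sketch of how the 'plausible' negatives could be mined, following the description above: each question is embedded by concatenating the GloVe vectors of its first three words with the mean GloVe vector of the remaining words, and neighbours are retrieved by Euclidean distance. The `glove` word-to-vector dictionary is an assumed input (e.g. loaded from a GloVe text file), not part of the paper's released code.

```python
import numpy as np

def embed_question(question, glove, dim=300):
    """Concatenate GloVe vectors of the first three words with the mean GloVe
    vector of the remaining words. glove: dict mapping word -> np.ndarray(dim)."""
    words = question.lower().rstrip("?").split()
    vec = lambda w: glove.get(w, np.zeros(dim))
    head = [vec(w) for w in words[:3]]
    while len(head) < 3:                      # pad very short questions
        head.append(np.zeros(dim))
    rest = [vec(w) for w in words[3:]] or [np.zeros(dim)]
    return np.concatenate(head + [np.mean(rest, axis=0)])

def most_similar_questions(query, pool, glove, k=50):
    """Indices of the k nearest-neighbour questions by Euclidean distance;
    their answers serve as 'hard negative' (plausible) options."""
    q = embed_question(query, glove)
    dists = [np.linalg.norm(q - embed_question(p, glove)) for p in pool]
    return np.argsort(dists)[:k]
```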
1611.08669#33 | Visual Dialog | # 5. Neural Visual Dialog Models In this section, we develop a number of neural Visual Dialog answerer models. Recall that the model is given as input: an image I; the 'ground-truth' dialog history (including the image caption) H = (C, (Q1, A1), ..., (Qt-1, At-1)), whose rounds we denote H0, H1, ..., Ht-1 (the caption C is H0); the question Qt; and a list of 100 candidate answers At = {At^(1), ..., At^(100)}. The model is asked to return a sorting of At. | 1611.08669#32 | 1611.08669#34 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#34 | Visual Dialog | At a high level, all our models follow the encoder-decoder framework, i.e. factorize into two parts â (1) an encoder that converts the input (I, H, Q,) into a vector space, and (2) a decoder that converts the embedded vector into an output. We describe choices for each component next and present experiments with all encoder-decoder combinations. Decoders: We use two types of decoders: ¢ Generative (LSTM) decoder: where the encoded vector is set as the initial state of the Long Short-Term Mem- ory (LSTM) RNN language model. During training, we maximize the log-likelihood of the ground truth answer sequence given its corresponding encoded representation (trained end-to-end). To evaluate, we use the modelâ s log- likelihood scores and rank candidate answers. Note that this decoder does not need to score options dur- ing training. As a result, such models do not exploit the biases in option creation and typically underperform mod- els that do [25], but it is debatable whether exploiting such biases is really indicative of progress. Moreover, genera- tive decoders are more practical in that they can actually be deployed in realistic applications. | 1611.08669#33 | 1611.08669#35 | 1611.08669 | [
"1605.06069"
]
|
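A sketch of the generative decoder's candidate ranking. The paper's models were implemented in Torch; the PyTorch-style code below is only an illustration of the idea: the encoder output initializes an LSTM language model, and each candidate answer is scored by its log-likelihood. Token indexing, batching, and start/end symbols are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeDecoder(nn.Module):
    """Illustrative LSTM language-model decoder; candidates are ranked by log-likelihood."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def score(self, enc, answer_tokens):
        # enc: (1, hidden_dim) encoder vector; answer_tokens: (1, T) ids incl. BOS/EOS
        h0 = enc.unsqueeze(0)                 # (1, 1, hidden_dim)
        c0 = torch.zeros_like(h0)
        inp = self.embed(answer_tokens[:, :-1])
        out, _ = self.lstm(inp, (h0, c0))
        logp = F.log_softmax(self.out(out), dim=-1)       # (1, T-1, vocab)
        tgt = answer_tokens[:, 1:]
        return logp.gather(2, tgt.unsqueeze(2)).sum().item()

def rank_candidates(decoder, enc, candidates):
    # candidates: list of (1, T) LongTensors; returns option indices, best first
    scores = [decoder.score(enc, c) for c in candidates]
    return sorted(range(len(candidates)), key=lambda i: -scores[i])
```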
1611.08669#35 | Visual Dialog | â ¢ Discriminative (softmax) decoder: computes dot product similarity between input encoding and an LSTM encoding of each of the answer options. These dot products are fed into a softmax to compute the posterior probability over options. During training, we maximize the log-likelihood of the correct option. During evaluation, options are sim- ply ranked based on their posterior probabilities. Encoders: We develop 3 different encoders (listed below) that convert inputs (I, H, Qt) into a joint representation. | 1611.08669#34 | 1611.08669#36 | 1611.08669 | [
"1605.06069"
]
|
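A corresponding sketch of the discriminative decoder (again PyTorch-style, for illustration only, not the authors' implementation): answer options are encoded with an LSTM, scored by dot products with the input encoding, and trained with a softmax cross-entropy over the 100 options. Padding and sequence-length handling are omitted for brevity.

```python
import torch
import torch.nn as nn

class DiscriminativeDecoder(nn.Module):
    """Illustrative option-scoring decoder: dot products + softmax over options."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.opt_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, enc, options):
        # enc: (hidden_dim,) input encoding; options: (100, T) padded token ids
        _, (h, _) = self.opt_lstm(self.embed(options))   # h: (1, 100, hidden)
        opt_vecs = h.squeeze(0)                          # (100, hidden)
        return opt_vecs @ enc                            # (100,) option scores

# training step sketch, where gt_index marks the human answer among the options:
# loss = nn.CrossEntropyLoss()(decoder(enc, options).unsqueeze(0), torch.tensor([gt_index]))
```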
1611.08669#36 | Visual Dialog | In all cases, we represent I via the ℓ2-normalized activations from the penultimate layer of VGG-16 [56]. For each encoder E, we experiment with all possible ablated versions: E(Qt), E(Qt, I), E(Qt, H), E(Qt, I, H) (for some encoders, not all combinations are 'valid'; details below). • Late Fusion (LF) Encoder: In this encoder, we treat H as a long string with the entire history (H0, ..., Ht-1) concatenated. Qt and H are separately encoded with 2 different LSTMs, and individual representations of participating inputs (I, H, Qt) are concatenated and linearly transformed to a desired size of joint representation. • Hierarchical Recurrent Encoder (HRE): In this encoder, we capture the intuition that there is a hierarchical nature to our problem: each question Qt is a sequence of words that need to be embedded, and the dialog as a whole is a sequence of question-answer pairs (Qt, At). Thus, similar to [54], as shown in Fig. 6, we propose an HRE model that contains a dialog-RNN sitting on top of a recurrent block (Rt). The recurrent block Rt embeds the question and image jointly via an LSTM (early fusion), embeds each round of the history Ht, and passes a concatenation of these to the dialog-RNN above it. The dialog-RNN produces both an encoding for this round (Et in Fig. 6) and a dialog context to pass onto the next round. We also add an attention-over-history ('Attention' in Fig. 6) mechanism allowing the recurrent block Rt to choose and attend to the round of the history relevant to the current question. This attention mechanism consists of a softmax over previous rounds (0, 1, ..., t-1) computed from the history and question+image encoding. | 1611.08669#35 | 1611.08669#37 | 1611.08669 | [
"1605.06069"
]
|
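The Late Fusion encoder described above can be sketched as follows; dimensions such as the 4096-d VGG-16 feature and the 512-d joint embedding, and the tanh non-linearity, are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    """Illustrative LF encoder: separate LSTMs for question and concatenated history,
    fused with the image feature by a single linear layer."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512,
                 img_dim=4096, joint_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.q_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.h_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fuse = nn.Linear(img_dim + 2 * hidden_dim, joint_dim)

    def forward(self, img_feat, history_tokens, question_tokens):
        # img_feat: (B, img_dim) L2-normalised VGG-16 penultimate-layer features
        _, (hq, _) = self.q_lstm(self.embed(question_tokens))
        _, (hh, _) = self.h_lstm(self.embed(history_tokens))
        joint = torch.cat([img_feat, hh.squeeze(0), hq.squeeze(0)], dim=1)
        return torch.tanh(self.fuse(joint))
```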
1611.08669#37 | Visual Dialog | [Figure 6 architecture diagram.] Figure 6: Architecture of HRE encoder with attention. At the current round Rt, the model has the capability to choose and attend to relevant history from previous rounds, based on the current question. This attention-over-history feeds into a dialog-RNN along with the question to generate the joint representation Et for the decoder. | 1611.08669#36 | 1611.08669#38 | 1611.08669 | [
"1605.06069"
]
|
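One way to sketch the per-round recurrent block of the HRE encoder with attention-over-history is shown below. This simplifies the paper's architecture (a GRUCell stands in for the dialog-RNN and a single linear layer computes the attention scores) and is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HREAttentionBlock(nn.Module):
    """Illustrative recurrent block R_t: attend over previous-round encodings,
    then update a dialog-RNN state to produce the round encoding E_t."""
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.att = nn.Linear(2 * hidden_dim, 1)
        self.dialog_rnn = nn.GRUCell(2 * hidden_dim, hidden_dim)

    def forward(self, qi_enc, hist_encs, dialog_state):
        # qi_enc: (B, H) question+image encoding for round t
        # hist_encs: (B, t, H) encodings of rounds 0..t-1
        # dialog_state: (B, H) running state of the dialog-RNN
        B, t, H = hist_encs.shape
        q_rep = qi_enc.unsqueeze(1).expand(B, t, H)
        scores = self.att(torch.cat([hist_encs, q_rep], dim=2)).squeeze(2)  # (B, t)
        alpha = F.softmax(scores, dim=1)
        attended = (alpha.unsqueeze(2) * hist_encs).sum(dim=1)              # (B, H)
        e_t = self.dialog_rnn(torch.cat([qi_enc, attended], dim=1), dialog_state)
        return e_t   # joint representation E_t, also the next dialog state
```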
1611.08669#38 | Visual Dialog | â ¢ Memory Network (MN) Encoder: We develop a MN encoder that maintains each previous question and answer as a â factâ in its memory bank and learns to refer to the stored facts and image to answer the question. Speciï¬ - cally, we encode Qt with an LSTM to get a 512-d vector, encode each previous round of history (H0, . . . , Htâ 1) with another LSTM to get a t à 512 matrix. We com- | 1611.08669#37 | 1611.08669#39 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#39 | Visual Dialog | pute the inner product of the question vector with each history vector to get scores over previous rounds, which are fed to a softmax to get attention-over-history probabilities. A convex combination of history vectors using these attention probabilities gives us the 'context vector', which is passed through an fc-layer and added to the question vector to construct the MN encoding. In the language of Memory Networks [9], this is a '1-hop' encoding. We use an '[encoder]-[input]-[decoder]' convention to refer to model-input combinations. For example, 'LF-QI-D' has a Late Fusion encoder with question+image inputs (no history), and a discriminative decoder. | 1611.08669#38 | 1611.08669#40 | 1611.08669 | [
"1605.06069"
]
|
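The '1-hop' Memory Network encoding described above, as a short sketch; the LSTM encoders producing the 512-d question and history vectors are assumed to exist upstream and are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryNetworkEncoder(nn.Module):
    """Illustrative 1-hop MN encoder: attend over history-round vectors with inner
    products, pass the attended context through an fc layer, add to the question."""
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, q_vec, hist_mat):
        # q_vec: (B, 512) question encoding; hist_mat: (B, t, 512) history encodings
        scores = torch.bmm(hist_mat, q_vec.unsqueeze(2)).squeeze(2)    # (B, t)
        alpha = F.softmax(scores, dim=1)
        context = torch.bmm(alpha.unsqueeze(1), hist_mat).squeeze(1)   # (B, 512)
        return q_vec + self.fc(context)
```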
1611.08669#40 | Visual Dialog | Implementation details about the models can be found in the supplement. # 6. Experiments Splits. VisDial v0.9 contains 83k dialogs on COCO-train and 40k on COCO-val images. We split the 83k into 80k for training, 3k for validation, and use the 40k as test. Data preprocessing, hyperparameters and training details are included in the supplement. Baselines We compare to a number of baselines: Answer Prior: Answer options to a test question are encoded with an LSTM and scored by a linear classiï¬ er. This captures ranking by frequency of answers in our training set with- out resolving to exact string matching. NN-Q: Given a test question, we ï¬ nd k nearest neighbor questions (in GloVe space) from train, and score answer options by their mean- similarity with these k answers. NN-QI: First, we ï¬ nd K nearest neighbor questions for a test question. Then, we ï¬ nd a subset of size k based on image feature similarity. Finally, we rank options by their mean-similarity to answers to these k questions. We use k = 20, K = 100. Finally, we adapt several (near) state-of-art VQA models (SAN [67], HieCoAtt [37]) to Visual Dialog. Since VQA is posed as classiï¬ cation, we â chopâ the ï¬ | 1611.08669#39 | 1611.08669#41 | 1611.08669 | [
"1605.06069"
]
|
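A sketch of the NN-Q baseline under stated assumptions: questions, answers, and options are represented by averaged GloVe vectors, and options are scored by mean cosine similarity to the answers of the k nearest-neighbour training questions. The exact similarity and preprocessing used by the authors may differ.

```python
import numpy as np

def mean_glove(text, glove, dim=300):
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def nn_q_baseline(test_q, train_qs, train_as, options, glove, k=20):
    """Score answer options by mean similarity to answers of the k nearest questions."""
    qv = mean_glove(test_q, glove)
    dists = [np.linalg.norm(qv - mean_glove(q, glove)) for q in train_qs]
    nn_idx = np.argsort(dists)[:k]
    ref_vecs = [mean_glove(train_as[i], glove) for i in nn_idx]

    def cos(u, v):
        n = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / n) if n > 0 else 0.0

    scores = [np.mean([cos(mean_glove(opt, glove), r) for r in ref_vecs])
              for opt in options]
    return np.argsort(scores)[::-1]        # option indices, best first
```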
1611.08669#41 | Visual Dialog | nal VQA-answer softmax from these models, feed these activations to our discriminative decoder (Section 5), and train end-to-end on VisDial. Note that our LF-QI-D model is similar to that in [36]. Altogether, these form fairly sophisticated baselines. Results. Tab. 1 shows results for our models and baselines on VisDial v0.9 (evaluated on 40k from COCO-val). A few key takeaways: 1) As expected, all learning based models significantly outperform non-learning baselines. 2) All discriminative models significantly outperform generative models, which as we discussed is expected since discriminative models can tune to the biases in the answer options. 3) Our best generative and discriminative models are MN-QIH-G with 0.526 MRR, and MN-QIH-D with 0.597 MRR. 4) We observe that naively incorporating history doesn't help much (LF-Q vs. LF-QH and LF-QI vs. LF-QIH) or can even hurt a little (LF-QI-G vs. LF-QIH- | 1611.08669#40 | 1611.08669#42 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#42 | Visual Dialog | Model, MRR, R@1, R@5, R@10, Mean Rank. Baselines: Answer prior 0.3735, 23.55, 48.52, 53.23, 26.50; NN-Q 0.4570, 35.93, 54.07, 60.26, 18.93; NN-QI 0.4274, 33.13, 50.83, 58.69, 19.62. Generative: LF-Q-G 0.5048, 39.78, 60.58, 66.33, 17.89; LF-QH-G 0.5055, 39.73, 60.86, 66.68, 17.78; LF-QI-G 0.5204, 42.04, 61.65, 67.66, 16.84; LF-QIH-G 0.5199, 41.83, 61.78, 67.59, 17.07; HRE-QH-G; HRE-QIH-G 0.5237, 42.29, 62.18, 67.92, 17.07; HREA-QIH-G 0.5242, 42.28, 62.33, 68.17, 16.79; MN-QH-G 0.5115, 40.42, 61.57, 67.44, 17.74; MN-QIH-G 0.5259, 42.29, 62.85, 68.88, 17.06. Discriminative: LF-Q-D 0.5508, 41.24, 70.45, 79.83, 7.08; LF-QH-D 0.5578, 41.75, 71.45, 80.94, 6.74; LF-QI-D 0.5759, 43.33, 74.27, 83.68, 5.87; LF-QIH-D 0.5807, 43.82, 74.68, 84.07, 5.78; | 1611.08669#41 | 1611.08669#43 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#43 | Visual Dialog | HRE-QIH-D 0.5846, 44.67, 74.50, 84.22, 5.72; HREA-QIH-D 0.5868, 44.82, 74.81, 84.36, 5.66; MN-QH-D 0.5849, 44.03, 75.26, 84.49, 5.68; MN-QIH-D 0.5965, 45.55, 76.22, 85.37, 5.46. VQA models adapted to VisDial: SAN1-QI-D 0.5764, 43.44, 74.26, 83.72, 5.88; HieCoAtt-QI-D 0.5788, 43.51, 74.49, 83.96, 5.84. Table 1: Performance of methods on VisDial v0.9, measured by mean reciprocal rank (MRR), recall@k and mean rank. Higher is better for MRR and recall@k, while lower is better for mean rank. Performance on VisDial v0.5 is included in the supplement. | 1611.08669#42 | 1611.08669#44 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#44 | Visual Dialog | However, models that better encode history (MN/HRE) perform better than corresponding LF models with/without history (e.g. LF-Q-D vs. MN-QH-D). 5) Models looking at I ({LF,MN,HRE }-QIH) outperform corresponding blind models (without I). Human Studies. We conduct studies on AMT to quantita- tively evaluate human performance on this task for all com- binations of {with image, without image}à {with history, without history}. We ï¬ nd that without image, humans per- form better when they have access to dialog history. As expected, this gap narrows down when they have access to the image. | 1611.08669#43 | 1611.08669#45 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#45 | Visual Dialog | Complete details can be found in supplement. # 7. Conclusions To summarize, we introduce a new AI task â Visual Dialog, where an AI agent must hold a dialog with a human about visual content. We develop a novel two-person chat data- collection protocol to curate a large-scale dataset (VisDial), propose retrieval-based evaluation protocol, and develop a family of encoder-decoder models for Visual Dialog. We quantify human performance on this task via human stud- ies. Our results indicate that there is signiï¬ cant scope for improvement, and we believe this task can serve as a testbed for measuring progress towards visual intelligence. # 8. Acknowledgements We thank Harsh Agrawal, Jiasen Lu for help with AMT data collection; Xiao Lin, Latha Pemula for model discussions; Marco Baroni, Antoine Bordes, Mike Lewis, Marcâ Aurelio Ranzato for helpful discussions. We are grateful to the de- velopers of Torch [2] for building an excellent framework. This work was funded in part by NSF CAREER awards to DB and DP, ONR YIP awards to DP and DB, ONR Grant N00014-14-1-0679 to DB, a Sloan Fellowship to DP, ARO YIP awards to DB and DP, an Allen Distinguished Investi- gator award to DP from the Paul G. Allen Family Founda- tion, ICTAS Junior Faculty awards to DB and DP, Google Faculty Research Awards to DP and DB, Amazon Aca- demic Research Awards to DP and DB, AWS in Education Research grant to DB, and NVIDIA GPU donations to DB. SK was supported by ONR Grant N00014-12-1-0903. | 1611.08669#44 | 1611.08669#46 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#46 | Visual Dialog | The views and conclusions contained herein are those of the au- thors and should not be interpreted as necessarily represent- ing the ofï¬ cial policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor. 9 # Appendix Overview This supplementary document is organized as follows: â ¢ Sec. A studies how and why VisDial is more than just a collection of independent Q&As. â ¢ Sec. B shows qualitative examples from our dataset. â ¢ Sec. | 1611.08669#45 | 1611.08669#47 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#47 | Visual Dialog | C presents detailed human studies along with com- parisons to machine accuracy. The interface for human studies is demonstrated in a video4. â ¢ Sec. D shows snapshots of our two-person chat data- collection interface on Amazon Mechanical Turk. The in- terface is also demonstrated in the video3. â ¢ Sec. E presents further analysis of VisDial, such as ques- tion types, question and answer lengths per question type. A video with an interactive sunburst visualization of the dataset is included3. | 1611.08669#46 | 1611.08669#48 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#48 | Visual Dialog | In this section, we lay out an exhaustive list of differences between VisDial and image question-answering datasets, with the VQA dataset [6] serving as the representative. In essence, we characterize what makes an instance in VisDial more than a collection of 10 independent question-answer pairs about an image: what makes it a dialog. To keep this section self-contained and exhaustive, some parts of it repeat content from the main document. # A.1. VisDial has longer free-form answers Fig. 7a shows the distribution of answer lengths in VisDial, and Tab. 2 compares statistics of VisDial with existing image question answering datasets. Unlike previous datasets, | 1611.08669#47 | 1611.08669#49 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#49 | Visual Dialog | 4 https://goo.gl/yjlHxY answers in VisDial are longer, conversational, and more descriptive: mean length 2.9 words (VisDial) vs 1.1 (VQA), 2.0 (Visual 7W), 2.8 (Visual Madlibs). Moreover, 37.1% of answers in VisDial are longer than 2 words while the VQA dataset has only 3.8% answers longer than 2 words. | 1611.08669#48 | 1611.08669#50 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#50 | Visual Dialog | [Figure 7 plots: (a) distribution of question and answer lengths; (b) percentage coverage of unique answers, VQA vs. Visual Dialog.] Figure 7: Distribution of lengths for questions and answers (left); and percent coverage of unique answers over all answers from the train dataset (right), compared to VQA. For a given coverage, VisDial has more unique answers indicating greater answer diversity. Fig. 7b shows the cumulative coverage of all answers (y-axis) by the most frequent answers (x-axis). The difference between VisDial and VQA is stark: the top-1000 answers in VQA cover ~83% of all answers, while in VisDial that figure is only ~63%. There is a significant heavy tail of answers in VisDial; most long strings are unique, and thus the coverage curve in Fig. 7b becomes a straight line with slope 1. In total, there are 337,527 unique answers in VisDial (out of the 1,232,870 answers currently in the dataset). # A.2. VisDial has co-references in dialogs People conversing with each other tend to use pronouns to refer to already mentioned entities. Since language in VisDial is the result of a sequential conversation, it naturally contains pronouns: 'he', 'she', 'his', 'her', 'it', 'their', 'they', 'this', 'that', 'those', etc. In total, 38% of questions, 19% of answers, and nearly all (98%) dialogs contain at least one pronoun, thus confirming that a machine will need to overcome coreference ambiguities to be successful on this task. As a comparison, only 9% of questions and 0.25% of answers in VQA contain at least one pronoun. In Fig. 8, we see that pronoun usage is lower in the first round compared to other rounds, which is expected since there are fewer entities to refer to in the earlier rounds. The pronoun usage is also generally lower in answers than questions, which is also understandable since the answers are generally shorter than questions and thus less likely to contain pronouns. In general, the pronoun usage is fairly consistent across rounds (starting from round 2) for both questions and answers. | 1611.08669#49 | 1611.08669#51 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#51 | Visual Dialog | Columns: #QA, #Images, QLength, ALength, ALength>2, Top-1000A, Human Accuracy. DAQUAR [38]: 12,468; 1,447; 11.5 ± 2.4; 1.2 ± 0.5; 3.4%; 96.4%; -. Visual Madlibs [68]: 56,468; 9,688; 4.9 ± 2.4; 2.8 ± 2.0; 47.4%; 57.9%; -. COCO-QA [49]: 117,684; 69,172; 8.7 ± 2.7; 1.0 ± 0; 0.0%; 100%; -. Baidu [17]: 316,193; 316,193; -; -; -; -; -. VQA [6]: 614,163; 204,721; 6.2 ± 2.0; 1.1 ± 0.4; 3.8%; 82.7%; ✓. Visual7W [70]: 327,939; 47,300; 6.9 ± 2.4; 2.0 ± 1.4; 27.6%; 63.5%; ✓. VisDial (Ours): 1,232,870; 123,287; 5.1 ± 0.0; 2.9 ± 0.0; 37.1%; 63.2%; ✓. Table 2: Comparison of existing image question answering datasets with VisDial | 1611.08669#50 | 1611.08669#52 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#52 | Visual Dialog | • and asking follow-up questions about the new visual entities discovered from these explorations: 'There's a blue fence in background, like an enclosure', 'Is the enclosure inside or outside?'. Such a line of questioning does not exist in the VQA dataset, where the subjects were shown the questions already asked about an image, and explicitly instructed to ask about different entities [6]. Figure 8: Percentage of QAs with pronouns for different rounds. In round 1, pronoun usage in questions is low (in fact, almost equal to usage in answers). From rounds 2 through 10, pronoun usage is higher in questions and fairly consistent across rounds. # A.3. VisDial has smoothness/continuity in 'topics' Qualitative Example of Topics. There is a stylistic difference in the questions asked in VisDial (compared to the questions in VQA) due to the nature of the task assigned to the subjects asking the questions. In VQA, subjects saw the image and were asked to 'stump a smart robot'. Thus, most queries involve specific details, often about the background (Q: 'What program is being utilized in the background on the computer?'). In VisDial, questioners did not see the original image and were asking questions to build a mental model of the scene. Thus, the questions tend to be open-ended, and often follow a pattern: • Generally starting with the entities in the caption: | 1611.08669#51 | 1611.08669#53 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#53 | Visual Dialog | An elephant walking away from a pool in an exhibitâ , â Is there only 1 elephant?â , Counting the Number of Topics. In order to quantify these qualitative differences, we performed a human study where we manually annotated question â topicsâ for 40 im- ages (a total of 400 questions), chosen randomly from the val set. The topic annotations were based on human judge- ment with a consensus of 4 annotators, with topics such as: asking about a particular object (â | 1611.08669#52 | 1611.08669#54 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#54 | Visual Dialog | What is the man doing?â ), the scene (â Is it outdoors or indoors?â ), the weather (â Is the weather sunny?â ), the image (â Is it a color image?â ), and ex- ploration (â Is there anything else?â ). We performed similar topic annotation for questions from VQA for the same set of 40 images, and compared topic continuity in questions. Across 10 rounds, VisDial questions have 4.55 ± 0.17 top- ics on average, conï¬ rming that these are not 10 independent questions. Recall that VisDial has 10 questions per image as opposed to 3 for VQA. Therefore, for a fair compari- son, we compute average number of topics in VisDial over all â | 1611.08669#53 | 1611.08669#55 | 1611.08669 | [
"1605.06069"
]
|
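The 'topic transition probability' defined above (number of topic changes between consecutive QA pairs, divided by the number of possible transitions: 9 for a 10-round VisDial dialog, 2 for 3 VQA questions) can be computed per dialog as in the sketch below; the topic labels are assumed to be the human annotations described in the text.

```python
def topic_transition_probability(dialog_topics):
    """Fraction of consecutive QA pairs whose annotated topics differ."""
    transitions = sum(a != b for a, b in zip(dialog_topics, dialog_topics[1:]))
    return transitions / (len(dialog_topics) - 1)

# toy usage: a dialog that stays on 'object' for a while, then shifts topics
print(topic_transition_probability(
    ["object", "object", "object", "scene", "weather",
     "object", "object", "exploration", "exploration", "image"]))
```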
1611.08669#55 | Visual Dialog | sliding windowsâ of 3 successive questions. For 500 bootstrap samples of batch size 40, VisDial has 2.14 ± 0.05 topics while VQA has 2.53 ± 0.09. Lower mean number of topics suggests there is more continuity in VisDial because questions do not change topics as often. â ¢ digging deeper into their parts, attributes, or proper- ties: â Is it full grown?â , â Is it facing the camera?â , # â ¢ asking about the scene category or the picture setting: â Is this indoors or outdoors?â , â Is this a zoo?â , # â ¢ the weather: â Is it snowing?â , â Is it sunny?â , # â ¢ simply exploring the scene: Transition Probabilities over Topics. We can take this analysis a step further by computing topic transition proba- bilities over topics as follows. For a given sequential dialog exchange, we now count the number of topic transitions be- tween consecutive QA pairs, normalized by the total num- ber of possible transitions between rounds (9 for VisDial and 2 for VQA). We compute this â topic transition proba- bilityâ (how likely are two successive QA pairs to be about two different topics) for VisDial and VQA in two different settings â (1) in-order and (2) with a permuted sequence â Are there people?â , â Is there shelter for elephant?â , 11 of QAs. Note that if VisDial were simply a collection of 10 independent QAs as opposed to a dialog, we would ex- pect the topic transition probabilities to be similar for in- order and permuted variants. However, we ï¬ nd that for 1000 permutations of 40 topic-annotated image-dialogs, in- order-VisDial has an average topic transition probability of 0.61, while permuted-VisDial has 0.76 ± 0.02. In contrast, VQA has a topic transition probability of 0.80 for in-order vs. 0.83 ± 0.02 for permuted QAs. | 1611.08669#54 | 1611.08669#56 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#56 | Visual Dialog | There are two key observations: (1) In-order transition probability is lower for VisDial than VQA (i.e. topic transi- tion is less likely in VisDial), and (2) Permuting the order of questions results in a larger increase for VisDial, around 0.15, compared to a mere 0.03 in case of VQA (i.e. in-order- VQA and permuted-VQA behave signiï¬ cantly more simi- larly than in-order-VisDial and permuted-VisDial). Both these observations establish that there is smoothness in the temporal order of topics in VisDial, which is indicative of the narrative structure of a dialog, rather than indepen- dent question-answers. # A.4. VisDial has the statistics of an NLP dialog dataset In this analysis, our goal is to measure whether VisDial be- haves like a dialog dataset. In particular, we compare VisDial, VQA, and Cornell Movie-Dialogs Corpus [11]. The Cornell Movie-Dialogs corpus is a text-only dataset extracted from pairwise inter- actions between characters from approximately 617 movies, and is widely used as a standard dialog corpus in the natural language processing (NLP) and dialog communities. One popular evaluation criteria used in the dialog-systems research community is the perplexity of language models trained on dialog datasets â the lower the perplexity of a model, the better it has learned the structure in the dialog dataset. For the purpose of our analysis, we pick the popular sequence-to-sequence (Seq2Seq) language model [24] and use the perplexity of this model trained on different datasets as a measure of temporal structure in a dataset. As is standard in the dialog literature, we train the Seq2Seq model to predict the probability of utterance Ut given the previous utterance Utâ | 1611.08669#55 | 1611.08669#57 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#57 | Visual Dialog | One popular evaluation criterion used in the dialog-systems research community is the perplexity of language models trained on dialog datasets: the lower the perplexity of a model, the better it has learned the structure in the dialog dataset. For the purpose of our analysis, we pick the popular sequence-to-sequence (Seq2Seq) language model [24] and use the perplexity of this model trained on different datasets as a measure of temporal structure in a dataset. As is standard in the dialog literature, we train the Seq2Seq model to predict the probability of utterance Ut given the previous utterance Ut-1, i.e. P(Ut | Ut-1), on the Cornell corpus. For VisDial and VQA, we train the Seq2Seq model to predict the probability of a question Qt given the previous question-answer pair, i.e. P(Qt | (Qt-1, At-1)). For each dataset, we used its train and val splits for training and hyperparameter tuning respectively, and report results on test. At test time, we only use conversations of length 10 from the Cornell corpus for a fair comparison to VisDial (which has 10 rounds of QA). For all three datasets, we created 100 permuted versions of | 1611.08669#56 | 1611.08669#58 | 1611.08669 | [
"1605.06069"
]
|
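Perplexity per token, the metric reported in Tab. 3, is the exponent of the length-normalised negative log-probability of a sequence. A minimal sketch, assuming the per-token log-probabilities come from a trained Seq2Seq model:

```python
import math

def perplexity_per_token(log_probs):
    """log_probs: natural-log probabilities, one per target token of a sequence."""
    return math.exp(-sum(log_probs) / len(log_probs))

def corpus_perplexity(sequences_log_probs):
    """Corpus-level perplexity: exp of total NLL divided by total token count.
    Each element is the per-token log-prob list for one target (e.g. Q_t)."""
    total_nll = -sum(sum(lp) for lp in sequences_log_probs)
    total_tokens = sum(len(lp) for lp in sequences_log_probs)
    return math.exp(total_nll / total_tokens)
```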
1611.08669#58 | Visual Dialog | Dataset; Perplexity Per Token (Orig); Perplexity Per Token (Shuffled); Classification. VQA: 7.83; 8.16 ± 0.02; 52.8 ± 0.9. Cornell (10): 82.31; 85.31 ± 1.51; 61.0 ± 0.6. VisDial (Ours): 6.61; 7.28 ± 0.01; 73.3 ± 0.4. Table 3: Comparison of sequences in VisDial, VQA, and Cornell Movie-Dialogs corpus in their original ordering vs. permuted 'shuffled' ordering. Lower is better for perplexity while higher is better for classification accuracy. Left: the absolute increase in perplexity from natural to permuted ordering is highest in the Cornell corpus (3.0) followed by VisDial with 0.7, and VQA at 0.35, which is indicative of the degree of linguistic structure in the sequences in these datasets. Right: The accuracy of a simple threshold-based classifier trained to differentiate between the original sequences and their permuted or shuffled versions. A higher classification rate indicates the existence of a strong temporal continuity in the conversation, thus making the ordering important. We can see that the classifier on VisDial achieves the highest accuracy (73.3%), followed by Cornell (61.0%). Note that this is a binary classification task with the prior probability of each class by design being equal, thus chance performance is 50%. The classifier on VQA performs close to chance. test, where either QA pairs or utterances are randomly shuffl | 1611.08669#57 | 1611.08669#59 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#59 | Visual Dialog | ed to disturb their natural order. This allows us to compare datasets in their natural ordering w.r.t. permuted orderings. Our hypothesis is that since dialog datasets have linguistic structure in the sequence of QAs or utterances they contain, this structure will be signiï¬ cantly affected by permuting the sequence. In contrast, a collection of inde- pendent question-answers (as in VQA) will not be signiï¬ - cantly affected by a permutation. Tab. 3 compares the original, unshufï¬ ed test with the shufï¬ ed testsets on two metrics: | 1611.08669#58 | 1611.08669#60 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#60 | Visual Dialog | Perplexity: We compute the standard metric of perplex- ity per token, i.e. exponent of the normalized negative-log- probability of a sequence (where normalized is by the length of the sequence). Tab. 3 shows these perplexities for the original unshufï¬ ed test and permuted test sequences. We notice a few trends. First, we note that the absolute perplexity values are higher for the Cornell corpus than QA datasets. We hypothesize that this is due to the broad, unrestrictive dialog generation task in Cornell corpus, which is a more difï¬ cult task than question prediction about images, which is in comparison a more restricted task. Second, in all three datasets, the shufï¬ ed test has statis- tically signiï¬ cant higher perplexity than the original test, which indicates that shufï¬ ing does indeed break the linguis- tic structure in the sequences. Third, the absolute increase in perplexity from natural to permuted ordering is highest in the Cornell corpus (3.0) fol- lowed by our VisDial with 0.7, and VQA at 0.35, which is indicative of the degree of linguistic structure in the se- quences in these datasets. Finally, the relative increases in perplexity are 3.64% in Cornell, 10.13% in VisDial, and 4.21% in VQA â VisDial suffers the highest relative in- crease in perplexity due to shufï¬ ing, indicating the exis- tence of temporal continuity that gets disrupted. Classiï¬ cation: | 1611.08669#59 | 1611.08669#61 | 1611.08669 | [
"1605.06069"
]
|