doi: 1611.08669 | title: Visual Dialog
authors: Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
categories: cs.CV, cs.AI, cs.CL, cs.LG | primary_category: cs.CV
comment: 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9 dataset, Webpage: http://visualdialog.org
source: http://arxiv.org/pdf/1611.08669 | published: 20161126 | updated: 20170801

summary: We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in the image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data-collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10 question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network -- and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and is evaluated on metrics such as mean reciprocal rank of the human response. We quantify the gap between machine and human performance on the Visual Dialog task via human studies. Putting it all together, we demonstrate the first 'visual chatbot'! Our dataset, code, trained models and visual chatbot are available at https://visualdialog.org
1611.08669 | 44 | …Long Short-Term Memory (LSTM) RNN language model. During training, we maximize the log-likelihood of the ground-truth answer sequence given its corresponding encoded representation (trained end-to-end). To evaluate, we use the model's log-likelihood scores to rank candidate answers. Note that this decoder does not need to score options during training. As a result, such models do not exploit the biases in option creation and typically underperform models that do [25], but it is debatable whether exploiting such biases is really indicative of progress. Moreover, generative decoders are more practical in that they can actually be deployed in realistic applications.
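To make this concrete, the following is a minimal PyTorch-style sketch of the generative decoder's scoring logic. It is illustrative only, not the authors' Torch implementation: class names, layer sizes, and the padding/teacher-forcing conventions are assumptions.

```python
# Hypothetical sketch: an LSTM language model conditioned on the joint encoding
# via its initial hidden state; candidate answers are ranked by log-likelihood.
import torch
import torch.nn as nn

class GenerativeDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def log_likelihood(self, enc, tokens):
        # enc: (B, hidden_dim) encoding of (I, H, Q_t); tokens: (B, T) answer token ids
        h0 = enc.unsqueeze(0)                          # use the encoding as the initial state
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.embed(tokens[:, :-1]), (h0, c0))   # teacher forcing
        logp = torch.log_softmax(self.out(out), dim=-1)            # (B, T-1, V)
        tgt = tokens[:, 1:]
        tok_lp = logp.gather(-1, tgt.unsqueeze(-1)).squeeze(-1)    # per-token log-prob
        mask = (tgt != 0).float()                                  # ignore padding
        return (tok_lp * mask).sum(dim=1)                          # sequence log-likelihood

# Training: minimize -decoder.log_likelihood(enc, gt_answer).mean()
# Evaluation: rank the candidate answers of one question by their log-likelihood.
def rank_options(decoder, enc_vec, option_tokens):
    # enc_vec: (hidden_dim,) encoding for one question; option_tokens: (N, T)
    enc = enc_vec.unsqueeze(0).repeat(option_tokens.size(0), 1)
    scores = decoder.log_likelihood(enc, option_tokens)
    return torch.argsort(scores, descending=True)      # best-scoring option first
```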
1611.08669 | 45 | • Discriminative (softmax) decoder: computes dot product similarity between the input encoding and an LSTM encoding of each of the answer options. These dot products are fed into a softmax to compute the posterior probability over options. During training, we maximize the log-likelihood of the correct option. During evaluation, options are simply ranked based on their posterior probabilities.
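Analogously, a hedged sketch of the discriminative decoder (again PyTorch-style and illustrative; the option encoder and its sizes are assumptions):

```python
# Hypothetical sketch: dot product between the joint encoding and an LSTM encoding
# of each answer option, followed by a softmax over the options.
import torch
import torch.nn as nn

class DiscriminativeDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.opt_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, enc, option_tokens):
        # enc: (B, hidden_dim); option_tokens: (B, N, T) token ids of N answer options
        B, N, T = option_tokens.shape
        _, (h_n, _) = self.opt_lstm(self.embed(option_tokens.reshape(B * N, T)))
        opt_enc = h_n.squeeze(0).view(B, N, -1)                    # (B, N, hidden_dim)
        scores = torch.bmm(opt_enc, enc.unsqueeze(2)).squeeze(2)   # (B, N) dot products
        return scores

# Training maximizes the log-likelihood of the correct option:
#   loss = nn.CrossEntropyLoss()(scores, gt_option_index)
# Evaluation simply ranks options by their scores (equivalently, softmax posteriors).
```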
Encoders: We develop 3 different encoders (listed below) that convert inputs (I, H, Qt) into a joint representation.
In all cases, we represent I via the ℓ2-normalized activations from the penultimate layer of VGG-16 [56]. For each encoder E, we experiment with all possible ablated versions: E(Qt), E(Qt, I), E(Qt, H), E(Qt, I, H) (for some encoders, not all combinations are 'valid'; details below).
• Late Fusion (LF) Encoder: In this encoder, we treat H as a long string with the entire history (H0, ..., Ht−1) concatenated. Qt and H are separately encoded with 2 different LSTMs, and the individual representations of the participating inputs (I, H, Qt) are concatenated and linearly transformed to a desired size of joint representation.
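A minimal sketch of this Late Fusion encoder follows (PyTorch-style, illustrative only; the image-feature size, the tanh non-linearity and layer dimensions are assumptions, not the authors' exact configuration):

```python
# Hypothetical sketch: encode Q_t and the concatenated history with two LSTMs,
# concatenate with the image feature, and linearly fuse into a joint encoding.
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    def __init__(self, vocab_size, img_dim=4096, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.q_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.h_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fuse = nn.Linear(img_dim + 2 * hidden_dim, hidden_dim)

    def forward(self, img_feat, question, history):
        # img_feat: (B, img_dim) normalized CNN features; question: (B, Tq) token ids;
        # history: (B, Th) token ids of the whole dialog history flattened into one string
        _, (q_h, _) = self.q_lstm(self.embed(question))
        _, (h_h, _) = self.h_lstm(self.embed(history))
        joint = torch.cat([img_feat, q_h.squeeze(0), h_h.squeeze(0)], dim=1)
        return torch.tanh(self.fuse(joint))      # joint representation for the decoder
```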
1611.08669 | 46 | • Hierarchical Recurrent Encoder (HRE): In this encoder, we capture the intuition that there is a hierarchical nature to our problem: each question Qt is a sequence of words that need to be embedded, and the dialog as a whole is a sequence of question-answer pairs (Qt, At). Thus, similar to [54], as shown in Fig. 6, we propose an HRE model that contains a dialog-RNN sitting on top of a recurrent block (Rt). The recurrent block Rt embeds the question and image jointly via an LSTM (early fusion), embeds each round of the history Ht, and passes a concatenation of these to the dialog-RNN above it. The dialog-RNN produces both an encoding for this round (Et in Fig. 6) and a dialog context to pass on to the next round. We also add an attention-over-history ('Attention' in Fig. 6) mechanism allowing the recurrent block Rt to choose and attend to the round of the history relevant to the current question. This attention mechanism consists of a softmax over previous rounds (0, 1, ..., t − 1) computed from the history and question+image encoding. (A code sketch of this structure follows Fig. 6 below.)
1611.08669 | 47 | [Figure 6: block diagram of the HRE encoder: per-round recurrent blocks Rt with attention over the history H, feeding a dialog-RNN that produces the encoding Et; see the caption below.]
Figure 6: Architecture of the HRE encoder with attention. At the current round Rt, the model has the capability to choose and attend to relevant history from previous rounds, based on the current question. This attention-over-history feeds into a dialog-RNN along with the question to generate the joint representation Et for the decoder.
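For concreteness, here is a rough sketch of the HRE structure described above (illustrative PyTorch-style code; the attention is simplified to a dot product, sizes are assumptions, and the caption is assumed to always be present as H0):

```python
# Hypothetical sketch: a recurrent block embeds (Q_t, I) and each history round,
# attends over previous rounds, and a dialog-RNN carries state across rounds.
import torch
import torch.nn as nn

class HREncoder(nn.Module):
    def __init__(self, vocab_size, img_dim=4096, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.qi_lstm = nn.LSTM(emb_dim + img_dim, hidden_dim, batch_first=True)  # early fusion of Q_t and I
        self.hist_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)          # per-round history encoder
        self.dialog_rnn = nn.LSTMCell(2 * hidden_dim, hidden_dim)                # dialog-RNN over rounds

    def encode_round(self, img_feat, question, history_rounds, state=None):
        # question: (B, Tq); history_rounds: list of (B, Th) tensors for rounds 0..t-1
        q_in = torch.cat([self.embed(question),
                          img_feat.unsqueeze(1).expand(-1, question.size(1), -1)], dim=2)
        _, (q_h, _) = self.qi_lstm(q_in)
        q_h = q_h.squeeze(0)                                               # (B, hidden)
        hist = torch.stack([self.hist_lstm(self.embed(h))[1][0].squeeze(0)
                            for h in history_rounds], dim=1)               # (B, t, hidden)
        attn = torch.softmax(torch.bmm(hist, q_h.unsqueeze(2)).squeeze(2), dim=1)  # attention over H
        h_ctx = torch.bmm(attn.unsqueeze(1), hist).squeeze(1)              # attended history
        h_t, c_t = self.dialog_rnn(torch.cat([q_h, h_ctx], dim=1), state)
        return h_t, (h_t, c_t)        # E_t for the decoder, plus dialog context for round t+1
```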
1611.08669 | 48 | • Memory Network (MN) Encoder: We develop an MN encoder that maintains each previous question and answer as a 'fact' in its memory bank and learns to refer to the stored facts and the image to answer the question. Specifically, we encode Qt with an LSTM to get a 512-d vector, and encode each previous round of history (H0, ..., Ht−1) with another LSTM to get a t × 512 matrix. We compute the inner product of the question vector with each history vector to get scores over previous rounds, which are fed to a softmax to get attention-over-history probabilities. A convex combination of the history vectors using these attention probabilities gives us the 'context vector', which is passed through an fc layer and added to the question vector to construct the MN encoding. In the language of Memory Networks [9], this is a '1-hop' encoding.
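A sketch of this 1-hop Memory Network encoder (PyTorch-style; the 512-d sizes follow the description above, while the embedding size and the omission of the image pathway are simplifying assumptions):

```python
# Hypothetical sketch: question LSTM, per-round "fact" LSTM, inner-product attention
# over the memory bank, convex combination -> fc layer -> added to the question vector.
import torch
import torch.nn as nn

class MemoryNetworkEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.q_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fact_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, question, history_rounds):
        # question: (B, Tq); history_rounds: list of t tensors (B, Th) for H_0..H_{t-1}
        _, (q_h, _) = self.q_lstm(self.embed(question))
        q_vec = q_h.squeeze(0)                                      # (B, 512) question vector
        facts = torch.stack([self.fact_lstm(self.embed(h))[1][0].squeeze(0)
                             for h in history_rounds], dim=1)       # (B, t, 512) memory bank
        scores = torch.bmm(facts, q_vec.unsqueeze(2)).squeeze(2)    # inner products (B, t)
        attn = torch.softmax(scores, dim=1)                         # attention-over-history
        context = torch.bmm(attn.unsqueeze(1), facts).squeeze(1)    # convex combination
        return q_vec + self.fc(context)                             # "1-hop" MN encoding
```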
We use an '[encoder]-[input]-[decoder]' convention to refer to model-input combinations. For example, 'LF-QI-D' has a Late Fusion encoder with question+image inputs (no history), and a discriminative decoder. Implementation details about the models can be found in the supplement.
# 6. Experiments
1611.08669 | 50 | Data preprocessing, hyperparameters and training details are included in the supplement. Baselines: We compare to a number of baselines. Answer Prior: Answer options to a test question are encoded with an LSTM and scored by a linear classifier. This captures ranking by frequency of answers in our training set without resorting to exact string matching. NN-Q: Given a test question, we find the k nearest neighbor questions (in GloVe space) from train, and score answer options by their mean similarity with these k answers. NN-QI: First, we find the K nearest neighbor questions for a test question. Then, we find a subset of size k based on image feature similarity. Finally, we rank options by their mean similarity to the answers to these k questions. We use k = 20, K = 100. Finally, we adapt several (near) state-of-the-art VQA models (SAN [67], HieCoAtt [37]) to Visual Dialog.
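The NN-Q baseline above can be sketched as follows (illustrative numpy code; tokenization, the GloVe lookup and variable names are assumptions; NN-QI would additionally pre-filter the K question neighbors by image-feature similarity before scoring):

```python
# Hypothetical sketch of NN-Q: embed questions/answers as mean GloVe vectors, find the
# k nearest training questions, score each option by mean similarity to their answers.
import numpy as np

def mean_glove(tokens, glove, dim=300):
    vecs = [glove[w] for w in tokens if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b, eps=1e-8):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def nn_q_scores(test_q, train_qs, train_as, options, glove, k=20):
    # train_qs / train_as: parallel lists of tokenized training questions and their answers
    q_vec = mean_glove(test_q, glove)
    sims = [cosine(q_vec, mean_glove(q, glove)) for q in train_qs]
    nn_idx = np.argsort(sims)[-k:]                                   # k nearest questions
    nn_ans = [mean_glove(train_as[i], glove) for i in nn_idx]
    return [np.mean([cosine(mean_glove(opt, glove), a) for a in nn_ans])
            for opt in options]                                      # higher = better option
```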
1611.08669 | 51 | Since VQA is posed as classification, we 'chop' the final VQA-answer softmax from these models, feed these activations to our discriminative decoder (Section 5), and train end-to-end on VisDial. Note that our LF-QI-D model is similar to that in [36]. Altogether, these form fairly sophisticated baselines. Results. Tab. 1 shows results for our models and baselines on VisDial v0.9 (evaluated on 40k from COCO-val). A few key takeaways: 1) As expected, all learning-based models significantly outperform non-learning baselines. 2) All discriminative models significantly outperform generative models, which as we discussed is expected since discriminative models can tune to the biases in the answer options. 3) Our best generative and discriminative models are MN-QIH-G with 0.526 MRR, and MN-QIH-D with 0.597 MRR. 4) We observe that naively incorporating history doesn't help much (LF-Q vs. LF-QH and LF-QI vs. LF-QIH) or can even hurt a little
1611.08669 | 53-54 |

| Model | MRR | R@1 | R@5 | R@10 | Mean |
|---|---|---|---|---|---|
| Answer prior | 0.3735 | 23.55 | 48.52 | 53.23 | 26.50 |
| NN-Q | 0.4570 | 35.93 | 54.07 | 60.26 | 18.93 |
| NN-QI | 0.4274 | 33.13 | 50.83 | 58.69 | 19.62 |
| LF-Q-G | 0.5048 | 39.78 | 60.58 | 66.33 | 17.89 |
| LF-QH-G | 0.5055 | 39.73 | 60.86 | 66.68 | 17.78 |
| LF-QI-G | 0.5204 | 42.04 | 61.65 | 67.66 | 16.84 |
| LF-QIH-G | 0.5199 | 41.83 | 61.78 | 67.59 | 17.07 |
| HRE-QH-G | | | | | |
| HRE-QIH-G | 0.5237 | 42.29 | 62.18 | 67.92 | 17.07 |
| HREA-QIH-G | 0.5242 | 42.28 | 62.33 | 68.17 | 16.79 |
| MN-QH-G | 0.5115 | 40.42 | 61.57 | 67.44 | 17.74 |
| MN-QIH-G | 0.5259 | 42.29 | 62.85 | 68.88 | 17.06 |
| LF-Q-D | 0.5508 | 41.24 | 70.45 | 79.83 | 7.08 |
| LF-QH-D | 0.5578 | 41.75 | 71.45 | 80.94 | 6.74 |
| LF-QI-D | 0.5759 | 43.33 | 74.27 | 83.68 | 5.87 |
| LF-QIH-D | 0.5807 | 43.82 | 74.68 | 84.07 | 5.78 |
| HRE-QIH-D | 0.5846 | 44.67 | 74.50 | 84.22 | 5.72 |
| HREA-QIH-D | 0.5868 | 44.82 | 74.81 | 84.36 | 5.66 |
| MN-QH-D | 0.5849 | 44.03 | 75.26 | 84.49 | 5.68 |
| MN-QIH-D | 0.5965 | 45.55 | 76.22 | 85.37 | 5.46 |
| SAN1-QI-D | 0.5764 | 43.44 | 74.26 | 83.72 | 5.88 |
| HieCoAtt-QI-D | 0.5788 | 43.51 | 74.49 | 83.96 | 5.84 |
1611.08669 | 55 | Table 1: Performance of methods on VisDial v0.9, measured by mean reciprocal rank (MRR), recall@k and mean rank. Higher is better for MRR and recall@k, while lower is better for mean rank. Performance on VisDial v0.5 is included in the supplement.
…-G). However, models that better encode history (MN/HRE) perform better than the corresponding LF models with/without history (e.g. LF-Q-D vs. MN-QH-D). 5) Models looking at I ({LF, MN, HRE}-QIH) outperform the corresponding blind models (without I). Human Studies: We conduct studies on AMT to quantitatively evaluate human performance on this task for all combinations of {with image, without image} × {with history, without history}. We find that without the image, humans perform better when they have access to dialog history. As expected, this gap narrows when they have access to the image. Complete details can be found in the supplement.
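The retrieval metrics reported in Tab. 1 can be computed from the rank each model assigns to the human response among the candidate answers; a minimal sketch (function and variable names are assumptions):

```python
# Hypothetical sketch: MRR, recall@k and mean rank from the 1-based rank of the
# ground-truth (human) answer for every test question.
import numpy as np

def retrieval_metrics(gt_ranks, ks=(1, 5, 10)):
    ranks = np.asarray(gt_ranks, dtype=np.float64)
    metrics = {"mrr": float(np.mean(1.0 / ranks)),
               "mean_rank": float(np.mean(ranks))}
    for k in ks:
        metrics[f"r@{k}"] = float(np.mean(ranks <= k))
    return metrics

# Example: human answer ranked 1st, 4th and 12th on three questions
print(retrieval_metrics([1, 4, 12]))
# {'mrr': 0.444..., 'mean_rank': 5.666..., 'r@1': 0.333..., 'r@5': 0.666..., 'r@10': 0.666...}
```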
1611.08669 | 56 | # 7. Conclusions
To summarize, we introduce a new AI task, Visual Dialog, where an AI agent must hold a dialog with a human about visual content. We develop a novel two-person chat data-collection protocol to curate a large-scale dataset (VisDial), propose a retrieval-based evaluation protocol, and develop a family of encoder-decoder models for Visual Dialog. We quantify human performance on this task via human studies. Our results indicate that there is significant scope for improvement, and we believe this task can serve as a testbed for measuring progress towards visual intelligence.
1611.08669 | 57 | # 8. Acknowledgements
We thank Harsh Agrawal and Jiasen Lu for help with AMT data collection; Xiao Lin and Latha Pemula for model discussions; and Marco Baroni, Antoine Bordes, Mike Lewis, and Marc'Aurelio Ranzato for helpful discussions. We are grateful to the developers of Torch [2] for building an excellent framework. This work was funded in part by NSF CAREER awards to DB and DP, ONR YIP awards to DP and DB, ONR Grant N00014-14-1-0679 to DB, a Sloan Fellowship to DP, ARO YIP awards to DB and DP, an Allen Distinguished Investigator award to DP from the Paul G. Allen Family Foundation, ICTAS Junior Faculty awards to DB and DP, Google Faculty Research Awards to DP and DB, Amazon Academic Research Awards to DP and DB, an AWS in Education Research grant to DB, and NVIDIA GPU donations to DB. SK was supported by ONR Grant N00014-12-1-0903. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
1611.08669 | 58 |
# Appendix Overview
This supplementary document is organized as follows:
• Sec. A studies how and why VisDial is more than just a collection of independent Q&As.
• Sec. B shows qualitative examples from our dataset.
• Sec. C presents detailed human studies along with comparisons to machine accuracy. The interface for human studies is demonstrated in a video⁴.
• Sec. D shows snapshots of our two-person chat data-collection interface on Amazon Mechanical Turk. The interface is also demonstrated in the video³.
• Sec. E presents further analysis of VisDial, such as question types, and question and answer lengths per question type. A video with an interactive sunburst visualization of the dataset is included³.
• Sec. F presents performance of our models on VisDial v0.5 test.
• Sec. G presents implementation-level training details, including data preprocessing and model architectures.
1611.08669 | 59 | • Putting it all together, we compile a video demonstrating our visual chatbot³ that answers a sequence of questions from a user about an image. This demo uses one of our best generative models from the main paper, MN-QIH-G, and uses sampling (without any beam search) for inference in the LSTM decoder. Note that these videos demonstrate an 'unscripted' dialog, in the sense that the particular QA sequence is not present in VisDial and the model is not provided with any list of answer options.
# A. In what ways are dialogs in VisDial more than just 10 visual Q&As?
In this section, we lay out an exhaustive list of differences between VisDial and image question-answering datasets, with the VQA dataset [6] serving as the representative.
In essence, we characterize what makes an instance in VisDial more than a collection of 10 independent question-answer pairs about an image, i.e., what makes it a dialog. In order to be self-contained and exhaustive, some parts of this section repeat content from the main document.
1611.08669 | 60 | # A.1. VisDial has longer free-form answers
Fig. 7a shows the distribution of answer lengths in VisDial, and Tab. 2 compares statistics of VisDial with existing image question answering datasets. Unlike previous datasets, answers in VisDial are longer, conversational, and more descriptive: mean length 2.9 words (VisDial) vs 1.1 (VQA), 2.0 (Visual 7W), 2.8 (Visual Madlibs). Moreover, 37.1% of answers in VisDial are longer than 2 words, while the VQA dataset has only 3.8% of answers longer than 2 words.

⁴ https://goo.gl/yjlHxY
[Figure 7: (a) distribution of question and answer lengths (x-axis: number of words in sentence); (b) percentage coverage of all answers vs. number of unique answers (× 10,000), VQA vs. Visual Dialog.]
Figure 7: Distribution of lengths for questions and answers (left); and percent coverage of unique answers over all answers from the train dataset (right), compared to VQA. For a given coverage, VisDial has more unique answers, indicating greater answer diversity.
1611.08669 | 61 | Fig. 7b shows the cumulative coverage of all answers (y-axis) by the most frequent answers (x-axis). The difference between VisDial and VQA is stark: the top-1000 answers in VQA cover ~83% of all answers, while in VisDial that figure is only ~63%. There is a significant heavy tail of answers in VisDial: most long strings are unique, and thus the coverage curve in Fig. 7b becomes a straight line with slope 1. In total, there are 337,527 unique answers in VisDial (out of the 1,232,870 answers currently in the dataset).
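The coverage curve in Fig. 7b is a simple cumulative statistic over answer frequencies; a small sketch (variable names assumed):

```python
# Hypothetical sketch: fraction of all answer occurrences covered by the x most
# frequent unique answers (the quantity plotted in Fig. 7b).
from collections import Counter
import numpy as np

def coverage_curve(answers):
    counts = np.array(sorted(Counter(answers).values(), reverse=True), dtype=np.float64)
    return np.cumsum(counts) / counts.sum()   # coverage[i] = fraction covered by top i+1 answers

# e.g. coverage_curve(all_train_answers)[999] would be roughly 0.63 for VisDial
# and roughly 0.83 for VQA, per the numbers quoted above.
```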
# A.2. VisDial has co-references in dialogs
1611.08669 | 62 | People conversing with each other tend to use pronouns to refer to already mentioned entities. Since language in VisDial is the result of a sequential conversation, it naturally contains pronouns: 'he', 'she', 'his', 'her', 'it', 'their', 'they', 'this', 'that', 'those', etc. In total, 38% of questions, 19% of answers, and nearly all (98%) dialogs contain at least one pronoun, thus confirming that a machine will need to overcome coreference ambiguities to be successful on this task. As a comparison, only 9% of questions and 0.25% of answers in VQA contain at least one pronoun. In Fig. 8, we see that pronoun usage is lower in the first round compared to other rounds, which is expected since there are fewer entities to refer to in the earlier rounds. Pronoun usage is also generally lower in answers than questions, which is understandable since answers are generally shorter than questions and thus less likely to contain pronouns. In general, pronoun usage is fairly consistent across rounds (starting from round 2) for both questions and answers.
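The pronoun statistics above amount to a simple containment check per question, answer, and dialog; a small illustrative sketch (the pronoun list and whitespace tokenization are assumptions, not the exact annotation procedure):

```python
# Hypothetical sketch: fraction of questions, answers, and dialogs containing
# at least one pronoun.
PRONOUNS = {"he", "she", "his", "her", "it", "its", "they", "their", "them",
            "this", "that", "those", "these", "him"}

def has_pronoun(text):
    return any(tok in PRONOUNS for tok in text.lower().split())

def pronoun_stats(dialogs):
    # dialogs: list of dialogs, each a list of (question, answer) string pairs
    q_hits = [has_pronoun(q) for d in dialogs for q, _ in d]
    a_hits = [has_pronoun(a) for d in dialogs for _, a in d]
    d_hits = [any(has_pronoun(q) or has_pronoun(a) for q, a in d) for d in dialogs]
    frac = lambda xs: sum(xs) / max(len(xs), 1)
    return {"questions": frac(q_hits), "answers": frac(a_hits), "dialogs": frac(d_hits)}
```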
1611.08669 | 63 |

| Dataset | #QA | #Images | Q Length | A Length | A Length > 2 | Top-1000 A | Human Accuracy |
|---|---|---|---|---|---|---|---|
| DAQUAR [38] | 12,468 | 1,447 | 11.5 ± 2.4 | 1.2 ± 0.5 | 3.4% | 96.4% | - |
| Visual Madlibs [68] | 56,468 | 9,688 | 4.9 ± 2.4 | 2.8 ± 2.0 | 47.4% | 57.9% | - |
| COCO-QA [49] | 117,684 | 69,172 | 8.7 ± 2.7 | 1.0 ± 0.0 | 0.0% | 100% | - |
| Baidu [17] | 316,193 | 316,193 | - | - | - | - | - |
| VQA [6] | 614,163 | 204,721 | 6.2 ± 2.0 | 1.1 ± 0.4 | 3.8% | 82.7% | ✓ |
| Visual7W [70] | 327,939 | 47,300 | 6.9 ± 2.4 | 2.0 ± 1.4 | 27.6% | 63.5% | ✓ |
| VisDial (Ours) | 1,232,870 | 123,287 | 5.1 ± 0.0 | 2.9 ± 0.0 | 37.1% | 63.2% | ✓ |
Table 2: Comparison of existing image question answering datasets with VisDial
1611.08669 | 64 | • and asking follow-up questions about the new visual entities discovered from these explorations:
'There's a blue fence in background, like an enclosure', 'Is the enclosure inside or outside?'. Such a line of questioning does not exist in the VQA dataset, where the subjects were shown the questions already asked about an image, and explicitly instructed to ask about different entities [6].
Figure 8: Percentage of QAs with pronouns for different rounds. In round 1, pronoun usage in questions is low (in fact, almost equal to usage in answers). From rounds 2 through 10, pronoun usage is higher in questions and fairly consistent across rounds.
Qualitative Example of Topics. There is a stylistic difference in the questions asked in VisDial (compared to the questions in VQA) due to the nature of the task assigned to the subjects asking the questions. In VQA, subjects saw the image and were asked to "stump a smart robot". Thus, most queries involve specific details, often about the background (Q: "What program is being utilized in the background on the computer?"). In VisDial, questioners did not see the original image and were asking questions to build a mental model of the scene. Thus, the questions tend to be open-ended, and often follow a pattern:
• Generally starting with the entities in the caption:
"An elephant walking away from a pool in an exhibit", "Is there only 1 elephant?",
Counting the Number of Topics. In order to quantify these qualitative differences, we performed a human study where we manually annotated question "topics" for 40 images (a total of 400 questions), chosen randomly from the val set. The topic annotations were based on human judgement with a consensus of 4 annotators, with topics such as: asking about a particular object ("What is the man doing?"), the scene ("Is it outdoors or indoors?"), the weather ("Is the weather sunny?"), the image ("Is it a color image?"), and exploration ("Is there anything else?"). We performed similar topic annotation for questions from VQA for the same set of 40 images, and compared topic continuity in questions. Across 10 rounds, VisDial questions have 4.55 ± 0.17 topics on average, confirming that these are not 10 independent questions. Recall that VisDial has 10 questions per image as opposed to 3 for VQA. Therefore, for a fair comparison, we compute the average number of topics in VisDial over all "sliding windows" of 3 successive questions. For 500 bootstrap
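To make the sliding-window computation concrete, below is a minimal sketch of how distinct topics can be counted per dialog and per window of 3 successive questions. The topic labels and the example annotation are illustrative assumptions, not part of the released dataset or code.

```python
from typing import List

def topics_per_window(topics: List[str], window: int = 3) -> float:
    """Average number of distinct topics over all sliding windows of
    `window` consecutive questions in a single dialog."""
    counts = [len(set(topics[i:i + window]))
              for i in range(len(topics) - window + 1)]
    return sum(counts) / len(counts)

# Hypothetical manual topic annotations for one 10-round dialog.
dialog_topics = ["object", "object", "scene", "weather", "image",
                 "exploration", "object", "object", "scene", "exploration"]

print(len(set(dialog_topics)))           # distinct topics across all 10 rounds
print(topics_per_window(dialog_topics))  # average topics per 3-question window
```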
• digging deeper into their parts, attributes, or properties:
"Is it full grown?", "Is it facing the camera?",
# ⢠asking about the scene category or the picture setting:
"Is this indoors or outdoors?", "Is this a zoo?",
# ⢠the weather:
"Is it snowing?", "Is it sunny?",
# ⢠simply exploring the scene:
Transition Probabilities over Topics. We can take this analysis a step further by computing topic transition proba- bilities over topics as follows. For a given sequential dialog exchange, we now count the number of topic transitions be- tween consecutive QA pairs, normalized by the total num- ber of possible transitions between rounds (9 for VisDial and 2 for VQA). We compute this âtopic transition proba- bilityâ (how likely are two successive QA pairs to be about two different topics) for VisDial and VQA in two different settings â (1) in-order and (2) with a permuted sequence
âAre there people?â, âIs there shelter for elephant?â,
11 | 1611.08669#68 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
of QAs. Note that if VisDial were simply a collection of 10 independent QAs as opposed to a dialog, we would expect the topic transition probabilities to be similar for in-order and permuted variants. However, we find that for 1000 permutations of 40 topic-annotated image-dialogs, in-order-VisDial has an average topic transition probability of 0.61, while permuted-VisDial has 0.76 ± 0.02. In contrast, VQA has a topic transition probability of 0.80 for in-order vs. 0.83 ± 0.02 for permuted QAs. There are two key observations: (1) the in-order transition probability is lower for VisDial than for VQA (i.e. topic transition is less likely in VisDial), and (2) permuting the order of questions results in a larger increase for VisDial, around 0.15, compared to a mere 0.03 in the case of VQA (i.e. in-order-VQA and permuted-VQA behave significantly more similarly than in-order-VisDial and permuted-VisDial).
Both these observations establish that there is smoothness in the temporal order of topics in VisDial, which is indicative of the narrative structure of a dialog rather than independent question-answers.
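A minimal sketch of the in-order vs. permuted topic-transition computation described above is given below; the per-round topic labels are hypothetical and the number of permutations is configurable.

```python
import random
from typing import Sequence

def transition_probability(topics: Sequence[str]) -> float:
    """Fraction of consecutive QA pairs whose topics differ
    (9 possible transitions for a 10-round dialog)."""
    changes = sum(1 for a, b in zip(topics, topics[1:]) if a != b)
    return changes / (len(topics) - 1)

def permuted_transition_probability(topics: Sequence[str],
                                    n_perm: int = 1000, seed: int = 0) -> float:
    """Average transition probability over random permutations of the rounds."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_perm):
        shuffled = list(topics)
        rng.shuffle(shuffled)
        total += transition_probability(shuffled)
    return total / n_perm

# Hypothetical topic annotations for one 10-round dialog.
dialog_topics = ["object", "object", "scene", "weather", "image",
                 "exploration", "object", "object", "scene", "exploration"]
print(transition_probability(dialog_topics))           # in-order
print(permuted_transition_probability(dialog_topics))  # permuted baseline
```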
# A.4. VisDial has the statistics of an NLP dialog dataset
In this analysis, our goal is to measure whether VisDial behaves like a dialog dataset. In particular, we compare VisDial, VQA, and the Cornell Movie-Dialogs Corpus [11]. The Cornell Movie-Dialogs corpus is a text-only dataset extracted from pairwise interactions between characters from approximately 617 movies, and is widely used as a standard dialog corpus in the natural language processing (NLP) and dialog communities.
One popular evaluation criterion used in the dialog-systems research community is the perplexity of language models trained on dialog datasets: the lower the perplexity of a model, the better it has learned the structure in the dialog dataset.
For the purpose of our analysis, we pick the popular sequence-to-sequence (Seq2Seq) language model [24] and use the perplexity of this model trained on different datasets as a measure of temporal structure in a dataset.
As is standard in the dialog literature, we train the Seq2Seq model to predict the probability of utterance U_t given the previous utterance U_{t-1}, i.e. P(U_t | U_{t-1}), on the Cornell corpus. For VisDial and VQA, we train the Seq2Seq model to predict the probability of a question Q_t given the previous question-answer pair, i.e. P(Q_t | (Q_{t-1}, A_{t-1})). For each dataset, we used its train and val splits for training and hyperparameter tuning respectively, and report results on test. At test time, we only use conversations of length 10 from the Cornell corpus for a fair comparison to VisDial (which has 10 rounds of QA). For all three datasets, we created 100 permuted versions of
| Dataset | Perplexity per token (orig) | Perplexity per token (shuffled) | Classification (%) |
|---|---|---|---|
| VQA | 7.83 | 8.16 ± 0.02 | 52.8 ± 0.9 |
| Cornell (10) | 82.31 | 85.31 ± 1.51 | 61.0 ± 0.6 |
| VisDial (Ours) | 6.61 | 7.28 ± 0.01 | 73.3 ± 0.4 |
Table 3: Comparison of sequences in VisDial, VQA, and the Cornell Movie-Dialogs corpus in their original ordering vs. permuted "shuffled" ordering. Lower is better for perplexity while higher is better for classification accuracy. Left: the absolute increase in perplexity from natural to permuted ordering is highest in the Cornell corpus (3.0), followed by VisDial with 0.7 and VQA at 0.35, which is indicative of the degree of linguistic structure in the sequences in these datasets. Right: the accuracy of a simple threshold-based classifier trained to differentiate between the original sequences and their permuted or shuffled versions. A higher classification rate indicates the existence of a strong temporal continuity in the conversation, thus making the ordering important. We can see that the classifier on VisDial achieves the highest accuracy (73.3%), followed by Cornell (61.0%). Note that this is a binary classification task with the prior probability of each class by design being equal, thus chance performance is 50%. The classifier on VQA performs close to chance.
test, where either QA pairs or utterances are randomly shuffled to disturb their natural order. This allows us to compare datasets in their natural ordering w.r.t. permuted orderings. Our hypothesis is that since dialog datasets have linguistic structure in the sequence of QAs or utterances they contain, this structure will be significantly affected by permuting the sequence. In contrast, a collection of independent question-answers (as in VQA) will not be significantly affected by a permutation. Tab. 3 compares the original, unshuffled test with the shuffled test sets on two metrics:

Perplexity: We compute the standard metric of perplexity per token, i.e. the exponential of the negative log-probability of a sequence normalized by its length. Tab. 3 shows these perplexities for the original unshuffled test and permuted test sequences. We notice a few trends.
First, we note that the absolute perplexity values are higher for the Cornell corpus than for the QA datasets. We hypothesize that this is due to the broad, unrestrictive dialog generation task in the Cornell corpus, which is more difficult than question prediction about images, a comparatively restricted task. Second, in all three datasets, the shuffled test has statistically significantly higher perplexity than the original test, which indicates that shuffling does indeed break the linguistic structure in the sequences.
Third, the absolute increase in perplexity from natural to permuted ordering is highest in the Cornell corpus (3.0), followed by our VisDial with 0.7 and VQA at 0.35, which is indicative of the degree of linguistic structure in the sequences in these datasets. Finally, the relative increases in perplexity are 3.64% in Cornell, 10.13% in VisDial, and 4.21% in VQA; VisDial suffers the highest relative increase in perplexity due to shuffling, indicating the existence of temporal continuity that gets disrupted.
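A minimal sketch of the perplexity-per-token computation is given below. The `log_prob` callable stands in for a trained Seq2Seq model's scoring function (log P(target | context)); it is an assumption for illustration, not an interface from the released code.

```python
import math
from typing import Callable, List, Sequence, Tuple

# (context, target) pairs: for VisDial/VQA the context is the previous QA pair
# and the target is the current question, mirroring P(Q_t | (Q_{t-1}, A_{t-1})).
Pair = Tuple[List[str], List[str]]

def make_question_pairs(questions: List[List[str]],
                        answers: List[List[str]]) -> List[Pair]:
    """Build (previous Q + A, current Q) prediction pairs from one dialog."""
    return [(questions[t - 1] + answers[t - 1], questions[t])
            for t in range(1, len(questions))]

def perplexity_per_token(pairs: Sequence[Pair],
                         log_prob: Callable[[List[str], List[str]], float]) -> float:
    """exp(-(sum of target log-probs) / (total number of target tokens))."""
    total_logprob = sum(log_prob(ctx, tgt) for ctx, tgt in pairs)
    total_tokens = sum(len(tgt) for _, tgt in pairs)
    return math.exp(-total_logprob / total_tokens)
```

Comparing this quantity on the original test sequences and on their shuffled counterparts yields the gaps reported in Tab. 3.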
Classification: As our second metric to compare datasets in their natural vs. permuted order, we test whether we can reliably classify a given sequence as natural or permuted.
Our classifier is a simple threshold on the perplexity of a sequence. Specifically, given a pair of sequences, we compute the perplexity of both from our Seq2Seq model, and predict that the one with higher perplexity is the sequence in permuted ordering, and the sequence with lower perplexity is the one in natural ordering. The accuracy of this simple classifier indicates how easy or difficult it is to tell the difference between natural and permuted sequences. A higher classification rate indicates the existence of temporal continuity in the conversation, thus making the ordering important.
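In code, this paired comparison reduces to a few lines; the sketch below assumes perplexities have already been computed for each (natural, permuted) pair of test sequences, and the example numbers are made up.

```python
from typing import Sequence, Tuple

def pairwise_accuracy(ppl_pairs: Sequence[Tuple[float, float]]) -> float:
    """Accuracy of predicting the permuted sequence as the one with the
    higher perplexity, over (natural, permuted) perplexity pairs.
    Chance performance is 50% by construction."""
    correct = sum(1 for nat, perm in ppl_pairs if perm > nat)
    return correct / len(ppl_pairs)

# Hypothetical perplexities for three test dialogs: (natural, permuted).
print(pairwise_accuracy([(6.5, 7.3), (6.9, 6.7), (6.2, 7.0)]))  # 0.666...
```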
Tab. 3 shows the classification accuracies achieved on all datasets. We can see that the classifier on VisDial achieves the highest accuracy (73.3%), followed by Cornell (61.0%). Note that this is a binary classification task with the prior probability of each class by design being equal, thus chance performance is 50%. The classifiers on VisDial and Cornell both significantly outperform chance. On the other hand, the classifier on VQA is near chance (52.8%), indicating a lack of general temporal continuity.
To summarize this analysis, our experiments show that VisDial is significantly more dialog-like than VQA, and behaves more like a standard dialog dataset, the Cornell Movie-Dialogs corpus.
# A.5. VisDial eliminates visual priming bias in VQA
One key difference between VisDial and previous image question answering datasets (VQA [6], Visual 7W [70], Baidu mQA [17]) is the lack of a "visual priming bias" in VisDial. Specifically, in all previous datasets, subjects saw an image while asking questions about it. As described in [69], this leads to a particular bias in the questions: people only ask "Is there a clocktower in the picture?" on pictures actually containing clock towers. This allows language-only models to perform remarkably well on VQA and results in an inflated sense of progress [69]. As one particularly perverse example, for questions in the VQA dataset starting with "Do you see a . . . ", blindly answering "yes" without reading the rest of the question or looking at the associated image results in an average VQA accuracy of 87%! In VisDial, questioners do not see the image. As a result, this bias is reduced.

This lack of visual priming bias (i.e. not being able to see the image while asking questions) and holding a dialog with another person while asking questions results in the following two unique features in VisDial.
Figure 9: Distribution of answers in VisDial by their first four words. The ordering of the words starts towards the center and radiates outwards. The arc length is proportional to the number of questions containing the word. White areas are words with contributions too small to show.
Uncertainty in Answers in VisDial. Since the answers in VisDial are longer strings, we can visualize their distribution based on the starting few words (Fig. 9). An interesting category of answers emerges: "I think so", "I can't tell", or "I can't see", expressing doubt, uncertainty, or lack of information. This is a consequence of the questioner not being able to see the image; they are asking contextually relevant questions, but not all questions may be answerable with certainty from that image. We believe this is rich data for building more human-like AI that refuses to answer questions it doesn't have enough information to answer. See [48] for a related, but complementary, effort on question relevance in VQA.
Binary Questions vs. Binary Answers in VisDial. In VQA, binary questions are simply those with "yes", "no", "maybe" as answers [6]. In VisDial, we must distinguish between binary questions and binary answers. Binary questions are those starting with "Do", "Did", "Have", "Has", "Is", "Are", "Was", "Were", "Can", or "Could". Answers to such questions can (1) contain only "yes" or "no", (2) begin with "yes" or "no" and contain additional information or clarification (Q: "Are there any animals in the image?", A: "yes, 2 cats and a dog"), (3) involve ambiguity ("It's hard to see",
"Maybe"), or (4) answer the question without explicitly saying "yes" or "no" (Q: "Is there any type of design or pattern on the cloth?", A: "There are circles and lines on the cloth"). We call answers that contain "yes" or "no" binary answers: 149,367 and 76,346 answers fall in subsets (1) and (2) from above, respectively. Binary answers in VQA are biased towards "yes" [6, 69]: 61.40% of yes/no answers are "yes". In VisDial, the trend is reversed. Only 46.96% of all yes/no responses are "yes". This is understandable since workers did not see the image, and were more likely to end up with negative responses.
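The tagging described above can be sketched in a few lines. The rules below follow the definitions in this section (question start words, presence of a yes/no token); the helper names and the rough yes-rate proxy are illustrative assumptions, not the exact scripts used for the reported numbers.

```python
import string

BINARY_STARTS = {"do", "did", "have", "has", "is", "are", "was", "were", "can", "could"}

def _tokens(text: str):
    """Lowercase, whitespace-split, strip surrounding punctuation."""
    return [t.strip(string.punctuation) for t in text.lower().split()]

def is_binary_question(question: str) -> bool:
    """A question is 'binary' if it starts with one of the words above."""
    toks = _tokens(question)
    return bool(toks) and toks[0] in BINARY_STARTS

def is_binary_answer(answer: str) -> bool:
    """An answer is 'binary' if it contains 'yes' or 'no' as a token."""
    toks = _tokens(answer)
    return "yes" in toks or "no" in toks

def yes_rate(answers) -> float:
    """Fraction of binary answers containing 'yes' (a rough proxy)."""
    binary = [a for a in answers if is_binary_answer(a)]
    yes = sum(1 for a in binary if "yes" in _tokens(a))
    return yes / len(binary) if binary else 0.0

print(is_binary_question("Is there only 1 elephant?"))     # True
print(yes_rate(["yes, 2 cats and a dog", "no", "maybe"]))  # 0.5
```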
# B. Qualitative Examples from VisDial
Fig. 10 shows random samples of dialogs from the VisDial dataset.
# C. Human-Machine Comparison
| Model | MRR | R@1 | R@5 | Mean |
|---|---|---|---|---|
| Human-Q | 0.441 | 25.10 | 67.37 | 4.19 |
| Human-QH | 0.485 | 30.31 | 70.53 | 3.91 |
| Human-QI | 0.619 | 46.12 | 82.54 | 2.92 |
| Human-QIH | 0.635 | 48.03 | 83.76 | 2.83 |
| HREA-QIH-G | 0.477 | 31.64 | 61.61 | 4.42 |
| MN-QIH-G | 0.481 | 32.16 | 61.94 | 4.47 |
| MN-QIH-D | 0.553 | 36.86 | 69.39 | 3.48 |
Table 4: Human-machine performance comparison on VisDial v0.5, measured by mean reciprocal rank (MRR), recall@k for k = {1, 5} and mean rank. Note that higher is better for MRR and recall@k, while lower is better for mean rank.
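These metrics are all simple functions of the rank assigned to the human (ground-truth) response in each sorted candidate list. A minimal sketch, with hypothetical ranks:

```python
from typing import Dict, Sequence

def retrieval_metrics(gt_ranks: Sequence[int], ks=(1, 5)) -> Dict[str, float]:
    """MRR, recall@k and mean rank from the rank of the ground-truth
    answer in each sorted candidate list (rank 1 = best)."""
    n = len(gt_ranks)
    metrics = {"mrr": sum(1.0 / r for r in gt_ranks) / n,
               "mean_rank": sum(gt_ranks) / n}
    for k in ks:
        metrics[f"r@{k}"] = sum(1 for r in gt_ranks if r <= k) / n
    return metrics

# Hypothetical ground-truth ranks for five evaluated questions.
print(retrieval_metrics([1, 3, 2, 10, 1]))
```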
We conducted studies on AMT to quantitatively evaluate human performance on this task for all combinations of {with image, without image} × {with history, without history} on 100 random images at each of the 10 rounds. Specifically, in each setting, we show human subjects a jumbled list of 10 candidate answers for a question (the top-9 predicted responses from our "LF-QIH-D" model and the 1 ground-truth answer) and ask them to rank the responses. Each task was done by 3 human subjects.
Results of this study are shown in the top half of Tab. 4. We find that without access to the image, humans perform better when they have access to the dialog history: compare the Human-QH row to Human-Q (R@1 of 30.31 vs. 25.10). As perhaps expected, this gap narrows when humans have access to the image: compare Human-QIH to Human-QI (R@1 of 48.03 vs. 46.12). Note that these numbers are not directly comparable to machine performance reported in the main paper because models are tasked with ranking 100 responses, while humans are asked to rank 10 candidates. This is because the task of ranking 100 candidate responses would be too cumbersome for humans.
The bottom half of Tab. 4 shows the results of this comparison. We can see that, as expected, humans with full information (i.e. Human-QIH) perform the best, with a large gap between human and machine performance (compare R@5: Human-QIH 83.76% vs. MN-QIH-D 69.39%). This gap is even larger when compared to generative models, which unlike the discriminative models are not actively trying to exploit the biases in the answer candidates (compare R@5: Human-QIH 83.76% vs. HREA-QIH-G 61.61%). Furthermore, we see that humans outperform the best machine even when not looking at the image, simply on the basis of the context provided by the history (compare R@5: Human-QH 70.53% vs. MN-QIH-D 69.39%). Perhaps as expected, with access to the image but not the history, humans are significantly better than the best machines (R@5: Human-QI 82.54% vs. MN-QIH-D 69.39%). With access to history humans perform even better.
From in-house human studies and worker feedback on AMT, we find that dialog history plays the following roles for humans: (1) it provides a context for the question and paints a picture of the scene, which helps eliminate certain answer choices (especially when the image is not available), (2) it gives cues about the answerer's response style, which helps identify the right answer among similar answer choices, and (3) it disambiguates amongst likely interpretations of the image (i.e., when objects are small or occluded), again helping identify the right answer among multiple plausible options.
# D. Interface
In this section, we show our interface to connect two Amazon Mechanical Turk workers live, which we used to collect our data.

Instructions. To ensure quality of data, we provide detailed instructions on our interface as shown in Fig. 11a. Since the workers do not know their roles before starting the study, we provide instructions for both questioner and answerer roles.

After pairing: Immediately after pairing two workers, we assign them the roles of questioner and answerer and display role-specific instructions as shown in Fig. 11b. Observe that
5 We use both HREA-QIH-G and MN-QIH-G since they have similar accuracies.
Caption: The skiers stood on top of the mountain.
Person A (1): how many skiers are there   Person B (1): hundreds
Person A (2): are they getting ready to go downhill   Person B (2): i think so, my view is at end of line
Person A (3): is it snowing   Person B (3): no, there is lot of snow though
Person A (4): can you see anybody going   Person B (4): no, my view shows people going up small hill on skis, i can't see what's going on from there
Person A (5): do you see lift   Person B (5): no
Person A (6): can you tell if they are male or female   Person B (6): skiers closest to me are male
Person A (7): are there any children   Person B (7): i don't see any but there could be, it's huge crowd
Person A (8): does anybody have hat on   Person B (8): they all have winter hat of some sort on
Person A (9): is sun shining   Person B (9): yes, all blue sky
Person A (10): do you see any clouds   Person B (10): no clouds
Caption: an image of a man in a boat with a dog
Person A (1): how old does man look   Person B (1): he has white hair, so probably 60's
Person A (2): can you tell what breed of dog   Person B (2): it is dalmatian
Person A (3): is this photo in color   Person B (3): yes it is
Person A (4): does this photo look like it is older photo or from more recent times   Person B (4): it looks fairly recent
Person A (5): is boat in water   Person B (5): yes it is
Person A (6): do you see any fishing equipment   Person B (6): i do not
Person A (7): is boat focal point of photo   Person B (7): yes it is
Person A (8): does water look smooth   Person B (8): it is quite smooth
Person A (9): is it sunny   Person B (9): yes it is
Person A (10): does this look like professional photo   Person B (10): this is not professional photo
(a)
(b)
Caption: A man and woman on bicycles are looking at a map.
Person A (1): where are they located   Person B (1): in city
Person A (2): are they on road   Person B (2): sidewalk next to 1
Person A (3): any vehicles   Person B (3): 1 in background
Person A (4): any other people   Person B (4): no
Person A (5): what color bikes   Person B (5): 1 silver and 1 yellow
Person A (6): do they look old or new   Person B (6): new bikes
Person A (7): any buildings   Person B (7): yes
Person A (8): what color   Person B (8): brick
Person A (9): are they tall or short   Person B (9): i can't see enough of them to tell
Person A (10): do they look like couple   Person B (10): they are
Caption: a horse is pulling a white cart with two people, a car and buildings
Person A (1): is it day time   Person B (1): i think it's day
Person A (2): can you see other people   Person B (2): no
Person A (3): can you see trees   Person B (3): yes
Person A (4): can you see lot of buildings   Person B (4): only 2 maybe 3
Person A (5): does it look warm   Person B (5): no, there are no leaves on trees
Person A (6): do they have jackets on   Person B (6): no, long sleeve shirts though
Person A (7): are they driving cart   Person B (7): yes
Person A (8): what color is car   Person B (8): i can't tell, photo is in black and white
Person A (9): is building brick   Person B (9): no, looks like wood
Person A (10): do trees look old   Person B (10): no, they are still small
(c)
(d)
Caption: A statue depicting a bear breaking into a car.
Person A (1): how big is statue   Person B (1): about size of real full grown bear
Person A (2): so is car full size then as well   Person B (2): yes, replica of car
Person A (3): is statue all 1 color   Person B (3): no, brown and black
Person A (4): what color is car   Person B (4): dark red
Person A (5): where is this, do you think   Person B (5): in wooded area someplace
Person A (6): do you see any people in image   Person B (6): yes, 1 man
Person A (7): how old is man   Person B (7): 35-40
Person A (8): what is man doing   Person B (8): sitting in car behind replica
Person A (9): do you see any signs   Person B (9): yes, on car door warning sign
Person A (10): what else can you tell me about this image   Person B (10): there are many trees in background
Caption: A dog with goggles is in a mo…
Person A (1): can you tell what kind of dog this is   Person B (1): he looks like beautiful pit bull mix
Person A (2): can you tell if motorcycle is moving or still   Person B (2): it's parked
Person A (3): is dog's tongue lolling out   Person B (3): not really
Person A (4): …   Person B (4): y…
Person A (5): what color is dog   Person B (5): light tan with white patch that runs up to bottom of his chin, and he has whit…
Person A (6): can you see motorcycle   Person B (6): from side, yes
Person A (7): what color …   Person B (7): black with …, sun is glaring so it's h…
Person A (8): is there anybody sitting on motorcycle   Person B (8): no
Person A (9): …   Person B (9): …
Person A (10): do …   Person B (10): yes
(e)
(f)
Figure 10: Examples from VisDial
Note that the questioner does not see the image, while the answerer does have access to it. Both the questioner and the answerer see the caption for the image.
# E. Additional Analysis of VisDial
In this section, we present additional analyses characterizing our VisDial dataset.
Live Question/Answering about an Image: Instructions. In this task, you will be talking to a fellow Turker. You will either be asking questions or answering questions about an image. You will be given more specific instructions once you are connected to a fellow Turker. Stay tuned. A message and a beep will notify you when you have been connected with a fellow Turker. Please keep the following in mind while chatting with your fellow Turker:
1. Please directly start the conversation. Do not make small talk.
2. Please do not write potentially offensive messages.
3. Please do not have conversations about something other than the image. Just either ask questions, or answer questions about an image (depending on your role).
4. Please do not use chat/IM language (e.g., "r8" instead of "right"). Please use professional and grammatically correct English.
5. Please have a natural conversation. Unnatural sounding conversations, including awkward messages and long silences, will be rejected.
6. Please note that you are expected to complete and submit the HIT in one go (once you have been connected with a partner). You cannot resume HITs.
7. If you see someone who isn't performing HITs as per instructions or is idle for long, do let us know. We'll make sure we keep a close watch on their work and reject it if they have a track record of not doing HITs properly or wasting too much time. Make sure you include a snippet of the conversation and your role (questioner or answerer) in your message to us, so we can look up who the other worker was.
8. Do not wait for your partner to disconnect to be able to type in responses quickly, or your work will be rejected.
9. Please complete one HIT before proceeding to the other. Please don't open multiple tabs; you cannot chat with yourself.
(a) Detailed instructions for Amazon Mechanical Turkers on our interface.
(Interface screenshots: both workers see the image caption, e.g. "A man, wearing goggles and a backpack on skis pulls a girl on skis behind him." The questioner's screen reads "You have to ASK questions about the image"; the answerer's screen shows the image and reads "You have to ANSWER questions about the image"; each has a "Type Message Here" box.)
(b) Left: What questioner sees; Right: What answerer sees.
# E.1. Question and Answer Lengths
# F. Performance on VisDial v0.5
Fig. 12 shows question lengths by type and round. The average length of a question by type is consistent across rounds. Questions starting with "any" ("any people?", "any other fruits?", etc.) tend to be the shortest. Fig. 13 shows answer lengths by the type of question they were said in response to and by round. In contrast to questions, there is significant variance in answer lengths. Answers to binary questions ("Any people?", "Can you see the dog?", etc.) tend to be short, while answers to "how" and "what" questions tend to be more explanatory and long. Across question types, answers tend to be the longest in the middle of conversations.
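The statistics behind Fig. 12 and Fig. 13 can be reproduced with a short script along the following lines. This is only a sketch: it assumes dialogs are available as a list of question/answer strings per round, and the field names below are assumptions rather than the released JSON schema.

```python
# Sketch: average question/answer length by question type and round
# (field names are assumptions about how the dialogs are stored).
import json
from collections import defaultdict

def token_len(s):
    return len(s.split())

def mean(xs):
    return sum(xs) / len(xs)

def length_stats(dialogs):
    """dialogs: list of {'dialog': [{'question': str, 'answer': str}, ...]} dicts."""
    q_len = defaultdict(list)   # (question_type, round) -> question lengths
    a_len = defaultdict(list)   # (question_type, round) -> answer lengths
    for d in dialogs:
        for rnd, turn in enumerate(d["dialog"], start=1):
            qtype = turn["question"].split()[0].lower()   # 'is', 'what', 'how', 'any', ...
            q_len[(qtype, rnd)].append(token_len(turn["question"]))
            a_len[(qtype, rnd)].append(token_len(turn["answer"]))
    return ({k: mean(v) for k, v in q_len.items()},
            {k: mean(v) for k, v in a_len.items()})

if __name__ == "__main__":
    with open("visdial_dialogs.json") as f:        # hypothetical filename
        dialogs = json.load(f)["dialogs"]
    q_avg, a_avg = length_stats(dialogs)
```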
# E.2. Question Types
Tab. 5 shows the results for our proposed models and baselines on VisDial v0.5. A few key takeaways: First, as expected, all learning-based models significantly outperform non-learning baselines. Second, all discriminative models significantly outperform generative models, which, as discussed, is expected since discriminative models can tune to the biases in the answer options. This improvement comes with the significant limitation of not being able to actually generate responses, and we recommend the two decoders be viewed as separate use cases. Third, our best generative and discriminative models are MN-QIH-G with 0.44 MRR and MN-QIH-D with 0.53 MRR, which outperform a suite of models and sophisticated baselines. Fourth, we observe that models with H perform better than Q-only models, highlighting the importance of history in VisDial. Fifth, models looking at I outperform the blind models (Q, QH) by at least 2% on recall@1 with both decoders. Finally, models that use both H and I have the best performance.
Figure 12: Question lengths by type and round. Average length of question by type is fairly consistent across rounds. Questions starting with "any" ("any people?", "any other fruits?", etc.) tend to be the shortest.
Figure 13: Answer lengths by question type and round. Across question types, average response length tends to be longest in the middle of the conversation.
Dialog-level evaluation. Using R@5 to define round-level "success", our best discriminative model MN-QIH-D gets 7.01 rounds out of 10 correct, while the generative MN-QIH-G gets 5.37. Further, the mean first-failure-round (under R@5) is 3.23 for MN-QIH-D and 2.39 for MN-QIH-G. Fig. 16a and Fig. 16b show plots for all values of k in R@k.
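For reference, the retrieval metrics used throughout (MRR, R@k, mean rank) and the dialog-level statistics reported here (mean number of correct rounds and mean first-failure round under R@k) can be computed as in the following sketch. The convention used for dialogs that never fail is our assumption, not something stated in the text.

```python
# Sketch: retrieval metrics given the rank of the ground-truth answer for
# every (dialog, round) pair. ranks[i][j] is the 1-based rank of the human
# response at round j of dialog i among the candidate answers.

def mrr(ranks_flat):
    return sum(1.0 / r for r in ranks_flat) / len(ranks_flat)

def recall_at_k(ranks_flat, k):
    return sum(r <= k for r in ranks_flat) / len(ranks_flat)

def mean_rank(ranks_flat):
    return sum(ranks_flat) / len(ranks_flat)

def dialog_level(ranks, k=5):
    """Mean #correct rounds and mean first-failure round under R@k."""
    correct, first_fail = [], []
    for dialog in ranks:                       # dialog: list of per-round ranks
        hits = [r <= k for r in dialog]
        correct.append(sum(hits))
        # if a dialog never fails, we (arbitrarily) count round len(dialog)+1
        fail = next((i + 1 for i, h in enumerate(hits) if not h), len(dialog) + 1)
        first_fail.append(fail)
    return sum(correct) / len(correct), sum(first_fail) / len(first_fail)
```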
Figure 14: Percentage coverage of question types per round. As conversations progress, "Is", "What" and "How" questions reduce while "Can", "Do", "Does", "Any" questions occur more often. Questions starting with "Is" are the most popular in the dataset.
# G. Experimental Details
In this section, we describe details about our models, data preprocessing, training procedure and hyperparameter selection.
# G.1. Models
Late Fusion (LF) Encoder. We encode the image with a VGG-16 CNN, the question and the concatenated history with separate LSTMs, and concatenate the three representations. This is followed by a fully-connected layer and tanh non-linearity to a 512-d vector, which is used to decode the response. Fig. 17a shows the model architecture for our LF encoder.
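As an illustrative sketch only (the paper's models are implemented in Torch/Lua, and layer sizes other than the 512-d output are assumptions), the late fusion encoder might look as follows in PyTorch-style code.

```python
# Illustrative PyTorch-style sketch of the Late Fusion encoder; dimensions
# other than the 512-d joint embedding are assumptions.
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=512, img_feat_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.q_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.h_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(img_feat_dim + 2 * hidden, hidden)

    def forward(self, img_feat, question, history):
        # img_feat: (B, 4096) CNN features; question/history: padded token ids
        _, (q_h, _) = self.q_rnn(self.embed(question))
        _, (h_h, _) = self.h_rnn(self.embed(history))   # history = concatenated rounds
        joint = torch.cat([img_feat, q_h[-1], h_h[-1]], dim=1)
        return torch.tanh(self.fc(joint))               # 512-d encoding fed to the decoder
```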
Hierarchical Recurrent Encoder (HRE). In this encoder, the image representation from the VGG-16 CNN is early-fused with the question. Specifically, the image representation is concatenated with every question word as it is fed to an LSTM. Each QA-pair in the dialog history is independently encoded by another LSTM with shared weights. The image-question representation, computed for every round from 1 through t, is concatenated with the history representation from the previous round, giving a sequence of question-history vectors.
Figure 15: Most frequent answer responses except for "yes"/"no".
(a) Mean number of correct rounds (under R@k) vs. k   (b) Mean round of first failure vs. k
Fig. 18 shows some examples of attention over history facts from our MN encoder. We see that the model learns to attend to facts relevant to the question being asked. For example, when asked "What color are kites?", the model attends to "A lot of people stand around flying kites in a park." For "Is anyone on bus?", it attends to "A large yellow bus parked in some grass." Note that these are selected examples, and the attention weights are not always interpretable.
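Given attention weights over the t history facts (one weight per fact), such qualitative examples can be read off by simply picking the most-attended fact for each question; a small helper with hypothetical variable names:

```python
# Sketch: reading off the most-attended history fact per example, given
# attention weights att of shape (B, T) over the T facts and the corresponding
# fact strings (one list of T strings per example).
def most_attended_fact(att, history_text):
    idx = att.argmax(dim=1)                        # index of the top-weighted fact
    return [facts[i] for facts, i in zip(history_text, idx.tolist())]
```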
Figure 16: Dialog-level evaluation
# G.2. Training
These question-history vectors are fed as input to a dialog-level LSTM, whose output state at round t is used to decode the response to Qt. Fig. 17b shows the model architecture for our HRE.
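A minimal PyTorch-style sketch of this hierarchical encoding follows; again, the original implementation is in Torch/Lua, and the exact alignment of history rounds as well as any sizes not stated in the text are simplifying assumptions.

```python
# Illustrative sketch of the hierarchical recurrent encoder (simplified:
# each round's question is paired here with that round's history entry).
import torch
import torch.nn as nn

class HREncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=512, img_feat_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # image features are concatenated with every question word (early fusion)
        self.qi_rnn = nn.LSTM(emb_dim + img_feat_dim, hidden, num_layers=2, batch_first=True)
        self.fact_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.dialog_rnn = nn.LSTM(2 * hidden, hidden, num_layers=2, batch_first=True)

    def forward(self, img_feat, questions, history):
        # questions: (B, T, Lq) token ids for rounds 1..T; history: (B, T, Lh)
        B, T, Lq = questions.shape
        img = img_feat.unsqueeze(1).unsqueeze(1).expand(B, T, Lq, -1)
        qi_in = torch.cat([self.embed(questions), img], dim=-1)
        _, (qi_h, _) = self.qi_rnn(qi_in.view(B * T, Lq, -1))
        _, (f_h, _) = self.fact_rnn(self.embed(history).view(B * T, history.size(2), -1))
        seq = torch.cat([qi_h[-1], f_h[-1]], dim=1).view(B, T, -1)
        out, _ = self.dialog_rnn(seq)          # dialog-level LSTM over question-history vectors
        return out[:, -1]                      # encoding for the current round t
```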
Splits. Recall that VisDial v0.9 contained 83k dialogs on COCO-train and 40k on COCO-val images. We split the 83k into 80k for training and 3k for validation, and use the 40k as test.
Memory Network. The image is encoded with a VGG-16 CNN and the question with an LSTM. We concatenate the representations and follow them with a fully-connected layer and tanh non-linearity to get a "query vector". Each caption/QA-pair (or "fact") in the dialog history is encoded independently by an LSTM with shared weights. The query vector is then used to compute attention over the t facts by inner product. A convex combination of the attended history vectors is passed through a fully-connected layer and tanh non-linearity, and added back to the query vector. This combined representation is then passed through another fully-connected layer and tanh non-linearity and used to decode the response. The model architecture is shown in Fig. 17c.
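A PyTorch-style sketch of this encoder, with the inner-product attention step made explicit, is given below; it is an illustration rather than the released implementation, and sizes beyond those stated in the text are assumptions.

```python
# Illustrative sketch of the memory network encoder's attention step.
import torch
import torch.nn as nn

class MemoryNetEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=512, img_feat_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.q_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.fact_rnn = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True)
        self.query_fc = nn.Linear(img_feat_dim + hidden, hidden)
        self.att_fc = nn.Linear(hidden, hidden)
        self.out_fc = nn.Linear(hidden, hidden)

    def forward(self, img_feat, question, facts):
        # facts: (B, T, L) token ids, one row per caption/QA-pair in the history
        B, T, L = facts.shape
        _, (q_h, _) = self.q_rnn(self.embed(question))
        query = torch.tanh(self.query_fc(torch.cat([img_feat, q_h[-1]], dim=1)))   # (B, H)
        _, (f_h, _) = self.fact_rnn(self.embed(facts).view(B * T, L, -1))
        mem = f_h[-1].view(B, T, -1)                                    # (B, T, H)
        att = torch.softmax((mem * query.unsqueeze(1)).sum(-1), dim=1)  # inner-product attention
        attended = torch.tanh(self.att_fc((att.unsqueeze(-1) * mem).sum(1)))
        return torch.tanh(self.out_fc(query + attended))                # fed to the decoder
```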
Preprocessing. We spell-correct the VisDial data using the Bing API [41]. Following VQA, we lowercase all questions and answers, convert digits to words, and remove contractions, before tokenizing using the Python NLTK [1]. We then construct a dictionary of words that appear at least five times in the train set, giving us a vocabulary of around 7.5k.
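A minimal sketch of this preprocessing pipeline is shown below. The Bing API spell-correction step is omitted, the contraction map is abbreviated, and the digit-to-word conversion shown is one possible choice rather than the exact one used.

```python
# Sketch of the preprocessing pipeline described above.
import re
from collections import Counter
from nltk.tokenize import word_tokenize   # Python NLTK, as in the paper (needs punkt data)

CONTRACTIONS = {"isn't": "is not", "don't": "do not", "can't": "cannot"}  # abbreviated map
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def num_to_words(s):
    return " ".join(DIGITS[ch] for ch in s)

def preprocess(text):
    text = text.lower()
    for c, full in CONTRACTIONS.items():
        text = text.replace(c, full)
    text = re.sub(r"\d+", lambda m: num_to_words(m.group()), text)   # digits -> words
    return word_tokenize(text)

def build_vocab(token_lists, min_count=5):
    counts = Counter(tok for toks in token_lists for tok in toks)
    return {w for w, c in counts.items() if c >= min_count}          # ~7.5k words on train
```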
Hyperparameters. All our models are implemented in Torch [2]. Model hyperparameters are chosen by early stopping on val based on the Mean Reciprocal Rank (MRR) metric. All LSTMs are 2-layered with 512-dim hidden states. We learn 300-dim embeddings for words and images. These word embeddings are shared across question, history, and decoder LSTMs. We use Adam [28].
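The stated hyperparameters, collected into a single configuration sketch; values not given in this excerpt, such as the learning rate, are deliberately left unset.

```python
# Hyperparameters from the text, gathered as a (PyTorch-style) configuration.
import torch

CONFIG = {
    "lstm_layers": 2,
    "lstm_hidden": 512,
    "word_emb_dim": 300,              # shared across question, history and decoder LSTMs
    "img_emb_dim": 300,
    "optimizer": "adam",
    "early_stopping_metric": "mrr",   # computed on the val split
}

def make_optimizer(model, lr):
    # The learning rate is not specified in this excerpt, so it is an argument here.
    return torch.optim.Adam(model.parameters(), lr=lr)
```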
(a) Late Fusion Encoder
(b) Hierarchical Recurrent Encoder
(c) Memory Network Encoder
Figure 17
1611.08669 | 106 | Model          MRR    R@1    R@5    R@10   Mean
Baseline:
Answer prior   0.311  19.85  39.14  44.28  31.56
NN-Q           0.392  30.54  46.99  49.98  30.88
NN-QI          0.385  29.71  46.57  49.86  30.90
Generative:
LF-Q-G         0.403  29.74  50.10  56.32  24.06
LF-QH-G        0.425  32.49  51.56  57.80  23.11
LF-QI-G        0.437  34.06  52.50  58.89  22.31
HRE-QH-G       0.430  32.84  52.36  58.64  22.59
HRE-QIH-G      0.442  34.37  53.40  59.74  21.75
HREA-QIH-G     0.442  34.47  53.43  59.73  21.83
Discriminative:
HRE-QIH-D      0.502  36.26  65.67  77.05  7.79
HREA-QIH-D     0.508  36.76  66.54  77.75  7.59
SAN1-QI-D      0.506  36.21  67.08  78.16  7.74
HieCoAtt-QI-D  0.509 | 1611.08669#106 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 108 | Human rows of Table 5 (MRR, R@1, R@5, R@10, Mean):
Human-Q    0.441  25.10  67.37  -  4.19
Human-QH   0.485  30.31  70.53  -  3.91
Human-QI   0.619  46.12  82.54  -  2.92
Human-QIH  0.635  48.03  83.76  -  2.83
Table 5: Performance of methods on VisDial v0.5, measured by mean reciprocal rank (MRR), recall@k for k = {1, 5, 10} and mean rank. Note that higher is better for MRR and recall@k, while lower is better for mean rank. Memory Network has the best performance in both discriminative and generative settings.
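For reference, the retrieval metrics reported in Table 5 can be computed directly from the rank a model assigns to the human response among the candidate answers. Below is a minimal sketch in plain Python; the function name and the use of 1-indexed ranks are assumptions of this illustration, not part of the released evaluation code.

```python
def retrieval_metrics(ranks, ks=(1, 5, 10)):
    """MRR, recall@k and mean rank from 1-indexed ranks of the human response."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    recall = {k: sum(r <= k for r in ranks) / n for k in ks}
    mean_rank = sum(ranks) / n
    return mrr, recall, mean_rank

# Example: three questions whose human responses were ranked 1, 4 and 12.
mrr, recall, mean_rank = retrieval_metrics([1, 4, 12])
print(round(mrr, 3), recall, round(mean_rank, 2))
```

Multiplying the recall values by 100 puts them on the same percentage scale used in the table.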
with a learning rate of 10^-3 for all models. Gradients at each iteration are clamped to [-5, 5] to avoid explosion. Our code, architectures, and trained models are available at https://visualdialog.org.
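As an illustration of the training recipe in the sentence above (learning rate 10^-3, gradients clamped to [-5, 5]), a PyTorch-style sketch might look as follows. The paper's own tooling references Torch [2]; the choice of Adam, the stand-in model, and all variable names here are assumptions of this sketch rather than details taken from the released code.

```python
import torch

model = torch.nn.LSTM(input_size=300, hidden_size=512)       # stand-in for the actual encoder-decoder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)     # assumed optimizer; lr = 10^-3 as stated

def training_step(loss):
    """One update: backpropagate, clamp gradients to [-5, 5], then step."""
    optimizer.zero_grad()
    loss.backward()
    for p in model.parameters():
        if p.grad is not None:
            p.grad.data.clamp_(-5, 5)   # element-wise clamping to avoid gradient explosion
    optimizer.step()
```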
# References
[1] NLTK. http://www.nltk.org/. 18 [2] Torch. http://torch.ch/. 9, 18 [3] A. Agrawal, D. Batra, and D. Parikh. Analyzing the Behavior of Visual Question Answering Models. In EMNLP, 2016. 3, 4 | 1611.08669#108 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 109 | [4] H. Agrawal, A. Chandrasekaran, D. Batra, D. Parikh, and M. Bansal. Sort story: Sorting jumbled images and captions into stories. In EMNLP, 2016. 3
[5] Amazon. Alexa. http://alexa.amazon.com/. 6 [6] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015. 1, 2, 3, 4, 5, 10, 11, 13, 14
[7] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, and T. Yeh. VizWiz: Nearly Real-time Answers to Visual Ques- tions. In UIST, 2010. 1
[8] A. Bordes, N. Usunier, S. Chopra, and J. Weston. Large- scale Simple Question Answering with Memory Networks. arXiv preprint arXiv:1506.02075, 2015. 3 | 1611.08669#109 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 110 | Learning End-to-End Goal- Oriented Dialog. arXiv preprint arXiv:1605.07683, 2016. 3, 6, 8
[10] G. Christie, A. Laddha, A. Agrawal, S. Antol, Y. Goyal, K. Kochersberger, and D. Batra. Resolving language and vision ambiguities together: Joint segmentation and preposi- tional attachment resolution in captioned scenes. In EMNLP, 2016. 3
[11] C. Danescu-Niculescu-Mizil and L. Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011, 2011. 12
[12] A. Das, H. Agrawal, C. L. Zitnick, D. Parikh, and D. Ba- tra. Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? In EMNLP, 2016. 3
[13] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. C. Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. In CVPR, 2017. 3 | 1611.08669#110 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 111 | [14] J. Dodge, A. Gane, X. Zhang, A. Bordes, S. Chopra, A. Miller, A. Szlam, and J. Weston. Evaluating Prerequi- site Qualities for Learning End-to-End Dialog Systems. In ICLR, 2016. 2, 3
[15] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term Re- current Convolutional Networks for Visual Recognition and Description. In CVPR, 2015. 3
[16] H. Fang, S. Gupta, F. N. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zit- nick, and G. Zweig. From Captions to Visual Concepts and Back. In CVPR, 2015. 3
[17] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. | 1611.08669#111 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 112 | What color are kites? Can you see street signs? The computer on the desk shows an image of a car. What color is car? White A lot of people stand around flying kites in a park. Are these people children? It looks like a mixture of families Do you know make? Volkswagen Are there people? Probably driving car Do you see desk? Yes Is this field trip you think? Just family outing Is there lot of grass? Yes Is it laptop? No, desktop What color is computer? You can't see actual computer just screen and keyboard Are there people on carriage? A street scene with a horse and carriage. Is it real? Yes Are there lot of trees? No Any vehicles around? No Is anyone on bus? Are there any black stripes? Yes 3 black stripes Is there any writing? Yes it says âmoon farm day camp" Can you see brand? It's Mac Is picture of car taken outside? Yes What color is horse? Dark brown What color is carriage? Red Is it fairly close up shot? Anice bird standing on a bench. Gazing at? Camera | think Can you tell what kind of bird it is? No it's bright red bird with black face and red beek Is it tiny bird? Yes Is grass well-maintained? No it's all weeds Are they wearing wetsuit? No What sort of area is this in? Looks like it could be back deck | 1611.08669#112 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 113 | A lot of people stand around flying kites in a park. Are these people children? It looks like a mixture of families Is this field trip you think? Just family outing Is there lot of grass? Yes Are there lot of trees? No Any vehicles around? No
A street scene with a horse and carriage. Is it real? Yes What color is horse? Dark brown What color is carriage? Red
Figure 18: Selected examples of attention over history facts from our Memory Network encoder. The intensity of color in each row indicates the strength of attention placed on that round by the model.
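The attention visualized in Figure 18 can be thought of as a softmax over similarity scores between the question encoding and the encoding of each history round, followed by a weighted sum of the history encodings. Below is a minimal NumPy sketch of that computation; the dot-product scoring, the dimensionality, and all names are illustrative assumptions rather than the exact Memory Network encoder.

```python
import numpy as np

def attend_over_history(question_vec, history_vecs):
    """question_vec: (d,) encoding of the current question.
    history_vecs: (t, d) encodings of the caption and previous QA rounds."""
    scores = history_vecs @ question_vec               # (t,) similarity of question to each round
    scores = scores - scores.max()                     # for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()    # softmax over the t rounds
    attended = weights @ history_vecs                  # (d,) weighted sum of history facts
    return weights, attended

rng = np.random.default_rng(0)
weights, attended = attend_over_history(rng.normal(size=512), rng.normal(size=(6, 512)))
print(weights.round(3))   # one weight per round, like the row shading in Figure 18
```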
Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering. In NIPS, 2015. 3, 4, 11, 13
[18] D. Geman, S. Geman, N. Hallonquist, and L. Younes. A Visual Turing Test for Computer Vision Systems. In PNAS, 2014. 3
[19] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017. 3, 4
[20] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016. 1 | 1611.08669#113 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 114 | [20] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016. 1
[21] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015. 1, 3
[22] R. Hu, M. Rohrbach, and T. Darrell. Segmentation from natural language expressions. In ECCV, 2016. 3
[23] T.-H. Huang, F. Ferraro, N. Mostafazadeh, I. Misra, A. Agrawal, J. Devlin, R. Girshick, X. He, P. Kohli, D. Batra, L. Zitnick, D. Parikh, L. Vanderwende, M. Galley, and M. Mitchell. Visual storytelling. In NAACL HLT, 2016. 3
[24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to Sequence Learning with Neural Networks. In NIPS, 2014. 12 | 1611.08669#114 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 115 | [24] Q. V. L. Ilya Sutskever, Oriol Vinyals. Sequence to Sequence Learning with Neural Networks. In NIPS, 2014. 12
[25] A. Jabri, A. Joulin, and L. van der Maaten. Revisiting visual question answering baselines. In ECCV, 2016. 7
[26] A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukács, M. Ganea, P. Young, et al. Smart Reply: Automated Response Suggestion for Email. In KDD, 2016. 3
[27] A. Karpathy and L. Fei-Fei. Deep visual-semantic align- In CVPR, 2015. ments for generating image descriptions. 3
[28] D. Kingma and J. Ba. Adam: A Method for Stochastic Opti- mization. In ICLR, 2015. 18
[29] C. Kong, D. Lin, M. Bansal, R. Urtasun, and S. Fidler. What are you talking about? text-to-image coreference. In CVPR, 2014. 3 | 1611.08669#115 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 116 | [30] O. Lemon, K. Georgila, J. Henderson, and M. Stuttle. An ISU dialogue system exhibiting reinforcement learning of di- alogue policies: generic slot-ï¬lling in the TALK in-car sys- tem. In EACL, 2006. 2
[31] J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Juraf- sky. Deep Reinforcement Learning for Dialogue Generation. In EMNLP, 2016. 3
[32] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common Objects in Context. In ECCV, 2014. 2, 3 | 1611.08669#116 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 117 | [33] C.-W. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In EMNLP, 2016. 3, 6 [34] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single Shot MultiBox Detector. In ECCV, 2016. 1
[35] R. Lowe, N. Pow, I. Serban, and J. Pineau. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. In SIGDIAL, 2015. 3
[36] Deeper LSTM and Normalized CNN Visual Question Answering model. https://github.com/VT-vision-lab/VQA_LSTM_CNN, 2015. 8
[37] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical Question-Image Co-Attention for Visual Question Answer- ing. In NIPS, 2016. 3, 8 | 1611.08669#117 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 118 | [38] M. Malinowski and M. Fritz. A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input. In NIPS, 2014. 3, 11
[39] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neu- rons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 3
[40] H. Mei, M. Bansal, and M. R. Walter. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In AAAI, 2016. 2
[41] Microsoft. Bing Spell Check API. https://www. | 1611.08669#118 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 119 | [41] Microsoft. Bing Spell Check API. https://www.microsoft.com/cognitive-services/en-us/bing-spell-check-api/documentation. 18
[42] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 02 2015. 1
[43] N. Mostafazadeh, C. Brockett, B. Dolan, M. Galley, J. Gao, G. P. Spithourakis, and L. Vanderwende. Image-Grounded Conversations: Multimodal Context for Natural Question and Response Generation. arXiv preprint arXiv:1701.08251, 2017. 3 | 1611.08669#119 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 120 | [44] T. Paek. Empirical methods for evaluating dialog systems. In Proceedings of the workshop on Evaluation for Language and Dialogue Systems-Volume 9, 2001. 2
[45] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k entities: Col- lecting region-to-phrase correspondences for richer image- to-sentence models. In ICCV, 2015. 3
[46] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In EMNLP, 2016. 3
[47] V. Ramanathan, A. Joulin, P. Liang, and L. Fei-Fei. Linking people with "their" names using coreference resolution. In ECCV, 2014. 3
[48] A. Ray, G. Christie, M. Bansal, D. Batra, and D. Parikh. Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions. In EMNLP, 2016. 5, 13 | 1611.08669#120 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 121 | [49] M. Ren, R. Kiros, and R. Zemel. Exploring Models and Data for Image Question Answering. In NIPS, 2015. 1, 3, 11 [50] A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele. Grounding of textual phrases in images by re- construction. In ECCV, 2016. 3
[51] A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele. A dataset for movie description. In CVPR, 2015. 3
[52] I. V. Serban, A. García-Durán, Ç. Gülçehre, S. Ahn, S. Chandar, A. C. Courville, and Y. Bengio. Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus. In ACL, 2016. 3
[53] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In AAAI, 2016. 3 | 1611.08669#121 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 122 | [54] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. arXiv preprint arXiv:1605.06069, 2016. 3, 7
[55] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou,
V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016. 1
[56] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 7 | 1611.08669#122 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 123 | [56] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 7
[57] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Ur- tasun, and S. Fidler. MovieQA: Understanding Stories in Movies through Question-Answering. In CVPR, 2016. 1 [58] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S. C. Zhu. Joint Video and Text Parsing for Understanding Events and Answering Queries. IEEE MultiMedia, 2014. 1
[59] S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. Saenko. Sequence to Sequence - Video to Text. In ICCV, 2015. 3
[60] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating Videos to Natural Lan- guage Using Deep Recurrent Neural Networks. In NAACL HLT, 2015. 3 | 1611.08669#123 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 124 | [61] O. Vinyals and Q. Le. A Neural Conversational Model. arXiv preprint arXiv:1506.05869, 2015. 3
[62] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. 3
[63] L. Wang, S. Guo, W. Huang, Y. Xiong, and Y. Qiao. Knowledge Guided Disambiguation for Large-Scale Scene Classification with Multi-Resolution CNNs. arXiv preprint arXiv:1610.01119, 2016. 1
[64] J. Weizenbaum. ELIZA. http://psych.fullerton. edu/mbirnbaum/psych101/Eliza.htm. 2, 3 [65] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. In ICLR, 2016. 1, 3 | 1611.08669#124 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.08669 | 125 | [66] S. Wu, H. Pique, and J. Wieland. Artificial Intelligence to Help Blind People. Facebook, http://newsroom.fb.com/news/2016/04/using-artificial-intelligence-to-help-blind-people-see-facebook/, 2016. 1
[67] Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola. Stacked Attention Networks for Image Question Answering. In CVPR, 2016. 8
[68] L. Yu, E. Park, A. C. Berg, and T. L. Berg. Visual Madlibs: Fill in the blank Image Generation and Question Answering. In ICCV, 2015. 11
[69] P. Zhang, Y. Goyal, D. Summers-Stay, D. Batra, and D. Parikh. Yin and Yang: Balancing and Answering Binary Visual Questions. In CVPR, 2016. 3, 4, 5, 13, 14
[70] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7W: Grounded Question Answering in Images. In CVPR, 2016. 4, 11, 13 | 1611.08669#125 | Visual Dialog | We introduce the task of Visual Dialog, which requires an AI agent to hold a
meaningful dialog with humans in natural, conversational language about visual
content. Specifically, given an image, a dialog history, and a question about
the image, the agent has to ground the question in image, infer context from
history, and answer the question accurately. Visual Dialog is disentangled
enough from a specific downstream task so as to serve as a general test of
machine intelligence, while being grounded in vision enough to allow objective
evaluation of individual responses and benchmark progress. We develop a novel
two-person chat data-collection protocol to curate a large-scale Visual Dialog
dataset (VisDial). VisDial v0.9 has been released and contains 1 dialog with 10
question-answer pairs on ~120k images from COCO, with a total of ~1.2M dialog
question-answer pairs.
We introduce a family of neural encoder-decoder models for Visual Dialog with
3 encoders -- Late Fusion, Hierarchical Recurrent Encoder and Memory Network --
and 2 decoders (generative and discriminative), which outperform a number of
sophisticated baselines. We propose a retrieval-based evaluation protocol for
Visual Dialog where the AI agent is asked to sort a set of candidate answers
and evaluated on metrics such as mean-reciprocal-rank of human response. We
quantify gap between machine and human performance on the Visual Dialog task
via human studies. Putting it all together, we demonstrate the first 'visual
chatbot'! Our dataset, code, trained models and visual chatbot are available on
https://visualdialog.org | http://arxiv.org/pdf/1611.08669 | Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 23 pages, 18 figures, CVPR 2017 camera-ready, results on VisDial v0.9
dataset, Webpage: http://visualdialog.org | null | cs.CV | 20161126 | 20170801 | [
{
"id": "1605.06069"
},
{
"id": "1701.08251"
},
{
"id": "1506.02075"
},
{
"id": "1605.07683"
},
{
"id": "1610.01119"
},
{
"id": "1506.05869"
}
] |
1611.06440 | 1 | # ABSTRACT
We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.
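To illustrate the Taylor-expansion criterion sketched in this abstract: the first-order approximation to the change in cost from removing a feature map reduces to the absolute value of the averaged product of the map's activations and the gradient of the cost with respect to them. The PyTorch-style sketch below conveys that idea together with a greedy ranking step; the function names, tensor shapes, and the omission of fine-tuning and per-layer normalization are assumptions and simplifications of this illustration, not the authors' implementation.

```python
import torch

def taylor_criterion(activation, grad):
    """First-order Taylor saliency of one feature map.
    activation, grad: tensors of shape (batch, H, W) for that map."""
    return (activation * grad).mean().abs().item()

def rank_feature_maps(acts, grads):
    """acts, grads: per-layer lists of (batch, channels, H, W) tensors.
    Returns (layer, channel, saliency) triples, least important first."""
    scores = []
    for layer, (a, g) in enumerate(zip(acts, grads)):
        for c in range(a.shape[1]):
            scores.append((layer, c, taylor_criterion(a[:, c], g[:, c])))
    return sorted(scores, key=lambda s: s[2])

# Toy usage: one conv layer with 8 feature maps and random activations/gradients.
acts = [torch.randn(4, 8, 16, 16)]
grads = [torch.randn(4, 8, 16, 16)]
to_prune = rank_feature_maps(acts, grads)[:2]   # two lowest-saliency maps for this step
print(to_prune)
```

In the interleaved procedure described above, a few such low-saliency maps would be removed per step, followed by fine-tuning by backpropagation before the next pruning step.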
# INTRODUCTION | 1611.06440#1 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 2 | # INTRODUCTION
Convolutional neural networks (CNN) are used extensively in computer vision applications, including object classification and localization, pedestrian and car detection, and video classification. Many problems like these focus on specialized domains for which there are only small amounts of carefully curated training data. In these cases, accuracy may be improved by fine-tuning an existing deep network previously trained on a much larger labeled vision dataset, such as images from ImageNet (Russakovsky et al., 2015) or videos from Sports-1M (Karpathy et al., 2014). While transfer learning of this form supports state of the art accuracy, inference is expensive due to the time, power, and memory demanded by the heavyweight architecture of the fine-tuned network.
While modern deep CNNs are composed of a variety of layer types, runtime during prediction is dominated by the evaluation of convolutional layers. With the goal of speeding up inference, we prune entire feature maps so the resulting networks may be run efficiently even on embedded devices. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. | 1611.06440#2 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 3 | Neural network pruning was pioneered in the early development of neural networks (Reed, 1993). Optimal Brain Damage (LeCun et al., 1990) and Optimal Brain Surgeon (Hassibi & Stork, 1993) leverage a second-order Taylor expansion to select parameters for deletion, using pruning as regularization to improve training and generalization. This method requires computation of the Hessian matrix partially or completely, which adds memory and computation costs to standard fine-tuning.
In line with our work, Anwar et al. (2015) describe structured pruning in convolutional layers at the level of feature maps and kernels, as well as strided sparsity to prune with regularity within kernels. Pruning is accomplished by particle filtering wherein configurations are weighted by misclassification rate. The method demonstrates good results on small CNNs, but larger CNNs are not addressed.
Han et al. (2015) introduce a simpler approach by fine-tuning with a strong ℓ_2 regularization term and dropping parameters with values below a predefined threshold. Such unstructured pruning is very effective for network compression, and this approach demonstrates good performance for intra-kernel pruning. But compression may not translate directly to faster inference since modern hardware | 1611.06440#3 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 4 | exploits regularities in computation for high throughput. So specialized hardware may be needed for efficient inference of a network with intra-kernel sparsity (Han et al., 2016). This approach also requires long fine-tuning times that may exceed the original network training by a factor of 3 or larger. Group sparsity based regularization of network parameters was proposed to penalize unimportant parameters (Wen et al., 2016; Zhou et al., 2016; Alvarez & Salzmann, 2016; Lebedev & Lempitsky, 2016). Regularization-based pruning techniques require per layer sensitivity analysis which adds extra computations. In contrast, our approach relies on global rescaling of criteria for all layers and does not require sensitivity estimation. Moreover, our approach is faster as we directly prune unimportant parameters instead of waiting for their values to be made sufficiently small by optimization under regularization.
Other approaches include combining parameters with correlated weights (Srinivas & Babu, 2015), reducing precision (Gupta et al., 2015; Rastegari et al., 2016) or tensor decomposition (Kim et al., 2015). These approaches usually require a separate training procedure or significant fine-tuning, but potentially may be combined with our method for additional speedups.
# 2 METHOD | 1611.06440#4 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 5 | # 2 METHOD
The proposed method for pruning consists of the following steps: 1) Fine-tune the network until convergence on the target task; 2) Alternate iterations of pruning and further fine-tuning; 3) Stop pruning after reaching the target trade-off between accuracy and pruning objective, e.g. floating point operations (FLOPs) or memory utilization.
The procedure is simple, but its success hinges on employing the right pruning criterion. In this section, we introduce several efficient pruning criteria and related technical considerations.
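As a structural sketch only (not code from the paper), the loop can be written as follows; fine_tune, saliency_fn, remove_feature_map, and count_flops are hypothetical placeholders for framework-specific routines:

```python
# Sketch of the alternating prune/fine-tune procedure under a FLOPs budget.
def prune_network(model, data, flops_budget,
                  fine_tune, saliency_fn, remove_feature_map, count_flops):
    fine_tune(model, data)                    # 1) fine-tune until convergence on the task
    while count_flops(model) > flops_budget:  # 3) stop at the accuracy/FLOPs trade-off
        scores = saliency_fn(model, data)     # {(layer, map): saliency}, see Section 2.2
        target = min(scores, key=scores.get)  # least important feature map
        remove_feature_map(model, target)     # 2a) prune one feature map
        fine_tune(model, data)                # 2b) further fine-tuning
    return model
```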
Consider a set of training examples D = {X = {x_0, x_1, ..., x_N}, Y = {y_0, y_1, ..., y_N}}, where x and y represent an input and a target output, respectively. The network's parameters^1 W = {(w_1^1, b_1^1), (w_1^2, b_1^2), ..., (w_L^{C_L}, b_L^{C_L})} are optimized to minimize a cost value C(D|W). The most common choice for a cost function C(·) is a negative log-likelihood function. A cost function is selected independently of pruning and depends only on the task to be solved by the original network. In the case of transfer learning, we adapt a large network initialized with parameters W_0 pretrained on a related but distinct dataset. | 1611.06440#5 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 6 | Figure 1: Network pruning as a backward filter.
During pruning, we refine a subset of parameters which preserves the accuracy of the adapted network, C(D|W') ≈ C(D|W). This corresponds to a combinatorial optimization:
min_{W'} |C(D|W') - C(D|W)|     s.t.     ||W'||_0 ≤ B,     (1)
where the ℓ_0 norm in ||W'||_0 bounds the number of non-zero parameters B in W'. Intuitively, if W' = W we reach the global minimum of the error function; however, ||W'||_0 will also have its maximum.
Finding a good subset of parameters while maintaining a cost value as close as possible to the original is a combinatorial problem: it would require 2^{|W|} evaluations of the cost function for a selected subset of data, which is impossible to compute for current networks. For example, VGG-16 alone has |W| = 4224 convolutional feature maps. While it is impossible to solve this optimization exactly for networks of any reasonable size, in this work we investigate a class of greedy methods.
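For a concrete sense of scale (an illustration, not part of the paper), two lines of Python show how large the subset count from the 4224 feature maps mentioned above already is:

```python
n_maps = 4224               # convolutional feature maps in VGG-16, as noted above
n_subsets = 2 ** n_maps     # exact big-integer arithmetic in Python
print(len(str(n_subsets)))  # 1272 decimal digits, far beyond any exhaustive search
```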
Starting with a full set of parameters W, we iteratively identify and remove the least important parameters, as illustrated in Figure 1. By removing parameters at each iteration, we ensure the eventual satisfaction of the ℓ_0 bound on W'. | 1611.06440#6 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 7 | ^1 A "parameter" (w, b) ∈ W might represent an individual weight, a convolutional kernel, or the entire set of kernels that compute a feature map; our experiments operate at the level of feature maps.
Since we focus our analysis on pruning feature maps from convolutional layers, let us denote a set of image feature maps by z_l ∈ R^{H_l × W_l × C_l} with dimensionality H_l × W_l and C_l individual maps (or channels).^2 The feature maps can either be the input to the network, z_0, or the output from a convolutional layer, z_l with l ∈ [1, 2, ..., L]. Individual feature maps are denoted z_l^{(k)} for k ∈ [1, 2, ..., C_l]. A convolutional layer l applies the convolution operation (*) to a set of input feature maps z_{l-1} with kernels parameterized by w_l^{(k)} ∈ R^{C_{l-1} × p × p}:
z_l^{(k)} = g_l^{(k)} R(z_{l-1} * w_l^{(k)} + b_l^{(k)}),     (2) | 1611.06440#7 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 8 | z_l^{(k)} = g_l^{(k)} R(z_{l-1} * w_l^{(k)} + b_l^{(k)}),     (2)
where z_l^{(k)} ∈ R^{H_l × W_l} is the result of convolving each of C_{l-1} kernels of size p × p with its respective input feature map and adding bias b_l^{(k)}. We introduce a pruning gate g_l ∈ {0, 1}^{C_l}, an external switch which determines if a particular feature map is included or pruned during feed-forward propagation, such that when g is vectorized: W' = gW.
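A small NumPy illustration (ours, not the paper's code) of the gate acting on one layer's feature maps; the channel-last layout and random data are assumptions for the example:

```python
import numpy as np

# Toy feature maps for one layer: height x width x channels (H_l, W_l, C_l).
z = np.random.randn(8, 8, 5)

# Pruning gate g_l in {0,1}^{C_l}: channels 1 and 3 are pruned here.
g = np.array([1, 0, 1, 0, 1])

# Gating zeroes out pruned feature maps during the forward pass;
# in an actual implementation the corresponding kernels would be removed entirely.
z_pruned = z * g[np.newaxis, np.newaxis, :]
assert np.allclose(z_pruned[:, :, 1], 0.0)
```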
# 2.1 ORACLE PRUNING
Minimizing the difference in accuracy between the full and pruned models depends on the criterion for identifying the "least important" parameters, called saliency, at each step. The best criterion would be an exact empirical evaluation of each parameter, which we denote the oracle criterion, accomplished by ablating each non-zero parameter w ∈ W' in turn and recording the cost's difference. | 1611.06440#8 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 9 | We distinguish two ways of using this oracle estimation of importance: 1) oracle-loss quantifies importance as the signed change in loss, C(D|W') - C(D|W), and 2) oracle-abs adopts the absolute difference, |C(D|W') - C(D|W)|. While both discourage pruning which increases the loss, the oracle-loss version encourages pruning which may decrease the loss, while oracle-abs penalizes any pruning in proportion to its change in loss, regardless of the direction of change.
While the oracle is optimal for this greedy procedure, it is prohibitively costly to compute, requiring ||W||_0 evaluations on a training dataset, one evaluation for each remaining non-zero parameter. Since estimation of parameter importance is key to both the accuracy and the efficiency of this pruning approach, we propose and evaluate several criteria in terms of performance and estimation cost.
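The following toy NumPy sketch illustrates the oracle ranking procedure on a stand-in linear model; all names, shapes, and data are assumptions for illustration and do not represent the paper's implementation:

```python
import numpy as np

# A fixed linear "network" scores inputs; each column plays the role of one feature map.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 6))             # 64 examples, 6 "feature maps"
w_true = np.array([2.0, 0.1, -1.5, 0.0, 0.7, 3.0])
y = X @ w_true

def cost(mask):
    """Quadratic loss of the model with some feature maps ablated (mask of 0/1)."""
    pred = (X * mask) @ w_true
    return np.mean((pred - y) ** 2)

base = cost(np.ones(6))
oracle_loss, oracle_abs = [], []         # signed vs. absolute change in loss
for i in range(6):
    mask = np.ones(6)
    mask[i] = 0.0                        # ablate feature map i
    delta = cost(mask) - base
    oracle_loss.append(delta)
    oracle_abs.append(abs(delta))

print(np.argsort(oracle_abs))            # least important first under oracle-abs
```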
# 2.2 CRITERIA FOR PRUNING | 1611.06440#9 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 10 | # 2.2 CRITERIA FOR PRUNING
There are many heuristic criteria which are much more computationally efficient than the oracle. For the specific case of evaluating the importance of a feature map (and implicitly the set of convolutional kernels from which it is computed), reasonable criteria include: the combined ℓ_2-norm of the kernel weights, the mean, standard deviation or percentage of the feature map's activation, and mutual information between activations and predictions. We describe these criteria in the following paragraphs and propose a new criterion which is based on the Taylor expansion.
Minimum weight. Pruning by magnitude of kernel weights is perhaps the simplest possible criterion, and it does not require any additional computation during the fine-tuning process. In case of pruning according to the norm of a set of weights, the criterion is evaluated as Θ_MW : R^{C_{l-1} × p × p} → R, with Θ_MW(w) = \frac{1}{|w|} \sum_i w_i^2, where |w| is the dimensionality of the set of weights after vectorization. The motivation to apply this type of pruning is that a convolutional kernel with low ℓ_2 norm detects less important features than those with a high norm. This can be aided during training by applying ℓ_1 or ℓ_2 regularization, which will push unimportant kernels to have smaller values. | 1611.06440#10 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 11 | Activation. One of the reasons for the popularity of the ReLU activation is the sparsity in activation that is induced, allowing convolutional layers to act as feature detectors. Therefore it is reasonable to assume that if an activation value (an output feature map) is small, then this feature detector is not important for the prediction task at hand. We may evaluate this by the mean activation, Θ_MA : R^{H_l × W_l × C_l} → R, with Θ_MA(a) = \frac{1}{|a|} \sum_i a_i for activation a = z_l^{(k)}, or by the standard deviation of the activation, Θ_MA_std(a) = \sqrt{\frac{1}{|a|} \sum_i (a_i - μ_a)^2}, where μ_a denotes the mean of a.
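A NumPy sketch (illustrative only, not from the paper) of the weight- and activation-based criteria described above, with assumed tensor layouts and random data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Kernels of one layer: (C_l, C_{l-1}, p, p); one activation batch: (N, H, W, C_l).
kernels = rng.normal(size=(5, 3, 3, 3))
acts = np.maximum(rng.normal(size=(16, 8, 8, 5)), 0.0)   # ReLU-like activations

# Minimum-weight criterion: mean squared kernel weight per output feature map.
theta_mw = (kernels ** 2).reshape(kernels.shape[0], -1).mean(axis=1)

# Mean-activation and activation-standard-deviation criteria per feature map.
theta_ma = acts.mean(axis=(0, 1, 2))
theta_ma_std = acts.std(axis=(0, 1, 2))

print(theta_mw, theta_ma, theta_ma_std, sep="\n")
```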
^2 While our notation is at times specific to 2D convolutions, the methods are applicable to 3D convolutions, as well as fully connected layers. | 1611.06440#11 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 12 | Mutual information. Mutual information (MI) is a measure of how much information is present in one variable about another variable. We apply MI as a criterion for pruning, Θ_MI : R^{H_l × W_l × C_l} → R, with Θ_MI(a) = MI(a, y), where y is the target of the neural network. MI is defined for continuous variables, so to simplify computation we exchange it with information gain (IG), which is defined for quantized variables: IG(y|x) = H(x) + H(y) - H(x, y), where H(x) is the entropy of variable x. We accumulate statistics on activations and ground truth for a number of updates, then quantize the values and compute IG. | 1611.06440#12 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 13 | Taylor expansion. We phrase pruning as an optimization problem, trying to find W' with a bounded number of non-zero elements that minimizes |ΔC(h_i)| = |C(D|W') - C(D|W)|. With this approach based on the Taylor expansion, we directly approximate the change in the loss function from removing a particular parameter. Let h_i be the output produced from parameter i. In the case of feature maps, h = {z_0^{(1)}, z_0^{(2)}, ..., z_L^{(C_L)}}. For notational convenience, we consider the cost function equally dependent on parameters and outputs computed from parameters: C(D|h_i) = C(D|(w, b)_i). Assuming independence of parameters, we have:
|ΔC(h_i)| = |C(D, h_i = 0) - C(D, h_i)|,     (3)
where C(D, h_i = 0) is a cost value if output h_i is pruned, while C(D, h_i) is the cost if it is not pruned. While parameters are in reality inter-dependent, we already make an independence assumption at each gradient step during training.
To approximate ΔC(h_i), we use the first-degree Taylor polynomial. For a function f(x), the Taylor expansion at point x = a is | 1611.06440#13 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 14 | To approximate ΔC(h_i), we use the first-degree Taylor polynomial. For a function f(x), the Taylor expansion at point x = a is
f(x) = \sum_{p=0}^{P} \frac{f^{(p)}(a)}{p!} (x - a)^p + R_P(x),     (4)
where f^{(p)}(a) is the p-th derivative of f evaluated at point a, and R_P(x) is the P-th order remainder. Approximating C(D, h_i = 0) with a first-order Taylor polynomial near h_i = 0, we have:
C(D, h_i = 0) = C(D, h_i) - \frac{δC}{δh_i} h_i + R_1(h_i = 0).     (5)
The remainder R_1(h_i = 0) can be calculated through the Lagrange form:
R_1(h_i = 0) = \frac{δ^2 C}{δh_i^2}\Big|_{h_i = ξ} \, \frac{h_i^2}{2},     (6)
where ξ is a real number between 0 and h_i. However, we neglect this first-order remainder, largely due to the significant calculation required, but also in part because the widely-used ReLU activation function encourages a smaller second order term. | 1611.06440#14 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 15 | Finally, by substituting Eq. (5) into Eq. (3) and ignoring the remainder, we have Θ_TE : R^{H_l × W_l × C_l} → R^+, with
Θ_TE(h_i) = |ΔC(h_i)| = \Big| C(D, h_i) - \frac{δC}{δh_i} h_i - C(D, h_i) \Big| = \Big| \frac{δC}{δh_i} h_i \Big|.     (7)
Intuitively, this criterion prunes parameters that have an almost flat gradient of the cost function w.r.t. feature map h_i. This approach requires accumulation of the product of the activation and the gradient of the cost function w.r.t. the activation, which is easily computed from the same computations used for back-propagation. Θ_TE is computed for a multi-variate output, such as a feature map, by
Θ_TE(z_l^{(k)}) = \Big| \frac{1}{M} \sum_m \frac{δC}{δz_{l,m}^{(k)}} z_{l,m}^{(k)} \Big|,     (8)
where M is the length of the vectorized feature map. For a minibatch with T > 1 examples, the criterion is computed for each example separately and averaged over T. | 1611.06440#15 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 16 | where M is the length of the vectorized feature map. For a minibatch with T > 1 examples, the criterion is computed for each example separately and averaged over T.
Independently of our work, Figurnov et al. (2016) came up with a similar metric based on the Taylor expansion, called impact, to evaluate the importance of spatial cells in a convolutional layer. This shows that the same metric can be applied to evaluate the importance of different groups of parameters.
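A NumPy sketch (not the paper's implementation) of the criterion in Eq. 8, assuming activations and their gradients have already been captured during back-propagation; shapes and random data are placeholders:

```python
import numpy as np

# Taylor criterion of Eq. 8 from stored activations and gradients of one layer.
# Both tensors are random stand-ins of shape (T, H, W, C_l).
rng = np.random.default_rng(2)
acts = rng.normal(size=(4, 8, 8, 5))      # z_l
grads = rng.normal(size=(4, 8, 8, 5))     # dC/dz_l

# Per example: absolute value of the spatially averaged (gradient * activation),
# then average the criterion over the minibatch of T examples.
per_example = np.abs((grads * acts).mean(axis=(1, 2)))   # shape (T, C_l)
theta_te = per_example.mean(axis=0)                      # shape (C_l,)
print(theta_te)
```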
Relation to Optimal Brain Damage. The Taylor criterion proposed above relies on approximating the change in loss caused by removing a feature map. The core idea is the same as in Optimal Brain Damage (OBD) (LeCun et al., 1990). Here we consider the differences more carefully.
The primary difference is the treatment of the first-order term of the Taylor expansion, in our notation y = \frac{δC}{δh} h for cost function C and hidden layer activation h. After sufficient training epochs, the gradient term tends to zero, \frac{δC}{δh} → 0, and E(y) = 0. At face value y offers little useful information, hence OBD regards the term as zero and focuses on the second-order term. | 1611.06440#16 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 17 | However, the variance of y is non-zero and correlates with the stability of the local function w.r.t. activation h. By considering the absolute change in the cost^3 induced by pruning (as in Eq. 3), we use the absolute value of the first-order term, |y|. Under the assumption that samples come from an independent and identical distribution, E(|y|) = σ \sqrt{2/π}, where σ is the standard deviation of y, known as the expected value of the half-normal distribution. So, while y tends to zero, the expectation of |y| is proportional to the standard deviation of y, a value which is empirically more informative as a pruning criterion.
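A quick Monte Carlo check (illustrative only) of the half-normal identity used above:

```python
import numpy as np

# Verify E(|y|) = sigma * sqrt(2 / pi) for zero-mean Gaussian y.
rng = np.random.default_rng(3)
sigma = 0.7
y = rng.normal(0.0, sigma, size=1_000_000)
print(np.abs(y).mean(), sigma * np.sqrt(2 / np.pi))   # the two values agree closely
```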
As an additional benefit, we avoid the computation of the second-order Taylor expansion term, or its simplification, the diagonal of the Hessian, as required in OBD. | 1611.06440#17 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 18 | We found it important to compare the proposed Taylor criterion to OBD. As described in the original papers (LeCun et al., 1990; 1998), OBD can be efficiently implemented similarly to the standard back-propagation algorithm, doubling backward propagation time and memory usage when used together with standard fine-tuning. An efficient implementation of the original OBD algorithm might require significant changes to a framework based on automatic differentiation, like Theano, to efficiently compute only the diagonal of the Hessian instead of the full matrix. Several researchers have tried to tackle this problem with approximation techniques (Martens, 2010; Martens et al., 2012). In our implementation, we use an efficient way of computing the Hessian-vector product (Pearlmutter, 1994) and the matrix diagonal approximation proposed by Bekas et al. (2007); please refer to the appendix for more details. With the current implementation, OBD is 30 times slower than the Taylor technique for saliency estimation, and 3 times slower for iterative pruning; however, with a different implementation it could be only 50% slower, as mentioned in the original paper. | 1611.06440#18 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 19 | Average Percentage of Zeros (APoZ). Hu et al. (2016) proposed to explore sparsity in activations for network pruning. The ReLU activation function imposes sparsity during inference, and the average percentage of positive activations at the output can determine the importance of the neuron. Intuitively, this is a good criterion; however, feature maps at the first layers have similar APoZ regardless of the network's target, as they learn to be Gabor-like filters. We will use APoZ to estimate the saliency of feature maps.
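A NumPy sketch (illustrative only) of an APoZ-style statistic computed per feature map from a batch of post-ReLU activations; the layout and data are assumptions:

```python
import numpy as np

# Average Percentage of Zeros (APoZ) per feature map after a ReLU,
# computed from a batch of activations of shape (N, H, W, C_l).
rng = np.random.default_rng(4)
acts = np.maximum(rng.normal(size=(16, 8, 8, 5)), 0.0)

apoz = (acts == 0.0).mean(axis=(0, 1, 2))   # fraction of zero activations per channel
print(apoz)                                  # a higher APoZ suggests a less important map
```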
# 2.3 NORMALIZATION
Some criteria return "raw" values, whose scale varies with the depth of the parameter's layer in the network. A simple layer-wise ℓ_2-normalization can achieve adequate rescaling across layers:
\hat{Θ}(z_l^{(k)}) = \frac{Θ(z_l^{(k)})}{\sqrt{\sum_j Θ(z_l^{(j)})^2}}
# 2.4 FLOPS REGULARIZED PRUNING
One of the main reasons to apply pruning is to reduce the number of operations in the network. Feature maps from different layers require different amounts of computation due to the number and sizes of input feature maps and convolution kernels. To take this into account we introduce FLOPs regularization:
Θ(z_l^{(k)}) = Θ(z_l^{(k)}) - λ Θ_l^{flops},     (9) | 1611.06440#19 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 20 | Θ(z_l^{(k)}) = Θ(z_l^{(k)}) - λ Θ_l^{flops},     (9)
where λ controls the amount of regularization. For our experiments, we use λ = 10^{-3}. Θ_l^{flops} is computed under the assumption that convolution is implemented as a sliding window (see Appendix). Other regularization conditions may be applied, e.g. storage size, kernel sizes, or memory footprint.
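A small numeric illustration (with made-up saliency and FLOPs numbers) of how the regularizer in Eq. 9 shifts the ranking:

```python
import numpy as np

# FLOPs-regularized saliency (Eq. 9): subtract a penalty proportional to the
# per-feature-map cost of each layer. All numbers below are invented for illustration.
lam = 1e-3
theta = np.array([0.12, 0.40, 0.05])          # normalized criterion for three maps
theta_flops = np.array([90.0, 90.0, 3.0])     # relative cost of computing each map
theta_reg = theta - lam * theta_flops
print(theta_reg)   # expensive maps receive a larger penalty, so they are pruned earlier
```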
^3 OBD approximates the signed difference in loss, while our method approximates the absolute difference in loss. We find in our results that pruning based on absolute difference yields better accuracy.
Figure 2: Global statistics of oracle ranking, shown by layer for Birds-200 transfer learning.
Figure 3: Pruning without fine-tuning using oracle ranking for Birds-200 transfer learning.
# 3 RESULTS | 1611.06440#20 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 21 | Figure 3: Pruning without fine-tuning using oracle ranking for Birds-200 transfer learning.
# 3 RESULTS
We empirically study the pruning criteria and procedure detailed in the previous section for a variety of problems. We focus many experiments on transfer learning problems, a setting where pruning seems to excel. We also present results for pruning large networks on their original tasks for more direct comparison with the existing pruning literature. Experiments are performed within Theano (Theano Development Team, 2016). Training and pruning are performed on the respective training sets for each problem, while results are reported on appropriate holdout sets, unless otherwise indicated. For all experiments we prune a single feature map at every pruning iteration, allowing fine-tuning and re-evaluation of the criterion to account for dependency between parameters.
# 3.1 CHARACTERIZING THE ORACLE RANKING | 1611.06440#21 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 22 | # 3.1 CHARACTERIZING THE ORACLE RANKING
We begin by explicitly computing the oracle for a single pruning iteration of a visual transfer learning problem. We fine-tune the VGG-16 network (Simonyan & Zisserman, 2014) for classification of bird species using the Caltech-UCSD Birds 200-2011 dataset (Wah et al., 2011). The dataset consists of nearly 6000 training images and 5700 test images, covering 200 species. We fine-tune VGG-16 for 60 epochs with learning rate 0.0001 to achieve a test accuracy of 72.2% using uncropped images. | 1611.06440#22 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 23 | To compute the oracle, we evaluate the change in loss caused by removing each individual feature map from the fine-tuned VGG-16 network. (See Appendix A.3 for additional analysis.) We rank feature maps by their contributions to the loss, where rank 1 indicates the most important feature map (removing it results in the highest increase in loss) and rank 4224 indicates the least important. Statistics of global ranks are shown in Fig. 2 grouped by convolutional layer. We observe: (1) Median global importance tends to decrease with depth. (2) Layers with max-pooling tend to be more important than those without. (VGG-16 has pooling after layers 2, 4, 7, 10, and 13.) However, (3) maximum and minimum ranks show that every layer has some feature maps that are globally important and others that are globally less important. Taken together with the results of subsequent experiments, we opt for encouraging a balanced pruning that distributes selection across all layers. | 1611.06440#23 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |
1611.06440 | 24 | Next, we iteratively prune the network using pre-computed oracle ranking. In this experiment, we do not update the parameters of the network or the oracle ranking between iterations. Training accuracy is illustrated in Fig. 3 over many pruning iterations. Surprisingly, pruning by smallest absolute change in loss (Oracle-abs) yields higher accuracy than pruning by the net effect on loss (Oracle-loss). Even though the oracle indicates that removing some feature maps individually may decrease loss, instability accumulates due to the large absolute changes that are induced. These results support pruning by absolute difference in cost, as constructed in Eq. 1.
# 3.2 EVALUATING PROPOSED CRITERIA VERSUS THE ORACLE
To evaluate computationally efficient criteria as substitutes for the oracle, we compute Spearman's rank correlation, an estimate of how well two predictors provide monotonically related outputs, | 1611.06440#24 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102) relaying only on the first order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach. | http://arxiv.org/pdf/1611.06440 | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | cs.LG, stat.ML | 17 pages, 14 figures, ICLR 2017 paper | null | cs.LG | 20161119 | 20170608 | [
{
"id": "1512.08571"
},
{
"id": "1607.03250"
},
{
"id": "1509.09308"
}
] |