1512.03385#76
Deep Residual Learning for Image Recognition
# C. ImageNet Localization The ImageNet Localization (LOC) task [36] requires classifying and localizing the objects. Following [40, 41], we assume that the image-level classifiers are first adopted for predicting the class labels of an image, and the localization algorithm only accounts for predicting bounding boxes based on the predicted classes. We adopt the "per-class regression" (PCR) strategy [40, 41], learning a bounding box regressor for each class. We pre-train the networks for ImageNet classification and then fine-tune them for localization.
1512.03385#75
1512.03385#77
1512.03385
[ "1505.00387" ]
1512.03385#77
Deep Residual Learning for Image Recognition
We train networks on the provided 1000-class ImageNet training set. Our localization algorithm is based on the RPN framework of [32] with a few modifications. Unlike the category-agnostic approach in [32], our RPN for localization is designed in a per-class form. This RPN ends with two sibling 1x1 convolutional layers for binary classification (cls) and box regression (reg), as in [32]. The cls and reg layers are both in a per-class form, in contrast to [32].
1512.03385#76
1512.03385#78
1512.03385
[ "1505.00387" ]
1512.03385#78
Deep Residual Learning for Image Recognition
Specifically, the cls layer has a 1000-d output, and each dimension is a binary logistic regression for predicting being or not being an object class; the reg layer has a 1000x4-d output consisting of box regressors for the 1000 classes. As in [32], our bounding box regression is with reference to multiple translation-invariant "anchor" boxes at each position. As in our ImageNet classification training (Sec. 3.4), we randomly sample 224x224 crops for data augmentation. We use a mini-batch size of 256 images for fi
1512.03385#77
1512.03385#79
1512.03385
[ "1505.00387" ]
1512.03385#79
Deep Residual Learning for Image Recognition
ne-tuning. To avoid negative samples being dominant, 8 anchors are randomly sampled for each image, where the sampled positive and negative anchors have a ratio of 1:1 [32]. For testing, the network is applied on the image fully-convolutionally. Table 13 compares the localization results. Following [41], we first perform "oracle" testing using the ground truth class as the classification prediction. VGG's paper [41] reports a center-crop error of 33.1% (Table 13) using ground truth classes. Under the same setting, our RPN method using ResNet-101 significantly reduces the center-crop error to 13.3%. This comparison demonstrates the excellent performance of our framework. With dense (fully convolutional) and multi-scale testing, our ResNet-101 has an error of 11.7% using ground truth classes. Using ResNet-101 for predicting classes (4.6% top-5 classification error, Table 4), the top-5 localization error is 14.4%.

Table 14. Comparisons of localization error (%) on the ImageNet dataset with state-of-the-art methods.

| method | top-5 localization err (val) | top-5 localization err (test) |
|---|---|---|
| OverFeat [40] (ILSVRC'13) | 30.0 | 29.9 |
| GoogLeNet [44] (ILSVRC'14) | - | 26.7 |
| VGG [41] (ILSVRC'14) | 26.9 | 25.3 |
| ours (ILSVRC'15) | 8.9 | 9.0 |

The above results are only based on the proposal network (RPN) in Faster R-CNN [32]. One may use the detection network (Fast R-CNN [7]) in Faster R-CNN to improve the results. But we notice that on this dataset, one image usually contains a single dominant object, and the proposal regions highly overlap with each other and thus have very similar RoI-pooled features. As a result, the image-centric training of Fast R-CNN [7] generates samples of small variations, which may not be desired for stochastic training. Motivated by this, in our current experiment we use the original R-CNN [8], which is RoI-centric, in place of Fast R-CNN.
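To make the per-class formulation above concrete, here is a minimal sketch (not the authors' code) of what the described RPN head could look like in PyTorch. The 1000-class and 4-coordinate output dimensions follow the text; the layer names, the backbone channel count, and the intermediate 3x3 convolution are assumptions, and the per-anchor replication is only noted in a comment.

```python
import torch.nn as nn

class PerClassRPNHead(nn.Module):
    """Per-class RPN head: two sibling 1x1 convolutions on shared features.
    cls predicts a binary object/not-object score per class at each position;
    reg predicts 4 box-regression offsets per class. In the full model these
    outputs would additionally be replicated once per anchor box."""
    def __init__(self, in_channels=1024, num_classes=1000):
        super().__init__()
        # hypothetical shared 3x3 conv, as in standard RPN heads
        self.conv = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # sibling 1x1 layers: per-class binary cls and per-class box reg
        self.cls = nn.Conv2d(in_channels, num_classes, 1)
        self.reg = nn.Conv2d(in_channels, num_classes * 4, 1)

    def forward(self, feat):
        h = self.relu(self.conv(feat))
        return self.cls(h), self.reg(h)
```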
1512.03385#78
1512.03385#80
1512.03385
[ "1505.00387" ]
1512.03385#80
Deep Residual Learning for Image Recognition
Our R-CNN implementation is as follows. We apply the per-class RPN trained as above on the training images to predict bounding boxes for the ground truth class. These predicted boxes play the role of class-dependent proposals. For each training image, the 200 highest-scoring proposals are extracted as training samples to train an R-CNN classifier. The image region is cropped from a proposal, warped to 224x224 pixels, and fed into the classification network as in R-CNN [8]. The outputs of this network consist of two sibling fc layers for cls and reg, also in a per-class form. This R-CNN network is fine-tuned on the training set using a mini-batch size of 256 in the RoI-centric fashion. For testing, the RPN generates the 200 highest-scoring proposals for each predicted class, and the R-CNN network is used to update these proposals'
1512.03385#79
1512.03385#81
1512.03385
[ "1505.00387" ]
1512.03385#81
Deep Residual Learning for Image Recognition
scores and box positions. This method reduces the top-5 localization error to 10.6% (Table 13). This is our single-model result on the validation set. Using an ensemble of networks for both classification and localization, we achieve a top-5 localization error of 9.0% on the test set. This number significantly outperforms the ILSVRC 14 results (Table 14), showing a 64% relative reduction of error. This result won the 1st place in the ImageNet localization task in ILSVRC 2015.
1512.03385#80
1512.03385
[ "1505.00387" ]
1512.02167#0
Simple Baseline for Visual Question Answering
arXiv:1512.02167v2 [cs.CV] 15 Dec 2015 # Simple Baseline for Visual Question Answering Bolei Zhou1, Yuandong Tian2, Sainbayar Sukhbaatar2, Arthur Szlam2, and Rob Fergus2 1Massachusetts Institute of Technology 2Facebook AI Research # Abstract
1512.02167#1
1512.02167
[ "1511.05234" ]
1512.02167#1
Simple Baseline for Visual Question Answering
We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strengths and weaknesses of the trained model, we also provide an interactive web demo1 and open-source code2. # Introduction Combining Natural Language Processing with Computer Vision for high-level scene interpretation is a recent trend, e.g., image captioning [10, 15, 7, 4].
1512.02167#0
1512.02167#2
1512.02167
[ "1511.05234" ]
1512.02167#2
Simple Baseline for Visual Question Answering
These works have benefited from the rapid development of deep learning for visual recognition (object recognition [8] and scene recognition [20]), and have been made possible by the emergence of large image datasets and text corpora (e.g., [9]). Beyond image captioning, a natural next step is visual question answering (QA) [12, 2, 5]. Compared with the image captioning task, in which an algorithm is required to generate a free-form text description for a given image, visual QA can involve a wider range of knowledge and reasoning skills. A captioning algorithm has the liberty to pick the easiest relevant descriptions of the image, whereas responding to a question needs to find the correct answer for *that* question. Furthermore, the algorithms for visual QA are required to answer all kinds of questions people might ask about the image, some of which might be relevant to the image contents, such as
1512.02167#1
1512.02167#3
1512.02167
[ "1511.05234" ]
1512.02167#3
Simple Baseline for Visual Question Answering
"what books are under the television" and "what is the color of the boat", while others might require knowledge or reasoning beyond the image content, such as "why is the baby crying?" and "which chair is the most expensive?". Building robust algorithms for visual QA that perform at near human levels would be an important step towards solving AI. Recently, several papers have appeared on arXiv (after the CVPR'16 submission deadline) proposing neural network architectures for visual question answering, such as [13, 17, 5, 18, 16, 3, 11, 1]. Some of them are derived from the image captioning framework, in which the output of a recurrent neural network (e.g., LSTM [16, 11, 1]) applied to the question sentence is concatenated with visual features from VGG or other CNNs to feed a classifier that predicts the answer. Other models integrate visual attention mechanisms [17, 13, 3] and visualize how the network learns to attend to the local image regions relevant to the content of the question. Interestingly, we notice that in one of the earliest VQA papers [12], the simple baseline Bag-of-words + image feature (referred to as the BOWIMG baseline) outperforms the LSTM-based models on a synthesized visual QA dataset built on top of the image captions of the COCO dataset [9]. For the recent, much larger COCO VQA dataset [2], the BOWIMG baseline performs worse than the LSTM-based models [2]. 1http://visualqa.csail.mit.edu 2https://github.com/metalbubble/VQAbaseline

[Figure 1 diagram: the one-hot question vector ("are these people family?") and the CNN image feature are concatenated and fed to a softmax over answers (yes:0.81, no:0.15, cafeteria:0.01, ...).]

Figure 1: Framework of the iBOWIMG. Features from the question sentence and image are concatenated then fed into softmax to predict the answer.

In this work, we carefully implement the BOWIMG baseline model. We call it iBOWIMG to avoid confusion with the implementation in [2]. With proper setup and training, this simple baseline model shows comparable performance to many recent recurrent network-based approaches for visual QA.
1512.02167#2
1512.02167#4
1512.02167
[ "1511.05234" ]
1512.02167#4
Simple Baseline for Visual Question Answering
Further analysis shows that the baseline learns to correlate the informative words in the question sentence and the visual concepts in the image with the answer. Furthermore, such correlations can be used to compute a reasonable spatial attention map with the help of the CAM technique proposed in [20]. The source code and the visual QA demo based on the trained model are publicly available. In the demo, the iBOWIMG baseline gives answers to any question relevant to the given images. Playing with the visual QA models interactively can reveal the strengths and weaknesses of the trained model. # iBOWIMG for Visual Question Answering In most of the recently proposed models, visual QA is simplified to a classification task: the number of different answers in the training set is the number of final classes the models need to learn to predict. The general pipeline of those models is that the word feature extracted from the question sentence is concatenated with the visual feature extracted from the image, then they are fed into a softmax layer to predict the answer class. The visual feature is usually taken from the top of the VGG network or GoogLeNet, while the word features of the question sentence are usually the popular LSTM-based features [12, 2]. In our iBOWIMG model, we simply use a naive bag-of-words as the text feature, and use the deep features from GoogLeNet [14] as the visual features. Figure 1 shows the framework of the iBOWIMG model, which can be implemented in Torch with no more than 10 lines of code.
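The paper's implementation is in Torch; as an illustration only, here is a hedged PyTorch-style sketch of the same idea: a bag-of-words question feature concatenated with a precomputed CNN image feature, followed by a single softmax classifier. The vocabulary size and answer count appear later in the text; the embedding size, the 1024-d image feature, and all names are assumptions, not the authors' exact settings.

```python
import torch
import torch.nn as nn

class IBowImg(nn.Module):
    """Bag-of-words question feature + CNN image feature -> answer logits."""
    def __init__(self, vocab_size=5746, embed_dim=256,
                 img_dim=1024, num_answers=5216):
        super().__init__()
        # word embedding summed over the question words = bag-of-words feature
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim, mode="sum")
        # one linear layer on the concatenation = multi-class logistic regression
        self.classifier = nn.Linear(embed_dim + img_dim, num_answers)

    def forward(self, question_word_ids, img_feat):
        # question_word_ids: LongTensor (B, num_words) of word indices
        # img_feat: precomputed CNN (e.g. GoogLeNet) feature, shape (B, img_dim)
        q_feat = self.embed(question_word_ids)        # (B, embed_dim)
        fused = torch.cat([q_feat, img_feat], dim=1)  # concatenated feature
        return self.classifier(fused)                 # answer class logits
```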
1512.02167#3
1512.02167#5
1512.02167
[ "1511.05234" ]
1512.02167#5
Simple Baseline for Visual Question Answering
The input question is first converted to a one-hot vector, which is transformed to a word feature via a word embedding layer and then concatenated with the image feature from the CNN. The combined feature is sent to the softmax layer to predict the answer class, which essentially is a multi-class logistic regression model. # 3 Experiments Here we train and evaluate the iBOWIMG model on the full release of the COCO VQA dataset [2], the largest VQA dataset so far. In the COCO VQA dataset, there are 3 questions annotated by Amazon Mechanical Turk (AMT) workers for each image in the COCO dataset. For each question, 10 answers are annotated by another batch of AMT workers. To pre-process the annotation for training, we perform majority voting on the 10 ground-truth answers to get the most certain answer
1512.02167#4
1512.02167#6
1512.02167
[ "1511.05234" ]
1512.02167#6
Simple Baseline for Visual Question Answering
for each question. Here the answer could be a single word or multiple words. Then we have the 3 question-answer pairs from each image for training. There are in total 248,349 pairs in train2014 and 121,512 pairs in val2014, for 123,287 images overall in the training set. Here train2014 and val2014 are the standard splits of the image set in the COCO dataset. To generate the training set and validation set for our model, we first randomly split the images of COCO val2014 into a 70% subset A and a 30% subset B. To avoid potential overfitting, questions sharing the same image are placed into the same split.

Table 1: Performance comparison on test-dev.

| Method | Open-Ended Overall | yes/no | number | others | Multiple-Choice Overall | yes/no | number | others |
|---|---|---|---|---|---|---|---|---|
| IMG [2] | 28.13 | 64.01 | 00.42 | 03.77 | 30.53 | 69.87 | 00.45 | 03.76 |
| BOW [2] | 48.09 | 75.66 | 36.70 | 27.14 | 53.68 | 75.71 | 37.05 | 38.64 |
| BOWIMG [2] | 52.64 | 75.55 | 33.67 | 37.37 | 58.97 | 75.59 | 34.35 | 50.33 |
| LSTMIMG [2] | 53.74 | 78.94 | 35.24 | 36.42 | 57.17 | 78.95 | 35.80 | 43.41 |
| CompMem [6] | 52.62 | 78.33 | 35.93 | 34.46 | - | - | - | - |
| NMN+LSTM [1] | 54.80 | 77.70 | 37.20 | 39.30 | - | - | - | - |
| WR Sel. [13] | - | - | - | - | 60.96 | - | - | - |
| ACK [16] | 55.72 | 79.23 | 36.13 | 40.08 | - | - | - | - |
| DPPnet [11] | 57.22 | 80.71 | 37.24 | 41.69 | 62.48 | 80.79 | 38.94 | 52.16 |
| iBOWIMG | 55.72 | 76.55 | 35.03 | 42.62 | 61.68 | 76.68 | 37.05 | 54.44 |
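As a concrete illustration of the pre-processing described just before Table 1 (majority voting over the 10 annotated answers, and a 70/30 split of val2014 done at the image level so questions sharing an image stay together), here is a hedged sketch; the function names, the seed, and the data layout are assumptions rather than the released code.

```python
import random
from collections import Counter

def majority_answer(answers):
    """Pick the most frequent of the 10 annotated answers for a question."""
    return Counter(answers).most_common(1)[0][0]

def split_by_image(image_ids, ratio=0.7, seed=0):
    """70/30 split of val2014 *images* (not questions), so that all questions
    sharing an image land in the same subset."""
    ids = sorted(set(image_ids))
    random.Random(seed).shuffle(ids)
    cut = int(ratio * len(ids))
    subset_a = set(ids[:cut])   # combined with train2014 for training
    subset_b = set(ids[cut:])   # held out for parameter tuning
    return subset_a, subset_b
```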
1512.02167#5
1512.02167#7
1512.02167
[ "1511.05234" ]
1512.02167#7
Simple Baseline for Visual Question Answering
The question-answer pairs from the images of COCO train2014 + val2014 subset A are combined and used for training, while val2014 subset B is used as the validation set for parameter tuning. After we find the best model parameters, we combine the whole of train2014 and val2014 to train the final model. We submit the prediction results given by the final model on the testing set (COCO test2015) to the evaluation server, to get the final accuracy on the test-dev and test-standard sets. For the Open-Ended Question track, we take the top-1 predicted answer from the softmax output. For the Multiple-Choice Question track, we first get the softmax probability for each of the given choices and then select the most confi
1512.02167#6
1512.02167#8
1512.02167
[ "1511.05234" ]
1512.02167#8
Simple Baseline for Visual Question Answering
dent one. The code is implemented in Torch. The training takes about 10 hours on a single NVIDIA Titan Black GPU. # 3.1 Benchmark Performance According to the evaluation standard of the VQA dataset, any proposed VQA model should report accuracy on the test-standard set for fair comparison. We report our baseline on the test-dev set in Table 1 and the test-standard set in Table 2. The test-dev set is used for debugging and validation experiments and allows for unlimited submissions to the evaluation server, while test-standard is used for model comparison with limited submission times. Since this VQA dataset is rather new, the publicly available models evaluated on the dataset are all from non-peer-reviewed arXiv papers. We include the performance of the models available at the time of writing (Dec. 5, 2015) [2, 6, 1, 13, 16, 11]. Note that some models are evaluated only on either test-dev or test-standard, and for either the Open-Ended or Multiple-Choice track. The full set of the VQA dataset was released on Oct. 6, 2015; previously the v0.1 and v0.9 versions had been released. We notice that some models are evaluated using non-standard setups, rendering performance comparisons difficult. [17] (arXiv dated Nov. 17, 2015) used the v0.9 version of VQA with their own split of training and testing; [18] (arXiv dated Nov. 7, 2015) used their own split of training and testing for val2014; [3] (arXiv dated Nov. 18, 2015) used the v0.9 version of the VQA dataset. So these are not included in the comparison. Except for the IMG, BOW, and BOWIMG baselines provided in [2], all the compared methods use either deep or recursive neural networks. However, our iBOWIMG baseline shows comparable performance to these much more complex models, except for DPPnet [11], which is about 1.5% better.
1512.02167#7
1512.02167#9
1512.02167
[ "1511.05234" ]
1512.02167#9
Simple Baseline for Visual Question Answering
Table 2: Performance comparison on test-standard.

| Method | Open-Ended Overall | yes/no | number | others | Multiple-Choice Overall | yes/no | number | others |
|---|---|---|---|---|---|---|---|---|
| LSTMIMG [2] | 54.06 | - | - | - | - | - | - | - |
| NMN+LSTM [1] | 55.10 | - | - | - | - | - | - | - |
| ACK [16] | 55.98 | 79.05 | 36.10 | 40.61 | - | - | - | - |
| DPPnet [11] | 57.36 | 80.28 | 36.92 | 42.24 | 62.69 | 80.35 | 38.79 | 52.79 |
| iBOWIMG | 55.89 | 76.76 | 34.98 | 42.62 | 61.97 | 76.86 | 37.30 | 54.60 |

# 3.2 Training Details Learning rate and weight clip.
1512.02167#8
1512.02167#10
1512.02167
[ "1511.05234" ]
1512.02167#10
Simple Baseline for Visual Question Answering
We find that setting up a different learning rate and weight clipping for the word embedding layer and the softmax layer leads to better performance. The learning rate for the word embedding layer should be much higher than the learning rate of the softmax layer in order to learn a good word embedding. From the performance of BOW in Table 1, we can see that a good word model is crucial to the accuracy, as the BOW model alone achieves close to 48%, even without looking at the image content. Model parameters to tune. Though our model could be considered the simplest baseline so far for visual QA, there are several model parameters to tune: 1) the number of epochs to train; 2) the learning rate and weight clip; 3) the threshold for removing less frequent question words and answer classes. We iteratively search for the best value of each model parameter separately on the val2014 subset B. In our best model, there are 5,746 words in the question dictionary and 5,216 answer classes. The specific model parameters can be found in the source code.
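The exact hyper-parameter values live in the released source; the snippet below only illustrates the idea of giving the word-embedding layer a much higher learning rate than the softmax layer, together with per-layer weight clipping. It is PyTorch-style pseudocode reusing the earlier IBowImg sketch, and all concrete numbers are placeholders, not the paper's values.

```python
import torch

model = IBowImg()  # from the earlier sketch

# separate parameter groups: large LR for the word embedding,
# small LR for the softmax (linear classifier) layer
optimizer = torch.optim.SGD([
    {"params": model.embed.parameters(),      "lr": 0.8},   # placeholder
    {"params": model.classifier.parameters(), "lr": 0.01},  # placeholder
], momentum=0.9)

def clip_weights(module, bound):
    """Simple per-layer weight clipping applied after each update."""
    with torch.no_grad():
        for p in module.parameters():
            p.clamp_(-bound, bound)

# after each optimizer.step():
clip_weights(model.embed, bound=20.0)       # placeholder clip values
clip_weights(model.classifier, bound=2.0)
```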
1512.02167#9
1512.02167#11
1512.02167
[ "1511.05234" ]
1512.02167#11
Simple Baseline for Visual Question Answering
The speciï¬ c model parameters can be found in the source code. # 3.3 Understanding the Visual QA model From the comparisons above, we can see that our baseline model performs as well as the recurrent neural network models on the VQA dataset. Furthermore, due to its simplicity, the behavior of the model could be easily interpreted, demonstrating what it learned for visual QA. Essentially, the BOWIMG baseline model learns to memorize the correlation between the answer class and the informative words in the question sentence along with the visual feature. We split the learned weights of softmax into two parts, one part for the word feature and the other part for the visual feature.
1512.02167#10
1512.02167#12
1512.02167
[ "1511.05234" ]
1512.02167#12
Simple Baseline for Visual Question Answering
Therefore, r = M_w x_w + M_v x_v. (1) Here the softmax matrix M is decomposed into the weights M_w for the word feature x_w and the weights M_v for the visual feature x_v, where M = [M_w, M_v]. r is the response of the answer class before softmax normalization. Denote the response r_w = M_w x_w as the contribution from the question words and r_v = M_v x_v as the contribution from the image contents. Thus for each predicted answer, we know exactly the proportions of the contribution from the words and the image content respectively. We can also rank r_w and r_v to know what the predicted answer would be if the model relied on only one side of the information. Figure 2 shows some examples of the predictions, revealing that the question words usually have a dominant influence on predicting the answer. For example, the correctly predicted answers for the two questions given for the fi
1512.02167#11
1512.02167#13
1512.02167
[ "1511.05234" ]
1512.02167#13
Simple Baseline for Visual Question Answering
rst image, "what is the color of the sofa" and "which brand is the laptop", come mostly from the question words, without the need for the image. This demonstrates the bias in the frequency of objects and actions appearing in the images of the COCO dataset. For the second image, we ask "what are they doing": the words-only prediction gives "playing wii (10.62), eating (9.97), playing frisbee (9.24)", while the full prediction gives the correct answer "playing baseball (10.67 = 2.01 [image] + 8.66 [word])".
1512.02167#12
1512.02167#14
1512.02167
[ "1511.05234" ]
1512.02167#14
Simple Baseline for Visual Question Answering
To further understand the answers predicted by the model given the visual feature and question sentence, we first decompose the word contribution of the answer into the single words of the question sentence, and then we visualize the informative image regions relevant to the answer through the technique proposed in [19].

[Figure 2 panels: for each image, two questions with the top-3 predicted answers, each score split into image and word contributions, e.g. "what is the color of the sofa" -> brown (12.89 = 1.01 [image] + 11.88 [word]); "which brand is the laptop" -> apple (10.87 = 1.10 [image] + 9.77 [word]); "what are they doing" -> playing baseball (10.67 = 2.01 [image] + 8.66 [word]); "are they having fun" -> yes (10.65 = 3.98 [image] + 6.68 [word]).]
1512.02167#13
1512.02167#15
1512.02167
[ "1511.05234" ]
1512.02167#15
Simple Baseline for Visual Question Answering
Figure 2: Examples of visual question answering from the iBOWIMG baseline. For each image there are two questions and the top 3 predicted answers from the model. The prediction score of each answer is decomposed into the contributions of image and words respectively. The predicted answers which rely purely on question words or on the image are also shown.

[Figure 3 panels: question, prediction with its image/word split, and ranked word importance, e.g. "What are they doing?" -> texting (12.02 = 3.78 [image] + 8.24 [word]), word importance: doing (7.01), are (1.05), they (0.49), what (-0.3); "What is he eating?" -> hot dog (13.01 = 5.02 [image] + 7.99 [word]), word importance: eating (4.12), what (2.81), is (0.74), he (0.30).]

Figure 3: Examples of the word importance of question sentences and the informative image regions relevant to the predicted answers.

Since there are just two linear transformations (one is the word embedding and the other is the softmax matrix multiplication) from the one-hot vector to the answer response, we can easily know the importance of each single word in the question to the predicted answer. In Figure 3, we plot the ranked word importance for each word in the question sentence.
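Because the model is a single linear layer on the concatenated feature, the decomposition r = M_w x_w + M_v x_v and the per-word importance scores described above can be reproduced in a few lines of NumPy. This is an illustrative sketch with assumed variable names and shapes, not code from the paper.

```python
import numpy as np

def decompose_scores(M, x_word, x_img):
    """Split answer scores into word and image contributions.
    M: softmax weight matrix of shape (num_answers, d_word + d_img)."""
    d_word = x_word.shape[0]
    M_w, M_v = M[:, :d_word], M[:, d_word:]
    r_word = M_w @ x_word        # contribution from the question words
    r_img = M_v @ x_img          # contribution from the image feature
    return r_word + r_img, r_word, r_img

def word_importance(M_w, W_embed, word_ids, answer_id):
    """Per-word contribution to one answer: since the bag-of-words feature is
    a sum of word embeddings, each word's share is a single dot product."""
    return {w: float(M_w[answer_id] @ W_embed[w]) for w in word_ids}
```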
1512.02167#14
1512.02167#16
1512.02167
[ "1511.05234" ]
1512.02167#16
Simple Baseline for Visual Question Answering
In the first image, the question word "doing" is informative for the answer "texting", while in the second image the question word "eating" is informative for the answer "hot dog". To highlight the informative image regions relevant to the predicted answer, we apply a technique called Class Activation Mapping (CAM) proposed in [19]. The CAM technique leverages the linear relation between the softmax prediction and the final convolutional feature map, which allows us to identify the most discriminative image regions relevant to the predicted result. In Figure 3 we plot the heatmaps generated by the CAM associated with the predicted answer, which highlight the
1512.02167#15
1512.02167#17
1512.02167
[ "1511.05234" ]
1512.02167#17
Simple Baseline for Visual Question Answering
[Figure 4 panels: two questions typed into the demo with their top-3 predictions, each decomposed into image and word contributions, e.g. flying kites (12.86 = 1.64 [image] + 11.22 [word]); "where is the place" -> field (10.63 = 3.05 [image] + 7.58 [word]), park (9.69 = 2.96 [image] + 6.73 [word]).]

Figure 4: Snapshot of the visual question answering demo. People could type questions into the demo and the demo will give answer predictions. Here we show the answer predictions for two questions.
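For completeness, here is a hedged sketch of the CAM computation referenced above: a class activation map is a weighted sum of the final convolutional feature maps, with weights taken from the classifier row of the predicted answer (following the general formulation of [19]). The tensor shapes, the normalization, and the upsampling step are assumptions for illustration.

```python
import numpy as np

def class_activation_map(feature_maps, classifier_weights, class_id):
    """feature_maps: (C, H, W) output of the last conv layer.
    classifier_weights: (num_classes, C) weights applied after global
    average pooling. Returns an (H, W) heatmap for `class_id`."""
    w = classifier_weights[class_id]                   # (C,)
    cam = np.tensordot(w, feature_maps, axes=(0, 0))   # weighted sum -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam  # upsample (e.g. bilinear) to image size for visualization
```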
1512.02167#16
1512.02167#18
1512.02167
[ "1511.05234" ]
1512.02167#18
Simple Baseline for Visual Question Answering
informative image regions such as the cellphone in the first image to the answer "texting" and the hot dog in the first image to the answer "hot dog". The example in the lower part of Figure 3 shows the heatmaps generated by two different questions and answers. Visual features from the CNN already have implicit attention and selectivity over the image regions, thus the resulting class activation maps are similar to the maps generated by the attention mechanisms of the VQA models in [13, 17, 18]. # Interactive Visual QA Demo Question answering is essentially an interactive activity, thus it would be good to make the trained models able to interact with people in real time. Aided by the simplicity of the baseline model, we built a web demo where people can type a question about a given image and our AI system powered by iBOWIMG will reply with the most probable answers. Here the deep features of the images are extracted beforehand. Figure 4 shows a snapshot of the demo. People can play with the demo to see the strengths and weaknesses of the VQA model. # 5 Concluding Remarks For visual question answering on the COCO dataset, our implementation of a simple baseline achieves comparable performance to several recently proposed recurrent neural network-based approaches. To reach the correct prediction, the baseline captures the correlation between the informative words in the question and the answer, and that between the image contents and the answer. How to move beyond this, from memorizing correlations to actual reasoning and understanding of the question and image, is a goal for future research.
1512.02167#17
1512.02167#19
1512.02167
[ "1511.05234" ]
1512.02167#19
Simple Baseline for Visual Question Answering
# References [1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Deep compositional question answering with neural module networks. arXiv preprint arXiv:1511.02799, 2015. [2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. Vqa: Visual question answering. arXiv preprint arXiv:1505.00468, 2015. [3] K. Chen, J. Wang, L.-C. Chen, H. Gao, W. Xu, and R.
1512.02167#18
1512.02167#20
1512.02167
[ "1511.05234" ]
1512.02167#20
Simple Baseline for Visual Question Answering
Nevatia. Abc-cnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960, 2015. [4] J. Devlin, S. Gupta, R. Girshick, M. Mitchell, and C. L. Zitnick. Exploring nearest neighbor approaches for image captioning. arXiv preprint arXiv:1505.04467, 2015.
1512.02167#19
1512.02167#21
1512.02167
[ "1511.05234" ]
1512.02167#21
Simple Baseline for Visual Question Answering
[5] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612, 2015. [6] A. Jiang, F. Wang, F. Porikli, and Y. Li. Compositional memory for visual question answering. arXiv preprint arXiv:1511.05676, 2015. [7] R. Kiros, R. Salakhutdinov, and R.
1512.02167#20
1512.02167#22
1512.02167
[ "1511.05234" ]
1512.02167#22
Simple Baseline for Visual Question Answering
Zemel. Multimodal neural language models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 595-603, 2014. [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012. [9] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755. Springer, 2014. [10] J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille.
1512.02167#21
1512.02167#23
1512.02167
[ "1511.05234" ]
1512.02167#23
Simple Baseline for Visual Question Answering
Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014. [11] H. Noh, P. H. Seo, and B. Han. Image question answering using convolutional neural network with dynamic parameter prediction. arXiv preprint arXiv:1511.05756, 2015. [12] M. Ren, R. Kiros, and R. Zemel.
1512.02167#22
1512.02167#24
1512.02167
[ "1511.05234" ]
1512.02167#24
Simple Baseline for Visual Question Answering
Exploring models and data for image question answering. In NIPS, volume 1, page 3, 2015. [13] K. J. Shih, S. Singh, and D. Hoiem. Where to look: Focus regions for visual question answering. arXiv preprint arXiv:1511.07394, 2015. [14] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. [15] O. Vinyals, A. Toshev, S. Bengio, and D.
1512.02167#23
1512.02167#25
1512.02167
[ "1511.05234" ]
1512.02167#25
Simple Baseline for Visual Question Answering
Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014. [16] Q. Wu, P. Wang, C. Shen, A. v. d. Hengel, and A. Dick. Ask me anything: Free-form visual question answering based on knowledge from external sources. arXiv preprint arXiv:1511.06973, 2015.
1512.02167#24
1512.02167#26
1512.02167
[ "1511.05234" ]
1512.02167#26
Simple Baseline for Visual Question Answering
[17] H. Xu and K. Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. arXiv preprint arXiv:1511.05234, 2015. [18] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015. [19] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba.
1512.02167#25
1512.02167#27
1512.02167
[ "1511.05234" ]
1512.02167#27
Simple Baseline for Visual Question Answering
Learning deep features for discriminative localization. arXiv preprint arXiv:1512.04150, 2015. [20] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems, pages 487-495, 2014.
1512.02167#26
1512.02167
[ "1511.05234" ]
1512.00567#0
Rethinking the Inception Architecture for Computer Vision
arXiv:1512.00567v3 [cs.CV] 11 Dec 2015 # Rethinking the Inception Architecture for Computer Vision # Christian Szegedy Google Inc. [email protected] # Vincent Vanhoucke [email protected] # Sergey Ioffe [email protected] Jonathon Shlens [email protected] # Zbigniew Wojna University College London [email protected] # Abstract
1512.00567#1
1512.00567
[ "1502.01852" ]
1512.00567#1
Rethinking the Inception Architecture for Computer Vision
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error. # 1. Introduction Since the 2012 ImageNet competition [16] winning entry by Krizhevsky et al [9], their network "AlexNet" has been successfully applied to a larger variety of computer vision tasks, for example to object detection [5], segmentation [12], human pose estimation [22], video classification [8], object tracking [23], and super-resolution [3]. These successes spurred a new line of research that focused on finding higher performing convolutional neural networks. Starting in 2014, the quality of network architectures significantly improved by utilizing deeper and wider networks. VGGNet [18] and GoogLeNet [20] yielded similarly high performance in the 2014 ILSVRC [16] classification challenge. One interesting observation was that gains in classification performance tend to transfer to significant quality gains in a wide variety of application domains. This means that architectural improvements in deep convolutional architectures can be utilized to improve performance for most other computer vision tasks that are increasingly reliant on high quality, learned visual features. Also, improvements in network quality resulted in new application domains for convolutional networks in cases where AlexNet features could not compete with hand-engineered, crafted solutions, e.g. proposal generation in detection [4]. Although VGGNet [18] has the compelling feature of architectural simplicity, this comes at a high cost: evaluating the network requires a lot of computation. On the other hand, the Inception architecture of GoogLeNet [20] was also designed to perform well even under strict constraints on memory and computational budget.
1512.00567#0
1512.00567#2
1512.00567
[ "1502.01852" ]
1512.00567#2
Rethinking the Inception Architecture for Computer Vision
For example, GoogLeNet employed only 5 million parameters, which represented a 12x reduction with respect to its predecessor AlexNet, which used 60 million parameters. Furthermore, VGGNet employed about 3x more parameters than AlexNet. The computational cost of Inception is also much lower than VGGNet or its higher performing successors [6]. This has made it feasible to utilize Inception networks in big-data scenarios [17], [13], where huge amounts of data need to be processed at reasonable cost, or in scenarios where memory or computational capacity is inherently limited, for example in mobile vision settings. It is certainly possible to mitigate parts of these issues by applying specialized solutions to target memory use [2], [15] or by optimizing the execution of certain operations via computational tricks [10]. However, these methods add extra complexity. Furthermore, these methods could be applied to optimize the Inception architecture as well, widening the efficiency gap again. Still, the complexity of the Inception architecture makes it more difficult to make changes to the network. If the architecture is scaled up naively, large parts of the computational gains can be immediately lost. Also, [20] does not provide a clear description of the contributing factors that lead to the various design decisions of the GoogLeNet architecture. This makes it much harder to adapt it to new use-cases while maintaining its effi
1512.00567#1
1512.00567#3
1512.00567
[ "1502.01852" ]
1512.00567#3
Rethinking the Inception Architecture for Computer Vision
ciency. For example, if it is deemed necessary to increase the capacity of some Inception-style model, the simple transformation of just doubling the number of all filter bank sizes will lead to a 4x increase in both computational cost and number of parameters. This might prove prohibitive or unreasonable in a lot of practical scenarios, especially if the associated gains are modest. In this paper, we start by describing a few general principles and optimization ideas that proved to be useful for scaling up convolution networks in efficient ways. Although our principles are not limited to Inception-type networks, they are easier to observe in that context, as the generic structure of the Inception-style building blocks is flexible enough to incorporate those constraints naturally. This is enabled by the generous use of dimensional reduction and parallel structures of the Inception modules, which allows for mitigating the impact of structural changes on nearby components. Still, one needs to be cautious about doing so, as some guiding principles should be observed to maintain high quality of the models. # 2. General Design Principles Here we will describe a few design principles based on large-scale experimentation with various architectural choices for convolutional networks. At this point, the utility of the principles below is speculative and additional future experimental evidence will be necessary to assess their accuracy and domain of validity. Still, grave deviations from these principles tended to result in deterioration in the quality of the networks, and fixing situations where those deviations were detected resulted in improved architectures in general.
1512.00567#2
1512.00567#4
1512.00567
[ "1502.01852" ]
1512.00567#4
Rethinking the Inception Architecture for Computer Vision
1. Avoid representational bottlenecks, especially early in the network. Feed-forward networks can be represented by an acyclic graph from the input layer(s) to the classifier or regressor. This defines a clear direction for the information flow. For any cut separating the inputs from the outputs, one can access the amount of information passing through the cut. One should avoid bottlenecks with extreme compression. In general the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand. Theoretically, information content cannot be assessed merely by the dimensionality of the representation, as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.
1512.00567#3
1512.00567#5
1512.00567
[ "1502.01852" ]
1512.00567#5
Rethinking the Inception Architecture for Computer Vision
2. Higher dimensional representations are easier to process locally within a network. Increasing the activations per tile in a convolutional network allows for more disentangled features. The resulting networks will train faster. 3. Spatial aggregation can be done over lower dimensional embeddings without much or any loss in representational power. For example, before performing a more spread out (e.g. 3 x 3) convolution, one can reduce the dimension of the input representation before the spatial aggregation without expecting serious adverse effects. We hypothesize that the reason for this is that the strong correlation between adjacent units results in much less loss of information during dimension reduction, if the outputs are used in a spatial aggregation context. Given that these signals should be easily compressible, the dimension reduction even promotes faster learning.
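As a rough illustration of principle 3, the sketch below compares the multiply-add cost of a direct 3 x 3 convolution with that of a 1 x 1 channel reduction followed by the 3 x 3 convolution. The grid size and channel numbers are arbitrary examples, not values taken from the paper.

```python
def conv_cost(h, w, c_in, c_out, k):
    """Approximate multiply-adds of a k x k convolution on an h x w grid."""
    return h * w * c_in * c_out * k * k

h = w = 17
direct = conv_cost(h, w, 320, 320, 3)            # 3x3 directly on full width
reduced = (conv_cost(h, w, 320, 128, 1)          # 1x1 dimension reduction
           + conv_cost(h, w, 128, 320, 3))       # then 3x3 spatial aggregation
print(direct, reduced, reduced / direct)         # reduced cost is well under half
```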
1512.00567#4
1512.00567#6
1512.00567
[ "1502.01852" ]
1512.00567#6
Rethinking the Inception Architecture for Computer Vision
4. Balance the width and depth of the network. Optimal performance of the network can be reached by balancing the number of filters per stage and the depth of the network. Increasing both the width and the depth of the network can contribute to higher quality networks. However, the optimal improvement for a constant amount of computation can be reached if both are increased in parallel. The computational budget should therefore be distributed in a balanced way between the depth and width of the network. Although these principles might make sense, it is not straightforward to use them to improve the quality of networks out of the box. The idea is to use them judiciously in ambiguous situations only. # 3.
1512.00567#5
1512.00567#7
1512.00567
[ "1502.01852" ]
1512.00567#7
Rethinking the Inception Architecture for Computer Vision
Factorizing Convolutions with Large Filter Size Much of the original gains of the GoogLeNet network [20] arise from a very generous use of dimension reduction. This can be viewed as a special case of factorizing convolutions in a computationally efficient manner. Consider for example the case of a 1 x 1 convolutional layer followed by a 3 x 3 convolutional layer. In a vision network, it is expected that the outputs of nearby activations are highly correlated. Therefore, we can expect that their activations can be reduced before aggregation and that this should result in similarly expressive local representations. Here we explore other ways of factorizing convolutions in various settings, especially in order to increase the computational effi
1512.00567#6
1512.00567#8
1512.00567
[ "1502.01852" ]
1512.00567#8
Rethinking the Inception Architecture for Computer Vision
ciency of the solution. Since Inception networks are fully convolutional, each weight corresponds to one multiplication per activation. Therefore, any reduction in computational cost results in a reduced number of parameters. This means that with suitable factorization, we can end up with more disentangled parameters and therefore with faster training. Also, we can use the computational and memory savings to increase the filter-bank sizes of our network while maintaining our ability to train each model replica on a single computer.

Figure 1. Mini-network replacing the 5 x 5 convolutions.

# 3.1. Factorization into smaller convolutions Convolutions with larger spatial filters (e.g. 5 x 5 or 7 x 7) tend to be disproportionally expensive in terms of computation. For example, a 5 x 5 convolution with n filters over a grid with m filters is 25/9 = 2.78 times more computationally expensive than a 3 x 3 convolution with the same number of filters. Of course, a 5 x 5 fi
1512.00567#7
1512.00567#9
1512.00567
[ "1502.01852" ]
1512.00567#9
Rethinking the Inception Architecture for Computer Vision
lter can capture dependencies between signals between activations of units further away in the earlier layers, so a reduction of the geometric size of the filters comes at a large cost of expressiveness. However, we can ask whether a 5 x 5 convolution could be replaced by a multi-layer network with fewer parameters with the same input size and output depth. If we zoom into the computation graph of the 5 x 5 convolution, we see that each output looks like a small fully-connected network sliding over 5 x 5 tiles over its input (see Figure 1). Since we are constructing a vision network, it seems natural to exploit translation invariance again and replace the fully connected component by a two-layer convolutional architecture: the first layer is a 3 x 3 convolution, the second is a fully connected layer on top of the 3 x 3 output grid of the first layer (see Figure 1). Sliding this small network over the input activation grid boils down to replacing the 5 x 5 convolution with two layers of 3 x 3 convolution (compare Figure 4 with 5). This setup clearly reduces the parameter count by sharing the weights between adjacent tiles. To analyze the ex-
1512.00567#8
1512.00567#10
1512.00567
[ "1502.01852" ]
1512.00567#10
Rethinking the Inception Architecture for Computer Vision
To analyze the ex- Figure 2. One of several control experiments between two Incep- tion models, one of them uses factorization into linear + ReLU layers, the other uses two ReLU layers. After 3.86 million opera- tions, the former settles at 76.2%, while the latter reaches 77.2% top-1 Accuracy on the validation set. pected computational cost savings, we will make a few sim- plifying assumptions that apply for the typical situations: We can assume that n = αm, that is that we want to change the number of activations/unit by a constant alpha factor. Since the 5 à 5 convolution is aggregating, α is typically slightly larger than one (around 1.5 in the case of GoogLeNet). Having a two layer replacement for the 5 à 5 layer, it seems reasonable to reach this expansion in α in both two steps: increasing the number of ï¬ lters by steps. In order to simplify our estimate by choosing α = 1 (no expansion), If we would naivly slide a network without reusing the computation between neighboring grid tiles, we would increase the computational cost. sliding this network can be represented by two 3 à 3 convolutional layers which reuses the activations between adjacent tiles. This way, we end up with a net 9+9 25 à reduction of computation, resulting in a relative gain of 28% by this factorization. The exact same saving holds for the parameter count as each parame- ter is used exactly once in the computation of the activation of each unit.
1512.00567#9
1512.00567#11
1512.00567
[ "1502.01852" ]
1512.00567#11
Rethinking the Inception Architecture for Computer Vision
Still, this setup raises two general questions: Does this replacement result in any loss of expressiveness? If our main goal is to factorize the linear part of the compu- tation, would it not suggest to keep linear activations in the ï¬ rst layer? We have ran several control experiments (for ex- ample see ï¬ gure 2) and using linear activation was always inferior to using rectiï¬ ed linear units in all stages of the fac- torization. We attribute this gain to the enhanced space of variations that the network can learn especially if we batch- normalize [7] the output activations. One can see similar effects when using linear activations for the dimension re- duction components.
1512.00567#10
1512.00567#12
1512.00567
[ "1502.01852" ]
1512.00567#12
Rethinking the Inception Architecture for Computer Vision
# 3.2. Spatial Factorization into Asymmetric Convo- lutions The above results suggest that convolutions with ï¬ lters larger 3 à 3 a might not be generally useful as they can always be reduced into a sequence of 3 à 3 convolutional Figure 3. Mini-network replacing the 3 à 3 convolutions. The lower layer of this network consists of a 3 à 1 convolution with 3 output units. Filter Concat Figure 4. Original Inception module as described in [20]. layers.
1512.00567#11
1512.00567#13
1512.00567
[ "1502.01852" ]
1512.00567#13
Rethinking the Inception Architecture for Computer Vision
Still we can ask the question whether one should factorize them into smaller, for example 2 à 2 convolutions. However, it turns out that one can do even better than 2 à 2 by using asymmetric convolutions, e.g. n à 1. For example using a 3 à 1 convolution followed by a 1 à 3 convolution is equivalent to sliding a two layer network with the same receptive ï¬ eld as in a 3 à 3 convolution (see ï¬ gure 3). Still the two-layer solution is 33% cheaper for the same number of output ï¬ lters, if the number of input and output ï¬ lters is equal. By comparison, factorizing a 3 à 3 convolution into a two 2 à 2 convolution represents only a 11% saving of computation. In theory, we could go even further and argue that one can replace any n à n convolution by a 1 à n convolu- Filter Concat Figure 5. Inception modules where each 5 à 5 convolution is re- placed by two 3 à 3 convolution, as suggested by principle 3 of Section 2. tion followed by a n à 1 convolution and the computational cost saving increases dramatically as n grows (see ï¬ gure 6). In practice, we have found that employing this factorization does not work well on early layers, but it gives very good re- sults on medium grid-sizes (On m à m feature maps, where m ranges between 12 and 20). On that level, very good re- sults can be achieved by using 1 à 7 convolutions followed by 7 à 1 convolutions. # 4. Utility of Auxiliary Classiï¬ ers [20] has introduced the notion of auxiliary classiï¬ ers to improve the convergence of very deep networks. The origi- nal motivation was to push useful gradients to the lower lay- ers to make them immediately useful and improve the con- vergence during training by combating the vanishing gra- dient problem in very deep networks.
1512.00567#12
1512.00567#14
1512.00567
[ "1502.01852" ]
1512.00567#14
Rethinking the Inception Architecture for Computer Vision
Also Lee et al. [11] argue that auxiliary classifiers promote more stable learning and better convergence. Interestingly, we found that auxiliary classifiers did not result in improved convergence early in the training: the training progression of the network with and without the side head looks virtually identical before both models reach high accuracy. Near the end of training, the network with the auxiliary branches starts to overtake the accuracy of the network without any auxiliary branch and reaches a slightly higher plateau. Also, [20] used two side-heads at different stages in the network. The removal of the lower auxiliary branch did not have any adverse effect on the final quality of the network. Together with the earlier observation in the previous para-
1512.00567#13
1512.00567#15
1512.00567
[ "1502.01852" ]
1512.00567#15
Rethinking the Inception Architecture for Computer Vision
graph, this means that the original hypothesis of [20] that these branches help evolving the low-level features is most likely misplaced. Instead, we argue that the auxiliary classifiers act as regularizers. This is supported by the fact that the main classifier of the network performs better if the side branch is batch-normalized [7] or has a dropout layer. This also gives weak supporting evidence for the conjecture that batch normalization acts as a regularizer.

Figure 6. Inception modules after the factorization of the n x n convolutions. In our proposed architecture, we chose n = 7 for the 17 x 17 grid. (The filter sizes are picked using principle 3.)
1512.00567#14
1512.00567#16
1512.00567
[ "1502.01852" ]
1512.00567#16
Rethinking the Inception Architecture for Computer Vision
# 5. Efficient Grid Size Reduction Traditionally, convolutional networks used some pooling operation to decrease the grid size of the feature maps. In order to avoid a representational bottleneck, before applying maximum or average pooling the activation dimension of the network filters is expanded. For example, starting with a d x d grid with k filters, if we would like to arrive at a (d/2) x (d/2) grid with 2k filters, we first need to compute a stride-1 convolution with 2k filters and then apply an additional pooling step. This means that the overall computational cost is dominated by the expensive convolution on the larger grid, using 2d²k² operations. One possibility would be to switch to pooling with convolution and therefore resulting in 2(d/2)²k² operations,
1512.00567#15
1512.00567#17
1512.00567
[ "1502.01852" ]
1512.00567#17
Rethinking the Inception Architecture for Computer Vision
Figure 7. Inception modules with expanded filter bank outputs. This architecture is used on the coarsest (8 x 8) grids to promote high dimensional representations, as suggested by principle 2 of Section 2. We are using this solution only on the coarsest grid, since that is the place where producing high dimensional sparse representations is the most critical, as the ratio of local processing (by 1 x 1 convolutions) is increased compared to the spatial aggregation.

[Figure 8 diagram: 17x17x768 feature map -> 5x5 average pooling with stride 3 -> 5x5x768 -> 1x1 convolution -> 5x5x128 -> fully connected -> 1x1x1024.]

Figure 8. Auxiliary classifier on top of the last 17x17 layer. Batch normalization [7] of the layers in the side head results in a 0.4% absolute gain in top-1 accuracy. The lower axis shows the number of iterations performed, each with batch size 32.
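Reading off the structure sketched in Figure 8, the auxiliary head could be written roughly as below. This is a sketch only: the 128 and 1024 filter counts come from the figure residue, batch normalization placement follows the caption, and everything else (names, activations, the output size of the pooling) is an assumption.

```python
import torch.nn as nn

class AuxiliaryClassifier(nn.Module):
    """Side head on the last 17x17x768 feature map, roughly per Figure 8."""
    def __init__(self, in_channels=768, num_classes=1000):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=5, stride=3)   # 17x17 -> 5x5
        self.conv = nn.Conv2d(in_channels, 128, kernel_size=1)
        self.bn = nn.BatchNorm2d(128)      # BN in the side head (see caption)
        self.fc1 = nn.Linear(128 * 5 * 5, 1024)
        self.fc2 = nn.Linear(1024, num_classes)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                  # x: (B, 768, 17, 17)
        x = self.relu(self.bn(self.conv(self.pool(x))))
        x = x.flatten(1)
        return self.fc2(self.relu(self.fc1(x)))
```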
1512.00567#16
1512.00567#18
1512.00567
[ "1502.01852" ]
1512.00567#18
Rethinking the Inception Architecture for Computer Vision
reducing the computational cost by a quarter. However, this creates a representational bottleneck, as the overall dimensionality of the representation drops to (d/2)²k, resulting in less expressive networks (see Figure 9). Instead of doing so, we suggest another variant that reduces the computational cost even further while removing the representational bottleneck (see Figure 10). We can use two parallel stride-2 blocks:
1512.00567#17
1512.00567#19
1512.00567
[ "1502.01852" ]
1512.00567#19
Rethinking the Inception Architecture for Computer Vision
P and C. P is a pooling layer (either average or maximum pooling) of the activation; both of them are stride 2, and their filter banks are concatenated as in Figure 10.

[Figure 9 diagrams: grid sizes 35x35x320 -> pooling -> 17x17x320 -> Inception -> 17x17x640, versus 35x35x320 -> Inception -> 35x35x640 -> pooling -> 17x17x640.]

Figure 9. Two alternative ways of reducing the grid size. The solution on the left violates principle 1 of not introducing a representational bottleneck from Section 2. The version on the right is 3 times more expensive computationally.

[Figure 10 diagram: from a 35x35x320 base, parallel convolution branches (1x1 followed by 3x3, the last with stride 2) and a stride-2 pooling branch produce 17x17 outputs that are concatenated into 17x17x640.]

Figure 10. Inception module that reduces the grid size while expanding the filter banks. It is both cheap and avoids the representational bottleneck, as suggested by principle 1. The diagram on the right represents the same solution but from the perspective of grid sizes rather than the operations.
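A minimal sketch of the reduction block of Figure 10: a convolutional branch and a pooling branch, both with stride 2, whose outputs are concatenated so the grid shrinks while the filter bank expands. The 320-filter branch widths follow the grid sizes annotated in the figure, but the exact internal composition of the convolutional branch (the 1 x 1 layer, kernel sizes, padding) is an assumption for illustration.

```python
import torch
import torch.nn as nn

class GridReduction(nn.Module):
    """Two parallel stride-2 blocks, concatenated: 35x35x320 -> 17x17x640."""
    def __init__(self, in_channels=320, conv_channels=320):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(in_channels, conv_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(conv_channels, conv_channels, kernel_size=3, stride=2),
            nn.ReLU(inplace=True),                        # stride-2 convolution
        )
        self.pool_branch = nn.MaxPool2d(kernel_size=3, stride=2)  # stride-2 pool

    def forward(self, x):
        # the filter banks of the two branches are concatenated
        return torch.cat([self.conv_branch(x), self.pool_branch(x)], dim=1)
```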
1512.00567#18
1512.00567#20
1512.00567
[ "1502.01852" ]
1512.00567#20
Rethinking the Inception Architecture for Computer Vision
# 6. Inception-v2 Here we are connecting the dots from above and propose a new architecture with improved performance on the ILSVRC 2012 classification benchmark. The layout of our network is given in Table 1. Note that we have factorized the traditional 7 x 7 convolution into three 3 x 3 convolutions based on the same ideas as described in Section 3.1. For the Inception part of the network, we have 3 traditional Inception modules at the 35 x 35 grid with 288 filters each. This is reduced to a 17 x 17 grid with 768 filters using the grid reduction technique described in Section 5. This is followed by 5 instances of the factorized Inception modules as depicted in Figure 5. This is reduced to an 8 x 8 x 1280 grid with the grid reduction technique depicted in Figure 10. At the coarsest 8 x 8 level, we have two Inception modules as depicted in Figure 6, with a concatenated output filter bank size of 2048 for each tile. The detailed structure of the network, including the sizes of filter banks inside the Inception modules, is given in the supplementary material, given in the model.txt that is in the tar-file of this submission.

| type | patch size/stride or remarks | input size |
|---|---|---|
| conv | 3x3/2 | 299x299x3 |
| conv | 3x3/1 | 149x149x32 |
| conv padded | 3x3/1 | 147x147x32 |
| pool | 3x3/2 | 147x147x64 |
| conv | 3x3/1 | 73x73x64 |
| conv | 3x3/2 | 71x71x80 |
| conv | 3x3/1 | 35x35x192 |
| 3x Inception | As in Figure 5 | 35x35x288 |
| 5x Inception | As in Figure 6 | 17x17x768 |
| 2x Inception | As in Figure 7 | 8x8x1280 |
| pool | 8x8 | 8x8x2048 |
| linear | logits | 1x1x2048 |
| softmax | classifier | 1x1x1000 |
1512.00567#19
1512.00567#21
1512.00567
[ "1502.01852" ]
1512.00567#21
Rethinking the Inception Architecture for Computer Vision
Table 1. The outline of the proposed network architecture. The output size of each module is the input size of the next one. We are using variations of the reduction technique depicted in Figure 10 to reduce the grid sizes between the Inception blocks whenever applicable. We have marked the convolution with 0-padding, which is used to maintain the grid size. 0-padding is also used inside those Inception modules that do not reduce the grid size. All other layers do not use padding.
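As a quick sanity check on the stem sizes listed in Table 1, the short Python snippet below (illustrative only; the helper name and the padding assumption for the "padded" row are mine, not the paper's) reproduces the spatial dimensions with the usual convolution output-size formula floor((n + 2p − k)/s) + 1.

```python
def out_size(n, k, s, p=0):
    # Spatial output size of a convolution/pooling layer.
    return (n + 2 * p - k) // s + 1

n = 299                      # input: 299 x 299 x 3
n = out_size(n, 3, 2)        # conv 3x3/2          -> 149
n = out_size(n, 3, 1)        # conv 3x3/1          -> 147
n = out_size(n, 3, 1, p=1)   # conv padded 3x3/1   -> 147 (0-padding keeps the grid size)
n = out_size(n, 3, 2)        # pool 3x3/2          -> 73
n = out_size(n, 3, 1)        # conv 3x3/1          -> 71
n = out_size(n, 3, 2)        # conv 3x3/2          -> 35
print(n)  # 35, matching the 35x35x192 input of the first Inception block
```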
1512.00567#20
1512.00567#22
1512.00567
[ "1502.01852" ]
1512.00567#22
Rethinking the Inception Architecture for Computer Vision
The various filter bank sizes are chosen to observe principle 4 from Section 2. However, we have observed that the quality of the network is relatively stable to variations as long as the principles from Section 2 are observed. Although our network is 42 layers deep, our computation cost is only about 2.5 times higher than that of GoogLeNet and it is still much more efficient than VGGNet.
# 7. Model Regularization via Label Smoothing
Here we propose a mechanism to regularize the classifier layer by estimating the marginalized effect of label-dropout during training. For each training example x, our model computes the probability of each label k ∈ {1 . . . K}: p(k|x) = exp(z_k) / Σ_i exp(z_i). Here, the z_i are the logits or unnormalized log-probabilities. Consider the ground-truth distribution over labels q(k|x) for this training example, normalized so that Σ_k q(k|x) = 1. For brevity, let us omit the dependence of p and q on example x. We define the loss for the example as the cross entropy: ℓ = −Σ_k log(p(k)) q(k). Minimizing this is equivalent to maximizing the expected log-likelihood of a label, where the label is selected according to its ground-truth distribution q(k). Cross-entropy loss is differentiable with respect to the logits z_k, and thus can be used for gradient training of deep models. The gradient has a rather simple form: ∂ℓ/∂z_k = p(k) − q(k), which is bounded between −1 and 1. Consider the case of a single ground-truth label y, so that q(y) = 1 and q(k) = 0 for all k ≠ y. In this case, minimizing the cross entropy is equivalent to maximizing the log-likelihood of the correct label. For a particular example x with label y, the log-likelihood is maximized for q(k) = δ_{k,y}, where δ_{k,y} is the Dirac delta, which equals 1 for k = y and 0 otherwise. This maximum is not achievable for finite z_k, but is approached if z_y ≫ z_k for all k ≠ y, that is, if the logit corresponding to the ground-truth label is much greater than all other logits.
1512.00567#21
1512.00567#23
1512.00567
[ "1502.01852" ]
1512.00567#23
Rethinking the Inception Architecture for Computer Vision
This, however, can cause two problems. First, it may result in over-fitting: if the model learns to assign full probability to the ground-truth label for each training example, it is not guaranteed to generalize. Second, it encourages the differences between the largest logit and all others to become large, and this, combined with the bounded gradient ∂ℓ/∂z_k, reduces the ability of the model to adapt. Intuitively, this happens because the model becomes too confident about its predictions. We propose a mechanism for encouraging the model to be less confident. While this may not be desired if the goal is to maximize the log-likelihood of training labels, it does regularize the model and makes it more adaptable.
1512.00567#22
1512.00567#24
1512.00567
[ "1502.01852" ]
1512.00567#24
Rethinking the Inception Architecture for Computer Vision
The method is very simple. Consider a distribution over labels u(k), independent of the training example x, and a smoothing parameter ε. For a training example with ground-truth label y, we replace the label distribution q(k|x) = δ_{k,y} with q′(k|x) = (1 − ε) δ_{k,y} + ε u(k), which is a mixture of the original ground-truth distribution q(k|x) and the fixed distribution u(k), with weights 1 − ε and ε, respectively.
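As a minimal illustration of this definition (the code is not from the paper; the NumPy implementation and function names are my own), the snippet below builds the smoothed distribution q′(k|x) with a uniform u(k) and evaluates the resulting cross-entropy against a vector of logits.

```python
import numpy as np

def smoothed_labels(y, num_classes, epsilon=0.1):
    # q'(k) = (1 - eps) * delta_{k,y} + eps * u(k), with uniform u(k) = 1 / num_classes
    q = np.full(num_classes, epsilon / num_classes)
    q[y] += 1.0 - epsilon
    return q

def cross_entropy(logits, q):
    # H(q', p) with p(k) = exp(z_k) / sum_i exp(z_i), computed via a stable log-softmax
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    return -(q * log_p).sum()

q = smoothed_labels(y=3, num_classes=1000, epsilon=0.1)   # q sums to 1
loss = cross_entropy(np.random.randn(1000), q)
```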
1512.00567#23
1512.00567#25
1512.00567
[ "1502.01852" ]
1512.00567#25
Rethinking the Inception Architecture for Computer Vision
This can be seen as the distribution of the label k obtained as follows: first, set it to the ground-truth label k = y; then, with probability ε, replace k with a sample drawn from the distribution u(k). We propose to use the prior distribution over labels as u(k). In our experiments, we used the uniform distribution u(k) = 1/K, so that q′(k) = (1 − ε) δ_{k,y} + ε/K. We refer to this change in ground-truth label distribution as label-smoothing regularization, or LSR. Note that LSR achieves the desired goal of preventing the largest logit from becoming much larger than all others. Indeed, if this were to happen, then a single p(k) would approach 1 while all others would approach 0. This would result in a large cross-entropy with q′(k) because, unlike q(k) = δ_{k,y}, all q′(k) have a positive lower bound. Another interpretation of LSR can be obtained by considering the cross entropy: H(q′, p) = −Σ_{k=1}^{K} log p(k) q′(k) = (1 − ε) H(q, p) + ε H(u, p). Thus, LSR is equivalent to replacing a single cross-entropy loss H(q, p) with a pair of such losses H(q, p) and H(u, p). The second loss penalizes the deviation of the predicted label distribution p from the prior u, with the relative weight ε/(1 − ε). Note that this deviation could be equivalently captured by the KL divergence, since H(u, p) = D_KL(u‖p) + H(u) and H(u) is fixed. When u is the uniform distribution, H(u, p) is a measure of how dissimilar the predicted distribution p is to uniform, which could also be measured (but not equivalently) by the negative entropy −H(p); we have not experimented with this approach. In our ImageNet experiments with K = 1000 classes, we used u(k) = 1/1000 and ε = 0.1. For ILSVRC 2012, we have found a consistent improvement of about 0.2% absolute both for the top-1 error and the top-5 error (cf. Table 3).
# 8. Training Methodology
1512.00567#24
1512.00567#26
1512.00567
[ "1502.01852" ]
1512.00567#26
Rethinking the Inception Architecture for Computer Vision
We have trained our networks with stochastic gradient descent utilizing the TensorFlow [1] distributed machine learning system using 50 replicas, each running on an NVidia Kepler GPU, with batch size 32 for 100 epochs. Our earlier experiments used momentum with a decay of 0.9, while our best models were achieved using RMSProp with a decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epochs using an exponential rate of 0.94. In addition, gradient clipping with threshold 2.0 was found to be useful to stabilize the training. Model evaluations are performed using a running average of the parameters computed over time.
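The following minimal Python sketch mirrors the settings described above. It is illustrative only: the function names are mine, NumPy is used for brevity, and global-norm clipping is an assumption since the exact clipping variant is not specified.

```python
import numpy as np

def learning_rate(epoch, base_lr=0.045, decay_rate=0.94, decay_every=2):
    # Exponential schedule: decay by 0.94 every two epochs.
    return base_lr * decay_rate ** (epoch // decay_every)

def clip_gradients(grads, threshold=2.0):
    # Global-norm gradient clipping with threshold 2.0 (clipping variant assumed).
    norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    if norm > threshold:
        grads = [g * (threshold / norm) for g in grads]
    return grads

def rmsprop_step(param, grad, cache, lr, decay=0.9, eps=1.0):
    # RMSProp with decay 0.9 and epsilon 1.0, as described above.
    cache = decay * cache + (1.0 - decay) * grad ** 2
    param = param - lr * grad / (np.sqrt(cache) + eps)
    return param, cache
```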
1512.00567#25
1512.00567#27
1512.00567
[ "1502.01852" ]
1512.00567#27
Rethinking the Inception Architecture for Computer Vision
# 9. Performance on Lower Resolution Input
A typical use-case of vision networks is for the post-classification of detection, for example in the Multibox [4] context. This includes the analysis of a relatively small patch of the image containing a single object with some context. The task is to decide whether the center part of the patch corresponds to some object and, if it does, to determine the class of the object. The challenge is that objects tend to be relatively small and low-resolution. This raises the question of how to properly deal with lower resolution input. The common wisdom is that models employing higher resolution receptive fields tend to result in significantly improved recognition performance. However, it is important to distinguish between the effect of the increased resolution of the first layer receptive field and the effects of larger model capacity and computation. If we just change the resolution of the input without further adjustment to the model, then we end up using computationally much cheaper models to solve more diffi
1512.00567#26
1512.00567#28
1512.00567
[ "1502.01852" ]
1512.00567#28
Rethinking the Inception Architecture for Computer Vision
cult tasks. Of course, it is natural that these solutions lose out already because of the reduced computational effort. In order to make an accurate assessment, the model needs to analyze vague hints in order to be able to "hallucinate" the fine details. This is computationally costly. The question remains therefore: how much

Receptive Field Size | Top-1 Accuracy (single frame)
79 × 79 | 75.2%
151 × 151 | 76.4%
299 × 299 | 76.6%

Table 2.
1512.00567#27
1512.00567#29
1512.00567
[ "1502.01852" ]
1512.00567#29
Rethinking the Inception Architecture for Computer Vision
Comparison of recognition performance when the size of the receptive field varies, but the computational cost is constant. does higher input resolution help if the computational effort is kept constant? One simple way to ensure constant effort is to reduce the strides of the first two layers in the case of lower resolution input, or to simply remove the first pooling layer of the network. For this purpose we have performed the following three experiments: 1. 299 × 299 receptive field with stride 2 and maximum pooling after the first layer. 2. 151 × 151 receptive field with stride 1 and maximum pooling after the first layer.
1512.00567#28
1512.00567#30
1512.00567
[ "1502.01852" ]
1512.00567#30
Rethinking the Inception Architecture for Computer Vision
3. 79 × 79 receptive field with stride 1 and without pooling after the first layer. All three networks have almost identical computational cost. Although the third network is slightly cheaper, the cost of the pooling layer is marginal (within 1% of the total cost of the network). In each case, the networks were trained until convergence and their quality was measured on the validation set of the ImageNet ILSVRC 2012 classification benchmark. The results can be seen in Table 2. Although the lower-resolution networks take longer to train, the quality of the final result is quite close to that of their higher resolution counterparts. However, if one would just naively reduce the network size according to the input resolution, then the network would perform much more poorly. However, this would be an unfair comparison, as we would be comparing a 16 times cheaper model on a more diffi
1512.00567#29
1512.00567#31
1512.00567
[ "1502.01852" ]
1512.00567#31
Rethinking the Inception Architecture for Computer Vision
cult task. These results of Table 2 also suggest that one might consider using dedicated high-cost low resolution networks for smaller objects in the R-CNN [5] context.
# 10. Experimental Results and Comparisons
Table 3 shows the experimental results on the recognition performance of our proposed architecture (Inception-v2) as described in Section 6. Each Inception-v2 line shows the result of the cumulative changes including the highlighted new modification plus all the earlier ones. Label Smoothing refers to the method described in Section 7. Factorized 7 × 7 includes a change that factorizes the first 7 × 7 convolutional layer into a sequence of 3 × 3 convolutional layers.
1512.00567#30
1512.00567#32
1512.00567
[ "1502.01852" ]
1512.00567#32
Rethinking the Inception Architecture for Computer Vision
BN-auxiliary refers to the version in which

Network | Top-1 Error | Top-5 Error | Cost Bn Ops
GoogLeNet [20] | 29% | 9.2% | 1.5
BN-GoogLeNet | 26.8% | - | 1.5
BN-Inception [7] | 25.2% | 7.8% | 2.0
Inception-v2 | 23.4% | - | 3.8
Inception-v2 RMSProp | 23.1% | 6.3% | 3.8
Inception-v2 Label Smoothing | 22.8% | 6.1% | 3.8
Inception-v2 Factorized 7 × 7 | 21.6% | 5.8% | 4.8
Inception-v2 BN-auxiliary | 21.2% | 5.6% | 4.8

Table 3. Single crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-crop inference for Ioffe et al. [7].
1512.00567#31
1512.00567#33
1512.00567
[ "1502.01852" ]
1512.00567#33
Rethinking the Inception Architecture for Computer Vision
For the "Inception-v2" lines, the changes are cumulative and each subsequent line includes the new change in addition to the previous ones. The last line, which includes all the changes, is what we refer to as "Inception-v3" below. Unfortunately, He et al. [6] report only 10-crop evaluation results, but not single crop results, which are reported in Table 4 below.

Network | Crops Evaluated | Top-1 Error | Top-5 Error
GoogLeNet [20] | 10 | - | 9.15%
GoogLeNet [20] | 144 | - | 7.89%
VGG [18] | - | 24.4% | 6.8%
BN-Inception [7] | 144 | 22% | 5.82%
PReLU [6] | 10 | 24.27% | 7.38%
PReLU [6] | - | 21.59% | 5.71%
Inception-v3 | 12 | 19.47% | 4.48%
Inception-v3 | 144 | 18.77% | 4.2%

Table 4. Single-model, multi-crop experimental results comparing the cumulative effects of the various contributing factors. We compare our numbers with the best published single-model inference results on the ILSVRC 2012 classification benchmark.

the fully connected layer of the auxiliary classifier is also batch-normalized, not just the convolutions. We are referring to the model in the last row of Table 3 as Inception-v3 and evaluate its performance in the multi-crop and ensemble settings. All our evaluations are done on the 48238 non-blacklisted examples of the ILSVRC-2012 validation set, as suggested by [16]. We have evaluated all the 50000 examples as well and the results were roughly 0.1% worse in top-5 error and around 0.2% in top-1 error. In the upcoming version of this paper, we will verify our ensemble result on the test set, but our last evaluation of BN-Inception in spring [7] indicates that the test and validation set error tends to correlate very well.
1512.00567#32
1512.00567#34
1512.00567
[ "1502.01852" ]
1512.00567#34
Rethinking the Inception Architecture for Computer Vision
Network | Models Evaluated | Crops Evaluated | Top-1 Error | Top-5 Error
VGGNet [18] | 2 | - | 23.7% | 6.8%
GoogLeNet [20] | 7 | 144 | - | 6.67%
PReLU [6] | - | - | - | 4.94%
BN-Inception [7] | 6 | 144 | 20.1% | 4.9%
Inception-v3 | 4 | 144 | 17.2% | 3.58%*

Table 5. Ensemble evaluation results comparing multi-model, multi-crop reported results. Our numbers are compared with the best published ensemble inference results on the ILSVRC 2012 classification benchmark. *All results but the top-5 ensemble result are reported on the validation set. The ensemble yielded 3.46% top-5 error on the validation set.
1512.00567#33
1512.00567#35
1512.00567
[ "1502.01852" ]
1512.00567#35
Rethinking the Inception Architecture for Computer Vision
# 11. Conclusions
We have provided several design principles to scale up convolutional networks and studied them in the context of the Inception architecture. This guidance can lead to high performance vision networks that have a relatively modest computation cost compared to simpler, more monolithic architectures. Our highest quality version of Inception-v3 reaches 21.2% top-1 and 5.6% top-5 error for single crop evaluation on the ILSVRC 2012 classification benchmark, setting a new state of the art. This is achieved with a relatively modest (2.5×) increase in computational cost compared to the network described in Ioffe et al. [7]. Still our solution uses much less computation than the best published results based on denser networks: our model outperforms the results of He et al. [6], cutting the top-5 (top-1) error by 25% (14%) relative, respectively, while being six times cheaper computationally and using at least five times fewer parameters (estimated). Our ensemble of four Inception-v3 models with multi-crop evaluation reaches 3.5% top-5 error, which represents an over 25% reduction relative to the best published results and is almost half of the error of the ILSVRC 2014 winning GoogLeNet ensemble. We have also demonstrated that high quality results can be reached with receptive field resolution as low as 79 × 79. This might prove to be helpful in systems for detecting relatively small objects. We have studied how factorizing convolutions and aggressive dimension reductions inside neural networks can result in networks with relatively low computational cost while maintaining high quality. The combination of lower parameter count and additional regularization with batch-normalized auxiliary classifiers and label-smoothing allows for training high quality networks on relatively modest sized training sets.
1512.00567#34
1512.00567#36
1512.00567
[ "1502.01852" ]
1512.00567#36
Rethinking the Inception Architecture for Computer Vision
# References [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghe- mawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Man´e, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Vi´egas, O. Vinyals, P. War- den, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng.
1512.00567#35
1512.00567#37
1512.00567
[ "1502.01852" ]
1512.00567#37
Rethinking the Inception Architecture for Computer Vision
Tensor- Flow: Large-scale machine learning on heterogeneous sys- tems, 2015. Software available from tensorï¬ ow.org. [2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In Proceedings of The 32nd International Conference on Machine Learning, 2015. [3] C. Dong, C. C. Loy, K. He, and X. Tang.
1512.00567#36
1512.00567#38
1512.00567
[ "1502.01852" ]
1512.00567#38
Rethinking the Inception Architecture for Computer Vision
Learning a deep convolutional network for image super-resolution. In Com- puter Visionâ ECCV 2014, pages 184â 199. Springer, 2014. [4] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Confer- ence on, pages 2155â
1512.00567#37
1512.00567#39
1512.00567
[ "1502.01852" ]
1512.00567#39
Rethinking the Inception Architecture for Computer Vision
2162. IEEE, 2014. [5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea- ture hierarchies for accurate object detection and semantic In Proceedings of the IEEE Conference on segmentation. Computer Vision and Pattern Recognition (CVPR), 2014. [6] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectiï¬ ers: Surpassing human-level performance on imagenet classiï¬ cation. arXiv preprint arXiv:1502.01852, 2015. [7] S. Ioffe and C. Szegedy.
1512.00567#38
1512.00567#40
1512.00567
[ "1502.01852" ]
1512.00567#40
Rethinking the Inception Architecture for Computer Vision
Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Ma- chine Learning, pages 448â 456, 2015. [8] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classiï¬ cation with con- In Computer Vision and Pat- volutional neural networks. tern Recognition (CVPR), 2014 IEEE Conference on, pages 1725â
1512.00567#39
1512.00567#41
1512.00567
[ "1502.01852" ]
1512.00567#41
Rethinking the Inception Architecture for Computer Vision
1732. IEEE, 2014. Imagenet classiï¬ cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â 1105, 2012. [10] A. Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015. [11] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply- supervised nets. arXiv preprint arXiv:1409.5185, 2014. [12] J. Long, E. Shelhamer, and T. Darrell.
1512.00567#40
1512.00567#42
1512.00567
[ "1502.01852" ]
1512.00567#42
Rethinking the Inception Architecture for Computer Vision
Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, pages 3431â 3440, 2015. [13] Y. Movshovitz-Attias, Q. Yu, M. C. Stumpe, V. Shet, S. Arnoud, and L. Yatziv. Ontological supervision for ï¬ ne grained classiï¬ cation of street view storefronts. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1693â 1702, 2015. [14] R. Pascanu, T. Mikolov, and Y. Bengio.
1512.00567#41
1512.00567#43
1512.00567
[ "1502.01852" ]
1512.00567#43
Rethinking the Inception Architecture for Computer Vision
On the difï¬ - culty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012. [15] D. C. Psichogios and L. H. Ungar. Svd-net: an algorithm that automatically selects network structure. IEEE transac- tions on neural networks/a publication of the IEEE Neural Networks Council, 5(3):513â 515, 1993. [16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al.
1512.00567#42
1512.00567#44
1512.00567
[ "1502.01852" ]
1512.00567#44
Rethinking the Inception Architecture for Computer Vision
Imagenet large scale visual recognition challenge. 2014. [17] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A uni- ï¬ ed embedding for face recognition and clustering. arXiv preprint arXiv:1503.03832, 2015. [18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [19] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Ma- chine Learning (ICML-13), volume 28, pages 1139â
1512.00567#43
1512.00567#45
1512.00567
[ "1502.01852" ]
1512.00567#45
Rethinking the Inception Architecture for Computer Vision
1147. JMLR Workshop and Conference Proceedings, May 2013. [20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â 9, 2015. [21] T. Tieleman and G. Hinton.
1512.00567#44
1512.00567#46
1512.00567
[ "1502.01852" ]
1512.00567#46
Rethinking the Inception Architecture for Computer Vision
Divide the gradient by a run- ning average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. Accessed: 2015- 11-05. [22] A. Toshev and C. Szegedy. Deeppose: Human pose estima- tion via deep neural networks. In Computer Vision and Pat- tern Recognition (CVPR), 2014 IEEE Conference on, pages 1653â
1512.00567#45
1512.00567#47
1512.00567
[ "1502.01852" ]
1512.00567#47
Rethinking the Inception Architecture for Computer Vision
1660. IEEE, 2014. [23] N. Wang and D.-Y. Yeung. Learning a deep compact image In Advances in Neural representation for visual tracking. Information Processing Systems, pages 809â 817, 2013.
1512.00567#46
1512.00567
[ "1502.01852" ]
1511.08630#0
A C-LSTM Neural Network for Text Classification
arXiv:1511.08630v2 [cs.CL] 30 Nov 2015
# A C-LSTM Neural Network for Text Classification
Chunting Zhou1, Chonglin Sun2, Zhiyuan Liu3, Francis C.M. Lau1
Department of Computer Science, The University of Hong Kong1
School of Innovation Experiment, Dalian University of Technology2
Department of Computer Science and Technology, Tsinghua University, Beijing3
# Abstract
1511.08630#1
1511.08630
[ "1511.08630" ]
1511.08630#1
A C-LSTM Neural Network for Text Classification
Neural network models have been demon- strated to be capable of achieving remarkable performance in sentence and document mod- eling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and uniï¬ ed model called C-LSTM for sentence representation and text classiï¬ cation. C-LSTM utilizes CNN to ex- tract a sequence of higher-level phrase repre- sentations, and are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases as well as global and temporal sentence se- mantics.
1511.08630#0
1511.08630#2
1511.08630
[ "1511.08630" ]
1511.08630#2
A C-LSTM Neural Network for Text Classification
We evaluate the proposed archi- tecture on sentiment classiï¬ cation and ques- tion classiï¬ cation tasks. The experimental re- sults show that the C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks. # 1 Introduction As one of the core steps in NLP, sentence modeling aims at representing sentences as meaningful features for tasks such as sentiment classiï¬ cation. Traditional sentence modeling uses the bag-of- words model which often suffers from the curse of dimensionality; others use composition based methods instead, e.g., an algebraic operation over semantic word vectors to produce the semantic sentence vector.
1511.08630#1
1511.08630#3
1511.08630
[ "1511.08630" ]
1511.08630#3
A C-LSTM Neural Network for Text Classification
However, such methods may not perform well due to the loss of word order informa- tion. More recent models for distributed sentence representation fall into two categories according to the form of input sentence: sequence-based models and tree-structured models. Sequence-based models from word construct sequences by taking in account the relationship be- tween successive words (Johnson and Zhang, 2015). Tree-structured models treat each word token as a node in a syntactic parse tree and learn sentence representations from leaves to the root in a recursive manner (Socher et al., 2013b). (CNNs) (RNNs) have and recurrent neural networks emerged architectures and are often combined with sequence-based (Tai et al., 2015; or Lei et al., 2015; Kim, 2014; Kalchbrenner et al., 2014; Mou et al., 2015). Owing to the capability of capturing local cor- relations of spatial or temporal structures, CNNs have achieved top performance in computer vi- sion, speech recognition and NLP. For sentence modeling, CNNs perform excellently in extracting n-gram features at different positions of a sentence through convolutional ï¬ lters, and can learn short and long-range relations through pooling opera- tions. CNNs have been successfully combined with both sequence-based model (Denil et al., 2014; Kalchbrenner et al., 2014) tree-structured model (Mou et al., 2015) in sentence modeling. The other popular neural network architecture â RNN â is able to handle sequences of any length and capture long-term dependencies.
1511.08630#2
1511.08630#4
1511.08630
[ "1511.08630" ]
1511.08630#4
A C-LSTM Neural Network for Text Classification
To avoid the problem of gradient exploding or vanishing in the standard RNN, Long Short-term Memory RNN (LSTM) (Hochreiter and Schmidhuber, 1997) and other variants (Cho et al., 2014) were designed for better remembering and memory accesses. Along with the sequence-based (Tang et al., 2015) or the tree-structured (Tai et al., 2015) models, RNNs have achieved remarkable results in sentence or document modeling. To conclude, CNN is able to learn local response from temporal or spatial data but lacks the ability of learning sequential correlations; on the other hand, RNN is specilized for sequential modelling but unable to extract features in a parallel way. It has been shown that higher-level modeling of xt can help to disentangle underlying factors of variation within the input, which should then make it easier to learn temporal structure between successive time steps (Pascanu et al., 2014). For example, Sainath et al. (Sainath et al., 2015) have obtained respectable improvements in WER by learning a deep LSTM from multi-scale inputs. We explore training the LSTM model directly from sequences of higher- level representaions while preserving the sequence order of these representaions. In this paper, we introduce a new architecture short for C-LSTM by combining CNN and LSTM to model sentences.
1511.08630#3
1511.08630#5
1511.08630
[ "1511.08630" ]
1511.08630#5
A C-LSTM Neural Network for Text Classification
To beneï¬ t from the advantages of both CNN and RNN, we design a simple end-to-end, uniï¬ ed architecture by feeding the output of a one-layer CNN into LSTM. The CNN is constructed on top of the pre-trained word vectors from massive unlabeled text data to learn higher-level representions of n-grams. Then to learn sequential correlations from higher-level suqence representations, the feature maps of CNN are organized as sequential window features to serve as the input of LSTM. In this way, instead of constructing LSTM directly from the input sentence, we ï¬ rst transform each sentence into successive window (n-gram) features to help disentangle factors of variations within sentences. We choose sequence-based input other than relying on the syntactic parse trees before feeding in the neural network, thus our model doesnâ t rely on any external language knowledge and complicated pre-processing. In our experiments, we evaluate the semantic sentence representations learned from C-LSTM with two tasks: sentiment classiï¬ cation and 6-way question classiï¬ cation.
1511.08630#4
1511.08630#6
1511.08630
[ "1511.08630" ]
1511.08630#6
A C-LSTM Neural Network for Text Classification
Our evaluations show that the C-LSTM model can achieve excellent results with several benchmarks as compared with a wide range of baseline models. We also show that the combination of CNN and LSTM outperforms individual multi-layer CNN models and RNN models, which indicates that LSTM can learn long- term dependencies from sequences of higher-level representations better than the other models. # 2 Related Work network mod- Deep in many els distributed NLP word, representa- tion (Mikolov et al., 2013b; Le and Mikolov, 2014), parsing (Socher et al., 2013a), statistical machine translation (Devlin et al., 2014), sentiment clas- siï¬ cation (Kim, 2014), etc. Learning distributed sentence representation through neural network models requires little external domain knowledge and can reach satisfactory results in related tasks like sentiment classiï¬ cation, text categorization. In many recent sentence representation learning works, neural network models are constructed upon either the input word sequences or the transformed syntactic parse tree. Among them, convolutional neural network (CNN) and recurrent neural network (RNN) are two popular ones. The capability of capturing local correlations along with extracting higher-level correlations through pooling empowers CNN to model sen- tences naturally from consecutive context windows. In (Collobert et al., 2011), Collobert et al. applied convolutional ï¬ lters to successive windows for a given sequence to extract global features by max-pooling. As a slight variant, Kim et al. (2014) proposed a CNN architecture with multiple ï¬ lters (with a varying window size) and two â channelsâ To capture word relations of of word vectors. varying sizes, Kalchbrenner et al. (2014) proposed In a more a dynamic k-max pooling mechanism. apply recent work (Lei et al., 2015), Tao et al. tensor-based operations between words to replace linear operations on concatenated word vectors layer and explore in the standard convolutional the non-linear interactions between nonconsective n-grams. Mou et al. (2015) also explores convolu- tional models on tree-structured sentences. As a sequence model, RNN is able to deal with variable-length input sequences and discover long-term dependencies.
1511.08630#5
1511.08630#7
1511.08630
[ "1511.08630" ]
1511.08630#7
A C-LSTM Neural Network for Text Classification
Various variants of RNN have been proposed to better store and access (Hochreiter and Schmidhuber, 1997; memories Cho et al., 2014). With the ability of explicitly modeling time-series data, RNNs are being increas- ingly applied to sentence modeling. For example, Tai et al. (2015) adjusted the standard LSTM to tree-structured topologies and obtained superior results over a sequential LSTM on related tasks. In this paper, we stack CNN and LSTM in a uniï¬ ed architecture for semantic sentence mod- eling. The combination of CNN and LSTM can be seen in some computer vision tasks like image and speech recogni- caption (Xu et al., 2015) tion (Sainath et al., 2015). Most of these models use multi-layer CNNs and train CNNs and RNNs separately or throw the output of a fully connected layer of CNN into RNN as inputs. Our approach is different: we apply CNN to text data and feed con- secutive window features directly to LSTM, and so our architecture enables LSTM to learn long-range fea- dependencies from higher-order sequential tures. In (Li et al., 2015), the authors suggest that sequence-based models are sufï¬ cient to capture the compositional semantics for many NLP tasks, thus in this work the CNN is directly built upon word sequences other than the syntactic parse tree. Our experiments on sentiment classiï¬ cation and 6-way question classiï¬ cation tasks clearly demonstrate the superiority of our model over single CNN or LSTM model and other related sequence-based models. # 3 C-LSTM Model The architecture of the C-LSTM model is shown in Figure 1, which consists of two main components: convolutional neural network (CNN) and long short- term memory network (LSTM). The following two subsections describe how we apply CNN to extract higher-level sequences of word features and LSTM to capture long-term dependencies over window fea- ture sequences respectively.
1511.08630#6
1511.08630#8
1511.08630
[ "1511.08630" ]
1511.08630#8
A C-LSTM Neural Network for Text Classification
The movie is awesome ! L à d iput x feature maps window feature sequence LSTM Figure 1: The architecture of C-LSTM for sentence modeling. Blocks of the same color in the feature map layer and window feature sequence layer corresponds to features for the same win- dow. The dashed lines connect the feature of a window with the source feature map. The ï¬ nal output of the entire model is the last hidden unit of LSTM. # 3.1 N-gram Feature Extraction through Convolution The one-dimensional convolution involves a ï¬ lter vector sliding over a sequence and detecting fea- tures at different positions. Let xi â Rd be the d-dimensional word vectors for the i-th word in a sentence. Let x â RLà d denote the input sentence where L is the length of the sentence. Let k be the length of the ï¬ lter, and the vector m â Rkà d is a ï¬ l- ter for the convolution operation. For each position j in the sentence, we have a window vector wj with k consecutive word vectors, denoted as: wj = [xj, xj+1, · · · , xj+kâ 1] (1) Here, the commas represent row vector concatena- tion. A ï¬ lter m convolves with the window vectors (k-grams) at each position in a valid way to gener- ate a feature map c â RLâ k+1; each element cj of the feature map for window vector wj is produced as follows: cj = f (wj â ¦ m + b), (2) where â ¦ is element-wise multiplication, b â R is a bias term and f is a nonlinear transformation func- tion that can be sigmoid, hyperbolic tangent, etc. In our case, we choose ReLU (Nair and Hinton, 2010) as the nonlinear function. The C-LSTM model uses multiple ï¬ lters to generate multiple feature maps. For n ï¬ lters with the same length, the generated n feature maps can be rearranged as feature represen- tations for each window wj, W = [c1; c2; · · · ; cn] (3)
1511.08630#7
1511.08630#9
1511.08630
[ "1511.08630" ]
1511.08630#9
A C-LSTM Neural Network for Text Classification
Here, semicolons represent column vector concate- nation and ci is the feature map generated with the i-th ï¬ lter. Each row Wj of W â R(Lâ k+1)à n is the new feature representation generated from n ï¬ lters for the window vector at position j. The new succes- sive higher-order window representations then are fed into LSTM which is described below. A max-over-pooling or dynamic k-max pooling is often applied to feature maps after the convolu- tion to select the most or the k-most important fea- tures. However, LSTM is speciï¬ ed for sequence input, and pooling will break such sequence orga- nization due to the discontinuous selected features. Since we stack an LSTM neural neural network on top of the CNN, we will not apply pooling after the convolution operation. # 3.2 Long Short-Term Memory Networks Recurrent neural networks (RNNs) are able to prop- agate historical information via a chain-like neu- ral network architecture. While processing se- quential data, it looks at the current input xt as well as the previous output of hidden state htâ 1 at each time step. However, standard RNNs be- comes unable to learn long-term dependencies as the gap between two time steps becomes large. To address this issue, LSTM was ï¬ rst introduced in (Hochreiter and Schmidhuber, 1997) and re- emerged as a successful architecture since Ilya et al. (2014) obtained remarkable performance in sta- tistical machine translation. Although many vari- ants of LSTM were proposed, we adopt the standard architecture (Hochreiter and Schmidhuber, 1997) in this work. The LSTM architecture has a range of repeated modules for each time step as in a standard RNN. At each time step, the output of the module is con- trolled by a set of gates in Rd as a function of the old hidden state htâ 1 and the input at the current time step xt: the forget gate ft, the input gate it, and the output gate ot. These gates collectively decide how to update the current memory cell ct and the cur- rent hidden state ht. We use d to denote the mem- ory dimension in the LSTM and all vectors in this architecture share the same dimension.
1511.08630#8
1511.08630#10
1511.08630
[ "1511.08630" ]
1511.08630#10
A C-LSTM Neural Network for Text Classification
The LSTM transition functions are deï¬ ned as follows: it = Ï (Wi · [htâ 1, xt] + bi) ft = Ï (Wf · [htâ 1, xt] + bf ) qt = tanh(Wq · [htâ 1, xt] + bq) ot = Ï (Wo · [htâ 1, xt] + bo) ct = ft â ctâ 1 + it â qt ht = ot â tanh(ct) (4) Here, Ï is the logistic sigmoid function that has an output in [0, 1], tanh denotes the hyperbolic tangent function that has an output in [â 1, 1], and â denotes the elementwise multiplication. To understand the mechanism behind the architecture, we can view ft as the function to control to what extent the informa- tion from the old memory cell is going to be thrown away, it to control how much new information is go- ing to be stored in the current memory cell, and ot to control what to output based on the memory cell ct.
1511.08630#9
1511.08630#11
1511.08630
[ "1511.08630" ]
1511.08630#11
A C-LSTM Neural Network for Text Classification
LSTM is explicitly designed for time-series data for learning long-term dependencies, and therefore we choose LSTM upon the convolution layer to learn such dependencies in the sequence of higher-level features. # 4 Learning C-LSTM for Text Classiï¬ cation For text classiï¬ cation, we regard the output of the hidden state at the last time step of LSTM as the document representation and we add a softmax layer on top. We train the entire model by minimizing the cross-entropy error. Given a training sample x(i) and its true label y(i) â {1, 2, · · · , k} where k is the number of possible labels and the estimated proba- y(i) j â [0, 1] for each label j â {1, 2, · · · , k}, bilities e the error is deï¬ ned as: k L(x(i), y(i)) = X j=1 1{y(i) = j} log( y(i) j ), e (5) such where that otherwise 1{condition is false} = 0.
1511.08630#10
1511.08630#12
1511.08630
[ "1511.08630" ]
1511.08630#12
A C-LSTM Neural Network for Text Classification
We employ stochas- tic gradient descent (SGD) to learn the model parameters optimizer RM- Sprop (Tieleman and Hinton, 2012). # 4.1 Padding and Word Vector Initialization First, we use maxlen to denote the maximum length of the sentence in the training set. As the convo- lution layer in our model requires ï¬ xed-length in- put, we pad each sentence that has a length less than maxlen with special symbols at the end that indicate the unknown words. For a sentence in the test dataset, we pad sentences that are shorter than maxlen in the same way, but for sentences that have a length longer than maxlen, we simply cut extra words at the end of these sentences to reach maxlen. We initialize word vectors with the publicly avail- able word2vec vectors1 that are pre-trained using about 100B words from the Google News Dataset. The dimensionality of the word vectors is 300. We also initialize the word vector for the unknown words from the uniform distribution [-0.25, 0.25].
1511.08630#11
1511.08630#13
1511.08630
[ "1511.08630" ]
1511.08630#13
A C-LSTM Neural Network for Text Classification
We then ï¬ ne-tune the word vectors along with other model parameters during training. # 4.2 Regularization For regularization, we employ two commonly used techniques: dropout (Hinton et al., 2012) and L2 weight regularization. We apply dropout to pre- vent co-adaptation. In our model, we either apply dropout to word vectors before feeding the sequence of words into the convolutional layer or to the output of LSTM before the softmax layer. The L2 regular- ization is applied to the weight of the softmax layer.
1511.08630#12
1511.08630#14
1511.08630
[ "1511.08630" ]
1511.08630#14
A C-LSTM Neural Network for Text Classification
# 5 Experiments We evaluate the C-LSTM model on two tasks: (1) sentiment classiï¬ cation, and (2) question type clas- siï¬ cation. In this section, we introduce the datasets and the experimental settings. # 5.1 Datasets Sentiment Classiï¬ cation: Our task in this regard is to predict the sentiment polarity of movie reviews. We use the Stanford Sentiment Treebank (SST) benchmark (Socher et al., 2013b). This dataset consists of 11855 movie reviews and are split into train (8544), dev (1101), and test (2210). Sentences in this corpus are parsed and all phrases along with the sentences are fully annotated with 1http://code.google.com/p/word2vec/ 5 labels: very positive, positive, neural, negative, very negative.
1511.08630#13
1511.08630#15
1511.08630
[ "1511.08630" ]
1511.08630#15
A C-LSTM Neural Network for Text Classification
We consider two classiï¬ cation tasks on this dataset: ï¬ ne-grained classiï¬ cation with 5 labels and binary classiï¬ cation by removing the neural labels. dataset has a split of train (6920) / dev (872) / test (1821). Since the data is provided in the format of sub-sentences, we train the model on both phrases and sentences but only test on the sentences as in several previous works (Socher et al., 2013b; Kalchbrenner et al., 2014).
1511.08630#14
1511.08630#16
1511.08630
[ "1511.08630" ]
1511.08630#16
A C-LSTM Neural Network for Text Classification
Question type classiï¬ cation: Question classiï¬ ca- tion is an important step in a question answering system that classiï¬ es a question into a speciï¬ c type, e.g. â what is the highest waterfall in the United States?â is a question that belongs to â locationâ . For this task, we use the benchmark TREC (Li and Roth, 2002). TREC divides all ques- including location, tions into 6 categories, human, entity, abbreviation, description and numeric. The training dataset contains 5452 labelled questions while the testing dataset contains 500 questions. # 5.2 Experimental Settings We implement our model based on Theano (Bastien et al., 2012) â a python library, which sup- ports efï¬ cient symbolic differentiation and transpar- ent use of a GPU. To beneï¬ t from the efï¬
1511.08630#15
1511.08630#17
1511.08630
[ "1511.08630" ]
1511.08630#17
A C-LSTM Neural Network for Text Classification
ciency of parallel computation of the tensors, we train the model on a GPU. For text preprocessing, we only convert all characters in the dataset to lower case. For SST, we conduct hyperparameter (number of ï¬ lters, ï¬ lter length in CNN; memory dimension in LSTM; dropout rate and which layer to apply, etc.) tuning on the validation data in the standard split. For TREC, we hold out 1000 samples from the train- ing dataset for hyperparameter search and train the model using the remaining data.
1511.08630#16
1511.08630#18
1511.08630
[ "1511.08630" ]