id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
---|---|---|---|---|---|---|
1611.08669#61 | Visual Dialog | As our second metric to compare datasets in their natural vs. permuted order, we test whether we can reliably classify a given sequence as natural or permuted. Our classifier is a simple threshold on the perplexity of a sequence. Specifically, given a pair of sequences, we compute the perplexity of both from our Seq2Seq model, and predict that the one with higher perplexity is the sequence in permuted ordering, and the sequence with lower perplexity is the one in natural ordering. The accuracy of this simple classifier indicates how easy or difficult it is to tell the difference between natural and permuted sequences. A higher classification rate indicates the existence of temporal continuity in the conversation, thus making the ordering important. Tab. 3 shows the classification accuracies achieved on all datasets. We can see that the classifier on VisDial achieves the highest accuracy (73.3%), followed by Cornell (61.0%). Note that this is a binary classification task with the prior probability of each class by design being equal, thus chance performance is 50%. The classifiers on VisDial and Cornell both significantly outperform chance. On the other hand, the classifier on VQA is near chance (52.8%), indicating a lack of general temporal continuity. To summarize this analysis, our experiments show that VisDial is significantly more dialog-like than VQA, and behaves more like a standard dialog dataset, the Cornell Movie-Dialogs corpus. # A.5. VisDial eliminates visual priming bias in VQA One key difference between VisDial and previous image question answering datasets (VQA [6], Visual 7W [70], Baidu mQA [17]) is the lack of a "visual priming bias" in VisDial. Specifically, | 1611.08669#60 | 1611.08669#62 | 1611.08669 | [
"1605.06069"
]
|
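The decision rule described in the row above is simple enough to state as code. The sketch below is illustrative only (not from the paper's codebase) and assumes the per-sequence perplexities from the Seq2Seq model have already been computed; the function names are hypothetical.

```python
# Pairwise natural-vs-permuted ordering classifier: the higher-perplexity
# sequence of a pair is predicted to be the permuted one.
def predict_permuted(ppl_seq_a: float, ppl_seq_b: float) -> int:
    """Return 0 if sequence A is predicted to be permuted, else 1."""
    return 0 if ppl_seq_a > ppl_seq_b else 1

def pairwise_accuracy(pairs) -> float:
    """pairs: list of (ppl_natural, ppl_permuted) tuples for each dialog.
    Chance performance is 50% since each pair contributes exactly one
    natural and one permuted sequence."""
    pairs = list(pairs)
    correct = sum(1 for ppl_nat, ppl_perm in pairs if ppl_perm > ppl_nat)
    return correct / len(pairs)
```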
1611.08669#62 | Visual Dialog | in all previous datasets, subjects saw an image while asking questions about it. As described in [69], this leads to a particular bias in the questions: people only ask "Is there a clocktower in the picture?" on pictures actually containing clock towers. This allows language-only models to perform remarkably well on VQA and results in an inflated sense of progress [69]. As one particularly perverse example, for questions in the VQA dataset starting with "Do you see a ...", blindly answering "yes" without reading the rest of the question or looking at the associated image results in an average VQA accuracy of 87%! In VisDial, questioners do not see the image. As a result, | 1611.08669#61 | 1611.08669#63 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#63 | Visual Dialog | this bias is reduced. This lack of visual priming bias (i.e. not being able to see the image while asking questions) and holding a dialog with another person while asking questions results in the following two unique features in VisDial. Figure 9: Distribution of answers in VisDial by their first four words. The ordering of the words starts towards the center and radiates outwards. The arc length is proportional to the number of questions containing the word. White areas are words with contributions too small to show. Uncertainty in Answers in VisDial. Since the answers in VisDial are longer strings, we can visualize their distribution based on the starting few words (Fig. 9). An interesting category of answers emerges: "I think so", "I can't tell", or "I can't see", | 1611.08669#62 | 1611.08669#64 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#64 | Visual Dialog | expressing doubt, uncertainty, or lack of information. This is a consequence of the questioner not being able to see the image: they are asking contextually relevant questions, but not all questions may be answerable with certainty from that image. We believe this is rich data for building more human-like AI that refuses to answer questions it doesn't have enough information to answer. See [48] for a related, but complementary effort on question relevance in VQA. Binary Questions vs. Binary Answers in VisDial. In VQA, binary questions are simply those with "yes", "no", "maybe" as answers [6]. In VisDial, we must distinguish between binary questions and binary answers. Binary questions are those starting in "Do", "Did", "Have", "Has", "Is", "Are", "Was", "Were", "Can", "Could". Answers to such questions can (1) contain only "yes" or "no", (2) begin with "yes" or "no" and contain additional information or clarification (Q: "Are there any animals in the image?", A: "yes, 2 cats and a dog"), (3) involve ambiguity ("It's hard to see", "Maybe"), or (4) answer the question without explicitly saying "yes" or "no" (Q: "Is there any type of design or pattern on the cloth?", A: "There are circles and lines on the cloth"). We call answers that contain "yes" or "no" binary answers: 149,367 and 76,346 answers fall in subsets (1) and (2) from above, respectively. Binary answers in VQA are biased towards "yes" [6, 69]: 61.40% of yes/no answers are "yes". In VisDial, the trend is reversed. Only 46.96% of all yes/no responses are "yes". This is understandable since workers did not see the image, and were more likely to end up with negative responses. # B. Qualitative Examples from VisDial Fig. 10 shows random samples of dialogs from the VisDial dataset. # C. Human-Machine Comparison | 1611.08669#63 | 1611.08669#65 | 1611.08669 | [
"1605.06069"
]
|
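As a concrete illustration of the categorization in the row above, here is a small, hypothetical helper that flags binary questions by their first word and assigns yes/no answers to subsets (1) and (2). It is a sketch of the described bookkeeping, not code from the paper.

```python
# Question words listed in the text; answers are bucketed into subset (1)
# (bare "yes"/"no") or subset (2) ("yes"/"no" plus additional detail).
BINARY_QUESTION_STARTS = ("do", "did", "have", "has", "is",
                          "are", "was", "were", "can", "could")

def is_binary_question(question: str) -> bool:
    words = question.strip().lower().split()
    return bool(words) and words[0] in BINARY_QUESTION_STARTS

def binary_answer_subset(answer: str):
    tokens = [t.strip(".,!?") for t in answer.strip().lower().split()]
    if not tokens or tokens[0] not in ("yes", "no"):
        return None                      # cases (3)/(4): not counted as binary answers
    return 1 if len(tokens) == 1 else 2  # subset (1) or subset (2)
```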
1611.08669#65 | Visual Dialog |
| Model | MRR | R@1 | R@5 | Mean |
|---|---|---|---|---|
| Human-Q | 0.441 | 25.10 | 67.37 | 4.19 |
| Human-QH | 0.485 | 30.31 | 70.53 | 3.91 |
| Human-QI | 0.619 | 46.12 | 82.54 | 2.92 |
| Human-QIH | 0.635 | 48.03 | 83.76 | 2.83 |
| HREA-QIH-G | 0.477 | 31.64 | 61.61 | 4.42 |
| MN-QIH-G | 0.481 | 32.16 | 61.94 | 4.47 |
| MN-QIH-D | 0.553 | 36.86 | 69.39 | 3.48 |

Table 4: Human-machine performance comparison on VisDial v0.5, measured by mean reciprocal rank (MRR), recall@k for k = {1, 5} and mean rank. Note that higher is better for MRR and recall@k, while lower is better for mean rank. We conducted studies on AMT to quantitatively evaluate human performance on this task for all combinations of {with image, without image} × {with history, without history} on 100 random images at each of the 10 rounds. | 1611.08669#64 | 1611.08669#66 | 1611.08669 | [
"1605.06069"
]
|
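For reference, the metrics reported in Table 4 can be computed from the rank assigned to the ground-truth answer among the candidate options. The snippet below is a generic sketch of that computation (MRR, recall@k, mean rank), not the authors' evaluation code.

```python
import numpy as np

def retrieval_metrics(gt_ranks):
    """gt_ranks: 1-based rank of the ground-truth answer among the candidates
    (10 options in the human study, 100 in the machine evaluation)."""
    r = np.asarray(gt_ranks, dtype=np.float64)
    return {
        "MRR": float(np.mean(1.0 / r)),
        "R@1": float(np.mean(r <= 1) * 100),
        "R@5": float(np.mean(r <= 5) * 100),
        "Mean rank": float(np.mean(r)),
    }
```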
1611.08669#66 | Visual Dialog | Specifically, in each setting, we show human subjects a jumbled list of 10 candidate answers for a question (the top-9 predicted responses from our "LF-QIH-D" model and the 1 ground truth answer) and ask them to rank the responses. Each task was done by 3 human subjects. Results of this study are shown in the top half of Tab. 4. We find that without access to the image, humans perform better when they have access to dialog history; compare the Human-QH row to Human-Q (R@1 of 30.31 vs. 25.10). As perhaps expected, this gap narrows when humans have access to the image; compare Human-QIH to Human-QI (R@1 of 48.03 vs. 46.12). Note that these numbers are not directly comparable to machine performance reported in the main paper because models are tasked with ranking 100 responses, while humans are asked to rank 10 candidates. This is because the task of | 1611.08669#65 | 1611.08669#67 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#67 | Visual Dialog | ranking 100 candidate responses would be too cumbersome for humans. To compute comparable human and machine performance, we evaluate our best discriminative (MN-QIH-D) and generative (HREA-QIH-G, MN-QIH-G)^5 models on the same 10 options that were presented to humans. Note that in this setting, both humans and machines have R@10 = 1.0, since there are only 10 options. The bottom half of Tab. 4 shows the results of this comparison. We can see that, as expected, humans with full information (i.e. Human-QIH) perform the best, with a large gap between human and machine performance (compare R@5: Human-QIH 83.76% vs. MN-QIH-D 69.39%). This gap is even larger when compared to generative models, which, unlike the discriminative models, are not actively trying to exploit the biases in the answer candidates (compare R@5: Human-QIH 83.76% vs. HREA-QIH-G 61.61%). Furthermore, we see that humans outperform the best machine even when not looking at the image, simply on the basis of the context provided by the history (compare R@5: Human-QH 70.53% vs. MN-QIH-D 69.39%). Perhaps as expected, with access to the image but not the history, humans are significantly better than the best machines (R@5: Human-QI 82.54% vs. MN-QIH-D 69.39%). With access to history humans perform even better. From in-house human studies and worker feedback on AMT, we find that dialog history plays the following roles for humans: (1) it provides a context for the question and paints a picture of the scene, which helps eliminate certain answer choices (especially when the image is not available), (2) it gives cues about the answerer's response style, which helps identify the right answer among similar answer choices, and (3) it disambiguates amongst likely interpretations of the image (i.e., when objects are small or occluded), again helping identify the right answer among multiple plausible options. | 1611.08669#66 | 1611.08669#68 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#68 | Visual Dialog | # D. Interface In this section, we show our interface to connect two Amazon Mechanical Turk workers live, which we used to collect our data. Instructions. To ensure quality of data, we provide detailed instructions on our interface as shown in Fig. 11a. Since the workers do not know their roles before starting the study, we provide instructions for both questioner and answerer roles. After pairing: Immediately after pairing two workers, we assign them the roles of questioner and answerer and display role-specific instructions as shown in Fig. 11b. Observe that the questioner does not see the image while the answerer does have access to it. Both questioner and answerer see the caption for the image. (Footnote 5: We use both HREA-QIH-G and MN-QIH-G since they have similar accuracies.) [Figure 10, panels (a) and (b): example VisDial dialogs for images captioned "The skiers stood on top of the mountain" and "an image of a man in a boat with a dog"; the full ten-round dialog text is omitted here.] | 1611.08669#67 | 1611.08669#69 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#69 | Visual Dialog | [Figure 10, panels (a) and (b) continued: remaining rounds of the example dialogs; text omitted.] | 1611.08669#68 | 1611.08669#70 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#70 | Visual Dialog | [Figure 10, panels (c) and (d): example dialogs for images captioned "A man and woman on bicycles are looking at a map" and "a horse is pulling a white cart with two people a car and buildings"; the full dialog text is omitted here.] | 1611.08669#69 | 1611.08669#71 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#71 | Visual Dialog | [Figure 10, panels (e) and (f): example dialogs for images captioned "A statue depicting a bear breaking into a car" and "A dog with goggles is in a mo[...]" (caption truncated in the source); the full dialog text is omitted here.] Figure 10: Examples from VisDial. | 1611.08669#70 | 1611.08669#72 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#72 | Visual Dialog | # E. Additional Analysis of VisDial In this section, we present additional analyses characterizing our VisDial dataset. [Figure 11a (screenshot): "Live Question/Answering about an Image" instructions shown to workers. Workers are told to start the conversation directly without small talk, avoid potentially offensive or off-topic messages, avoid chat/IM abbreviations, use professional and grammatically correct English, keep the conversation natural, complete the HIT in one sitting without opening multiple tabs, and report partners who idle or ignore the instructions.] (a) Detailed instructions for Amazon Mechanical Turkers on our interface | 1611.08669#71 | 1611.08669#73 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#73 | Visual Dialog | [Figure 11b (screenshot): the paired interface for an image captioned "A man, wearing goggles and a backpack on skis pulls a girl on skis behind him." The questioner is told "You have to ASK questions about the image" and sees only the caption; the answerer is told "You have to ANSWER questions about the image" and sees the image and caption.] (b) Left: What questioner sees; Right: What answerer sees. # E.1. Question and Answer Lengths # F. Performance on VisDial v0.5 Fig. 12 shows question lengths by type and round. Average length of question by type is consistent across rounds. Questions starting with "any" ("any people?", "any other fruits?", etc.) tend to be the shortest. | 1611.08669#72 | 1611.08669#74 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#74 | Visual Dialog | Fig. 13 shows answer lengths by the type of question they were said in response to and by round. In contrast to questions, there is significant variance in answer lengths. Answers to binary questions ("Any people?", "Can you see the dog?", etc.) tend to be short, while answers to "how" and "what" questions tend to be more explanatory and long. Across question types, answers tend to be the longest in the middle of conversations. | 1611.08669#73 | 1611.08669#75 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#75 | Visual Dialog | # E.2. Question Types Fig. 14 shows round-wise coverage by question type. We see that as conversations progress, "is", "what" and "how" questions reduce while "can", "do", "does", "any" questions occur more often. Questions starting with "Is" are the most popular in the dataset. Tab. 5 shows the results for our proposed models and baselines on VisDial v0.5. A few key takeaways: First, as expected, all learning-based models significantly outperform non-learning baselines. Second, all discriminative models significantly outperform generative models, which as we discussed is expected since discriminative models can tune to the biases in the answer options. This improvement comes with the significant limitation of not being able to actually generate responses, and we recommend the two decoders be viewed as separate use cases. Third, our best generative and discriminative models are MN-QIH-G with 0.44 MRR and MN-QIH-D with 0.53 MRR, which outperform a suite of models and sophisticated baselines. Fourth, we observe that models with H perform better than Q-only models, highlighting the importance of history in VisDial. Fifth, models looking at I outperform both the blind models (Q, QH) by at least 2% on recall@1 in both decoders. Finally, models that use both H and I have the best performance. | 1611.08669#74 | 1611.08669#76 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#76 | Visual Dialog | Figure 12: Question lengths by type and round. Average length of question by type is fairly consistent across rounds. Questions starting with "any" ("any people?", "any other fruits?", etc.) tend to be the shortest. Figure 13: Answer lengths by question type and round. Across question types, average response length tends to be longest in the middle of the conversation. Dialog-level evaluation. Using R@5 to define round-level "success", our best discriminative model MN-QIH-D gets 7.01 rounds out of 10 correct, while generative MN-QIH-G gets 5.37. Further, the mean first-failure-round (under R@5) for MN-QIH-D is 3.23, and 2.39 for MN-QIH-G. Fig. 16a and Fig. 16b show plots for all values of k in R@k. | 1611.08669#75 | 1611.08669#77 | 1611.08669 | [
"1605.06069"
]
|
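The dialog-level statistics quoted above (rounds correct under R@5 and the first-failure round) follow directly from the per-round ranks. A minimal sketch, with an assumed list-of-ranks input, is given below; it is illustrative, not the authors' evaluation code.

```python
def dialog_level_stats(ranks_per_round, k=5):
    """ranks_per_round: the 1-based ground-truth rank at each of the 10 rounds.
    Returns (#rounds 'correct' under R@k, 1-based round of first failure or None)."""
    correct = sum(1 for r in ranks_per_round if r <= k)
    first_failure = next((i + 1 for i, r in enumerate(ranks_per_round) if r > k), None)
    return correct, first_failure
```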
1611.08669#77 | Visual Dialog | Figure 14: Percentage coverage of question types per round. As conversations progress, "Is", "What" and "How" questions reduce while "Can", "Do", "Does", "Any" questions occur more often. Questions starting with "Is" are the most popular in the dataset. | 1611.08669#76 | 1611.08669#78 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#78 | Visual Dialog | # G. Experimental Details In this section, we describe details about our models, data preprocessing, training procedure and hyperparameter selection. # G.1. Models Late Fusion (LF) Encoder. We encode the image with a VGG-16 CNN, the question and concatenated history with separate LSTMs, and concatenate the three representations. This is followed by a fully-connected layer and tanh non-linearity to a 512-d vector, which is used to decode the response. Fig. 17a shows the model architecture for our LF encoder. Hierarchical Recurrent Encoder (HRE). In this encoder, the image representation from the VGG-16 CNN is early fused with the question. | 1611.08669#77 | 1611.08669#79 | 1611.08669 | [
"1605.06069"
]
|
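The Late Fusion encoder described above is straightforward to express in modern framework code. The sketch below is a PyTorch-style approximation (the original models were implemented in Torch/Lua); the 4096-d image feature and exact layer shapes are assumptions based on the text, not the released implementation.

```python
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    """Concatenate VGG image feature, question LSTM state, and history LSTM
    state, then map to a 512-d vector with a tanh non-linearity."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512, img_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.q_rnn = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)
        self.h_rnn = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)
        self.fusion = nn.Linear(img_dim + 2 * hidden_dim, hidden_dim)

    def forward(self, img_feat, question_tokens, history_tokens):
        _, (q_state, _) = self.q_rnn(self.embed(question_tokens))
        _, (h_state, _) = self.h_rnn(self.embed(history_tokens))
        fused = torch.cat([img_feat, q_state[-1], h_state[-1]], dim=-1)
        return torch.tanh(self.fusion(fused))   # 512-d vector fed to the decoder
```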
1611.08669#79 | Visual Dialog | Specifically, the image representation is concatenated with every question word as it is fed to an LSTM. Each QA-pair in dialog history is independently encoded by another LSTM with shared weights. The image-question representation, computed for every round from 1 through t, is concatenated with the history representation from the previous round and constitutes a sequence of [Figure 15: Most frequent answer responses except for "yes"/"no"; bar-chart labels omitted.] [Figure 16, panels (a) and (b): mean number of correct rounds and mean round of first failure, plotted against k; axis ticks omitted.] | 1611.08669#78 | 1611.08669#80 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#80 | Visual Dialog | some examples of attention over history facts from our MN encoder. We see that the model learns to attend to facts relevant to the question being asked. For example, when asked "What color are kites?", the model attends to "A lot of people stand around flying kites in a park." For "Is anyone on bus?", it attends to "A large yellow bus parked in some grass." Note that these are selected examples, and the attention weights are not always interpretable. Figure 16: Dialog-level evaluation # G.2. Training question-history vectors. These vectors are fed as input to a dialog-level LSTM, whose output state at t is used to decode the response to Qt. Fig. 17b shows the model architecture for our HRE. | 1611.08669#79 | 1611.08669#81 | 1611.08669 | [
"1605.06069"
]
|
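A rough PyTorch-style sketch of the Hierarchical Recurrent Encoder described in the preceding rows: per-round early fusion of the image with the question words, a shared fact LSTM for history, and a dialog-level LSTM over the resulting sequence. Dimensions and the input interface are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HierarchicalRecurrentEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512, img_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.qi_rnn = nn.LSTM(emb_dim + img_dim, hidden_dim, batch_first=True)
        self.fact_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.dialog_rnn = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)

    def forward(self, img_feat, questions, history_facts):
        # questions: (B, T, Lq) token ids for rounds 1..T; history_facts: (B, T, Lh)
        states = []
        for t in range(questions.size(1)):
            q_emb = self.embed(questions[:, t])                        # (B, Lq, E)
            img = img_feat.unsqueeze(1).expand(-1, q_emb.size(1), -1)  # early fusion
            _, (q_state, _) = self.qi_rnn(torch.cat([q_emb, img], dim=-1))
            _, (f_state, _) = self.fact_rnn(self.embed(history_facts[:, t]))
            states.append(torch.cat([q_state[-1], f_state[-1]], dim=-1))
        out, _ = self.dialog_rnn(torch.stack(states, dim=1))           # (B, T, H)
        return out   # out[:, t] is used to decode the answer at round t
```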
1611.08669#81 | Visual Dialog | Splits. Recall that VisDial v0.9 contained 83k dialogs on COCO-train and 40k on COCO-val images. We split the 83k into 80k for training and 3k for validation, and use the 40k as test. Memory Network. The image is encoded with a VGG-16 CNN and the question with an LSTM. We concatenate the representations and follow this by a fully-connected layer and tanh non-linearity to get a "query vector". Each caption/QA-pair (or "fact") | 1611.08669#80 | 1611.08669#82 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#82 | Visual Dialog | in dialog history is encoded independently by an LSTM with shared weights. The query vector is then used to compute attention over the t facts by inner product. A convex combination of the attended history vectors is passed through a fully-connected layer and tanh non-linearity, and added back to the query vector. This combined representation is then passed through another fully-connected layer and tanh non-linearity and then used to decode the response. The model architecture is shown in Fig. 17c. Fig. 18 shows Preprocessing. We spell-correct VisDial data using the Bing API [41]. Following VQA, we lowercase all questions and answers, convert digits to words, and remove contractions, before tokenizing using the Python NLTK [1]. We then construct a dictionary of words that appear at least five times in the train set, giving us a vocabulary of around 7.5k. | 1611.08669#81 | 1611.08669#83 | 1611.08669 | [
"1605.06069"
]
|
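The memory/attention step of the Memory Network encoder described above can be sketched as follows: inner-product attention of the query over the t fact embeddings, a convex combination of the attended facts, and a residual addition back to the query. This is an illustrative approximation with assumed shapes, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAttention(nn.Module):
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.hist_proj = nn.Linear(hidden_dim, hidden_dim)
        self.out_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, query, facts):
        # query: (B, H) image+question encoding; facts: (B, t, H) caption/QA encodings
        scores = torch.bmm(facts, query.unsqueeze(2)).squeeze(2)     # (B, t)
        alpha = F.softmax(scores, dim=1)                             # attention weights
        attended = torch.bmm(alpha.unsqueeze(1), facts).squeeze(1)   # convex combination
        combined = query + torch.tanh(self.hist_proj(attended))      # add back to query
        return torch.tanh(self.out_proj(combined))                   # fed to the decoder
```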
1611.08669#83 | Visual Dialog | Hyperparameters. All our models are implemented in Torch [2]. Model hyperparameters are chosen by early stopping on val based on the Mean Reciprocal Rank (MRR) metric. All LSTMs are 2-layered with 512-dim hidden states. We learn 300-dim embeddings for words and images. These word embeddings are shared across question, history, and decoder LSTMs. We use Adam [28] [Figure 17a: Late Fusion Encoder diagram; the embedded example dialog ("The man is riding his bicycle on the sidewalk. Is the man wearing a helmet? ...") and block labels are omitted here.] | 1611.08669#82 | 1611.08669#84 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#84 | Visual Dialog | (a) Late Fusion Encoder [Figure 17b: Hierarchical Recurrent Encoder diagram; embedded example-dialog text and block labels are omitted here.] | 1611.08669#83 | 1611.08669#85 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#85 | Visual Dialog | (b) Hierarchical Recurrent Encoder [Figure 17c: Memory Network Encoder diagram; embedded example-dialog text, attention-over-history weights, and block labels are omitted here.] | 1611.08669#84 | 1611.08669#86 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#86 | Visual Dialog | (c) Memory Network Encoder Figure 17
| Model | MRR | R@1 | R@5 | R@10 | Mean |
|---|---|---|---|---|---|
| Answer prior | 0.311 | 19.85 | 39.14 | 44.28 | 31.56 |
| NN-Q | 0.392 | 30.54 | 46.99 | 49.98 | 30.88 |
| NN-QI | 0.385 | 29.71 | 46.57 | 49.86 | 30.90 |
| LF-Q-G | 0.403 | 29.74 | 50.10 | 56.32 | 24.06 |
| LF-QH-G | 0.425 | 32.49 | 51.56 | 57.80 | 23.11 |
| LF-QI-G | 0.437 | 34.06 | 52.50 | 58.89 | 22.31 |
| HRE-QH-G | 0.430 | 32.84 | 52.36 | 58.64 | 22.59 |
| HRE-QIH-G | 0.442 | 34.37 | 53.40 | 59.74 | 21.75 |
| HREA-QIH-G | 0.442 | 34.47 | 53.43 | 59.73 | 21.83 |
| HRE-QIH-D | 0.502 | 36.26 | 65.67 | 77.05 | 7.79 |
| HREA-QIH-D | 0.508 | 36.76 | 66.54 | 77.75 | 7.59 |
| SAN1-QI-D | 0.506 | 36.21 | 67.08 | 78.16 | 7.74 |
| HieCoAtt-QI-D | 0.509 | 35.54 | 66.79 | 77.94 | 7.68 |
| Human-Q | 0.441 | 25.10 | 67.37 | - | 4.19 |
| Human-QH | 0.485 | 30.31 | 70.53 | - | 3.91 |
| Human-QI | 0.619 | 46.12 | 82.54 | - | 2.92 |
| Human-QIH | 0.635 | 48.03 | 83.76 | - | 2.83 |

Table 5: Performance of methods on VisDial v0.5, measured by mean reciprocal rank (MRR), recall@k for k = {1, 5, 10} and mean rank. | 1611.08669#85 | 1611.08669#87 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#87 | Visual Dialog | Note that higher is better for MRR and recall@k, while lower is better for mean rank. Memory Network has the best performance in both discriminative and generative settings. with a learning rate of 10^-3 for all models. Gradients at each iteration are clamped to [-5, 5] to avoid explosion. Our code, architectures, and trained models are available at https://visualdialog.org. # References [1] NLTK. http://www.nltk.org/. 18 [2] Torch. http://torch.ch/. 9, 18 [3] A. Agrawal, D. Batra, and D. Parikh. | 1611.08669#86 | 1611.08669#88 | 1611.08669 | [
"1605.06069"
]
|
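A minimal sketch of the optimization settings mentioned in the row above (Adam with learning rate 1e-3 and gradients clamped to [-5, 5]); the training-step structure and names are assumptions, not the authors' Torch code.

```python
import torch

def training_step(model, loss_fn, batch, optimizer):
    """One update with element-wise gradient clamping to [-5, 5]."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch["inputs"]), batch["targets"])
    loss.backward()
    for p in model.parameters():
        if p.grad is not None:
            p.grad.data.clamp_(-5, 5)
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```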
1611.08669#88 | Visual Dialog | Analyzing the Behavior of Visual Question Answering Models. In EMNLP, 2016. 3, 4 [4] H. Agrawal, A. Chandrasekaran, D. Batra, D. Parikh, and M. Bansal. Sort story: Sorting jumbled images and captions into stories. In EMNLP, 2016. 3 [5] Amazon. Alexa. http://alexa.amazon.com/. 6 [6] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015. 1, 2, 3, 4, 5, 10, 11, 13, 14 [7] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, and T. | 1611.08669#87 | 1611.08669#89 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#89 | Visual Dialog | Yeh. VizWiz: Nearly Real-time Answers to Visual Questions. In UIST, 2010. 1 [8] A. Bordes, N. Usunier, S. Chopra, and J. Weston. Large-scale Simple Question Answering with Memory Networks. arXiv preprint arXiv:1506.02075, 2015. 3 [9] Learning End-to-End Goal-Oriented Dialog. arXiv preprint arXiv:1605.07683, 2016. 3, 6, 8 [10] G. Christie, A. Laddha, A. Agrawal, S. Antol, Y. Goyal, K. Kochersberger, and D. Batra. | 1611.08669#88 | 1611.08669#90 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#90 | Visual Dialog | Resolving language and vision ambiguities together: Joint segmentation and prepositional attachment resolution in captioned scenes. In EMNLP, 2016. 3 [11] C. Danescu-Niculescu-Mizil and L. Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011, 2011. 12 [12] A. Das, H. Agrawal, C. L. Zitnick, D. Parikh, and D. Batra. | 1611.08669#89 | 1611.08669#91 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#91 | Visual Dialog | Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? In EMNLP, 2016. 3 [13] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. C. Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. In CVPR, 2017. 3 [14] J. Dodge, A. Gane, X. Zhang, A. Bordes, S. Chopra, A. Miller, A. Szlam, and J. Weston. | 1611.08669#90 | 1611.08669#92 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#92 | Visual Dialog | Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems. In ICLR, 2016. 2, 3 [15] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, 2015. 3 [16] H. Fang, S. Gupta, F. N. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig. | 1611.08669#91 | 1611.08669#93 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#93 | Visual Dialog | From Captions to Visual Concepts and Back. In CVPR, 2015. 3 [17] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. [Figure 18 examples (interleaved here in the source): attention visualizations for questions such as "What color are kites?", "Can you see street signs?", and "Is anyone on bus?", shown against captions like "A lot of people stand around flying kites in a park.", "The computer on the desk shows an image of a car.", and "A street scene with a horse and carriage."; the full dialog text is omitted here.] | 1611.08669#92 | 1611.08669#94 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#94 | Visual Dialog | [Figure 18 examples continued: further dialogs for images captioned "A nice bird standing on a bench.", "A lot of people stand around flying kites in a park.", and "A street scene with a horse and carriage."; text omitted.] Figure 18: | 1611.08669#93 | 1611.08669#95 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#95 | Visual Dialog | Selected examples of attention over history facts from our Memory Network encoder. The intensity of color in each row indicates the strength of attention placed on that round by the model. Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering. In NIPS, 2015. 3, 4, 11, 13 [18] D. Geman, S. Geman, N. Hallonquist, and L. Younes. A Visual Turing Test for Computer Vision Systems. | 1611.08669#94 | 1611.08669#96 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#96 | Visual Dialog | In PNAS, 2014. 3 21 [19] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017. 3, 4 [20] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016. 1 [21] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. | 1611.08669#95 | 1611.08669#97 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#97 | Visual Dialog | Teaching machines to read and comprehend. In NIPS, 2015. 1, 3 [22] R. Hu, M. Rohrbach, and T. Darrell. Segmentation from natural language expressions. In ECCV, 2016. 3 [23] T.-H. Huang, F. Ferraro, N. Mostafazadeh, I. Misra, A. Agrawal, J. Devlin, R. Girshick, X. He, P. Kohli, D. Ba- tra, L. Zitnick, D. Parikh, L. Vanderwende, M. Galley, and M. Mitchell. Visual storytelling. In NAACL HLT, 2016. 3 [24] Q. V. L. | 1611.08669#96 | 1611.08669#98 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#98 | Visual Dialog | Ilya Sutskever, Oriol Vinyals. Sequence to Sequence Learning with Neural Networks. In NIPS, 2014. 12 [25] A. Jabri, A. Joulin, and L. van der Maaten. Revisiting visual question answering baselines. In ECCV, 2016. 7 [26] A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukács, M. Ganea, P. Young, et al. | 1611.08669#97 | 1611.08669#99 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#99 | Visual Dialog | Smart Reply: Automated Response Suggestion for Email. In KDD, 2016. 3 [27] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015. 3 [28] D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015. 18 [29] C. Kong, D. Lin, M. Bansal, R. Urtasun, and S. | 1611.08669#98 | 1611.08669#100 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#100 | Visual Dialog | Fidler. What are you talking about? text-to-image coreference. In CVPR, 2014. 3 [30] O. Lemon, K. Georgila, J. Henderson, and M. Stuttle. An ISU dialogue system exhibiting reinforcement learning of di- alogue policies: generic slot-ï¬ lling in the TALK in-car sys- tem. In EACL, 2006. 2 [31] J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. | 1611.08669#99 | 1611.08669#101 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#101 | Visual Dialog | Juraf- sky. Deep Reinforcement Learning for Dialogue Generation. In EMNLP, 2016. 3 [32] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Dollà ¡r, and C. L. Zitnick. Microsoft COCO: Common Objects in Context. In ECCV, 2014. 2, 3 [33] C.-W. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. | 1611.08669#100 | 1611.08669#102 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#102 | Visual Dialog | How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In EMNLP, 2016. 3, 6 [34] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single Shot MultiBox Detector. In ECCV, 2016. 1 [35] R. Lowe, N. Pow, I. Serban, and J. Pineau. | 1611.08669#101 | 1611.08669#103 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#103 | Visual Dialog | The Ubuntu Dia- logue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. In SIGDIAL, 2015. 3 Deeper LSTM and Normalized CNN Visual Question Answering https://github.com/VT-vision-lab/ model. VQA_LSTM_CNN, 2015. 8 [37] J. Lu, J. Yang, D. Batra, and D. Parikh. Hierarchical Question-Image Co-Attention for Visual Question Answer- ing. In NIPS, 2016. 3, 8 [38] M. Malinowski and M. Fritz. | 1611.08669#102 | 1611.08669#104 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#104 | Visual Dialog | A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input. In NIPS, 2014. 3, 11 [39] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 3 [40] H. Mei, M. Bansal, and M. R. Walter. | 1611.08669#103 | 1611.08669#105 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#105 | Visual Dialog | Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In AAAI, 2016. 2 [41] Microsoft. Bing Spell Check API. https://www. microsoft.com/cognitive-services/en-us/ bing-spell-check-api/documentation. 18 [42] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Ve- ness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep rein- forcement learning. Nature, 518(7540):529â 533, 02 2015. 1 [43] N. Mostafazadeh, C. Brockett, B. Dolan, M. Galley, J. Gao, G. P. Spithourakis, and L. Vanderwende. | 1611.08669#104 | 1611.08669#106 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#106 | Visual Dialog | Image-Grounded Conversations: Multimodal Context for Natural Question and Response Generation. arXiv preprint arXiv:1701.08251, 2017. 3 [44] T. Paek. Empirical methods for evaluating dialog systems. In Proceedings of the workshop on Evaluation for Language and Dialogue Systems-Volume 9, 2001. 2 [45] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. | 1611.08669#105 | 1611.08669#107 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#107 | Visual Dialog | Flickr30k entities: Col- lecting region-to-phrase correspondences for richer image- to-sentence models. In ICCV, 2015. 3 [46] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In EMNLP, 2016. 3 [47] V. Ramanathan, A. Joulin, P. Liang, and L. Fei-Fei. Linking people with "their" names using coreference resolution. In ECCV, 2014. 3 [48] A. Ray, G. Christie, M. Bansal, D. Batra, and D. Parikh. | 1611.08669#106 | 1611.08669#108 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#108 | Visual Dialog | Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions. In EMNLP, 2016. 5, 13 [49] M. Ren, R. Kiros, and R. Zemel. Exploring Models and Data for Image Question Answering. In NIPS, 2015. 1, 3, 11 [50] A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele. Grounding of textual phrases in images by re- construction. In ECCV, 2016. 3 [51] A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele. | 1611.08669#107 | 1611.08669#109 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#109 | Visual Dialog | A dataset for movie description. In CVPR, 2015. 3 [52] I. V. Serban, A. García-Durán, Ç. Gülçehre, S. Ahn, S. Chandar, A. C. Courville, and Y. Bengio. Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus. In ACL, 2016. 3 [53] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. | 1611.08669#108 | 1611.08669#110 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#110 | Visual Dialog | Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In AAAI, 2016. 3 [54] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. arXiv preprint arXiv:1605.06069, 2016. 3, 7 [55] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. | 1611.08669#109 | 1611.08669#111 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#111 | Visual Dialog | Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484â 489, 2016. 1 [56] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 7 [57] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Ur- tasun, and S. | 1611.08669#110 | 1611.08669#112 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#112 | Visual Dialog | Fidler. MovieQA: Understanding Stories in Movies through Question-Answering. In CVPR, 2016. 1 [58] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S. C. Zhu. Joint Video and Text Parsing for Understanding Events and Answering Queries. IEEE MultiMedia, 2014. 1 [59] S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. | 1611.08669#111 | 1611.08669#113 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#113 | Visual Dialog | Saenko. Sequence to Sequence - Video to Text. In ICCV, 2015. 3 [60] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating Videos to Natural Lan- guage Using Deep Recurrent Neural Networks. In NAACL HLT, 2015. 3 [61] O. Vinyals and Q. Le. A Neural Conversational Model. arXiv preprint arXiv:1506.05869, 2015. 3 [62] O. Vinyals, A. Toshev, S. Bengio, and D. | 1611.08669#112 | 1611.08669#114 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#114 | Visual Dialog | Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. 3 [63] L. Wang, S. Guo, W. Huang, Y. Xiong, and Y. Qiao. Knowledge Guided Disambiguation for Large-Scale Scene Classiï¬ cation with Multi-Resolution CNNs. arXiv preprint arXiv:1610.01119, 2016. 1 23 [64] J. Weizenbaum. ELIZA. http://psych.fullerton. edu/mbirnbaum/psych101/Eliza.htm. 2, 3 [65] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. | 1611.08669#113 | 1611.08669#115 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#115 | Visual Dialog | Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. In ICLR, 2016. 1, 3 [66] S. Wu, H. Pique, and J. Wieland. Intelligence to Help Blind People http://newsroom.fb.com/news/2016/04/using-artiï¬ cial- intelligence-to-help-blind-people-see-facebook/, 1 # Artificial # Facebook. 2016. [67] Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola. | 1611.08669#114 | 1611.08669#116 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#116 | Visual Dialog | Stacked Attention Networks for Image Question Answering. In CVPR, 2016. 8 [68] L. Yu, E. Park, A. C. Berg, and T. L. Berg. Visual Madlibs: Fill in the blank Image Generation and Question Answering. In ICCV, 2015. 11 [69] P. Zhang, Y. Goyal, D. Summers-Stay, D. Batra, and D. Parikh. | 1611.08669#115 | 1611.08669#117 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#117 | Visual Dialog | Yin and Yang: Balancing and Answering Binary Visual Questions. In CVPR, 2016. 3, 4, 5, 13, 14 [70] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei. Visual7W: Grounded Question Answering in Images. In CVPR, 2016. 4, 11, 13 [71] C. L. Zitnick, A. Agrawal, S. Antol, M. Mitchell, D. Batra, and D. Parikh. | 1611.08669#116 | 1611.08669#118 | 1611.08669 | [
"1605.06069"
]
|
1611.08669#118 | Visual Dialog | Measuring machine intelligence through visual question answering. AI Magazine, 2016. 1 | 1611.08669#117 | | 1611.08669 | [
"1605.06069"
]
|
|
1611.06440#0 | Pruning Convolutional Neural Networks for Resource Efficient Inference | arXiv:1611.06440v2 [cs.LG] 8 Jun 2017 Published as a conference paper at ICLR 2017 # PRUNING CONVOLUTIONAL NEURAL NETWORKS FOR RESOURCE EFFICIENT INFERENCE Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz NVIDIA {pmolchanov, styree, tkarras, taila, jkautz}@nvidia.com # ABSTRACT We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach. # INTRODUCTION | | 1611.06440#1 | 1611.06440 | [
"1512.08571"
]
|
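To make the procedure in the abstract concrete, the sketch below scores the feature maps of one convolutional layer with a first-order Taylor criterion (the magnitude of activation times gradient, averaged over data) and builds a binary gate that removes the least salient maps before further fine-tuning. It is an illustrative PyTorch-style approximation under stated assumptions, not the authors' implementation.

```python
import torch

def taylor_criterion(activations: torch.Tensor, gradients: torch.Tensor) -> torch.Tensor:
    """activations, gradients: (batch, channels, H, W) captured for one conv layer
    (e.g. via forward/backward hooks). Returns a per-feature-map saliency:
    |mean over spatial locations of (dC/dz * z)|, averaged over the batch."""
    per_example = (activations * gradients).mean(dim=(2, 3)).abs()   # (batch, channels)
    return per_example.mean(dim=0)                                   # (channels,)

def prune_gate(saliency: torch.Tensor, num_to_prune: int) -> torch.Tensor:
    """Binary gate with zeros at the num_to_prune least salient feature maps.
    Multiply the layer's output by gate.view(1, -1, 1, 1) to emulate pruning
    before the maps are physically removed and the network is fine-tuned further."""
    gate = torch.ones_like(saliency)
    _, idx = torch.topk(saliency, num_to_prune, largest=False)
    gate[idx] = 0.0
    return gate
```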
|
1611.06440#1 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Convolutional neural networks (CNN) are used extensively in computer vision applications, including object classiï¬ cation and localization, pedestrian and car detection, and video classiï¬ cation. Many problems like these focus on specialized domains for which there are only small amounts of care- fully curated training data. In these cases, accuracy may be improved by ï¬ ne-tuning an existing deep network previously trained on a much larger labeled vision dataset, such as images from Ima- geNet (Russakovsky et al., 2015) or videos from Sports-1M (Karpathy et al., 2014). While transfer learning of this form supports state of the art accuracy, inference is expensive due to the time, power, and memory demanded by the heavyweight architecture of the ï¬ ne-tuned network. While modern deep CNNs are composed of a variety of layer types, runtime during prediction is dominated by the evaluation of convolutional layers. With the goal of speeding up inference, we prune entire feature maps so the resulting networks may be run efï¬ ciently even on embedded devices. We interleave greedy criteria-based pruning with ï¬ ne-tuning by backpropagation, a computationally efï¬ cient procedure that maintains good generalization in the pruned network. Neural network pruning was pioneered in the early development of neural networks (Reed, 1993). Optimal Brain Damage (LeCun et al., 1990) and Optimal Brain Surgeon (Hassibi & Stork, 1993) leverage a second-order Taylor expansion to select parameters for deletion, using pruning as regu- larization to improve training and generalization. This method requires computation of the Hessian matrix partially or completely, which adds memory and computation costs to standard ï¬ ne-tuning. In line with our work, Anwar et al. (2015) describe structured pruning in convolutional layers at the level of feature maps and kernels, as well as strided sparsity to prune with regularity within kernels. Pruning is accomplished by particle ï¬ ltering wherein conï¬ gurations are weighted by misclassiï¬ cation rate. | 1611.06440#0 | 1611.06440#2 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#2 | Pruning Convolutional Neural Networks for Resource Efficient Inference | The method demonstrates good results on small CNNs, but larger CNNs are not addressed. (2015) introduce a simpler approach by fine-tuning with a strong (2 regularization term and dropping parameters with values below a predefined threshold. Such unstructured pruning is very effective for network compression, and this approach demonstrates good performance for intra-kernel pruning. But compression may not translate directly to faster inference since modern hardware 1 Published as a conference paper at ICLR 2017 exploits regularities in computation for high throughput. So specialized hardware may be needed for efï¬ cient inference of a network with intra-kernel sparsity (Han et al., 2016). This approach also requires long ï¬ ne-tuning times that may exceed the original network training by a factor of 3 or larger. Group sparsity based regularization of network parameters was proposed to penalize unimportant parameters (Wen et al., 2016; Zhou et al., 2016; Alvarez & Salzmann, 2016; Lebedev & Lempitsky, 2016). Regularization-based pruning techniques require per layer sensitivity analysis which adds extra computations. In contrast, our approach relies on global rescaling of criteria for all layers and does not require sensitivity estimation. Moreover, our approach is faster as we directly prune unimportant parameters instead of waiting for their values to be made sufï¬ ciently small by optimization under regularization. Other approaches include combining parameters with correlated weights (Srinivas & Babu, 2015), reducing precision (Gupta et al., 2015; Rastegari et al., 2016) or tensor decomposition (Kim et al., 2015). These approaches usually require a separate training procedure or signiï¬ cant ï¬ ne-tuning, but potentially may be combined with our method for additional speedups. # 2 METHOD The proposed method for pruning consists of the following steps: 1) Fine-tune the network until convergence on the target task; 2) Alternate iterations of pruning and further ï¬ ne-tuning; 3) Stop prun- ing after reaching the target trade-off between accuracy and pruning objective, e.g. ï¬ oating point operations (FLOPs) or memory utiliza- tion. The procedure is simple, but its success hinges on employing the right pruning criterion. In this section, we introduce several efï¬ cient pruning criteria and related technical considerations. | 1611.06440#1 | 1611.06440#3 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#3 | Pruning Convolutional Neural Networks for Resource Efficient Inference | # training examples D = {xv Consider a set of training examples D = {xv = {Xo,X1-eXv}) = {You ayn th, where x and y rep- resent an input and a target output, respectively. The networkâ s parameter] = {(wh, bt), (w?, 02), ...Cw0*, bP*)} are optimized to minimize a cost value C(D|W). The most common choice for a cost function C(-) is a negative log-likelihood function. A cost function is selected independently of pruning and depends only on the task to be solved by the original network. | 1611.06440#2 | 1611.06440#4 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#4 | Pruning Convolutional Neural Networks for Resource Efficient Inference | In the case of transfer learning, we adapt a large network initialized with parameters Wo pretrained on a related but distinct dataset. @ no Stop pruning Figure 1: Network pruning as a backward ï¬ lter. During pruning, we refine a subset of parameters which preserves the accuracy of the adapted network, C(D|Wâ ) = C(D|W). This corresponds to a combinatorial optimization: min C(DIW') â C(D|W)}_ st. ||W' |p < B, (1) where the £9 norm in ||Wâ ||o bounds the number of non-zero parameters B in Wâ | 1611.06440#3 | 1611.06440#5 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#5 | Pruning Convolutional Neural Networks for Resource Efficient Inference | . Intuitively, if W' = W we reach the global minimum of the error function, however ||WVâ ||o will also have its maximum. Finding a good subset of parameters while maintaining a cost value as close as possible to the original is a combinatorial problem. It will require 2|W| evaluations of the cost function for a selected subset of data. For current networks it would be impossible to compute: for example, VGG-16 has |W| = 4224 convolutional feature maps. While it is impossible to solve this optimization exactly for networks of any reasonable size, in this work we investigate a class of greedy methods. Starting with a full set of parameters W, we iteratively identify and remove the least important parameters, as illustrated in Figure [I] By removing parameters at each iteration, we ensure the eventual satisfaction of the ) bound on Wâ | 1611.06440#4 | 1611.06440#6 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#6 | Pruning Convolutional Neural Networks for Resource Efficient Inference | . 1A â parameterâ (w, b) â W might represent an individual weight, a convolutional kernel, or the entire set of kernels that compute a feature map; our experiments operate at the level of feature maps. 2 (1) Published as a conference paper at ICLR 2017 Since we focus our analysis on pruning feature maps from convolutional layers, let us denote a set of image feature maps by ze â ¬ R#¢*exCe with dimensionality Hp x W and Cy individual maps (or channels)P| The feature maps can either be the input to the network, zo, or the output from a convolutional layer, zy with ¢ â ¬ [1,2,..., Z]. Individual feature maps are denoted 2") for k â ¬ [1,2,...,C]. A convolutional layer ¢ applies the convolution operation (*) to a set of input feature maps ze_ with kernels parameterized by wi") â | 1611.06440#5 | 1611.06440#7 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#7 | Pruning Convolutional Neural Networks for Resource Efficient Inference | ¬ RO XPxp, â ¬ RO XPxp, wi" D4. o®), 2) = BR (0 1% wi" D4. o®), (2) where 2i*) â ¬ R%â ¬*W¢ is the result of convolving each of Ce_ kernels of size p x p with its respective input feature map and adding bias otâ ) We introduce a pruning gate g, â ¬ {0,1}', an external switch which determines if a particular feature map is included or pruned during feed-forward propagation, such that when g is vectorized: W! = gW. # 2.1 ORACLE PRUNING Minimizing the difference in accuracy between the full and pruned models depends on the criterion for identifying the â least importantâ parameters, called saliency, at each step. The best criterion would be an exact empirical evaluation of each parameter, which we denote the oracle criterion, accomplished by ablating each non-zero parameter w â | 1611.06440#6 | 1611.06440#8 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#8 | Pruning Convolutional Neural Networks for Resource Efficient Inference | ¬ Wâ in turn and recording the costâ s difference. We distinguish two ways of using this oracle estimation of importance: 1) oracle-loss quantifies importance as the signed change in loss, C(D|Wâ ) â C(D|W), and 2) oracle-abs adopts the absolute difference, |C(D|Wâ ) â C(D|W)|. While both discourage pruning which increases the loss, the oracle-loss version encourages pruning which may decrease the loss, while oracle-abs penalizes any pruning in proportion to its change in loss, regardless of the direction of change. While the oracle is optimal for this greedy procedure, it is prohibitively costly to compute, requiring ||W||o evaluations on a training dataset, one evaluation for each remaining non-zero parameter. Since estimation of parameter importance is key to both the accuracy and the efficiency of this pruning approach, we propose and evaluate several criteria in terms of performance and estimation cost. 2.2 CRITERIA FOR PRUNING There are many heuristic criteria which are much more computationally efficient than the oracle. For the specific case of evaluating the importance of a feature map (and implicitly the set of convolutional kernels from which it is computed), reasonable criteria include: the combined ¢2-norm of the kernel weights, the mean, standard deviation or percentage of the feature mapâ s activation, and mutual information between activations and predictions. We describe these criteria in the following paragraphs and propose a new criterion which is based on the Taylor expansion. | 1611.06440#7 | 1611.06440#9 | 1611.06440 | [
"1512.08571"
]
|
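The oracle of Section 2.1 can be sketched as an ablation loop over feature maps. The code below assumes gate-style layers like the `GatedConv2d` sketch above and is illustrative only:

```python
import torch

@torch.no_grad()
def oracle_saliency(model, gated_layers, loss_fn, data_loader, device="cpu"):
    """Ablate each feature map in turn and record the change in loss.

    Returns {(layer_idx, channel): (oracle_loss, oracle_abs)} for every live map.
    """
    def mean_loss():
        total, n = 0.0, 0
        for x, y in data_loader:
            total += loss_fn(model(x.to(device)), y.to(device)).item()
            n += 1
        return total / max(n, 1)

    base = mean_loss()
    scores = {}
    for li, layer in enumerate(gated_layers):
        for ch in range(layer.gate.numel()):
            if layer.gate[ch] == 0:        # already pruned
                continue
            layer.gate[ch] = 0.0           # ablate this feature map
            delta = mean_loss() - base
            scores[(li, ch)] = (delta, abs(delta))   # signed and absolute change
            layer.gate[ch] = 1.0           # restore
    return scores
```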
1611.06440#9 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Minimum weight. Pruning by magnitude of kernel weights is perhaps the simplest possible criterion, and it does not require any additional computation during the fine-tuning process. In case of pruning according to the norm of a set of weights, the criterion is evaluated as Θ_MW : R^(C_(ℓ-1) × p × p) → R, with Θ_MW(w) = (1/|w|) Σ_i w_i², where |w| is the dimensionality of the set of weights after vectorization. The motivation to apply this type of pruning is that a convolutional kernel with low ℓ2 norm detects less important features than those with a high norm. This can be aided during training by applying ℓ1 or ℓ2 regularization, which will push unimportant kernels to have smaller values (a code sketch of this criterion is given below). | 1611.06440#8 | 1611.06440#10 | 1611.06440 | [
"1512.08571"
]
|
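A sketch of the minimum-weight criterion for an `nn.Conv2d` layer, computing one score per output feature map:

```python
import torch

def minimum_weight_saliency(conv):
    """Θ_MW per output feature map: mean of squared kernel weights (a sketch).

    `conv.weight` has shape (C_out, C_in, p, p); flattening all dimensions but the
    first gives one score per feature map.
    """
    w = conv.weight.detach()
    return (w ** 2).flatten(1).mean(dim=1)   # shape (C_out,)
```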
1611.06440#10 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Activation. One of the reasons for the popularity of the ReLU activation is the sparsity in activation that is induced, allowing convolutional layers to act as feature detectors. Therefore it is reasonable to assume that if an activation value (an output feature map) is small, then this feature detector is not important for the prediction task at hand. We may evaluate this by the mean activation, Θ_MA : R^(H_ℓ × W_ℓ × C_ℓ) → R with Θ_MA(a) = (1/|a|) Σ_i a_i for activation a = z_ℓ^(k), or by the standard deviation of the activation, Θ_MA_std(a) = sqrt((1/|a|) Σ_i (a_i - μ_a)²), where μ_a is the mean activation (a code sketch of both statistics is given below). ²While our notation is at times specific | 1611.06440#9 | 1611.06440#11 | 1611.06440 | [
"1512.08571"
]
|
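A sketch of the mean- and standard-deviation-of-activation criteria, accumulating feature maps with forward hooks; `conv_layers` is assumed to be a list of convolutional modules chosen by the caller:

```python
import torch

def activation_statistics(model, conv_layers, data_loader, device="cpu"):
    """Accumulate Θ_MA (mean) and Θ_MA_std per feature map from forward passes."""
    stats = {id(m): [] for m in conv_layers}
    hooks = [m.register_forward_hook(
                 lambda m, inp, out: stats[id(m)].append(out.detach()))
             for m in conv_layers]
    with torch.no_grad():
        for x, _ in data_loader:
            model(x.to(device))
    for h in hooks:
        h.remove()

    results = {}
    for m in conv_layers:
        a = torch.cat(stats[id(m)], dim=0)            # (N, C, H, W)
        per_map = a.permute(1, 0, 2, 3).flatten(1)    # (C, N*H*W)
        results[m] = {"mean": per_map.mean(dim=1),    # Θ_MA
                      "std": per_map.std(dim=1)}      # Θ_MA_std
    return results
```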
1611.06440#11 | Pruning Convolutional Neural Networks for Resource Efficient Inference | to 2D convolutions, the methods are applicable to 3D convolutions, as well as fully connected layers. Mutual information. Mutual information (MI) is a measure of how much information is present in one variable about another variable. We apply MI as a criterion for pruning, Θ_MI : R^(H_ℓ × W_ℓ × C_ℓ) → R, with Θ_MI(a) = MI(a, y), where y is the target of the neural network. MI is defined for continuous variables, so to simplify computation, we exchange it with information gain (IG), which is defined for quantized variables: IG(y|a) = H(a) + H(y) - H(a, y), where H(a) is the entropy of variable a. We accumulate statistics on activations and ground truth for a number of updates, then quantize the values and compute IG (a code sketch of this quantized estimate is given below). Taylor expansion. We phrase pruning as an optimization problem, trying to find W' with a bounded number of non-zero elements that minimizes |ΔC(h_i)| = |C(D|W') - C(D|W)|. | 1611.06440#10 | 1611.06440#12 | 1611.06440 | [
"1512.08571"
]
|
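A sketch of the quantized information-gain estimate used in place of mutual information; the 32-bin quantization is an assumption of this example, not a value taken from the paper:

```python
import numpy as np

def information_gain(activations, targets, bins=32):
    """IG(y|a) = H(a) + H(y) - H(a, y) with `a` quantized into `bins` levels.

    `activations`: 1-D array of per-example summary activations of one feature map;
    `targets`: 1-D array of integer class labels of the same length.
    """
    edges = np.histogram_bin_edges(activations, bins=bins)
    a = np.digitize(activations, edges)

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    # Encode (a, y) pairs as single integers to compute the joint entropy.
    joint = a.astype(np.int64) * (int(targets.max()) + 1) + targets
    return entropy(a) + entropy(targets) - entropy(joint)
```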
1611.06440#12 | Pruning Convolutional Neural Networks for Resource Efficient Inference | With this approach based on the Taylor expansion, we directly approximate the change in the loss function from removing a particular parameter. Let h_i be the output produced from parameter i. In the case of feature maps, h = {z_0^(1), z_0^(2), ..., z_L^(C_L)}. For notational convenience, we consider the cost function equally dependent on parameters and outputs computed from parameters: C(D|h_i) = C(D|(w, b)_i). Assuming independence of parameters, we have: |ΔC(h_i)| = |C(D, h_i = 0) - C(D, h_i)|,   (3)   where C(D, h_i = 0) is the cost value if output h_i is pruned, while C(D, h_i) is the cost if it is not pruned. | 1611.06440#11 | 1611.06440#13 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#13 | Pruning Convolutional Neural Networks for Resource Efficient Inference | While parameters are in reality inter-dependent, we already make an independence assumption at each gradient step during training. To approximate ΔC(h_i), we use the first-degree Taylor polynomial. For a function f(x), the Taylor expansion at point x = a is f(x) = Σ_(p=0)^P (f^(p)(a)/p!) (x - a)^p + R_P(x),   (4)   where f^(p)(a) is the p-th derivative of f evaluated at point a, and R_P(x) is the P-th order remainder. Approximating C(D, h_i = 0) with a first-order Taylor polynomial near h_i = 0, we have: C(D, h_i = 0) = C(D, h_i) - (δC/δh_i) h_i + R_1(h_i = 0).   (5)   The remainder R_1(h_i = 0) can be calculated through the Lagrange form: R_1(h_i = 0) = (δ²C / δ(h_i² = ξ)) (h_i² / 2),   (6)   where ξ is a real number between 0 and h_i. However, we neglect this first-order remainder, largely due to the significant calculation required, but also in part because the widely-used ReLU activation function encourages a smaller second order term. Finally, by substituting Eq. (5) into Eq. (3) and ignoring the remainder, we have Θ_TE : R^(H_ℓ × W_ℓ × C_ℓ) → R+, with Θ_TE(h_i) = |ΔC(h_i)| = |C(D, h_i) - | 1611.06440#12 | 1611.06440#14 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#14 | Pruning Convolutional Neural Networks for Resource Efficient Inference | (δC/δh_i) h_i - C(D, h_i)| = |(δC/δh_i) h_i|.   (7)   Intuitively, this criterion prunes parameters that have an almost flat gradient of the cost function w.r.t. feature map h_i. This approach requires accumulation of the product of the activation and the gradient of the cost function w.r.t. the activation, which is easily computed from the same computations used for back-propagation. Θ_TE is computed for a multi-variate output, such as a feature map, by Θ_TE(z_ℓ^(k)) = |(1/M) Σ_m (δC/δz_(ℓ,m)^(k)) z_(ℓ,m)^(k)|,   (8)   where M is the length of the vectorized feature map (a code sketch of this computation is given below). | 1611.06440#13 | 1611.06440#15 | 1611.06440 | [
"1512.08571"
]
|
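A sketch of the Taylor criterion of Eq. (8): feature-map activations are cached in a forward hook, their gradients are retained during the usual backward pass, and the per-map score is the absolute value of their spatially averaged product, averaged over examples. The hook-based bookkeeping is one possible implementation, not the paper's:

```python
import torch

def taylor_saliency(model, conv_layers, loss_fn, data_loader, device="cpu"):
    """Θ_TE per feature map of each module in `conv_layers` (a sketch)."""
    saved = {}

    def forward_hook(module, inp, out):
        out.retain_grad()          # keep d(loss)/d(feature map) after backward
        saved[module] = out

    hooks = [m.register_forward_hook(forward_hook) for m in conv_layers]
    scores = {m: 0.0 for m in conv_layers}
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        for m in conv_layers:
            act, grad = saved[m], saved[m].grad            # (N, C, H, W)
            per_example = (act * grad).flatten(2).mean(2)  # 1/M * sum over spatial positions
            # Absolute value per example, then average over the minibatch.
            scores[m] = scores[m] + per_example.abs().mean(0).detach()
        n_batches += 1
    for h in hooks:
        h.remove()
    return {m: s / max(n_batches, 1) for m, s in scores.items()}
```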
1611.06440#15 | Pruning Convolutional Neural Networks for Resource Efficient Inference | For a minibatch with T > 1 examples, the criterion is computed for each example separately and averaged over T. Independently of our work, Figurnov et al. (2016) came up with a similar metric based on the Taylor expansion, called impact, to evaluate the importance of spatial cells in a convolutional layer. This shows that the same metric can be applied to evaluate the importance of different groups of parameters. Relation to Optimal Brain Damage. The Taylor criterion proposed above relies on approximating the change in loss caused by removing a feature map. The core idea is the same as in Optimal Brain Damage (OBD) (LeCun et al., 1990). Here we consider the differences more carefully. The primary difference is the treatment of the | 1611.06440#14 | 1611.06440#16 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#16 | Pruning Convolutional Neural Networks for Resource Efficient Inference | first-order term of the Taylor expansion, in our notation y = (δC/δh) h for cost function C and hidden layer activation h. After sufficient training epochs, the gradient term tends to zero, δC/δh → 0, and E(y) = 0. At face value y offers little useful information, hence OBD regards the term as zero and focuses on the second-order term. However, the variance of y is non-zero and correlates with the stability of the local function w.r.t. activation h. By considering the absolute change in the cost³ induced by pruning (as in Eq. 3), we use the absolute value of the first-order term, |y|. Under the assumption that samples come from an independent and identical distribution, E(|y|) = σ sqrt(2/π), where σ | 1611.06440#15 | 1611.06440#17 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#17 | Pruning Convolutional Neural Networks for Resource Efficient Inference | is the standard deviation of y, known as the expected value of the half-normal distribution. So, while y tends to zero, the expectation of |y| is proportional to the variance of y, a value which is empirically more informative as a pruning criterion. As an additional benefit, we avoid the computation of the second-order Taylor expansion term, or its simplification (the diagonal of the Hessian), as required in OBD. We found it important to compare the proposed Taylor criterion to OBD. As described in the original papers (LeCun et al., 1990; 1998), OBD can be implemented efficiently, similarly to the standard back-propagation algorithm, doubling backward-propagation time and memory usage when used together with standard fine-tuning. An efficient implementation of the original OBD algorithm may require significant changes to frameworks based on automatic differentiation, like Theano, in order to compute only the diagonal of the Hessian instead of the full matrix. Several researchers have tried to tackle this problem with approximation techniques (Martens, 2010; Martens et al., 2012). In our implementation, we use an efficient way of computing the Hessian-vector product (Pearlmutter, 1994) and the matrix-diagonal approximation proposed by Bekas et al. (2007); please refer to the appendix for more details. With the current implementation, OBD is 30 times slower than the Taylor technique for saliency estimation, and 3 times slower for iterative pruning; however, with a different implementation it can be only 50% slower, as mentioned in the original paper. Average Percentage of Zeros (APoZ). Hu et al. (2016) proposed to explore sparsity in activations for network pruning. The ReLU activation function imposes sparsity during inference, and the average percentage of positive activations at the output can determine the importance of a neuron. Intuitively, it is a good criterion; however, feature maps in the first layers have similar APoZ regardless of the network's target, as they learn to be Gabor-like filters. We use APoZ to estimate the saliency of feature maps. 2.3 NORMALIZATION Some criteria return "raw" values, whose scale varies with the depth of the parameter's layer in the network. A simple layer-wise ℓ2-normalization can achieve adequate rescaling across layers: Θ̂(z_ℓ^(k)) = Θ(z_ℓ^(k)) / sqrt(Σ_j Θ(z_ℓ^(j))²) (a code sketch of this normalization is given below). | 1611.06440#16 | 1611.06440#18 | 1611.06440 | [
"1512.08571"
]
|
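A sketch of the layer-wise ℓ2 rescaling from Section 2.3; the small epsilon is an assumption added to guard against an all-zero layer:

```python
import torch

def l2_normalize_per_layer(raw_scores):
    """Layer-wise rescaling: divide each layer's scores by their ℓ2 norm (a sketch).

    `raw_scores` maps each layer to a 1-D tensor of per-feature-map criterion values.
    """
    return {layer: s / (s.norm(p=2) + 1e-8)
            for layer, s in raw_scores.items()}
```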
1611.06440#18 | Pruning Convolutional Neural Networks for Resource Efficient Inference | 2.4 FLOPS REGULARIZED PRUNING One of the main reasons to apply pruning is to reduce the number of operations in the network. Feature maps from different layers require different amounts of computation due to the number and sizes of input feature maps and convolution kernels. To take this into account we introduce FLOPs regularization: Θ(z_ℓ^(k)) = Θ(z_ℓ^(k)) - λ Θ_ℓ^flops,   (9)   where λ controls the amount of regularization. For our experiments, we use λ = 10^-3. Θ^flops is computed under the assumption that convolution is implemented as a sliding window (see Appendix); a code sketch of this adjustment is given below. Other regularization conditions may be applied, e.g. storage size, kernel sizes, or memory footprint. ³OBD approximates the signed difference in loss, while our method approximates the absolute difference in loss. | 1611.06440#17 | 1611.06440#19 | 1611.06440 | [
"1512.08571"
]
|
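A sketch of the FLOPs regularization of Eq. (9); how `flops_per_map` is estimated (e.g., from the sliding-window assumption) is left to the caller:

```python
def flops_regularize(scores, flops_per_map, lam=1e-3):
    """Eq. (9): subtract lam * Θ_l^flops from every feature-map score in layer l.

    `scores[layer]` is a tensor of per-map saliencies; `flops_per_map[layer]` is the
    (scalar) FLOPs cost of one feature map in that layer, estimated separately.
    """
    return {layer: s - lam * flops_per_map[layer] for layer, s in scores.items()}
```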
1611.06440#19 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We find in our results that pruning based on absolute difference yields better accuracy. Figure 2: Global statistics of oracle ranking, shown by layer for Birds-200 transfer learning. Figure 3: Pruning without fine-tuning using oracle ranking for Birds-200 transfer learning. # 3 RESULTS We empirically study the pruning criteria and procedure detailed in the previous section for a variety of problems. We focus many experiments on transfer learning problems, a setting where pruning seems to excel. We also present results for pruning large networks on their original tasks for more direct comparison with the existing pruning literature. Experiments are performed within Theano (Theano Development Team, 2016). Training and pruning are performed on the respective training sets for each problem, while results are reported on appropriate holdout sets, unless otherwise indicated. For all experiments we prune a single feature map at every pruning iteration, allowing fine-tuning and re-evaluation of the criterion to account for dependency between parameters. | 1611.06440#18 | 1611.06440#20 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#20 | Pruning Convolutional Neural Networks for Resource Efficient Inference | # 3.1 CHARACTERIZING THE ORACLE RANKING We begin by explicitly computing the oracle for a single pruning iteration of a visual transfer learning problem. We fine-tune the VGG-16 network (Simonyan & Zisserman, 2014) for classification of bird species using the Caltech-UCSD Birds 200-2011 dataset (Wah et al., 2011). The dataset consists of nearly 6000 training images and 5700 test images, covering 200 species. | 1611.06440#19 | 1611.06440#21 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#21 | Pruning Convolutional Neural Networks for Resource Efficient Inference | . We fine-tune VGG-16 for 60 epochs with learning rate 0.0001 to achieve a test accuracy of 72.2% using uncropped images. To compute the oracle, we evaluate the change in loss caused by removing each individual feature map from the fine-tuned VGG-16 network. (See Appendix A.3 for additional analysis.) We rank feature maps by their contributions to the loss, where rank 1 indicates the most important feature map (removing it results in the highest increase in loss) and rank 4224 indicates the least important. Statistics of global ranks are shown in Fig. 2 grouped by convolutional layer. We observe: (1) Median global importance tends to decrease with depth. (2) Layers with max-pooling tend to be more important than those without. (VGG-16 has pooling after layers 2, 4, 7, 10, and 13.) However, (3) maximum and minimum ranks show that every layer has some feature maps that are globally important and others that are globally less important. Taken together with the results of subsequent experiments, we opt for encouraging a balanced pruning that distributes selection across all layers. Next, we iteratively prune the network using the pre-computed oracle ranking. In this experiment, we do not update the parameters of the network or the oracle ranking between iterations. Training accuracy is illustrated in Fig. 3 over many pruning iterations. Surprisingly, pruning by smallest absolute change in loss (Oracle-abs) yields higher accuracy than pruning by the net effect on loss (Oracle-loss). Even though the oracle indicates that removing some feature maps individually may decrease loss, instability accumulates due to the large absolute changes that are induced. These results support pruning by absolute difference in cost, as constructed in Eq. 1. # 3.2 EVALUATING PROPOSED CRITERIA VERSUS THE ORACLE To evaluate computationally efficient criteria as substitutes for the oracle, we compute Spearman's rank correlation, an estimate of how well two predictors provide monotonically related outputs, | 1611.06440#20 | 1611.06440#22 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#22 | Pruning Convolutional Neural Networks for Resource Efficient Inference | 6 Published as a conference paper at ICLR 2017 â AlexNet / Flowers-102 VGG-16 / Birds-200 Weight Activation OBD Taylor Weight â Activation OBD Taylor Mutual Mean S.d. APoZ Mean S.d. APoZ Info. Per layer 017 0.65 067 054 0.64 0.77 0.27 056 057 «(0.35 «(059 «(0.73 0.28 All layers 028 051 053 041 0.68 0.37 0.34 0.35 «030 «043° «(0.65 (0.14 0.35 (w/fs-norm) 0.13 (0.63«0.61«0.60 = (O75, 0.33 «0.64 «(066 «(0.51 2«=«-~=S.73 0.47 AlexNet / Birds-200 VGG-16 / Flowers-102 Per layer 036 «0.57 065 042 054 0.81 0.19 051 047 036 021 06 All layers 032 037 051 0.28 061 0.37 0.35 053 045 0.61 0.28 0.02 (w/fs-norm) 0.23 0.54. 0.57 0.49 - 0.78 0.28 «0.66 «(065 «(061 ~~ - 0.7 AlexNet / ImageNet Per layer 057 0.09 019 0.06 058 0.58 All layers 067 0.00 013 â 0.08 0.72 0.11 (w/fs-norm) 0.44 «0.10 0.19 0.19 = 0.55 | 1611.06440#21 | 1611.06440#23 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#23 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Table 1: Spearman's rank correlation of criteria vs. oracle for convolutional feature maps of VGG-16 and AlexNet fine-tuned on Birds-200 and Flowers-102 datasets, and AlexNet trained on ImageNet. | 1611.06440#22 | 1611.06440#24 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#24 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Figure 4: Pruning of feature maps in VGG-16 fine-tuned on the Birds-200 dataset. (The panels plot test-set accuracy against the fraction of parameters retained and against GFLOPs for the Taylor, Taylor with FLOPs regularization, activation (mean), minimum weight, OBD, APoZ, and random criteria, plus training from scratch.) | 1611.06440#23 | 1611.06440#25 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#25 | Pruning Convolutional Neural Networks for Resource Efficient Inference | even if their relationship is not linear. Given the difference between oracle⁴ and criterion ranks, d_i = rank(Θ_oracle(i)) - rank(Θ_criterion(i)) for each parameter i, the rank correlation is computed: S = 1 - (6 / (N(N² - 1))) Σ_(i=1)^N d_i²,   (10)   where N is the number of parameters (and the highest rank). This correlation coefficient takes values in [-1, 1], where -1 implies full negative correlation, 0 no correlation, and 1 full positive correlation (a code sketch of this computation is given below). We show Spearman's correlation in Table 1 to compare the oracle-abs ranking to rankings by different criteria on a set of networks/datasets, some of which are introduced later. Data-dependent criteria (all except weight magnitude) are computed on training data during the fine-tuning before or between pruning iterations. As a sanity check, we evaluate random ranking and observe 0.0 correlation across all layers. "Per layer" analysis shows ranking within each convolutional layer, while "All layers" describes ranking across layers. While several criteria do not scale well across layers with raw values, a layer-wise ℓ2-normalization significantly improves performance. The Taylor criterion has the highest correlation among the criteria, both within layers and across layers (with ℓ2 normalization). OBD shows the best correlation across layers when no normalization is used; it also shows the best correlation on the ImageNet dataset. (See the Appendix for further analysis.) # 3.3 PRUNING FINE-TUNED IMAGENET NETWORKS We now evaluate the full iterative pruning procedure on two transfer learning problems. We focus on reducing the number of convolutional feature maps and the total estimated floating point operations (FLOPs). Fine-grained recognition is difficult for relatively small datasets without relying on transfer ⁴We use Oracle-abs because of its better performance in the previous experiment | 1611.06440#24 | 1611.06440#26 | 1611.06440 | [
"1512.08571"
]
|
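A sketch of the rank correlation of Eq. (10); note that this simple ranking ignores ties, as does the formula:

```python
import numpy as np

def spearman_rank_correlation(oracle_scores, criterion_scores):
    """S = 1 - 6 * sum(d_i^2) / (N * (N^2 - 1)) on the ranks of two score vectors."""
    oracle_rank = np.argsort(np.argsort(oracle_scores))        # rank of each element
    criterion_rank = np.argsort(np.argsort(criterion_scores))
    d = (oracle_rank - criterion_rank).astype(np.float64)
    n = len(d)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))
```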
1611.06440#26 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Figure 5: Pruning of feature maps in AlexNet fine-tuned on Flowers-102. (The panels plot test-set accuracy against the fraction of parameters retained and against GFLOPs for the Taylor, activation (mean), minimum weight, OBD, APoZ, and random criteria, plus training from scratch.) | 1611.06440#25 | 1611.06440#27 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#27 | Pruning Convolutional Neural Networks for Resource Efficient Inference | learning. Branson et al. (2014) show that training a CNN from scratch on the Birds-200 dataset achieves a test accuracy of only 10.9%. We compare results to training a randomly initialized CNN with half the number of parameters per layer, denoted "from scratch". Fig. 4 shows pruning of VGG-16 after fine-tuning on the Birds-200 dataset (as described previously). At each pruning iteration, we remove a single feature map and then perform 30 minibatch SGD updates with batch-size 32, momentum 0.9, learning rate 10^-4, and weight decay 10^-4. The | 1611.06440#26 | 1611.06440#28 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#28 | Pruning Convolutional Neural Networks for Resource Efficient Inference | figure depicts accuracy relative to the pruning rate (left) and estimated GFLOPs (right). The Taylor criterion shows the highest accuracy for nearly the entire range of pruning ratios, and with FLOPs regularization demonstrates the best performance relative to the number of operations. OBD shows slightly worse pruning performance in terms of parameters, and significantly worse performance in terms of FLOPs. In Fig. 5, we show pruning of the CaffeNet implementation of AlexNet (Krizhevsky et al., 2012) after adapting to the Oxford Flowers 102 dataset (Nilsback & Zisserman, 2008), with 2040 training and 6129 test images from 102 species of | 1611.06440#27 | 1611.06440#29 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#29 | Pruning Convolutional Neural Networks for Resource Efficient Inference | flowers. Criteria correlation with oracle-abs is summarized in Table 1. We initially fine-tune the network for 20 epochs using a learning rate of 0.001, achieving a final test accuracy of 80.1%. Pruning then proceeds as previously described for Birds-200, except with only 10 mini-batch updates between pruning iterations. We observe the superior performance of the Taylor and OBD criteria in both number of parameters and GFLOPs. The Taylor criterion shows the best performance, closely followed by OBD, which has a slightly lower Spearman's rank correlation coefficient. Implementing OBD takes more effort because of the computation of the diagonal of the Hessian, and it is 50% to 300% slower than the Taylor criterion, which relies on the first-order gradient only. Fig. 6 shows pruning with the Taylor technique and a varying number of fine-tuning updates between pruning iterations. Increasing the number of updates results in higher accuracy, but at the cost of additional runtime of the pruning procedure. During pruning we observe a small drop in accuracy. | 1611.06440#28 | 1611.06440#30 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#30 | Pruning Convolutional Neural Networks for Resource Efficient Inference | One of the reasons is fine-tuning between pruning iterations. Accuracy of the initial network can be improved with longer fine-tuning and a search for better optimization parameters. For example, accuracy of the unpruned VGG-16 network on Birds-200 goes up to 75% after an extra 128k updates, and AlexNet on Flowers-102 goes up to 82.9% after 130k updates. It should be noted that with further fine-tuning of pruned networks we can achieve higher accuracy as well, therefore a one-to-one comparison of accuracies is rough. 3.4 PRUNING A RECURRENT 3D-CNN NETWORK FOR HAND GESTURE RECOGNITION Molchanov et al. (2016) learn to recognize 25 dynamic hand gestures in streaming video with a large recurrent neural network. The network is constructed by adding recurrent connections to a 3D-CNN pretrained on the Sports-1M video dataset (Karpathy et al., 2014) and fine-tuning on a gesture dataset. The full network achieves an accuracy of 80.7% when trained on the depth modality, but a single inference requires an estimated 37.8 GFLOPs, too much for deployment on an embedded GPU. After several iterations of pruning with the Taylor criterion with learning rate 0.0003, momentum 0.9, and FLOPs regularization 10^-3, we reduce inference to 3.0 GFLOPs, as shown in Fig. 7. | 1611.06440#29 | 1611.06440#31 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#31 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Figure 6: Varying the number of minibatch updates between pruning iterations with AlexNet/Flowers-102 and the Taylor criterion. Figure 7: Pruning of a recurrent 3D-CNN for dynamic hand gesture recognition (Molchanov et al., 2016). Figure 8: Pruning of AlexNet on ImageNet with varying number of updates between pruning iterations. | 1611.06440#30 | 1611.06440#32 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#32 | Pruning Convolutional Neural Networks for Resource Efficient Inference | While pruning increases classification error by nearly 6%, additional fine-tuning restores much of the lost accuracy, yielding a final pruned network with a 12.6× reduction in GFLOPs and only a 2.5% loss in accuracy. # 3.5 PRUNING NETWORKS FOR IMAGENET We also test our pruning scheme on the large-scale ImageNet classification task. In the | 1611.06440#31 | 1611.06440#33 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#33 | Pruning Convolutional Neural Networks for Resource Efficient Inference | first experiment, we begin with a trained CaffeNet implementation of AlexNet with 79.2% top-5 validation accuracy. Between pruning iterations, we fine-tune with learning rate 10^-4, momentum 0.9, weight decay 10^-4, batch size 32, and drop-out 50%. Using a subset of 5000 training images, we compute oracle-abs and Spearman's rank correlation with the criteria, as shown in Table 1. | 1611.06440#32 | 1611.06440#34 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#34 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Pruning traces are illustrated in Fig. 8. We observe: 1) Taylor performs better than random or minimum weight pruning when 100 updates are used between pruning iterations. When results are displayed w.r.t. FLOPs, the difference with random pruning is only 0%-4%, but the difference is higher, 1%-10%, when plotted with the number of feature maps pruned. 2) Increasing the number of updates from 100 to 1000 improves performance of pruning significantly for both the Taylor criterion and random pruning. Figure 9: Pruning of the VGG-16 network on ImageNet, with additional following fine-tuning at 11.5 and 8 GFLOPs. | 1611.06440#33 | 1611.06440#35 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#35 | Pruning Convolutional Neural Networks for Resource Efficient Inference | AlexNet / Flowers-102 (unpruned: 1.46 GFLOPs, accuracy 80.1%); pruned to 41% of feature maps (0.4 GFLOPs): 79.8% (-0.3%); pruned to 19.5% of feature maps (0.2 GFLOPs): 74.1% (-6.0%).
CPU: Intel Core i7-5930K, batch 16: 226.4 ms; 121.4 ms (1.9x); 87.0 ms (2.6x).
GPU: GeForce GTX TITAN X (Pascal), batch 16: 4.8 ms; 2.4 ms (2.0x); 1.9 ms (2.5x).
GPU: GeForce GTX TITAN X (Pascal), batch 512: 88.3 ms; 36.6 ms (2.4x); 27.4 ms (3.2x).
GPU: NVIDIA Jetson TX1, batch 32: 169.2 ms; 73.6 ms (2.3x); 58.6 ms (2.9x).
VGG-16 / ImageNet (unpruned: 30.96 GFLOPs, top-5 accuracy 89.3%); pruned to 66% of feature maps (11.5 GFLOPs): 87.0% (-2.3%); pruned to 52% of feature maps (8.0 GFLOPs): 84.5% (-4.8%).
CPU: Intel Core i7-5930K, batch 16: 2564.7 ms; 1483.3 ms (1.7x); 1218.4 ms (2.1x).
GPU: GeForce GTX TITAN X (Pascal), batch 16: 68.3 ms; 31.0 ms (2.2x); 20.2 ms (3.4x).
GPU: NVIDIA Jetson TX1, batch 4: 456.6 ms; 182.5 ms (2.5x); 138.2 ms (3.3x).
R3DCNN / nvGesture (unpruned: 37.8 GFLOPs, accuracy 80.7%); pruned to 25% of feature maps (3 GFLOPs): 78.2% (-2.5%).
GPU: GeForce GT 730M, batch 1: 438.0 ms; 85.0 ms (5.2x). | 1611.06440#34 | 1611.06440#36 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#36 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Table 2: Actual speed-up of networks pruned by the Taylor criterion for various hardware setups. All measurements were performed with PyTorch with cuDNN v5.1.0, except R3DCNN, which was implemented in C++ with cuDNN v4.0.4. Results for the ImageNet dataset are reported as top-5 accuracy on the validation set. Results on AlexNet / Flowers-102 are reported for pruning with 1000 updates between iterations and no fine-tuning after pruning. For a second experiment, we prune a trained VGG-16 network with the same parameters as before, except enabling FLOPs regularization. We stop pruning at two points, 11.5 and 8.0 GFLOPs, and fine-tune both models for an additional five epochs with learning rate 10^-4. Fine-tuning after pruning significantly improves results: the network pruned to 11.5 GFLOPs improves from 83% to 87% top-5 validation accuracy, and the network pruned to 8.0 GFLOPs improves from 77.8% to 84.5%. 3.6 SPEED UP MEASUREMENTS During pruning we measured the reduction in computation in FLOPs, which is a common practice (Han et al., 2015; Lavin, 2015a;b). Improvements in FLOPs result in monotonically decreasing inference time of the networks because entire feature maps are removed from a layer. However, the time consumed by inference depends on the particular implementation of the convolution operator, the parallelization algorithm, hardware, scheduling, memory transfer rate, etc. Therefore we measure the improvement in inference time for selected networks to see the real speed-up compared to unpruned networks in Table 2. We observe significant speed-ups from the proposed pruning scheme. # 4 CONCLUSIONS We propose a new scheme for iteratively pruning deep convolutional neural networks. We find: 1) CNNs may be successfully pruned by iteratively removing the least important parameters (feature maps in this case) according to heuristic selection criteria; 2) a Taylor expansion-based criterion demonstrates significant improvement over other criteria; 3) per-layer normalization of the criterion is important to obtain global scaling. # REFERENCES Jose M Alvarez and Mathieu Salzmann. Learning the Number of Neurons in Deep Networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2262-2270. Curran Associates, Inc., 2016. Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. | 1611.06440#35 | 1611.06440#37 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#37 | Pruning Convolutional Neural Networks for Resource Efficient Inference | . Structured pruning of deep convolutional neural networks. arXiv preprint arXiv:1512.08571, 2015. URL http://arxiv.org/abs/1512.08571. Costas Bekas, Effrosyni Kokiopoulou, and Yousef Saad. An estimator for the diagonal of a matrix. Applied Numerical Mathematics, 57(11):1214-1229, 2007. Steve Branson, Grant Van Horn, Serge Belongie, and Pietro Perona. | 1611.06440#36 | 1611.06440#38 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#38 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Bird species categorization using pose normalized deep convolutional nets. arXiv preprint arXiv:1406.2952, 2014. Yann Dauphin, Harm de Vries, and Yoshua Bengio. Equilibrated adaptive learning rates for non-convex optimization. In Advances in Neural Information Processing Systems, pp. 1504-1512, 2015. Mikhail Figurnov, Aizhan Ibraimova, Dmitry P Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. In Advances in Neural Information Processing Systems, pp. 947-955, 2016. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. | 1611.06440#37 | 1611.06440#39 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#39 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015. URL http://arxiv.org/abs/1502.02551. Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. | 1611.06440#38 | 1611.06440#40 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#40 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Dally. EIE: Efficient inference engine on compressed deep neural network. In Proceedings of the 43rd International Symposium on Computer Architecture, ISCA '16, pp. 243-254, Piscataway, NJ, USA, 2016. IEEE Press. Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems (NIPS), pp. 164-171, 1993. Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. | 1611.06440#39 | 1611.06440#41 | 1611.06440 | [
"1512.08571"
]
|
1611.06440#41 | Pruning Convolutional Neural Networks for Resource Efficient Inference | Dally. EIE: Efï¬ cient inference engine on compressed deep neural network. In Proceedings of the 43rd International Symposium on Computer Architecture, ISCA â 16, pp. 243â 254, Piscataway, NJ, USA, 2016. IEEE Press. Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems (NIPS), pp. 164â 171, 1993. Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. | 1611.06440#40 | 1611.06440#42 | 1611.06440 | [
"1512.08571"
]
|