# The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition

The rest of this paper proceeds as follows: After an overview of related work in Sec. 2, we provide an analysis of publicly-available noisy data for fine-grained recognition in Sec. 3, analyzing its quantity and quality. We describe a more traditional active learning approach for obtaining larger quantities of fine-grained data in Sec. 4, which serves as a comparison to purely using noisy data. We present extensive experiments in Sec. 5, and conclude with discussion in Sec. 6.

# 2 Related Work

Fine-Grained Recognition. The majority of research in fine-grained recognition has focused on developing improved models for
classification [1,3,5,7,9,8,14,16,18,20,21,22,28,29,36,37,41,42,49,51,50,66,68,69,71,73,72,76,77,75,78]. While these works have made great progress in modeling fine-grained categories given the limited data available, very few works have considered the impact of that data [69,68,58]. Xu et al. [69] augment datasets annotated with category labels and parts with web images in a multiple instance learning framework, and Xie et al. [68] do multitask training, where one task uses a ground truth fine-grained dataset and the other does not require fine-grained labels. While both of these methods have shown that augmenting fine-grained datasets with additional data can help, in our work we present results which completely forgo the use of any curated ground truth dataset. In one experiment hinting at the use of noisy data, Van Horn et al. [58] show the possibility of learning 40 bird classes from Flickr images. Our work validates and extends this idea, using similar intuition to significantly improve performance on existing fine-grained datasets and scale fine-grained recognition to over ten thousand categories, which we believe is necessary in order to fully explore the research direction.

Considerable work has also gone into the challenging task of curating fine-grained datasets [4,58,27,30,31,59,65,60,70] and developing interactive methods for recognition with a human in the loop [6,62,61,63]. While these works have demonstrated effective strategies for collecting images of fine-grained categories, their scalability is ultimately limited by the requirement of manual annotation. Our work provides an alternative to these approaches.

Learning from Noisy Data. Our work is also inspired by methods that propose to learn from web data [15,10,11,45,34,19] or reason about label noise [39,67,58,52,43].
Works that use web data typically focus on detection and classification of a set of coarse-grained categories, but have not yet examined the fine-grained setting. Methods that reason about label noise have been divided in their results: some have shown that reasoning about label noise can have a substantial effect on recognition performance [66], while others demonstrate little change from reducing the noise level or having a noise-aware model [52,43,58]. In our work, we demonstrate that noisy data can be surprisingly effective for fine-grained recognition, providing evidence in support of the latter hypothesis.

# 3 Noisy Fine-Grained Data

In this section we provide an analysis of the imagery publicly available for fine-grained recognition, which we collect via web search.¹ We describe its quantity, distribution, and levels of noise, reporting each on multiple fine-grained domains.

# 3.1 Categories

We consider four domains of fine-grained categories: birds, aircraft, Lepidoptera (a taxonomic order including butterflies and moths), and dogs.
¹ Google image search: http://images.google.com

Fig. 2. Distributions of the number of images per category available via image search for the categories in CUB, Birdsnap, and L-Bird (far left), FGVC and L-Aircraft (middle left), and L-Butterfly (middle right). At far right we aggregate and plot the average number of images per category in each dataset in addition to the training sets of each curated dataset we consider, denoted CUB-GT, Birdsnap-GT, and FGVC-GT.
For birds and Lepidoptera, we obtained lists of fine-grained categories from Wikipedia, resulting in 10,982 species of birds and 14,553 species of Lepidoptera, denoted L-Bird ("Large Bird") and L-Butterfly. For aircraft, we assembled a list of 409 types of aircraft by hand (including aircraft in the FGVC-Aircraft [38] dataset, abbreviated FGVC). For dogs, we combine the 120 dog breeds in Stanford Dogs [27] with 395 other categories to obtain the 515-category L-Dog.
We evaluate on two other fine-grained datasets in addition to FGVC and Stanford Dogs: CUB-200-2011 [60] and Birdsnap [4], for a total of four evaluation datasets. CUB and Birdsnap include 200 and 500 species of common birds, respectively, FGVC has 100 aircraft variants, and Stanford Dogs contains 120 breeds of dogs. In this section we focus our analysis on the categories in L-Bird, L-Butterfly, and L-Aircraft in addition to the categories in their evaluation datasets.

# 3.2 Images from the Web

We obtain imagery via Google image search results, using all returned images as images for a given category. For L-Bird and L-Butterfly, queries are for the scientific name of the category, and for L-Aircraft and L-Dog queries are simply for the category name (e.g. "Boeing 737-200" or "Pembroke Welsh Corgi").
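As a concrete illustration of this query scheme, here is a minimal sketch; the `Category` structure and `domain` tags are hypothetical, since the paper does not describe its crawler:

```python
from dataclasses import dataclass

@dataclass
class Category:
    name: str                   # e.g. "Pembroke Welsh Corgi"
    scientific_name: str = ""   # used for birds and Lepidoptera
    domain: str = "dog"         # one of: bird, lepidoptera, aircraft, dog

def build_query(category: Category) -> str:
    """Scientific names for L-Bird and L-Butterfly; plain category
    names for L-Aircraft and L-Dog, as described in the text."""
    if category.domain in ("bird", "lepidoptera"):
        return category.scientific_name
    return category.name
```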
Quantifying the Data. How much fine-grained data is available? In Fig. 2 we plot distributions of the number of images retrieved for each category and report aggregates across each set of categories. We note several trends: Categories in existing datasets, which are typically common within their fine-grained domain, have more images per category than the long tail of categories present in the larger L-Bird, L-Aircraft, or L-Butterfly, with the effect most pronounced in L-Bird and L-Butterfly. Further, domains of fine-grained categories have substantially different distributions, i.e. L-Bird and L-Aircraft have more images per category than L-Butterfly. This makes sense:
fine-grained categories and domains of categories that are more common and have a larger enthusiast base will have more imagery, since more photos are taken of them. We also note that results tend to be limited to roughly 800 images per category, even for the most common categories, which is likely a restriction placed on public search results.

Fig. 3. Examples of cross-domain noise for birds, butterflies, airplanes, and dogs. Images are generally of related categories that are outside the domain of interest, e.g. a map of a bird's typical habitat or a t-shirt containing the silhouette of a dog.

Most striking is the large difference between the number of images available via web search and in existing fine-grained datasets: even Birdsnap, which has an average of 94.8 images per category, contains only 13% as many images as can be obtained with a simple image search. Though their labels are noisy, web searches unveil an order of magnitude more data which can be used to learn fine-grained categories. In total, for all four datasets, we obtained 9.8 million images for 26,458 categories, requiring 151.8GB of disk space.²

² URLs available at https://github.com/google/goldfinch
Noise. Though large amounts of imagery are freely available for fine-grained categories, focusing only on scale ignores a key issue: noise. We consider two types of label noise, which we call cross-domain noise and cross-category noise. We define cross-domain noise to be the portion of images that are not of any category in the same fine-grained domain, i.e. for birds, it is the fraction of images that do not contain a bird (examples in Fig. 3). In contrast, cross-category noise is the portion of images that have the wrong label within a fine-grained domain, i.e. an image of a bird with the wrong species label.

To quantify levels of cross-domain noise, we manually label a 1,000 image sample from each set of search results, with results in Fig. 4. Although levels of noise are not too high for any set of categories (max. 34.2% for L-Butterfly), we notice an interesting correlation: cross-domain noise decreases moderately as the number of images per category (Fig. 2) increases. We hypothesize that categories with many search results have a corresponding large pool of images to draw results from, and thus actual search results will tend to be higher-precision.

In contrast to cross-domain noise, cross-category noise is much harder to quantify, since doing so effectively requires ground truth fine-grained labels of query results. To examine cross-category noise from at least one vantage point, we show the confusion matrix of given versus predicted labels on 30 categories in the CUB [60] test set and their web images in Fig. 6, left and right, which we generate via a classifier trained on the CUB training set, acting as a noisy proxy for ground truth labels.
Fig. 4. The cross-domain noise in search results for each domain.

Fig. 5. The percentage of images retained after filtering.
In these confusion matrices, cross-category noise is reflected as a strong off-diagonal pattern, while cross-domain noise would manifest as a diffuse pattern of noise, since images not of the same domain are an equally bad fit to all categories. Based on this interpretation, the web images show moderately more cross-category noise than the clean CUB test set, though the general confusion pattern is similar.
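The confusion-matrix analysis itself is a few lines of code. A minimal sketch, assuming integer labels and predictions from the CUB-trained classifier used as a noisy proxy for ground truth:

```python
import numpy as np

def confusion_matrix(given, predicted, num_classes, drop_diagonal=True):
    """Rows are provided labels, columns are predicted labels; the
    diagonal is removed for visualization, as in Fig. 6."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for y, p in zip(given, predicted):
        cm[y, p] += 1
    if drop_diagonal:
        np.fill_diagonal(cm, 0)  # keep only the confusion structure
    return cm
```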
We propose a simple yet effective strategy to reduce the effects of cross-category noise: exclude images that appear in search results for more than one category. This approach, which we refer to as filtering, specifically targets images for which there is explicit ambiguity in the category label (examples in Fig. 7). As we demonstrate experimentally, filtering can improve results while reducing training time via the use of a more compact training set; we show the portion of images kept after filtering in Fig. 5. Agreeing with intuition, filtering removes more images when there are more categories.
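The filtering rule amounts to a single pass over the per-category search results. A minimal sketch, assuming images are keyed by a stable identifier such as a URL or content hash (the paper does not publish its implementation):

```python
from collections import Counter

def filter_ambiguous(search_results):
    """search_results: dict mapping category -> set of image keys.
    Returns the same mapping with every image that occurs under
    more than one category removed."""
    counts = Counter()
    for images in search_results.values():
        counts.update(images)
    return {cat: {img for img in images if counts[img] == 1}
            for cat, images in search_results.items()}
```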
Anecdotally, we have also tried a few techniques to combat cross-domain noise, but initial experiments did not see any improvement in recognition, so we do not expand upon them here. While reducing cross-domain noise should be beneficial, we believe that it is not as important as cross-category noise in fine-grained recognition due to the absence of out-of-domain classes during testing.

# 4 Data via Active Learning

In this section we briefly describe an active learning-based approach for collecting large quantities of fine-grained data. Active learning and other human-in-the-loop systems have previously been used to create datasets in a more cost-efficient way than manual annotation [74,12,47], and our goal is to compare this more traditional approach with simply using noisy data, particularly when considering the application of fine-grained recognition. In this paper, we apply active learning to the 120 dog breeds in the Stanford Dogs [27] dataset.

Our system for active learning begins by training a classifier on a seed set of input images and labels (i.e. the Stanford Dogs training set), then proceeds by iteratively picking a set of images to annotate, obtaining labels with human annotators, and re-training the classifier. We use a convolutional neural network [32,54,25] for the classifier, and now describe the key steps of sample selection and human annotation in more detail.
Fig. 6. Confusion matrices of the predicted label (column) given the provided label (row) for 30 CUB categories on the CUB test set (left) and search results for CUB categories (right). For visualization purposes we remove the diagonal.

Fig. 7. Examples of images removed via filtering and the categories whose results they appeared in. Some share similar names (left examples), while others share similar locations (right examples).
Sample Selection. There are many possible criteria for sample selection [47].
We employ confidence-based sampling: for each category c, we select the b·P̂(c) images with the top class scores f_c(x) as determined by our current model, where P̂(c) is a desired prior distribution over classes, b is a budget on the number of images to annotate, and f_c(x) is the output of the classifier. The intuition is as follows: even when f_c(x) is large, false positives still occur quite frequently; in Fig. 8 left, observe that the false positive rate is about 20% at the highest confidence range, which might have a large impact on the model. This contrasts with approaches that focus sampling in uncertain regions [33,2,40,17].
We find that images sampled with uncertainty criteria are typically ambiguous and difficult or even impossible for both models and humans to annotate correctly, as demonstrated in Fig. 8 bottom row: unconfident samples are often heavily occluded, at unusual viewpoints, or of mixed, ambiguous breeds, making it unlikely that they can be annotated effectively. This strategy is similar to the "expected model change" sampling criterion [48], but done for each class independently.
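In code, the selection criterion reads as follows. This is a sketch of the stated rule, with `scores` standing in for the classifier outputs f_c(x) over the unlabeled pool:

```python
import numpy as np

def confidence_based_selection(scores, prior, budget):
    """scores: (num_images, num_classes) array of class scores f_c(x).
    prior: desired class distribution P_hat(c), summing to 1.
    budget: total number of images b to send for annotation.
    For each class c, take the top budget * prior[c] images by score."""
    selected = {}
    for c in range(scores.shape[1]):
        k = int(round(budget * prior[c]))
        selected[c] = np.argsort(-scores[:, c])[:k]  # highest f_c first
    return selected
```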
Human Annotation. Our interface for human annotation of the selected images is shown in Fig. 9. Careful construction of the interface, including the addition of both positive and negative examples, as well as hidden "gold standard" images for immediate feedback, improves annotation accuracy considerably (see Sec. A.2 for quantitative results). Final category decisions are made via majority vote of three annotators.
Fig. 8. Left: Classifier confidence versus false positive rate on 100,000 images randomly sampled from Flickr (YFCC100M [56]) with dog detections. Even the most confident images have a 20% false positive rate. Right: Samples from Flickr. Rectangles below images denote correct (green), incorrect (red), or ambiguous (yellow). Top row: samples with high confidence for class "Pug" from YFCC100M. Bottom row: samples with low confidence score for class "Pug".

Fig. 9. Our tool for binary annotation of fine-grained categories. Instructional positive images are provided in the upper left and negatives are provided in the lower left.

# 5 Experiments

# 5.1 Implementation Details
The base classifier we use in all noisy data experiments is the Inception-v3 convolutional neural network architecture [55], which is among the state-of-the-art methods for generic object recognition [44,53,23]. Learning rate schedules are determined by performance on a holdout subset of the training data, which is 10% of the training data for control experiments training on ground truth datasets, or 1% when training on the larger noisy web data. Unless otherwise noted, all recognition results use as input a single crop in the center of the image.

Our active learning comparison uses the Yahoo Flickr Creative Commons 100M dataset [56] as its pool of unlabeled images, which we first pre-filter with a binary dog classifier and localizer [54], resulting in 1.71 million candidate dogs. We perform up to two rounds of active learning, with a sampling budget B of 10× the original dataset size per round.³ For experiments on Stanford Dogs, we use the CNN of [25], which is pre-trained on a version of ILSVRC [44,13] with dog data removed, since Stanford Dogs is a subset of ILSVRC training data.

# 5.2 Removing Ground Truth from Web Images

One subtle point to be cautious about when using web images is the risk of inadvertently including images from ground truth test sets in the web training data.
³ To be released.

| Dataset | Training Data | Acc. |
|---|---|---|
| CUB [60] | CUB-GT | 84.4 |
| | Web (raw) | 87.7 |
| | Web (filtered) | 89.0 |
| | L-Bird | 91.9 |
| | L-Bird(MC) | 92.3 |
| | L-Bird+CUB-GT | 92.2 |
| | L-Bird+CUB-GT(MC) | 92.8 |
| Birdsnap [4] | Birdsnap-GT | 78.2 |
| | Web (raw) | 76.1 |
| | Web (filtered) | 78.2 |
| | L-Bird | 82.8 |
| | L-Bird(MC) | 85.4 |
| | L-Bird+Birdsnap-GT | 83.9 |
| | L-Bird+Birdsnap-GT(MC) | 85.4 |
| FGVC [38] | FGVC-GT | 88.1 |
| | Web (raw) | 90.7 |
| | Web (filtered) | 91.1 |
| | L-Aircraft | 90.9 |
| | L-Aircraft(MC) | 93.4 |
| | L-Aircraft+FGVC-GT | 94.5 |
| | L-Aircraft+FGVC-GT(MC) | 95.9 |
| Stanford Dogs [27] | Stanford-GT | 80.6 |
| | Web (raw) | 78.5 |
| | Web (filtered) | 78.4 |
| | L-Dog | 78.4 |
| | L-Dog(MC) | 80.8 |
| | L-Dog+Stanford-GT | 84.0 |
| | L-Dog+Stanford-GT(MC) | 85.9 |

Table 1. Comparison of data source used during training with recognition performance, given in terms of top-1 accuracy. "CUB-GT" indicates training only on the ground truth CUB training set, "Web (raw)" trains on all search results for CUB categories, and "Web (filtered)" applies filtering between categories within a domain (birds). L-Bird denotes training first on L-Bird, then fine-tuning on the subset of categories under evaluation (i.e. the filtered web images), and L-Bird+CUB-GT indicates training on L-Bird, then fine-tuning on Web (filtered), and
finally fine-tuning again on CUB-GT. Similar notation is used for the other datasets. "(MC)" indicates using multiple crops at test time (see text for details). We note that only the rows with "-GT" make use of the ground truth training set; all other rows rely solely on noisy web imagery.

To deal with this concern, we performed an aggressive deduplication procedure with all ground truth test sets and their corresponding web images. This process follows Wang et al. [64], which is a state-of-the-art method for learning a similarity metric between images. We tuned this procedure for high near-duplicate recall, manually verifying its quality.
More details are included in Sec. B.

# 5.3 Main Results

We present our main recognition results in Tab. 1, where we compare performance when the training set consists of either the ground truth training set, raw web images of the categories in the corresponding evaluation dataset, web images after applying our filtering strategy, all web images of a particular domain, or all images including even the ground truth training set.

On CUB-200-2011 [60], the smallest dataset we consider, even using raw search results as training data results in a better model than the annotated training set, with filtering further improving results by 1.3%. For Birdsnap [4], the largest of the ground truth datasets we evaluate on, raw data mildly underperforms using the ground truth training set, though filtering improves results to be on par. On both CUB and Birdsnap, training first on the very large set of categories in L-Bird results in dramatic improvements, improving performance on CUB further by 2.9% and on Birdsnap by 4.6%. This is an important point:
even if the end task consists of classifying only a small number of categories, training with more fine-grained categories yields significantly more effective networks. This can also be thought of as a form of transfer learning within the same fine-grained domain, allowing features learned on a related task to be useful for the final classification problem. When permitted access to the annotated ground truth training sets for additional fine-tuning and domain transfer, results increase by another 0.3% on CUB and 1.1% on Birdsnap.

For the aircraft categories in FGVC, results are largely similar but weaker in magnitude. Training on raw web data results in a significant gain of 2.6% compared to using the curated training set, and filtering, which did not affect the size of the training set much (Fig. 5), changes results only slightly in a positive direction. Counterintuitively, pre-training on a larger set of aircraft does not improve results on FGVC.
Our hypothesis for the difference between birds and aircraft in this regard is this: since there are many more species of birds in L-Bird than there are aircraft in L-Aircraft (10,982 vs. 409), not only is the training size of L-Bird larger, but each training example provides stronger information because it distinguishes between a larger set of mutually-exclusive categories. Nonetheless, when access to the curated training set is available for fine-tuning, performance dramatically increases to 94.5%. On Stanford Dogs we see results similar to FGVC, though for dogs we happen to see a mild loss when comparing to the ground truth training set, not much
difference with filtering or using L-Dog, and a large boost from adding in the ground truth training set.

An additional factor that can influence performance of web models is domain shift: if images in the ground truth test set have very different visual properties compared to web images, performance will naturally differ. Similarly, if category names or definitions within a dataset are even mildly off, web-based methods will be at a disadvantage without access to the ground truth training set. Adding the ground truth training data fixes this domain shift, making web-trained models quickly recover, with a particularly large gain if the network has already learned a good representation, matching the pattern of results for Stanford Dogs.

Limits of Web-Trained Models. To push our models to their limits, we additionally evaluate using 144 image crops at test time, averaging predictions across each crop, denoted
"(MC)" in Tab. 1. This brings results up to 92.3%/92.8% on CUB (without/with CUB training data), 85.4%/85.4% on Birdsnap, 93.4%/95.9% on FGVC, and 80.8%/85.9% on Stanford Dogs. We note that this is close to human expert performance on CUB, which is estimated to be between 93% [6] and 95.6% [58].
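Multi-crop evaluation changes only inference. A sketch of the averaging step (crop generation for the 144 crops, e.g. scales and flips, is not detailed here):

```python
import numpy as np

def multicrop_predict(predict_fn, crops):
    """predict_fn: maps a (num_crops, H, W, 3) batch to per-crop class
    probabilities. Predictions are averaged over crops before argmax."""
    probs = predict_fn(np.stack(crops))   # (num_crops, num_classes)
    return int(probs.mean(axis=0).argmax())
```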
Comparison with Prior Work. We compare our results to prior work on CUB, the most competitive fine-grained dataset, in Tab. 2. While even our baseline model using only ground truth data from Tab. 1 was at state-of-the-art levels, by forgoing the CUB training set and only training using noisy data from the web, our models greatly outperform all prior work.

| Method | Training Annotations | Acc. |
|---|---|---|
| Alignments [21] | GT | 53.6 |
| PDD [51] | GT+BB+Parts | 60.6 |
| PB R-CNN [75] | GT+BB+Parts | 73.9 |
| Weak Sup. [78] | GT | 75.0 |
| PN-DCN [5] | GT+BB+Parts | 75.7 |
| Two-Level [66] | GT | 77.9 |
| Consensus [49] | GT+BB+Parts | 78.3 |
| NAC [50] | GT | 81.0 |
| FG-Without [29] | GT+BB | 82.0 |
| STN [26] | GT | 84.1 |
| Bilinear [36] | GT | 84.1 |
| Augmenting [69] | GT+BB+Parts+Web | 84.6 |
| Noisy Data+CNN [55] | Web | 92.3 |

Table 2. Comparison with prior work on CUB-200-2011 [60]. We only include methods which use no additional annotations at test time. Here "GT" refers to using ground truth category labels in the training set of CUB, "BB" indicates using bounding boxes, and "Parts" uses part annotations.

On FGVC, which is more recent and fewer works have evaluated on, the best prior performing
method we are aware of is the Bilinear CNN model of Lin et al. [36], which has accuracy 84.1% (ours is 93.4% without FGVC training data, 95.9% with), and on Birdsnap, which is even more recent, the best performing method we are aware of that uses no extra annotations during test time is the original 66.6% by Berg et al. [4] (ours is 85.4%). On Stanford Dogs, the most competitive related work is [46], which uses an attention-based recurrent neural network to achieve 76.8% (ours is 80.8% without ground truth training data, 85.9% with).

We identify two key reasons for these large improvements:
The first is the use of a strong generic classifier [55]. A number of prior works have identified the importance of having well-trained CNNs as components in their systems for fine-grained recognition [36,26,29,75,5], which our work provides strong evidence for. On all four evaluation datasets, our CNN of choice [55], trained on the ground truth training set alone and without any architectural modifications, performs at levels at or above the previous state-of-the-art. The second reason for improvement is the large utility of noisy web data for fine-grained recognition, which is the focus of this work.

We finally remind the reader that our work focuses on the application-level problem of recognizing a given set of fine-grained categories, which might not come with their own expert-annotated training images. The use of existing test sets serves to provide an accurate measure of performance and put our work in a larger context, but results may not be strictly comparable with prior work that operates within a single given dataset.

Comparison with Active Learning. We compare using noisy web data with a more traditional active learning-based approach (Sec. 4) under several different settings in Tab. 3.
We first verify the efficacy of active learning itself: when training the network from scratch (i.e. no fine-tuning), active learning improves performance by up to 15.6% over Stanford-GT alone, and when fine-tuning, results still improve by 1.5% (Tab. 3). Purely using filtered web data compares favorably to the non-fine-tuned active learning models, though it lags somewhat behind the fine-tuned ones.

| Training Procedure | Acc. |
|---|---|
| Stanford-GT (scratch) | 58.4 |
| A.L., one round (scratch) | 65.8 |
| A.L., two rounds (scratch) | 74.0 |
| Stanford-GT (ft) | 80.6 |
| A.L., one round (ft) | 81.6 |
| A.L., one round (ft, subsample) | 78.8 |
| A.L., two rounds (ft) | 82.1 |
| Web (filtered) | 78.4 |
| Web (filtered) + Stanford-GT | 82.6 |

Table 3. Active learning-based results on Stanford Dogs [27], presented in terms of top-1 accuracy. Methods with "(scratch)" indicate training from scratch and "(ft)" indicates fine-tuning from a network pre-trained on ILSVRC, with web models also fine-tuned. "subsample" refers to downsampling the active learning data to be the same size as the filtered web images. Note that Stanford-GT is a subset of active learning data, which is denoted "A.L.".
To better compare the active learning and noisy web data, we factor out the difference in scale by performing an experiment with subsampled active learning data, setting it to be the same size as the filtered web data. Surprisingly, performance is very similar, with only a 0.4% advantage for the cleaner, annotated active learning data, highlighting the effectiveness of noisy web data despite the lack of manual annotation. If we furthermore augment the filtered web images with the Stanford Dogs training set, which the active learning method notably used both as training data and its seed set of images, performance improves to even be slightly better than the manually-annotated active learning data (0.5% improvement).
These experiments indicate that, while more traditional active learning-based approaches towards expanding datasets are effective ways to improve recognition performance given a suitable budget, simply using noisy images retrieved from the web can be nearly as good, if not better. As web images require no manual annotation and are openly available, we believe this is strong evidence for their use in solving fine-grained recognition.

Very Large-Scale Fine-Grained Recognition. A key advantage of using noisy data is the ability to scale to large numbers of fine-grained classes. However, this poses a challenge for evaluation: it is infeasible to manually annotate images with one of the 10,982 categories in L-Bird or the 14,553 categories in L-Butterfly, and it would even be very time-consuming to annotate images with the 409 categories in L-Aircraft. Therefore, we turn to an approximate evaluation, establishing a rough estimate on true performance. Specifically, we query Flickr for up to 25 images of each category, keeping only those images whose title strictly contains the name of each category, and aggressively deduplicate these images with our training set in order to ensure a fair evaluation. Although this is not a perfect evaluation set, and is thus an area where annotation of fine-grained datasets is particularly valuable [58], we find that it is remarkably clean on the surface: based on a 1,000-image estimate, we measure the cross-domain noise of L-Bird at only 1%, L-Butterfly at 2.3%, and L-Aircraft at 4.5%. An independent evaluation [58] further measures all sources of noise combined to be only 16% when searching for bird species.
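The construction of this approximate test set is straightforward. A sketch, with `flickr_search` as a hypothetical stand-in for a Flickr API query and case-insensitive matching as an assumption:

```python
def build_eval_set(categories, flickr_search, limit=25):
    """Keep up to `limit` Flickr images per category whose title
    contains the category name; deduplication against the training
    set (Sec. B) is assumed to happen afterwards."""
    eval_set = {}
    for name in categories:
        hits = [image for title, image in flickr_search(name)
                if name.lower() in title.lower()]
        eval_set[name] = hits[:limit]
    return eval_set
```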
Fig. 10. Classification results on very large-scale fine-grained recognition. From top to bottom, depicted are examples of categories in L-Bird, L-Butterfly, and L-Aircraft, along with their category name. The first examples in each row are correctly predicted by our models, while the last two examples in each row are errors, with our prediction in grey and the correct category (according to Flickr metadata) printed below.

In total, this yields 42,115 testing images for L-Bird, 42,046 for L-Butterfly, and 3,131 for L-Aircraft. Given the difficulty and noise, performance is surprisingly high:
On L-Bird top-1 accuracy is 73.1%/75.8% (1/144 crops), for L-Butterfly it is 65.9%/68.1%, and for L-Aircraft it is 72.7%/77.5%. Corresponding mAP numbers, which are better suited for handling class imbalance, are 61.9, 54.8, and 70.5, reported for the single crop setting. We show qualitative results in Fig. 10.
These categories span multiple continents in space (birds, butterflies) and decades in time (aircraft), demonstrating the breadth of categories in the world that can be recognized using only public sources of noisy fine-grained data. To the best of our knowledge, these results represent the largest number of fine-grained categories distinguished by any single system to date.

How Much Data is Really Necessary? In order to better understand the utility of noisy web data for fine-grained recognition, we perform a control experiment on the web data for CUB. Using the filtered web images as a base, we train models using progressively larger subsets of the results as training data, taking the top ranked images across categories for each experiment. Performance versus the amount of training data is shown in Fig. 11. Surprisingly, relatively few web images are required to do as well as training on the CUB training set, and adding more noisy web images always helps, even when at the limit of search results. Based on this analysis, we estimate that one noisy web image for CUB categories is "worth" 0.507 ground truth training images [57].

Error Analysis. Given the high performance of these models, what room is left for improvement? In Fig. 12 we show the taxonomic distribution of the remaining errors on L-Bird.
Fig. 11. Number of web images used for training vs. performance on CUB-200-2011 [60]. We vary the amount of web training data in multiples of the CUB training set size (5,994 images). Also shown is performance when training on the ground truth CUB training set (CUB-GT).
Fig. 12. The errors on L-Bird that fall in each taxonomic rank, represented as a portion of all errors made. For each error made, we calculate the taxonomic rank of the least common ancestor of the predicted and test category.

The vast majority of errors (74.3%) are made between very similar classes at the genus level, indicating that most of the remaining errors are indeed between extremely similar categories, and only very few errors (7.4%) are made between dissimilar classes, whose least common ancestor is the "Aves" (i.e. Bird) taxonomic class. This suggests that most errors still made by the models are fairly reasonable, corroborating the qualitative results of Fig. 10.
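The per-error statistic behind Fig. 12 is the rank of the least common ancestor (LCA) of the predicted and true species. A sketch, assuming the taxonomy is given as a child-to-parent map:

```python
def lca_rank(pred, truth, parent, rank):
    """parent: dict mapping each taxon to its parent taxon.
    rank: dict mapping taxa to rank names (species, genus, family,
    order, class). Returns the rank of the least common ancestor
    of the predicted and true categories."""
    def lineage(t):
        chain = [t]
        while t in parent:
            t = parent[t]
            chain.append(t)
        return chain
    pred_lineage = set(lineage(pred))
    for t in lineage(truth):      # walk upward from the true label
        if t in pred_lineage:     # first shared ancestor is the LCA
            return rank[t]
    return "root"
```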
# 6 Discussion

In this work we have demonstrated the utility of noisy data toward solving the problem of fine-grained recognition. We found that the combination of a generic classification model and web data, filtered with a simple strategy, was surprisingly effective at discriminating fine-grained categories. This approach performs favorably when compared to a more traditional active learning method for expanding datasets, but is even more scalable, which we demonstrated experimentally on up to 14,553 fine-grained categories. One potential limitation of the approach is the availability of imagery for categories either not found or not described in the public domain, for which an alternative method such as active learning may be better suited. Another limitation is the current focus on classification, which may be problematic if applications arise where multiple objects are present or localization is otherwise required. Nonetheless, with these insights on the unreasonable effectiveness of noisy data, we are optimistic for applications of fine-grained recognition in the near future.

# 7 Acknowledgments

We thank Gal Chechik, Chuck Rosenberg, Zhen Li, Timnit Gebru, Vignesh Ramanathan, Oliver Groth, and the anonymous reviewers for valuable feedback.
# Appendix

# A Active Learning Details

Here we provide additional details for our active learning baseline, including further description of the interface, improvements in rater quality as a result of this interface, statistics of the number of positives obtained per class in each round of active learning, and qualitative examples of images obtained.

# A.1 Interface

Designing an effective rater tool is of critical importance when getting non-experts to rate fine-grained categories. We seek to give the raters simple decisions and to provide them with as much information as possible to make the correct decision in a generic and scalable way. Fig. 13 shows our rater interface, which includes the following components to serve this purpose:

Instructional positive images inform the rater of within-class variation. These images are obtained from the seed dataset input to active learning. Many rater tools only provide this (e.g. [35]), which does not provide a clear class boundary concept on its own. We also provide links to Google Image Search and encourage raters to research the full space of examples of the class concept.

Instructional negative images help raters define the decision boundary between the right class and easily confused other classes. We show the top two most confused categories, determined by the active learner's current model. This aids in classification: in Fig. 13, if the rater studies the positive class "Bernese mountain dog", they may form a mental decision rule based on fur color pattern alone. However, when studying the negative, easily confused classes "Entlebucher" and "Appenzeller", the rater can refine the decision based on more appropriate fine-grained distinctions; in this case, hair length is a key discriminative attribute.

Batching questions by class has the benefit of allowing raters to learn about and focus on one fine-grained category at a time. Batching questions may also allow raters to build a better mental model of the class via a human form of semi-supervised learning, although this phenomenon is more difficult to isolate and measure.

Golden questions for rater feedback and quality control. We use the original supervised seed dataset to add a number of known correct and incorrect images to the batch to be rated, which we use to give short- and long-term feedback to raters. Short-term feedback comes in the form of a pop-up window informing the rater the moment they make an incorrect judgment, allowing
them to update their mental model while working on the task. Long-term feedback summarizes a day's worth of rating to give the rater a summary of overall performance.

Fig. 13. Our tool for binary annotation of fine-grained categories. Instructional positive images are provided in the upper left and negatives are provided in the lower left. This is a higher-resolution version of the figure in the main text.

# A.2 Rater Quality Improvements

To determine the impact of our annotation framework improvements for fine-grained categories, we performed a control experiment with a more standard crowdsourcing interface, which provides only a category name, description, and image search link. Annotation quality is determined on a set of difficult binary questions (images mistaken by a classifier on the Stanford Dogs test set). Using our interface, annotators were both more accurate and faster, with a 16.5% relative reduction in error (from 28.5% to 23.8%) and a 2.4× improvement in speed (4.1 to 1.68 seconds per image).

# A.3 Annotation Statistics and Examples

In Fig. 14 we show the distribution of images judged correct by human annotators after active learning selection of 1,000 images per class for Stanford Dogs classes. The categories are sorted by the number of positive training examples collected in the first iteration of active learning. The 10 categories with the most positive training examples collected after both rounds of mining are:
Pug, Golden Retriever, Boston Terrier, West Highland White Terrier, Labrador Retriever, Boxer, Maltese, German Shepherd, Pembroke Welsh Corgi, and Beagle. The 10 categories with the fewest positive training examples are: Kerry Blue Terrier, Komondor, Irish Water Spaniel, Curly Coated Retriever, Bouvier des Flandres, Clumber Spaniel, Bedlington Terrier, Afghan Hound, Affenpinscher, and Sealyham Terrier.
Fig. 14. Counts of positive training examples obtained per category from active learning, for the Stanford Dogs dataset.

These counts are influenced by the true counts of categories in the YFCC100M [56] dataset and our active learner's ability to find them.
In Fig. 15, we show positive training examples obtained from active learning for select categories, comparing examples obtained in iterations 1 and 2.

# B Deduplication Details

Here we provide more details on our method for removing any ground truth images from web search results, which we took great care in doing. Our general approach follows Wang et al. [64], which is a state-of-the-art method for learning a similarity metric between images. To scale [64] to the millions of images considered in this work, we binarize the output for an efficient hashing-based exact search. Hamming distance corresponds to dissimilarity: identical images have distance 0; images with different resolutions, aspect ratios, or slightly different crops tend to have distances of up to roughly 4 to 8; and more substantial variations, e.g. images of different views from the same photographer, or very different crops, have distances roughly up to 10, beyond which the vast majority of image pairs are actually distinct. Qualitative examples are provided in Fig. 16. We tuned our dissimilarity threshold for recall and manually
verified it; the goal is to ensure that images that have even a moderate degree of similarity to test images did not appear in our training set. For example, of a sample of 183 image pairs at distance 16 in the large-scale bird experiments, zero were judged by a human to be too similar, and we used a still more conservative threshold of 18. In the case of L-Bird, 2,996 images were removed as being too similar to an image in either the CUB or Birdsnap test set.
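At its core, the check reduces to Hamming distances between binary codes. A brute-force sketch of the criterion (the paper's hashing-based exact search avoids this quadratic comparison):

```python
import numpy as np

def near_duplicate_indices(train_codes, test_codes, threshold=18):
    """train_codes, test_codes: boolean arrays of binarized embeddings,
    shape (n, num_bits). Returns indices of training images within
    Hamming distance `threshold` of any test image."""
    # pairwise Hamming distance via XOR and popcount
    dists = (train_codes[:, None, :] ^ test_codes[None, :, :]).sum(axis=2)
    return np.where((dists <= threshold).any(axis=1))[0]
```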
Fig. 15. Positive training examples obtained from active learning, from the YFCC100M dataset, for select categories from Stanford Dogs.

# C Remaining Errors: Qualitative

Here we highlight one type of error that our image search model made on CUB [62]: finding errors in the test set. We show an example in Fig. 17, where the true species for each image is actually a bird species not in the 200 CUB bird species. This highlights one potential advantage of our approach: by relying on category names, web training data is tied more strongly to the semantic meaning of a category instead of simply a 1-of-K label. This also provides evidence for the "domain shift" hypothesis when fine-tuning on ground truth datasets, as irregularities like this can be learned, resulting in higher performance on the benchmark dataset under consideration.
# D Network Visualization

In order to examine the impact of web-trained models of fine-grained recognition from another vantage point, here we present one visualization of network internals. Specifically, in Fig. 18 we visualize gradients with respect to the square of the norm of the last convolutional layer in the network, backpropagated into the input image, and visualized as a function of training data. This provides some indication of the importance of each pixel with respect to the overall network activation. Though these examples are only qualitative, we observe that the gradients for the network trained on L-Bird are generally more focused on the bird when compared to gradients for the network trained on CUB, indicating that the network has learned a better representation of which parts of an image are discriminative.
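This visualization is easy to reproduce in a modern framework. A PyTorch sketch (the paper predates these APIs, so names here are illustrative):

```python
import torch

def input_gradient(conv_features, image):
    """conv_features: module mapping an input batch to the last
    convolutional layer's activations. Returns |d/dx ||a(x)||^2|,
    scaled into [0, 255] as in Fig. 18."""
    x = image.clone().unsqueeze(0).requires_grad_(True)
    activations = conv_features(x)
    activations.pow(2).sum().backward()   # squared norm of activations
    grad = x.grad[0].abs()
    return (255 * grad / grad.max()).byte()
```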
Fig. 16. Example pairs of images and their distance according to our deduplication method. Distances 1-3 have slight pixel-level differences due to compression, and the image pair at distance 4 has different scales. At distances 5 and 6 the images are of different crops, with distance 6 additionally exhibiting slight lighting differences. The images at distance 7 have slightly different scales and compression, at distance 8 there are cropping and lighting differences, and distance 9 features different crops and additional text in the corner of one photo. At distance 10 and higher we have image pairs which have high-level visual similarities but are distinct.
Fig. 17. Examples of mistakes made by a web-trained model on the CUB-200-2011 [62] test set, whose ground truth label is "Hooded Oriole", but which are actually of another species not in CUB, "Black-Hooded Oriole."
Fig. 18. Gradients with respect to the squared norm of the last convolutional layer on ten random CUB test set images. Each row contains, in order, an input image, gradients for a model trained on the CUB-200 [62] training set, and gradients for a model trained on the larger L-Bird. Gradients have been scaled to fit in [0,255]. Figure best viewed in high resolution on a monitor.
# References

1. Angelova, A., Zhu, S., Lin, Y.: Image segmentation for large-scale subcategory flower recognition. In: Workshop on Applications of Computer Vision (WACV). pp. 39-45. IEEE (2013)
2. Balcan, M.F., Broder, A., Zhang, T.: Margin based active learning. In: Learning Theory, pp. 35-50. Springer (2007)
3. Berg, T., Belhumeur, P.N.: Poof: Part-based one-vs.-one features for fine-grained categorization, face verification, and attribute estimation. In: Computer Vision and Pattern Recognition (CVPR). pp. 955-962. IEEE (2013)
4. Berg, T., Liu, J., Lee, S.W., Alexander, M.L., Jacobs, D.W., Belhumeur, P.N.: Birdsnap: Large-scale fine-grained visual categorization of birds. In: Computer Vision and Pattern Recognition (CVPR) (June 2014)
5. Branson, S., Van Horn, G., Perona, P., Belongie, S.: Improved bird species recognition using pose normalized deep convolutional nets. In: British Machine Vision Conference (BMVC) (2014)
6. Branson, S., Van Horn, G., Wah, C., Perona, P., Belongie, S.:
The ignorant led by the blind: A hybrid human-machine vision system for fine-grained categorization. International Journal of Computer Vision (IJCV) pp. 1-27 (2014)
7. Chai, Y., Lempitsky, V., Zisserman, A.: Bicos: A bi-level co-segmentation method for image classification. In: International Conference on Computer Vision (ICCV). IEEE (2011)
8. Chai, Y., Lempitsky, V., Zisserman, A.: Symbiotic segmentation and part localization for fine-grained categorization. In: International Conference on Computer Vision (ICCV). pp. 321-328. IEEE (2013)
9. Chai, Y., Rahtu, E., Lempitsky, V., Van Gool, L., Zisserman, A.: Tricos: A tri-level class-discriminative co-segmentation method for image classification. In: European Conference on Computer Vision (ECCV), pp. 794-807. Springer (2012)
10. Chen, X., Gupta, A.:
Webly supervised learning of convolutional networks. In: International Conference on Computer Vision (ICCV). IEEE (2015)
11. Chen, X., Shrivastava, A., Gupta, A.: Neil: Extracting visual knowledge from web data. In: International Conference on Computer Vision (ICCV). pp. 1409-1416. IEEE (2013)
12. Collins, B., Deng, J., Li, K., Fei-Fei, L.: Towards scalable dataset construction: An active learning approach. In: European Conference on Computer Vision (ECCV), pp. 86-98. Springer (2008)
13. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large-Scale Hierarchical Image Database. In: Computer Vision and Pattern Recognition (CVPR) (2009)
14. Deng, J., Krause, J., Fei-Fei, L.: Fine-grained crowdsourcing for fine-grained recognition. In: Computer Vision and Pattern Recognition (CVPR). pp. 580-587 (2013)
15. Divvala, S.K., Farhadi, A., Guestrin, C.:
Learning everything about anything: Webly-supervised visual concept learning. In: Computer Vision and Pattern Recognition (CVPR). pp. 3270-3277. IEEE (2014)
16. Duan, K., Parikh, D., Crandall, D., Grauman, K.: Discovering localized attributes for fine-grained recognition. In: Computer Vision and Pattern Recognition (CVPR). pp. 3474-3481. IEEE
17. Erkan, A.N.:
1511.06789#60 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | 22 (2014) 22. Goering, C., Rodner, E., Freytag, A., Denzler, J.: Nonparametric part transfer for fine-grained recognition. In: Computer Vision and Pattern Recognition (CVPR). pp. 2489–2496. IEEE (2014) 23. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition (CVPR). IEEE (2016) 24. Hinchliff, C.E., Smith, S.A., Allman, J.F., Burleigh, J.G., Chaudhary, R., Coghill, L.M., Crandall, K.A., Deng, J., Drew, B.T., Gazis, R., Gude, K., Hibbett, D.S., Katz, L.A., Laughinghouse, H.D., McTavish, E.J., Midford, P.E., Owen, C.L., Ree, R.H., Rees, J.A., Soltis, D.E., Williams, T., Cranston, K.A.: Synthesis of phylogeny and taxonomy into a comprehensive tree of life. Proceedings of the National Academy of Sciences (2015), http://www.pnas.org/content/early/2015/09/16/1423041112.abstract | 1511.06789#59 | 1511.06789#61 | 1511.06789 | [
"1503.01817"
] |
1511.06789#61 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | 25. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning (ICML) (2015) 26. Jaderberg, M., Simonyan, K., Zisserman, A., Kavukcuoglu, K.: Spatial transformer networks. In: Neural Information Processing Systems (NIPS) (2015) 27. Khosla, A., Jayadevaprakash, N., Yao, B., Fei-Fei, L.: Novel dataset for fine-grained image categorization. In: First Workshop on Fine-Grained Visual Categorization, Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs, CO (June 2011) 28. Krause, J., Gebru, T., Deng, J., Li, L.J., Fei-Fei, L.: Learning features and parts for fine-grained recognition. In: International Conference on Pattern Recognition (ICPR). Stockholm, Sweden (August 2014) | 1511.06789#60 | 1511.06789#62 | 1511.06789 | [
"1503.01817"
] |
1511.06789#62 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | 29. Krause, J., Jin, H., Yang, J., Fei-Fei, L.: Fine-grained recognition without part annotations. In: Conference on Computer Vision and Pattern Recognition (CVPR). IEEE 30. Krause, J., Stark, M., Deng, J., Fei-Fei, L.: 3d object representations for fine-grained categorization. In: 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13). IEEE (2013) 31. Kumar, N., Belhumeur, P.N., Biswas, A., Jacobs, D.W., Kress, W.J., Lopez, I.C., Soares, J.V.: Leafsnap: A computer vision system for automatic plant species identification. In: European Conference on Computer Vision (ECCV), pp. 502–516. Springer (2012) 32. | 1511.06789#61 | 1511.06789#63 | 1511.06789 | [
"1503.01817"
] |
1511.06789#63 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998) 33. Lewis, D.D., Catlett, J.: Heterogeneous uncertainty sampling for supervised learning. In: International Conference on Machine Learning (ICML). pp. 148–156 (1994) 34. Li, L.J., Fei-Fei, L.: Optimol: automatic online picture collection via incremental model learning. International Journal of Computer Vision (IJCV) 88(2), 147–168 (2010) 35. Lin, T., Maire, M., Belongie, S., Bourdev, L.D., Girshick, R.B., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: common objects in context. CoRR abs/1405.0312 (2014), http://arxiv.org/abs/1405.0312 36. Lin, T.Y., RoyChowdhury, A., Maji, S.: Bilinear cnn models for fine-grained visual recognition. | 1511.06789#62 | 1511.06789#64 | 1511.06789 | [
"1503.01817"
] |
1511.06789#64 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | In: International Conference on Computer Vision (ICCV). IEEE 37. Liu, J., Kanazawa, A., Jacobs, D., Belhumeur, P.: Dog breed classification using part localization. In: European Conference on Computer Vision (ECCV), pp. 172–185. Springer (2012) 38. Maji, S., Kannala, J., Rahtu, E., Blaschko, M., Vedaldi, A.: Fine-grained visual classifi | 1511.06789#63 | 1511.06789#65 | 1511.06789 | [
"1503.01817"
] |
1511.06789#65 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | cation of aircraft. Tech. rep. (2013) 39. Mnih, V., Hinton, G.E.: Learning to label aerial images from noisy data. In: International Conference on Machine Learning (ICML). pp. 567–574 (2012) 40. Mozafari, B., Sarkar, P., Franklin, M., Jordan, M., Madden, S.: Scaling up crowdsourcing to very large datasets: a case for active learning. Proceedings of the VLDB Endowment 8(2), 125–136 (2014) 41. | 1511.06789#64 | 1511.06789#66 | 1511.06789 | [
"1503.01817"
] |
1511.06789#66 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Nilsback, M.E., Zisserman, A.: A visual vocabulary for flower classification. In: Computer Vision and Pattern Recognition (CVPR). vol. 2, pp. 1447–1454. IEEE (2006) 42. Pu, J., Jiang, Y.G., Wang, J., Xue, X.: Which looks like which: Exploring inter-class relationships in fine-grained visual categorization. In: European Conference on Computer Vision (ECCV), pp. 425–440. Springer (2014) 43. Reed, S., Lee, H., Anguelov, D., Szegedy, C., Erhan, D., Rabinovich, A.: Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596 (2014) | 1511.06789#65 | 1511.06789#67 | 1511.06789 | [
"1503.01817"
] |
1511.06789#67 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | 44. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) pp. 1–42 (April 2015) 45. Schroff, F., Criminisi, A., Zisserman, A.: Harvesting image databases from the web. Pattern Analysis and Machine Intelligence (PAMI) 33(4), 754–766 (2011) 46. Sermanet, P., Frome, A., Real, E.: Attention for fine-grained categorization. arXiv preprint arXiv:1412.7054 (2014) 47. Settles, B.: Active learning literature survey. University of Wisconsin, Madison 52(55-66), 11 (2010) 48. Settles, B., Craven, M., Ray, S.: Multiple-instance active learning. In: Advances in Neural Information Processing Systems (NIPS). pp. 1289–1296 (2008) 49. Shih, K.J., Mallya, A., Singh, S., Hoiem, D.: Part localization using multi-proposal consensus for fine-grained categorization. In: | 1511.06789#66 | 1511.06789#68 | 1511.06789 | [
"1503.01817"
] |
1511.06789#68 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | British Machine Vision Conference (BMVC) (2015) 50. Simon, M., Rodner, E.: Neural activation constellations: Unsupervised part model discovery with convolutional networks. In: ICCV (2015) 51. Simon, M., Rodner, E., Denzler, J.: Part detector discovery in deep convolutional neural networks. In: Asian Conference on Computer Vision (ACCV). vol. 2, pp. 162–177 (2014) 52. Sukhbaatar, S., Fergus, R.: | 1511.06789#67 | 1511.06789#69 | 1511.06789 | [
"1503.01817"
] |
1511.06789#69 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Learning from noisy labels with deep neural networks. arXiv preprint arXiv:1406.2080 (2014) 53. Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261 (2016) 54. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Computer Vision and Pattern Recognition (CVPR) (2015) 55. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Computer Vision and Pattern Recognition (CVPR). IEEE (2016) 56. Thomee, B., Shamma, D.A., Friedland, G., Elizalde, B., Ni, K., Poland, D., Borth, D., Li, L.J.: The new data and new challenges in multimedia research. arXiv preprint arXiv:1503.01817 (2015) 57. Torralba, A., Efros, A., et al.: Unbiased look at dataset bias. In: Computer Vision and Pattern Recognition (CVPR). pp. 1521–1528. IEEE (2011) 58. Van Horn, G., Branson, S., Farrell, R., Haber, S., Barry, J., Ipeirotis, P., Perona, P., Belongie, S.: Building a bird recognition app and large scale dataset with citizen scientists: | 1511.06789#68 | 1511.06789#70 | 1511.06789 | [
"1503.01817"
] |
1511.06789#70 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | The fine print in fine-grained dataset collection. In: Computer Vision and Pattern Recognition (CVPR). IEEE (2015) 59. Vedaldi, A., Mahendran, S., Tsogkas, S., Maji, S., Girshick, B., Kannala, J., Rahtu, E., Kokkinos, I., Blaschko, M.B., Weiss, D., Taskar, B., Simonyan, K., Saphra, N., Mohamed, S.: Understanding objects in detail with fine-grained attributes. In: Computer Vision and Pattern Recognition (CVPR) (2014) 60. Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The Caltech-UCSD Birds-200-2011 Dataset. Tech. Rep. CNS-TR-2011-001, California Institute of Technology (2011) 61. Wah, C., Belongie, S.: Attribute-based detection of unfamiliar classes with humans in the loop. In: Computer Vision and Pattern Recognition (CVPR). pp. 779–786. IEEE (2013) 62. Wah, C., Branson, S., Perona, P., Belongie, S.: Multiclass recognition and part localization with humans in the loop. In: International Conference on Computer Vision (ICCV). pp. 2524–2531. IEEE (2011) 63. Wah, C., Horn, G., Branson, S., Maji, S., Perona, P., Belongie, S.: Similarity comparisons for interactive fine-grained categorization. In: Computer Vision and Pattern Recognition (CVPR) (2014) 64. Wang, J., Song, Y., Leung, T., Rosenberg, C., Wang, J., Philbin, J., Chen, B., Wu, Y.: Learning fi | 1511.06789#69 | 1511.06789#71 | 1511.06789 | [
"1503.01817"
] |
1511.06789#71 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | ne-grained image similarity with deep ranking. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1386–1393 (2014) 65. Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., Perona, P.: Caltech-UCSD Birds 200. Tech. Rep. CNS-TR-2010-001, California Institute of Technology (2010) 66. Xiao, T., Xu, Y., Yang, K., Zhang, J., Peng, Y., Zhang, Z.: | 1511.06789#70 | 1511.06789#72 | 1511.06789 | [
"1503.01817"
] |
1511.06789#72 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | The application of two-level attention models in deep convolutional neural network for fine-grained image classification. In: Computer Vision and Pattern Recognition (CVPR). IEEE 67. Xiao, T., Xia, T., Yang, Y., Huang, C., Wang, X.: Learning from massive noisy labeled data for image classification. In: Computer Vision and Pattern Recognition (CVPR). IEEE 68. Xie, S., Yang, T., Wang, X., Lin, Y.: Hyper-class augmented and regularized deep learning for fine-grained image classification. In: | 1511.06789#71 | 1511.06789#73 | 1511.06789 | [
"1503.01817"
] |
1511.06789#73 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Computer Vision and Pattern Recognition (CVPR). IEEE 69. Xu, Z., Huang, S., Zhang, Y., Tao, D.: Augmenting strong supervision using web data for fine-grained categorization. In: International Conference on Computer Vision (ICCV) (2015) 70. Yang, L., Luo, P., Loy, C.C., Tang, X.: A large-scale car dataset for fine-grained categorization and verification. In: | 1511.06789#72 | 1511.06789#74 | 1511.06789 | [
"1503.01817"
] |
1511.06789#74 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Computer Vision and Pattern Recognition (CVPR). IEEE 71. Yang, S., Bo, L., Wang, J., Shapiro, L.G.: Unsupervised template learning for fine-grained object recognition. In: Advances in Neural Information Processing Systems (NIPS). pp. 3122–3130 (2012) 72. Yao, B., Bradski, G., Fei-Fei, L.: A codebook-free and annotation-free approach for fine-grained image categorization. In: Computer Vision and Pattern Recognition (CVPR). pp. 3466–3473. IEEE (2012) 73. Yao, B., Khosla, A., Fei-Fei, L.: Combining randomization and discrimination for fine-grained image categorization. In: Computer Vision and Pattern Recognition (CVPR). pp. 1577–1584. IEEE (2011) 74. Yu, F., Zhang, Y., Song, S., Seff, A., Xiao, J.: | 1511.06789#73 | 1511.06789#75 | 1511.06789 | [
"1503.01817"
] |
1511.06789#75 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) 75. Zhang, N., Donahue, J., Girshick, R., Darrell, T.: Part-based r-cnns for fine-grained category detection. In: European Conference on Computer Vision (ECCV), pp. 834–849. Springer (2014) 76. Zhang, N., Farrell, R., Darrell, T.: | 1511.06789#74 | 1511.06789#76 | 1511.06789 | [
"1503.01817"
] |
1511.06789#76 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Pose pooling kernels for sub-category recognition. In: Computer Vision and Pattern Recognition (CVPR). pp. 3665–3672. IEEE (2012) 77. Zhang, N., Farrell, R., Iandola, F., Darrell, T.: Deformable part descriptors for fine-grained recognition and attribute prediction. In: International Conference on Computer Vision (ICCV). pp. 729–736. IEEE (2013) 78. Zhang, Y., Wei, X.s., Wu, J., Cai, J., Lu, J., Nguyen, V.A., Do, M.N.: Weakly supervised fine-grained image categorization. arXiv preprint arXiv:1504.04943 (2015) | 1511.06789#75 | | 1511.06789 | [
"1503.01817"
] |
|
1511.06488#0 | Resiliency of Deep Neural Networks under Quantization | arXiv:1511.06488v3 [cs.LG] 7 Jan 2016 # Under review as a conference paper at ICLR 2016 # RESILIENCY OF DEEP NEURAL NETWORKS UNDER QUANTIZATION Wonyong Sung, Sungho Shin & Kyuyeon Hwang Department of Electrical and Computer Engineering Seoul National University Seoul, 08826 Korea [email protected] [email protected] [email protected] # ABSTRACT The complexity of deep neural network algorithms for hardware implementation can be much lowered by optimizing the word-length of weights and signals. Direct quantization of floating-point weights, however, does not show good performance when the number of bits assigned is small. Retraining of quantized networks has been developed to relieve this problem. In this work, the effects of quantization are analyzed for a feedforward deep neural network (FFDNN) and a convolutional neural network (CNN) when their network complexity is changed. The complexity of the FFDNN is controlled by varying the unit size in each hidden layer and the number of layers, while that of the CNN is done by modifying the feature map configuration. We find that some performance gap exists between the fl | | 1511.06488#1 | 1511.06488 | [
"1505.00256"
] |
|
1511.06488#1 | Resiliency of Deep Neural Networks under Quantization | oating- point and the retrain-based ternary (+1, 0, -1) weight neural networks when the size is not large enough, but the discrepancy almost vanishes in fully complex net- works whose capability is limited by the training data, rather than by the number of connections. This research shows that highly complex DNNs have the capa- bility of absorbing the effects of severe weight quantization through retraining, but connection limited networks are less resilient. This paper also presents the effective compression ratio to guide the trade-off between the network size and the precision when the hardware resource is limited. | 1511.06488#0 | 1511.06488#2 | 1511.06488 | [
"1505.00256"
] |
1511.06488#2 | Resiliency of Deep Neural Networks under Quantization | # INTRODUCTION Deep neural networks (DNNs) begin to ï¬ nd many real-time applications, such as speech recognition, autonomous driving, gesture recognition, and robotic control (Sak et al., 2015; Chen et al., 2015; Jalab et al., 2015; Corradini et al., 2015). Although most of deep neural networks are implemented using GPUs (Graphics Processing Units) in these days, their implementation in hardware can give many beneï¬ ts in terms of power consumption and system size (Ovtcharov et al., 2015). FPGA based implementation examples of CNN show more than 10 times advantage in power consumption (Ovtcharov et al., 2015). Neural network algorithms employ many multiply and add (MAC) operations that mimic the oper- ations of biological neurons. | 1511.06488#1 | 1511.06488#3 | 1511.06488 | [
"1505.00256"
] |
1511.06488#3 | Resiliency of Deep Neural Networks under Quantization | This suggests that reconï¬ gurable hardware arrays that contain quite homogeneous hardware blocks, such as MAC units, can give very efï¬ cient solution to real-time neu- ral network system design. Early studies on word-length determination of neural networks reported the needed precision of at least 8 bits (Holt & Baker, 1991). Our recent works show that the pre- cision required for implementing FFDNN, CNN or RNN needs not be very high, especially when the quantized networks are trained again to learn the effects of lowered precision. | 1511.06488#2 | 1511.06488#4 | 1511.06488 | [
"1505.00256"
] |
1511.06488#4 | Resiliency of Deep Neural Networks under Quantization | In the ï¬ xed-point optimization examples shown in Hwang & Sung (2014); Anwar et al. (2015); Shin et al. (2015), neural networks with ternary weights showed quite good performance which was close to that of ï¬ oating-point arithmetic. In this work, we try to know if retraining can recover the performance of FFDNN and CNN under quantization with only ternary (+1, 0, -1) levels or 3 bits (+3, +2, +1, 0, -1, -2, -3) for weight | 1511.06488#3 | 1511.06488#5 | 1511.06488 | [
"1505.00256"
] |
1511.06488#5 | Resiliency of Deep Neural Networks under Quantization | 1 # Under review as a conference paper at ICLR 2016 representation. Note that bias values are not quantized. For this study, the network complexity is changed to analyze their effects on the performance gap between ï¬ oating-point and retrained low- precision ï¬ xed-point deep neural networks. We conduct our experiments with a feed-forward deep neural network (FFDNN) for phoneme recog- nition and a convolutional neural network (CNN) for image classiï¬ cation. To control the network size, not only the number of units in each layer but also the number of hidden layers are varied in the FFDNN. For the CNN, the number of feature maps for each layer and the number of layers are both changed. The FFDNN uses the TIMIT corpus and the CNN employs the CIFAR-10 dataset. We also propose a metric called effective compression ratio (ECR) for comparing extremely quantized bigger networks with moderately quantized or ï¬ oating-point networks with the smaller size. This analysis intends to ï¬ nd an insight to the knowledge representation capability of highly quantized networks, and also provides a guideline to network size and word-length determination for efï¬ cient hardware implementation of DNNs. # 2 RELATED WORK Fixed-point implementation of signal processing algorithms has long been of interest for VLSI based design of multimedia and communication systems. Some of early works used statistical modeling of quantization noise for application to linear digital ï¬ lters. The simulation-based word-length op- timization method utilized simulation tools to evaluate the ï¬ xed-point performance of a system, by which non-linear algorithms can be optimized (Sung & Kum, 1995). Ternary (+1, 0, -1) coefï¬ - cients based digital ï¬ lters were used to eliminate multiplications at the cost of higher quantization noise. The implementation of adaptive ï¬ lters with ternary weights were developed, but it demanded oversampling to remove the quantization effects (Hussain et al., 2007). Fixed-point neural network design also has been studied with the same purpose of reducing the hard- ware implementation cost (Moerland & Fiesler, 1997). | 1511.06488#4 | 1511.06488#6 | 1511.06488 | [
"1505.00256"
] |
1511.06488#6 | Resiliency of Deep Neural Networks under Quantization | In Holt & Baker (1991), back propagation simulation with 16-bit integer arithmetic was conducted for several problems, such as NetTalk, Par- ity, Protein and so on. This work conducted the experiments while changing the number of hidden units, which was, however, relatively small numbers. The integer simulations showed quite good results for NetTalk and Parity, but not for Protein benchmarks. With direct quantization of trained weights, this work also conï¬ rmed satisfactory operation of neural networks with 8-bit precision. An implementation with ternary weights were reported for neural network design with optical ï¬ ber networks (Fiesler et al., 1990). In this ternary network design, the authors employed retraining after direct quantization to improve the performance of a shallow network. Recently, ï¬ xed-point design of DNNs is revisited, and FFDNN and CNN with ternary weights show quite good performances that are very close to the ï¬ oating-point results. The ternary weight based FFDNN and CNN are used for VLSI and FPGA based implementations, by which the algorithms can operate with only on-chip memory consuming very low power (Kim et al., 2014). Binary weight based deep neural network design is also studied (Courbariaux et al., 2015). | 1511.06488#5 | 1511.06488#7 | 1511.06488 | [
"1505.00256"
] |
1511.06488#7 | Resiliency of Deep Neural Networks under Quantization | Pruned ï¬ oating-point weights are also utilized for efï¬ cient GPU based implementations, where small valued weights are forced to zero to reduce the number of arithmetic operations and the memory space for weight storage (Yu et al., 2012b; Han et al., 2015). A network restructuring technique using singular value decomposition technique is also studied (Xue et al., 2013; Rigamonti et al., 2013). # 3 FIXED-POINT FFDNN AND CNN DESIGN This section explains the design of FFDNN and CNN with varying network complexity and, also, the ï¬ xed-point optimization procedure. 3.1 FFDNN AND CNN DESIGN A feedforward deep neural network with multiple hidden layers are depicted in Figure 1. Each layer k has a signal vector yk, which is propagated to the next layer by multiplying the weight matrix Wk+1, adding biases bk+1, and applying the activation function Ï k+1(·) as follows: Yer = Oep1(Weriyk + be41)- dd) 2 | 1511.06488#6 | 1511.06488#8 | 1511.06488 | [
"1505.00256"
] |
1511.06488#8 | Resiliency of Deep Neural Networks under Quantization | Under review as a conference paper at ICLR 2016 in-hl| h1-h2 h2-h3 h3-h4 h4-out ol PTET Te Input hl h2 h3 h4 Output Figure 1: Feed-forward deep neural network with 4 hidden layers. Input C1 S1 C2 $2 C3 $3 Fil Figure 2: CNN structure with 3 convolution layers and 1 fully-connected layers. One of the most popular activation functions is the rectiï¬ ed linear unit deï¬ ned as Relu(x) = max(0, x). (2) | 1511.06488#7 | 1511.06488#9 | 1511.06488 | [
"1505.00256"
] |
1511.06488#9 | Resiliency of Deep Neural Networks under Quantization | In this work, an FFDNN for phoneme recognition is used. The reference DNN has four hidden layers. Each of the hidden layers has Nh units; the value of Nh is changed to control the complexity of the network. We conduct experiments with the Nh size of 32, 64, 128, 256, 512, and 1024. The number of hidden layers is also reduced. The input layer of the network has 1,353 units to accept 11 frames of a Fourier-transform-based ï¬ lter-bank with 40 coefï¬ cients (+energy) distributed on a mel-scale, together with their ï¬ rst and second temporal derivatives. | 1511.06488#8 | 1511.06488#10 | 1511.06488 | [
"1505.00256"
] |
1511.06488#10 | Resiliency of Deep Neural Networks under Quantization | The output layer consists of 61 softmax units which correspond to 61 target phoneme labels. Phoneme recognition experiments were performed on the TIMIT corpus. The standard 462 speaker set with all SA records removed was used for training, and a separate development set of 50 speaker was used for early stopping. Re- sults are reported for the 24-speaker core test set. The network was trained using a backpropagation algorithm with 128 mini-batch size. Initial learning rate was 10â 5 and it was decreased until 10â 7 during the training. Momentum was 0.9 and RMSProp was adopted for weights update (Tieleman & Hinton, 2012). The dropout technique was employed with 0.2 dropout rate in each layer. The CNN used is for CIFAR-10 dataset. It contains a training set of 50,000 and a test set of 10,000 32Ã 32 RGB color images representing airplanes, automobiles, birds, cats, deers, dogs, frogs, horses, ships and trucks. We divided the training set to 40,000 images for training and 10,000 images for validation. This CNN has 3 convolution and pooling layers and a fully connected hidden layer with 64 units, and the output has 10 softmax units as shown in Figure 2. We control the number of feature maps in each convolution layer. The reference size has 32-32-64 feature maps with 5 by 5 kernel size as used in Krizhevskey (2014). We did not perform any preprocessing and data augmentation such as ZCA whitening and global contrast normalization. To know the effects of network size variation, the number of feature maps is reduced or increased. | 1511.06488#9 | 1511.06488#11 | 1511.06488 | [
"1505.00256"
] |
1511.06488#11 | Resiliency of Deep Neural Networks under Quantization | The conï¬ gurations of the feature maps used for the experiments are 8-8-16, 16-16-32, 32-32-64, 64-64-128, 96-96-192, and 128-128-256. The number of feature map layers is also changed, resulting in 32-32-64, 32-64, 3 # Under review as a conference paper at ICLR 2016 and 64 map conï¬ gurations. | 1511.06488#10 | 1511.06488#12 | 1511.06488 | [
"1505.00256"
] |
1511.06488#12 | Resiliency of Deep Neural Networks under Quantization | Note that the fully connected layer in the CNN is not changed. The network was trained using a backpropagation algorithm with 128 mini-batch size. Initial learning rate was 0.001 and it was decreased to 10â 8 during the training procedure. Momentum was 0.8 and RMSProp was applied for weights update. 3.2 FIXED-POINT OPTIMIZATION OF DNNS Reducing the word-length of weights brings several advantages in hardware based implementation of neural networks. First, it lowers the arithmetic precision, and thereby reduces the number of gates needed for multipliers. Second, the size of memory for storing weights is minimized, which would be a big advantage when keeping them on a chip, instead of external DRAM or NAND ï¬ ash memory. Note that FFDNNs and recurrent neural networks demand a very large number of weights. Third, the reduced arithmetic precision or minimization of off-chip memory accesses leads to low power consumption. However, we need to concern the quantization effects that degrade the system performance. Direct quantization converts a ï¬ oating-point value to the closest integer number, which is conven- tionally used in signal processing system design. However, direct quantization usually demands more than 8 bits, and does not show good performance when the number of bits is small. | 1511.06488#11 | 1511.06488#13 | 1511.06488 | [
"1505.00256"
] |
1511.06488#13 | Resiliency of Deep Neural Networks under Quantization | In ï¬ xed- point deep neural network design, retraining of quantized weights shows quite good performance. The ï¬ xed-point DNN algorithm design consists of three steps: ï¬ oating-point training, direct quan- tization, and retraining of weights. The ï¬ oating-point training procedure can be any of the state of the art techniques, which may include unsupervised learning and dropout. Note that ï¬ xed-point op- timization needs to be based on the best performing ï¬ oating-point weights. Thus, the ï¬ oating-point weight optimization may need to be conducted several times with different initializations, and this step consumes the most of the time. After the ï¬ oating-point training, direct quantization is followed. For direct quantization, uniform quantization function is employed and the function Q(·) is deï¬ ned as follows : fu = sont) -d-min( 05). M=2) 5 where sgn(·) is a sign function, â is a quantization step size, and M represents the number of quantization levels. Note that M needs to be an odd number since the weight values can be posi- tive or negative. When M is 7, the weights are represented by -3·â , -2·â , -1·â , 0, +1·â , +2·â , +3·â ,which can be represented in 3 bits. The quantization step size â is determined to minimize the L2 error, E, depicted as follows. E=- DY (Qw) - wi) (4) where N is the number of weights in each weight group, wi is the i-th weight value represented in ï¬ oating-point. | 1511.06488#12 | 1511.06488#14 | 1511.06488 | [
"1505.00256"
] |
1511.06488#14 | Resiliency of Deep Neural Networks under Quantization | This process needs some iterations, but does not take much time. For network retraining, we maintain both ï¬ oating-point and quantized weights because the amount of weight updates in each training step is much smaller than the quantization step size â . The forward and backward propagation is conducted using quantized weights, but the weight update is applied to the ï¬ oating-point weights and newly quantized values are generated at each iteration. This retraining procedure usually converges quickly and does not take much time when compared to the ï¬ oating-point training. # 4 ANALYSIS OF QUANTIZATION EFFECTS # 4.1 DIRECT QUANTIZATION The performance of the FFDNN and the CNN with directly quantized weights is analyzed while varying the number of units in each layer or the number of feature maps, respectively. In this analysis, the quantization is performed on each weight group, which is illustrated in Figure 1 and | 1511.06488#13 | 1511.06488#15 | 1511.06488 | [
"1505.00256"
] |
1511.06488#15 | Resiliency of Deep Neural Networks under Quantization | 4 # Under review as a conference paper at ICLR 2016 Figure 2, to know the sensitivity of word-length reduction. In this sub-section, we try to analyze the effects of direct quantization. The quantized weight can be represented as follows, wq i = wi + wd i (5) where wd assume that the distortion wd i is the distortion of each weight due to quantization. In the direct quantization, we can i is not dependent each other. (a) (b) Figure 3: Computation model for a unit in the hidden layer j ((a): ï¬ oating-point, (b): distortion). (a) (b) s â @ 8 5 $ 2 2 a Figure 4: Sensitivity analysis of direct quantization ((a): FFDNN, (b): | 1511.06488#14 | 1511.06488#16 | 1511.06488 | [
"1505.00256"
] |
1511.06488#16 | Resiliency of Deep Neural Networks under Quantization | CNN). In the ï¬ gure (b), x-axis label â 8-16â represents the number of feature map is â 8-8-16â . Consider a computation procedure for a unit in a hidden layer, the signal from the previous layer is summed up after multiplication with the weights as illustrated in Figure 3a. We can also assemble a model for distortion, which is shown in Figure 3b. In the distortion model, since wd i is independent each other, we can assume that the effects of the summed distortion is reduced according to the random process theory. This analysis means that the quantization effects are reduced when the number of units in the anterior layer increases, but slowly. Figure 4a illustrates the performance of the FFDNN with ï¬ oating-point arithmetic, 2-bit direct quan- tization of all the weights, and 2-bit direct quantization only on the weight group â In-h1â , â h1-h2â , and â h4-outâ . Consider the quantization performance of the â In-h1â layer, the phone-error rate is higher than the ï¬ oating-point result with an almost constant amount, about 10%. Note that the num- ber of input to the â In-h1â layer is ï¬ xed, 1353, regardless of the hidden unit size. Thus, the amount of distortion delivered to each unit of the hidden layer 1 can be considered unchanged. Figure 4a also shows the quantization performance on â h1-h2â and â h4-outâ | 1511.06488#15 | 1511.06488#17 | 1511.06488 | [
"1505.00256"
] |
1511.06488#17 | Resiliency of Deep Neural Networks under Quantization | layers, which informs the trend of 5 # Under review as a conference paper at ICLR 2016 (a) (b) Figure 5: Performance of direct quantization with multiple precision ((a): FFDNN, (b): CNN). reduced gap to the ï¬ oating-point performance as the network size increases. This can be explained by the sum of increased number of independent distortions when the network size grows. The per- formance of all 2-bit quantization also shows the similar trend of reduced gap to the ï¬ oating-point performance. But, apparently, the performance of 2-bit directly quantized networks is not satisfac- tory. In Figure 4b, a similar analysis is conducted to the CNN with direct quantization when the number of feature maps increases or decreases. In the CNN, the number of input to each output is determined by the number of input feature maps and the kernel size. | 1511.06488#16 | 1511.06488#18 | 1511.06488 | [
"1505.00256"
] |
1511.06488#18 | Resiliency of Deep Neural Networks under Quantization | For example, at the ï¬ rst layer C1, the number of input signal for computing one output is only 75 (=3à 25) regardless of the network size, where the input map size is always 3 and the kernel size is 25. However, at the second layer C2, the number of input feature maps increases as the network size grows. When the feature map of 32-32-64 is considered, the number of input for the C2 layer grows to 800 (=32à | 1511.06488#17 | 1511.06488#19 | 1511.06488 | [
"1505.00256"
] |
1511.06488#19 | Resiliency of Deep Neural Networks under Quantization | 25). Thus, we can expect a reduced distortion as the number of feature maps increases. Figure 5a shows the performance of direct quantization with 2, 4, 6, and 8-bit precision when the network complexity varies. In the FFDNN, 6 bit direct quantization seems enough when the network size is larger than 128. But, small FFDNNs demand 8 bits for near ï¬ oating-point performance. The CNN in Figure 5b also shows the similar trend. The direct quantization requires about 6 bits when the feature map conï¬ guration is 16-16-32 or larger. # 4.2 EFFECTS OF RETRAINING ON QUANTIZED NETWORKS Retraining is conducted on the directly quantized networks using the same data for ï¬ oating-point training. The ï¬ xed-point performance of the FFDNN is shown in Figure 6a when the number of hidden units in each layer varies. The performance of direct 2 bits (ternary levels), direct 3 bits (7- levels), retrain-based 2 bits, and retrain-based 3 bits are compared with the ï¬ oating-point simulation. We can ï¬ nd that the performance gap between the ï¬ oating-point and the retrain-based ï¬ xed-point networks converges very fast as the network size grows. Although the performance gap between the direct and the ï¬ oating-point networks also converges, the rate of convergence is signiï¬ cantly different. In this ï¬ gure, the performance of the ï¬ | 1511.06488#18 | 1511.06488#20 | 1511.06488 | [
"1505.00256"
] |
1511.06488#20 | Resiliency of Deep Neural Networks under Quantization | oating-point network almost saturates when the network size is about 1024. Note that the TIMIT corpus that is used for training has only 3 hours of data. Thus, the network with 1024 hidden units can be considered in the â training-data limited regionâ . Here, the gap between the ï¬ oating-point and ï¬ xed-point networks almost vanishes when the network is in the â training-data limited regionâ . However, when the network size is limited, such as 32, 64, 128, or 256, there is some performance gap between the ï¬ oating-point and highly quantized networks even if retraining on the quantized networks is performed. The similar experiments are conducted for the CNN with varying feature map sizes, and the results are shown in Figure 6b. | 1511.06488#19 | 1511.06488#21 | 1511.06488 | [
"1505.00256"
] |
1511.06488#21 | Resiliency of Deep Neural Networks under Quantization | The conï¬ guration of the feature maps used for the experiments are 8-8-16, 6 # Under review as a conference paper at ICLR 2016 (a) (b) # Phone error rate (%) Figure 6: Comparison of retrain-based and direct quantization for DNN (a) and CNN (b). All the weights are quantized with ternary and 7-level weights. In the ï¬ gure (b), x-axis label â 8-16â represents the number of feature map is â 8-8-16â . 16-16-32, 32-32-64, 64-64-128, 96-96-192, and 128-128-256. | 1511.06488#20 | 1511.06488#22 | 1511.06488 | [
"1505.00256"
] |
1511.06488#22 | Resiliency of Deep Neural Networks under Quantization | The size of the fully connected layer is not changed. In this ï¬ gure, the ï¬ oating-point and the ï¬ xed-point performances with retraining also converge very fast as the number of feature maps increases. The ï¬ oating-point performance saturates when the feature map size is 128-128-256, and the gap is less than 1% when comparing the ï¬ oating-point and the retrain-based 2-bit networks. However, also, there is some performance gap when the number of feature maps is reduced. This suggests that a fairly high performance feature extraction can be designed even using very low-precision weights if the number of feature maps can be increased. # 4.3 FIXED-POINT PERFORMANCES WHEN VARYING THE DEPTH It is well known that increasing the depth usually results in positive effects on the performance of a DNN (Yu et al., 2012a). The network complexity of a DNN is changed by increasing or reducing the number of hidden layers or feature map levels. | 1511.06488#21 | 1511.06488#23 | 1511.06488 | [
"1505.00256"
] |
1511.06488#23 | Resiliency of Deep Neural Networks under Quantization | The result of ï¬ xed-point and ï¬ oating-point performances when varying the number of hidden layers for the FFDNN is summarized in Table 1. The number of units in each hidden layer is 512. This table shows that both the ï¬ oating-point and the ï¬ xed-point performances of the FFDNN increase when adding hidden layers from 0 to 4. The performance gap between the ï¬ oating-point and the ï¬ xed-point networks shrinks as the number of levels increases. | 1511.06488#22 | 1511.06488#24 | 1511.06488 | [
"1505.00256"
] |
1511.06488#24 | Resiliency of Deep Neural Networks under Quantization | Table 1: Framewise phoneme error rate on TIMIT with respect to the depth in DNN Number of layers (Floating-point result) 1 (34.67%) 2 (31.51%) 3 (30.81%) 4 (30.31%) # Quantization levels Direct Retraining Difference 3-level 7-level 3-level 7-level 3-level 7-level 3-level 7-level 69.88% 56.81% 47.74% 36.99% 49.27% 36.58% 48.13% 34.77% 38.58% 36.57% 33.89% 33.04% 33.05% 31.72% 31.86% 31.49% 3.91% 1.90% 2.38% 1.53% 2.24% 0.91% 1.55% 1.18% The network complexity of the CNN is also varied by reducing the level of feature maps as shown in Table 2. As expected, the performance of both the ï¬ oating-point and retrain-based low-precision networks degrades as the number of levels is reduced. The performance gap between them is very small with 7-level quantization for all feature map levels. | 1511.06488#23 | 1511.06488#25 | 1511.06488 | [
"1505.00256"
] |
1511.06488#25 | Resiliency of Deep Neural Networks under Quantization | 7 # Under review as a conference paper at ICLR 2016 These results for the FFDNN and the CNN with varied number of levels also show that the ef- fects of quantization can be much reduced by retraining when the network contains some redundant complexity. Table 2: Miss classiï¬ cation rate on CIFAR-10 with respect to the depth in CNN Layer (Floating-point result) 64 (34.19%) 32-64 (29.29%) 32-32-64 (26.87%) # Quantization levels Direct Retraining Difference 3-level 7-level 3-level 7-level 3-level 7-level 72.95% 46.60% 55.30% 39.80% 79.88% 47.91% 35.37% 34.15% 29.51% 29.32% 27.94% 26.95% 1.18% -0.04% 0.22% 0.03% 1.07% 0.08% # 5 EFFECTIVE COMPRESSION RATIO So far we have examined the effect of direct and retraining-based quantization to the ï¬ nal classiï¬ ca- tion error rates. As the number of quantization level decreases, more memory space can be saved at the cost of sacriï¬ cing the accuracy. Therefore, there is a trade-off between the total memory space for storing weights and the ï¬ nal classiï¬ cation accuracy. In practice, investigating this trade-off is important for deciding the optimal bit-widths for representing weights and implementing the most efï¬ cient neural network hardware. In this section, we propose a guideline for ï¬ nding the optimal bit-widths in terms of the total number of bits consumed by the network weights when the desired accuracy or the network size is given. Note that we assume 2n â 1 quantization levels are represented by n bits (i.e. 2 bits are required for representing a ternary weight). For simplicity, all layers are quantized with the same number of quantization levels. However, the similar approach can be applied to the layer-wise quantization analysis. | 1511.06488#24 | 1511.06488#26 | 1511.06488 | [
"1505.00256"
] |
1511.06488#26 | Resiliency of Deep Neural Networks under Quantization | (a) (b) # Phone error rate (%) Figure 7: Framewise phone error rate of phoneme recognition DNNs with respect to the total number of bits for weights with (a) direct quantization and (b) after retraining. The optimal combination of the bit-width and layer size can be found when the number of total bits or the accuracy is given as shown in Figure 7. The ï¬ gure shows the framewise phoneme error rate on TIMIT with respect to the number of total bits, while varying the layer size of DNNs with various number of quantization bits from 2 to 8 bits. The network has 4 hidden layers with the uniform sizes. With direct quantization, the optimal hardware design can be achieved with about 5 bits. On the other hand, the weight representation with only 2 bits shows the best performance after retraining. | 1511.06488#25 | 1511.06488#27 | 1511.06488 | [
"1505.00256"
] |
1511.06488#27 | Resiliency of Deep Neural Networks under Quantization | 8 # Under review as a conference paper at ICLR 2016 floating result â sâ 2 bit direct â +â 3 bit direct â +â 2 bit retrain â 4~ 3 bit retrain 2 i=} ~ I} Phone error rate (%) a i} 40F 2 i=} 30b # of params Figure 8: Obtaining effective number of parameters for the uncompressed network. (a) (b) # ratio # Effective Figure 9: Effective compression ratio (ECR) with respect to the layer size and the number of bits per weights for (a) direct quantization and (b) retrain-based quantization. The remaining question is how much memory space can be saved by quantization while maintaining the accuracy. To examine this, we introduce a metric called effective compression ratio (ECR), which is deï¬ ned as follows: ECR = Effective uncompressed size Compressed size (6) The compressed size is the total memory bits required for storing all weights with quantization. The effective uncompressed size is the total memory size with 32-bit ï¬ oating point representation when the network achieves the same accuracy as that of the quantized network. Figure 8 describes how to obtain the effective number of parameters for uncompressed networks. | 1511.06488#26 | 1511.06488#28 | 1511.06488 | [
"1505.00256"
] |