doi (string, 10-10) | chunk-id (int64, 0-936) | chunk (string, 401-2.02k) | id (string, 12-14) | title (string, 8-162) | summary (string, 228-1.92k) | source (string, 31-31) | authors (string, 7-6.97k) | categories (string, 5-107) | comment (string, 4-398, nullable) | journal_ref (string, 8-194, nullable) | primary_category (string, 5-17) | published (string, 8-8) | updated (string, 8-8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1511.06709 | 21 | The best results published on this dataset are by Luong and Manning (2015), obtained with an ensemble of 8 independently trained models. In a comparison of single-model results, we outperform their model on tst2013 by 1 BLEU.
4.2.3 German→English WMT 15 Results for German→English on the WMT 15 data sets are shown in Table 5. As for the reverse translation direction, we see substantial improvements (3.6–3.7 BLEU) from adding monolingual training data with synthetic source sentences, which is substantially bigger than the improvement observed with deep fusion (Gülçehre et al., 2015); our ensemble outperforms the previous state of the art on newstest2015 by 2.3 BLEU.
4.2.4 Turkish→English IWSLT 14 Table 6 shows results for Turkish→English. On average, we see an improvement of 0.6 BLEU on the test sets from adding monolingual data with a dummy source side in a 1-1 ratio (see footnote 10), although we note a high variance between different test sets. | 1511.06709#21 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
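The chunk above reports gains from pairing monolingual target-language text with automatic back-translations and mixing the result into the training data. Below is a minimal Python sketch of that data-preparation step, under stated assumptions: `backtranslate` is a hypothetical stand-in for any target-to-source MT system (e.g. a German→English model), and the toy mixing is a simple 1-1 shuffle, not the authors' exact pipeline.

```python
# Minimal sketch of back-translation data preparation (assumptions only,
# not the paper's implementation). `backtranslate` is a placeholder for any
# target->source translation function, e.g. a German->English NMT system.
import random

def build_synthetic_corpus(mono_target_sents, backtranslate):
    """Pair each human-written target sentence with a machine-translated source side."""
    return [(backtranslate(t), t) for t in mono_target_sents]

def mix_training_data(parallel_pairs, synthetic_pairs, seed=0):
    """Shuffle true and synthetic sentence pairs together (roughly a 1-1 mix here)."""
    data = list(parallel_pairs) + list(synthetic_pairs)
    random.Random(seed).shuffle(data)
    return data

if __name__ == "__main__":
    parallel = [("the house is small", "das Haus ist klein")]
    mono_de = ["das ist ein Test"]
    synthetic = build_synthetic_corpus(mono_de, lambda s: "<back-translation of: %s>" % s)
    print(mix_training_data(parallel, synthetic))
```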
1511.06789 | 21 | # 5 Experiments
# 5.1 Implementation Details
The base classifier we use in all noisy data experiments is the Inception-v3 convolutional neural network architecture [55], which is among the state of the art methods for generic object recognition [44,53,23]. Learning rate schedules are determined by performance on a holdout subset of the training data, which is 10% of the training data for control experiments training on ground truth datasets, or 1% when training on the larger noisy web data. Unless otherwise noted, all recognition results use as input a single crop in the center of the image.
Our active learning comparison uses the Yahoo Flickr Creative Commons 100M dataset [56] as its pool of unlabeled images, which we first pre-filter with a binary dog classifier and localizer [54], resulting in 1.71 million candidate dogs. We perform up to two rounds of active learning, with a sampling budget B of 10× the original dataset size per round (footnote 3). For experiments on Stanford Dogs, we use the CNN of [25], which is pre-trained on a version of ILSVRC [44,13] with dog data removed, since Stanford Dogs is a subset of ILSVRC training data.
# 5.2 Removing Ground Truth from Web Images | 1511.06789#21 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
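Two implementation details in the chunk above (a holdout subset for choosing the learning-rate schedule, and a single center crop at evaluation time) are illustrated by the small sketch below. It is an assumption-labelled sketch, not the authors' code: PIL is assumed for image handling, and the 299-pixel crop matches the usual Inception-v3 input size.

```python
# Sketch only (assumptions noted above): holdout split and single center crop.
import random
from PIL import Image

def holdout_split(examples, holdout_frac=0.10, seed=0):
    """Reserve a fraction of the training data (10% for ground-truth sets, 1% for web data)."""
    idx = list(range(len(examples)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * holdout_frac)
    return [examples[i] for i in idx[cut:]], [examples[i] for i in idx[:cut]]

def center_crop(img: Image.Image, size: int = 299) -> Image.Image:
    """Single crop at the image center; assumes the short side is already >= size."""
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))
```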
1511.06488 | 22 | 16-16-32, 32-32-64, 64-64-128, 96-96-192, and 128-128-256. The size of the fully connected layer is not changed. In this figure, the floating-point and the fixed-point performances with retraining also converge very fast as the number of feature maps increases. The floating-point performance saturates when the feature map size is 128-128-256, and the gap is less than 1% when comparing the floating-point and the retrain-based 2-bit networks. However, there is still some performance gap when the number of feature maps is reduced. This suggests that a fairly high performance feature extraction can be designed even using very low-precision weights if the number of feature maps can be increased.
# 4.3 FIXED-POINT PERFORMANCES WHEN VARYING THE DEPTH | 1511.06488#22 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
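The 2-bit (ternary) and 7-level weights discussed in this chunk can be produced by direct uniform quantization of a trained weight matrix. The NumPy sketch below illustrates the idea; tying the step size delta to the weight standard deviation is an illustrative assumption, not the paper's exact procedure.

```python
# Sketch of direct uniform weight quantization (illustrative assumptions only).
import numpy as np

def quantize_uniform(weights: np.ndarray, n_levels: int, delta: float) -> np.ndarray:
    """Map weights onto n_levels symmetric levels spaced by delta.

    n_levels = 3 gives ternary weights {-delta, 0, +delta};
    n_levels = 7 gives {-3*delta, ..., +3*delta}.
    """
    max_idx = (n_levels - 1) // 2
    idx = np.clip(np.round(weights / delta), -max_idx, max_idx)
    return idx * delta

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)
delta = 0.7 * w.std()                      # illustrative step-size choice
w_ternary = quantize_uniform(w, n_levels=3, delta=delta)
print(np.unique(w_ternary))                # three distinct values
```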
1511.06709 | 22 | With synthetic training data (Gigawordsynth), we outperform the baseline by 2.7 BLEU on average, and also outperform results obtained via shallow or deep fusion by Gülçehre et al. (2015) by 0.5 BLEU on average. To compare to what extent synthetic data has a regularization effect, even without novel training data, we also back-translate the target side of the parallel training text to obtain the training corpus parallelsynth. Mixing the original parallel corpus with parallelsynth (ratio 1-1) gives some improvement over the baseline (1.7 BLEU on average), but the novel monolingual training data (Gigawordmono) gives higher improvements, despite being out-of-domain in relation to the test sets. We speculate that novel in-domain monolingual data would lead to even higher improvements.
# 4.2.5 Back-translation Quality for Synthetic Data
One question that our previous experiments leave open is how the quality of the automatic back-translation affects training with synthetic data. To investigate this question, we back-translate the same German monolingual corpus with three different German→English systems:
• with our baseline system and greedy decoding
• with our baseline system and beam search (beam size 12). This is the same system used for the experiments in Table 3. | 1511.06709#22 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06488 | 23 | # 4.3 FIXED-POINT PERFORMANCES WHEN VARYING THE DEPTH
It is well known that increasing the depth usually results in positive effects on the performance of a DNN (Yu et al., 2012a). The network complexity of a DNN is changed by increasing or reducing the number of hidden layers or feature map levels. The result of fixed-point and floating-point performances when varying the number of hidden layers for the FFDNN is summarized in Table 1. The number of units in each hidden layer is 512. This table shows that both the floating-point and the fixed-point performances of the FFDNN increase when adding hidden layers from 0 to 4. The performance gap between the floating-point and the fixed-point networks shrinks as the number of levels increases.
Table 1: Framewise phoneme error rate on TIMIT with respect to the depth in DNN | 1511.06488#23 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 23 | • with our baseline system and greedy decoding
• with our baseline system and beam search (beam size 12). This is the same system used for the experiments in Table 3.
Footnote 10: We also experimented with higher ratios of monolingual data, but this led to decreased BLEU scores.
| back-translation | EN→DE 2014 | EN→DE 2015 | DE→EN 2015 |
|---|---|---|---|
| none | - | 20.4 | 23.6 | - |
| parallel (greedy) | 23.2 | 26.0 | 22.3 |
| parallel (beam 12) | 23.8 | 26.5 | 25.0 |
| synthetic (beam 12) | 23.9 | 26.6 | 28.3 |
| ensemble of 3 | 24.2 | 27.0 | - |
| ensemble of 12 | 24.7 | 27.6 | - |
Table 7: English→German translation performance (BLEU) on WMT training/test sets (newstest2014; newstest2015). Systems differ in how the synthetic training data is obtained. Ensembles of 4 models (unless specified otherwise).
• with the German→English system that was itself trained with synthetic data (beam size 12). | 1511.06709#23 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
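The greedy-versus-beam comparison in the chunk above hinges on how the back-translation system decodes. A compact sketch of both strategies over a generic next-token scorer is given below; `score_next` is a hypothetical interface returning log-probabilities for the next token given a prefix, standing in for the NMT decoder rather than reproducing it.

```python
# Sketch of greedy vs. beam-search decoding over a hypothetical next-token scorer.
from typing import Callable, List, Tuple

ScoreFn = Callable[[List[int]], List[Tuple[int, float]]]  # prefix -> [(token, logprob)]

def greedy_decode(score_next: ScoreFn, eos: int, max_len: int = 100) -> List[int]:
    prefix: List[int] = []
    for _ in range(max_len):
        token, _ = max(score_next(prefix), key=lambda t: t[1])
        prefix.append(token)
        if token == eos:
            break
    return prefix

def beam_decode(score_next: ScoreFn, eos: int, beam_size: int = 12,
                max_len: int = 100) -> List[int]:
    """Keep the beam_size best partial hypotheses by total log-probability."""
    beams: List[Tuple[List[int], float]] = [([], 0.0)]
    finished: List[Tuple[List[int], float]] = []
    for _ in range(max_len):
        candidates = []
        for prefix, logp in beams:
            for token, lp in score_next(prefix):
                candidates.append((prefix + [token], logp + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for hyp, logp in candidates:
            if hyp[-1] == eos:
                finished.append((hyp, logp))
            elif len(beams) < beam_size:
                beams.append((hyp, logp))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])[0]
```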
1511.06789 | 23 | 3 To be released.
Training Data / Acc. / Dataset (top-1 accuracy, %):
CUB [60]: CUB-GT 84.4, Web (raw) 87.7, Web (filtered) 89.0, L-Bird 91.9, L-Bird(MC) 92.3, L-Bird+CUB-GT 92.2, L-Bird+CUB-GT(MC) 92.8
FGVC [38]: FGVC-GT 88.1, Web (raw) 90.7, Web (filtered) 91.1, L-Aircraft 90.9, L-Aircraft(MC) 93.4, L-Aircraft+FGVC-GT 94.5, L-Aircraft+FGVC-GT(MC) 95.9
Stanford Dogs [27]: Stanford-GT 80.6, Web (raw) 78.5, Web (filtered) 78.4, L-Dog 78.4, L-Dog(MC) 80.8, L-Dog+Stanford-GT 84.0, L-Dog+Stanford-GT(MC) 85.9, and Birdsnap [4] | 1511.06789#23 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 24 | Table 1: Framewise phoneme error rate on TIMIT with respect to the depth in DNN
| Number of layers (floating-point result) | Quantization levels | Direct | Retraining | Difference |
|---|---|---|---|---|
| 1 (34.67%) | 3-level | 69.88% | 38.58% | 3.91% |
| 1 (34.67%) | 7-level | 56.81% | 36.57% | 1.90% |
| 2 (31.51%) | 3-level | 47.74% | 33.89% | 2.38% |
| 2 (31.51%) | 7-level | 36.99% | 33.04% | 1.53% |
| 3 (30.81%) | 3-level | 49.27% | 33.05% | 2.24% |
| 3 (30.81%) | 7-level | 36.58% | 31.72% | 0.91% |
| 4 (30.31%) | 3-level | 48.13% | 31.86% | 1.55% |
| 4 (30.31%) | 7-level | 34.77% | 31.49% | 1.18% |
The network complexity of the CNN is also varied by reducing the level of feature maps as shown in Table 2. As expected, the performance of both the floating-point and retrain-based low-precision networks degrades as the number of levels is reduced. The performance gap between them is very small with 7-level quantization for all feature map levels. | 1511.06488#24 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 24 | • with the German→English system that was itself trained with synthetic data (beam size 12).
BLEU scores of the German→English systems, and of the resulting English→German systems that are trained on the different back-translations, are shown in Table 7. The quality of the German→English back-translation differs substantially, with a difference of 6 BLEU on newstest2015. Regarding the English→German systems trained on the different synthetic corpora, we find that the 6 BLEU difference in back-translation quality leads to a 0.6–0.7 BLEU difference in translation quality. This is balanced by the fact that we can increase the speed of back-translation by trading off some quality, for instance by reducing beam size, and we leave it to future research to explore how much the amount of synthetic data affects translation quality.
We also show results for an ensemble of 3 models (the best single model of each training run), and 12 models (all 4 models of each training run). Thanks to the increased diversity of the ensemble components, these ensembles outperform the ensembles of 4 models that were all sampled from the same training run, and we obtain another improvement of 0.8–1.0 BLEU.
# 4.3 Contrast to Phrase-based SMT | 1511.06709#24 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
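The ensembles of 3 and 12 models discussed above are typically built by averaging the per-step output distributions of several independently trained models before choosing the next token. The sketch below shows that combination rule under the assumption that each model exposes a function returning next-token probabilities; the interface is hypothetical, not the toolkit the authors used.

```python
# Sketch of ensemble decoding by averaging per-step probabilities
# (hypothetical model interface; not the paper's code).
from typing import Callable, Dict, List

NextProbs = Callable[[List[int]], Dict[int, float]]  # prefix -> {token: probability}

def ensemble_greedy_decode(models: List[NextProbs], eos: int, max_len: int = 100) -> List[int]:
    prefix: List[int] = []
    for _ in range(max_len):
        avg: Dict[int, float] = {}
        for next_probs in models:
            for token, p in next_probs(prefix).items():
                avg[token] = avg.get(token, 0.0) + p / len(models)
        token = max(avg, key=avg.get)   # pick the token with the highest averaged probability
        prefix.append(token)
        if token == eos:
            break
    return prefix
```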
1511.06789 | 24 | Birdsnap [4]: Birdsnap-GT 78.2, Web (raw) 76.1, Web (filtered) 78.2, L-Bird 82.8, L-Bird(MC) 85.4, L-Bird+Birdsnap-GT 83.9, L-Bird+Birdsnap-GT(MC) 85.4
Table 1. Comparison of data source used during training with recognition performance, given in terms of Top-1 accuracy. "CUB-GT" indicates training only on the ground truth CUB training set, "Web (raw)" trains on all search results for CUB categories, and "Web (filtered)" applies filtering between categories within a domain (birds). L-Bird denotes training first on L-Bird, then fine-tuning on the subset of categories under evaluation (i.e. the filtered web images), and L-Bird+CUB-GT indicates training on L-Bird, then fine-tuning on Web (filtered), and finally fine-tuning again on CUB-GT. Similar notation is used for the other datasets. "(MC)" indicates using multiple | 1511.06789#24 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 25 | 7
These results for the FFDNN and the CNN with varied number of levels also show that the effects of quantization can be much reduced by retraining when the network contains some redundant complexity.
Table 2: Misclassification rate on CIFAR-10 with respect to the depth in CNN
| Layer (floating-point result) | Quantization levels | Direct | Retraining | Difference |
|---|---|---|---|---|
| 64 (34.19%) | 3-level | 72.95% | 35.37% | 1.18% |
| 64 (34.19%) | 7-level | 46.60% | 34.15% | -0.04% |
| 32-64 (29.29%) | 3-level | 55.30% | 29.51% | 0.22% |
| 32-64 (29.29%) | 7-level | 39.80% | 29.32% | 0.03% |
| 32-32-64 (26.87%) | 3-level | 79.88% | 27.94% | 1.07% |
| 32-32-64 (26.87%) | 7-level | 47.91% | 26.95% | 0.08% |
# 5 EFFECTIVE COMPRESSION RATIO | 1511.06488#25 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
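Retrain-based quantization, as analyzed in the chunk above, typically keeps a full-precision master copy of the weights, quantizes it for the forward pass, and applies the resulting gradients back to the full-precision copy (a straight-through style update). The sketch below shows that training-loop structure for a single linear layer with a squared loss; the linear model, step size, and learning rate are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch of retraining with quantized weights (straight-through style update).
import numpy as np

def quantize_ternary(w: np.ndarray, delta: float) -> np.ndarray:
    """Ternary (+delta, 0, -delta) quantization."""
    return np.clip(np.round(w / delta), -1, 1) * delta

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 32))
true_w = rng.choice([-1.0, 0.0, 1.0], size=32)      # a target that ternary weights can express
y = X @ true_w

w_float = rng.normal(scale=0.1, size=32)            # full-precision master copy
lr, delta = 0.01, 1.0                               # illustrative hyper-parameters

for step in range(500):
    w_q = quantize_ternary(w_float, delta)           # forward pass uses quantized weights
    grad = X.T @ (X @ w_q - y) / len(X)              # gradient evaluated at the quantized point
    w_float -= lr * grad                             # ...but applied to the full-precision copy

final_loss = float(np.mean((X @ quantize_ternary(w_float, delta) - y) ** 2))
print("squared loss with retrained ternary weights:", final_loss)
```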
1511.06709 | 25 | # 4.3 Contrast to Phrase-based SMT
The back-translation of monolingual target data into the source language to produce synthetic parallel text has been previously explored for phrase-based SMT (Bertoldi and Federico, 2009; Lambert et al., 2011). While our approach is technically similar, synthetic parallel data fulfills novel
| name | training data | instances | tst2011 | tst2012 | tst2013 | tst2014 |
|---|---|---|---|---|---|---|
| baseline (Gülçehre et al., 2015) | - | - | 18.4 | 18.8 | 19.9 | 18.7 |
| deep fusion (Gülçehre et al., 2015) | - | - | 20.2 | 20.2 | 21.3 | 20.6 |
| baseline | parallel | 7.2m | 18.6 | 18.2 | 18.4 | 18.3 |
| parallelsynth | parallel/parallelsynth | 6m/6m | 19.9 | 20.4 | 20.1 | 20.0 |
| Gigawordmono | parallel/Gigawordmono | 7.6m/7.6m | 18.8 | 19.6 | 19.4 | 18.2 |
| Gigawordsynth | parallel/Gigawordsynth | 8.4m/8.4m | 21.2 | 21.1 | 21.8 | 20.4 |
Table 6: Turkish→English translation performance (tokenized BLEU) on IWSLT test sets (TED talks). Single models. Number of training instances varies due to early stopping. | 1511.06709#25 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06488 | 26 | # 5 EFFECTIVE COMPRESSION RATIO
So far we have examined the effect of direct and retraining-based quantization on the final classification error rates. As the number of quantization levels decreases, more memory space can be saved at the cost of sacrificing the accuracy. Therefore, there is a trade-off between the total memory space for storing weights and the final classification accuracy. In practice, investigating this trade-off is important for deciding the optimal bit-widths for representing weights and implementing the most efficient neural network hardware.
In this section, we propose a guideline for finding the optimal bit-widths in terms of the total number of bits consumed by the network weights when the desired accuracy or the network size is given. Note that we assume 2^n - 1 quantization levels are represented by n bits (i.e. 2 bits are required for representing a ternary weight). For simplicity, all layers are quantized with the same number of quantization levels. However, a similar approach can be applied to the layer-wise quantization analysis.
[Figure 7, panels (a) and (b): phone error rate (%)] | 1511.06488#26 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 26 | Table 6: Turkish→English translation performance (tokenized BLEU) on IWSLT test sets (TED talks). Single models. Number of training instances varies due to early stopping.
| system | BLEU (WMT) | BLEU (IWSLT) |
|---|---|---|
| parallel | 20.1 | 21.5 |
| +synthetic | 20.8 | 21.6 |
| PBSMT gain | +0.7 | +0.1 |
| NMT gain | +2.9 | +1.2 |
Table 8: Phrase-based SMT results (English→German) on WMT test sets (average of newstest201{4,5}), and IWSLT test sets (average of tst201{3,4,5}), and average BLEU gain from adding synthetic data for both PBSMT and NMT.
[Figure 1: cross-entropy vs. training time (training instances ·10^6); train/dev curves for parallel, parallelsynth, Gigawordmono, and Gigawordsynth]
roles in NMT. | 1511.06709#26 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 26 | To deal with this concern, we performed an aggressive deduplication procedure with all ground truth test sets and their corresponding web images. This process follows Wang et al. [64], which is a state-of-the-art method for learning a similarity metric between images. We tuned this procedure for high near-duplicate recall, manually verifying its quality. More details are included in Sec. B.
# 5.3 Main Results
We present our main recognition results in Tab. 1, where we compare performance when the training set consists of either the ground truth training set, raw web images of the categories in the corresponding evaluation dataset, web images after applying our filtering strategy, all web images of a particular domain, or all images including even the ground truth training set. | 1511.06789#26 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
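The deduplication step described in the chunk above amounts to embedding every image with a similarity model and discarding any web image that falls within a distance threshold of some test image. The sketch below shows that filtering logic with NumPy; `web_embs`, `test_embs`, and the threshold are hypothetical stand-ins for the learned similarity metric and its tuned operating point.

```python
# Sketch of near-duplicate filtering against a test set (hypothetical embedding model).
import numpy as np

def filter_near_duplicates(web_embs: np.ndarray, test_embs: np.ndarray,
                           threshold: float) -> np.ndarray:
    """Return indices of web images whose nearest test embedding is farther than threshold.

    Tuning for high near-duplicate recall means erring on the side of discarding.
    """
    # Pairwise squared Euclidean distances via (a-b)^2 = a^2 - 2ab + b^2.
    d2 = (np.sum(web_embs**2, axis=1, keepdims=True)
          - 2.0 * web_embs @ test_embs.T
          + np.sum(test_embs**2, axis=1))
    nearest = np.sqrt(np.maximum(d2.min(axis=1), 0.0))
    return np.where(nearest > threshold)[0]

# Toy usage: 5 web images, 3 test images, 8-d embeddings.
rng = np.random.default_rng(0)
web, test = rng.normal(size=(5, 8)), rng.normal(size=(3, 8))
web[0] = test[1] + 0.01          # plant one near-duplicate
print(filter_near_duplicates(web, test, threshold=0.5))   # index 0 is dropped
```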
1511.06488 | 27 | [Figure 7, panels (a) and (b): phone error rate (%)]
Figure 7: Framewise phone error rate of phoneme recognition DNNs with respect to the total number of bits for weights with (a) direct quantization and (b) after retraining.
The optimal combination of the bit-width and layer size can be found when the number of total bits or the accuracy is given, as shown in Figure 7. The figure shows the framewise phoneme error rate on TIMIT with respect to the number of total bits, while varying the layer size of DNNs with various numbers of quantization bits from 2 to 8 bits. The network has 4 hidden layers with uniform sizes. With direct quantization, the optimal hardware design can be achieved with about 5 bits. On the other hand, the weight representation with only 2 bits shows the best performance after retraining.
[Figure 8: phone error rate (%) vs. number of parameters; curves for the floating-point network and 2/3-bit direct and retrained networks]
Figure 8: Obtaining effective number of parameters for the uncompressed network.
[Figure 9, panels (a) and (b): effective compression ratio] | 1511.06488#27 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 27 | roles in NMT.
To explore the relative effectiveness of back-translated data for phrase-based SMT and NMT, we train two phrase-based SMT systems with Moses (Koehn et al., 2007), using only WMTparallel, or both WMTparallel and WMTsynth_de for training the translation and reordering model. Both systems contain the same language model, a 5-gram Kneser-Ney model trained on all available WMT data. We use the baseline features described by Haddow et al. (2015). Results are shown in Table 8.
In phrase-based SMT, we find that the use of back-translated training data has a moderate positive effect on the WMT test sets (+0.7 BLEU), but not on the IWSLT test sets. This is in line with the expectation that the main effect of back-translated data for phrase-based SMT is domain adaptation (Bertoldi and Federico, 2009). Both the WMT test sets and the News Crawl corpora which we used as monolingual data come from the same source, a web crawl of newspaper articles (footnote 11). In contrast, News Crawl is out-of-domain for the IWSLT test sets. | 1511.06709#27 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 27 | On CUB-200-2011 [60], the smallest dataset we consider, even using raw search results as training data results in a better model than the annotated training set, with filtering further improving results by 1.3%. For Birdsnap [4], the largest of the ground truth datasets we evaluate on, raw data mildly underperforms using the ground truth training set, though filtering improves results to be on par. On both CUB and Birdsnap, training first on the very large set of categories in L-Bird results in dramatic improvements, improving performance on CUB further by 2.9% and on Birdsnap by 4.6%. This is an important point:
| 1511.06789#27 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 28 | Figure 8: Obtaining effective number of parameters for the uncompressed network.
Figure 9: Effective compression ratio (ECR) with respect to the layer size and the number of bits per weight for (a) direct quantization and (b) retrain-based quantization.
The remaining question is how much memory space can be saved by quantization while maintaining the accuracy. To examine this, we introduce a metric called effective compression ratio (ECR), which is defined as follows:
ECR = (effective uncompressed size) / (compressed size)   (6)
The compressed size is the total memory bits required for storing all weights with quantization. The effective uncompressed size is the total memory size with 32-bit floating point representation when the network achieves the same accuracy as that of the quantized network.
Figure 8 describes how to obtain the effective number of parameters for uncompressed networks. Specifically, by varying the size, we find the number of total parameters of the floating-point network that shows the same accuracy as the quantized one. After that, the effective uncompressed size can be computed by multiplying 32 bits by the effective number of parameters. | 1511.06488#28 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
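Equation (6) above can be evaluated directly once the effective parameter count is known. A small helper, under the stated 32-bit float baseline and an n-bit weight representation, might look like the following; the numbers in the usage example are made up for illustration.

```python
# Sketch of the effective compression ratio (ECR) from Eq. (6); illustrative numbers only.
def effective_compression_ratio(effective_num_params: int,
                                quantized_num_params: int,
                                bits_per_weight: int,
                                float_bits: int = 32) -> float:
    """ECR = effective uncompressed size / compressed size."""
    effective_uncompressed_bits = effective_num_params * float_bits
    compressed_bits = quantized_num_params * bits_per_weight
    return effective_uncompressed_bits / compressed_bits

# Example: a 2-bit (ternary) network with 10M weights that matches the accuracy of a
# floating-point network with 4M weights (hypothetical values).
print(effective_compression_ratio(effective_num_params=4_000_000,
                                  quantized_num_params=10_000_000,
                                  bits_per_weight=2))   # -> 6.4
```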
1511.06709 | 28 | Footnote 11: The WMT test sets are held-out from News Crawl.
Figure 1: Turkish→English training and development set (tst2010) cross-entropy as a function of training time (number of training instances) for different systems.
In contrast to phrase-based SMT, which can make use of monolingual data via the language model, NMT has so far not been able to use monolingual data to great effect without requiring architectural changes. We find that the effect of synthetic parallel data is not limited to domain adaptation, and that even out-of-domain synthetic data improves NMT quality, as in our evaluation on IWSLT. The fact that the synthetic data is more effective on the WMT test sets (+2.9 BLEU) than on the IWSLT test sets (+1.2 BLEU) supports the hypothesis that domain adaptation contributes to the effectiveness of adding synthetic data to NMT training.
It is an important finding that back-translated data, which is mainly effective for domain adaptation in phrase-based SMT, is more generally useful in NMT, and has positive effects that go beyond domain adaptation. In the next section, we will investigate further reasons for its effectiveness. | 1511.06709#28 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 28 | 9
even if the end task consists of classifying only a small number of categories, training with more fine-grained categories yields significantly more effective networks. This can also be thought of as a form of transfer learning within the same fine-grained domain, allowing features learned on a related task to be useful for the final classification problem. When permitted access to the annotated ground truth training sets for additional fine-tuning and domain transfer, results increase by another 0.3% on CUB and 1.1% on Birdsnap. | 1511.06789#28 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 29 | Once we get the corresponding effective uncompressed size for the specific network size and the number of quantization bits, the ECR can be computed by (6). The ECRs for the direct and retrain-based quantization for various network sizes and quantization bits are shown in Figure 9. For the direct quantization, 5 bit quantization shows the best ECR except for the layer size of 1024. On the other hand, even 2 bit quantization performs better than the others after retraining. That is, after retraining, a bigger network with extreme ternary (2 bit) quantization is more efficient in terms of
the memory usage for weights than any other smaller networks with higher quantization bits when they are compared at the same accuracy.
# 6 DISCUSSION | 1511.06488#29 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 29 | [Figure 2: cross-entropy vs. training time (training instances ·10^6); train/dev curves for WMTparallel and WMTsynth]
Figure 2: English→German training and development set (newstest2013) cross-entropy as a function of training time (number of training instances) for different systems.
# 4.4 Analysis
We previously indicated that overfitting is a concern with our baseline system, especially on small data sets of several hundred thousand training sentences, despite the regularization employed. This overfitting is illustrated in Figure 1, which plots training and development set cross-entropy by training time for Turkish→English models. For comparability, we measure training set cross-entropy for all models on the same random sample of the parallel training set. We can see that the model quickly overfits the training data, while all three monolingual data sets (parallelsynth, Gigawordmono, or Gigawordsynth) delay overfitting, and give better perplexity on the development set. The best development set cross-entropy is reached by Gigawordsynth. | 1511.06709#29 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 29 | For the aircraft categories in FGVC, results are largely similar but weaker in magnitude. Training on raw web data results in a significant gain of 2.6% compared to using the curated training set, and filtering, which did not affect the size of the training set much (Fig. 5), changes results only slightly in a positive direction. Counterintuitively, pre-training on a larger set of aircraft does not improve results on FGVC. Our hypothesis for the difference between birds and aircraft in this regard is this: since there are many more species of birds in L-Bird than there are aircraft in L-Aircraft (10,982 vs 409), not only is the training size of L-Bird larger, but each training example provides stronger information because it distinguishes between a larger set of mutually-exclusive categories. Nonetheless, when access to the curated training set is available for fine-tuning, performance dramatically increases to 94.5%. On Stanford Dogs we see results similar to FGVC, though for dogs we happen to see a mild loss when comparing to the ground truth training set, not much difference with filtering or using L-Dog, and a large boost from adding in the ground truth training set. | 1511.06789#29 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 30 | the memory usage for weights than any smaller network with higher quantization bits when they are compared at the same accuracy.
# 6 DISCUSSION
In this study, we control the network size by changing the number of units in the hidden layers, the number of feature maps, or the number of layers. In any case, reduced complexity lowers the resiliency to quantization. We are now conducting similar experiments on recurrent neural networks, which are known to be more sensitive to quantization (Shin et al., 2015). This work seems to be directly related to several network optimization methods, such as pruning, fault tolerance, and decomposition (Yu et al., 2012b; Han et al., 2015; Xue et al., 2013; Rigamonti et al., 2013). In pruning, retraining of weights is conducted after zeroing small-valued weights (see the short pruning sketch after this record). The efficiency of pruning, fault tolerance, and network decomposition would depend on the redundant representation capability of DNNs. | 1511.06488#30 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
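Referring to the pruning step mentioned in the discussion above, here is a minimal magnitude-pruning sketch in Python/NumPy; it is an illustrative assumption, not the cited works' exact procedures: the smallest-magnitude weights are zeroed and a mask is kept so that retraining only updates the surviving connections.

```python
import numpy as np

def prune_by_magnitude(w, sparsity):
    """Zero the smallest-magnitude fraction of weights; the mask keeps pruned
    connections at zero during subsequent retraining."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy(), np.ones_like(w)
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = (np.abs(w) > threshold).astype(w.dtype)
    return w * mask, mask

w = np.random.randn(4, 5).astype(np.float32)
w_pruned, mask = prune_by_magnitude(w, sparsity=0.5)  # retrain while multiplying updates by mask
```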
1511.06709 | 30 | for English→German, comparing the system trained on only parallel data and the system that includes synthetic training data. Since more training data is available for English→German, there is no indication that overfitting happens during the first 40 million training instances (or 7 days of training); while both systems obtain comparable training set cross-entropies, the system with synthetic data reaches a lower cross-entropy on the development set. One explanation for this is the domain effect discussed in the previous section.
A central theoretical expectation is that monolingual target-side data improves the model's flu-

system       produced   attested   natural
parallel     1078       53.4%      74.9%
+mono        994        61.6%      84.6%
+synthetic   1217       56.4%      82.5%
Table 9: Number of words in system output that do not occur in parallel training data (countref = 1168), and proportion that is attested in data, or natural according to native speaker. English→German; newstest2015; ensemble systems. | 1511.06709#30 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 30 | An additional factor that can influence performance of web models is domain shift – if images in the ground truth test set have very different visual properties compared to web images, performance will naturally differ. Similarly, if category names or definitions within a dataset are even mildly off, web-based methods will be at a disadvantage without access to the ground truth training set. Adding the ground truth training data fixes this domain shift, making web-trained models quickly recover, with a particularly large gain if the network has already learned a good representation, matching the pattern of results for Stanford Dogs.
Limits of Web-Trained Models. To push our models to their limits, we additionally evaluate using 144 image crops at test time, averaging predictions across each crop, denoted "(MC)" in Tab. 1. This brings results up to 92.3%/92.8% on CUB (without/with CUB training data), 85.4%/85.4% on Birdsnap, 93.4%/95.9% on FGVC, and 80.8%/85.9% on Stanford Dogs. We note that this is close to human expert performance on CUB, which is estimated to be between 93% [6] and 95.6% [58]. | 1511.06789#30 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 31 | This study can be applied to hardware efficient DNN design. For design with limited hardware resources, when the size of the reference DNN is relatively small, it is advised to employ a very low-precision arithmetic and, instead, increase the network complexity as much as the hardware capacity allows. But, when the DNNs are in the performance saturation region, this strategy does not always gain much because growing the "already-big" network size brings almost no performance advantages. This can be observed in Figure 7b and Figure 9b where 6 bit quantization performed best at the largest layer size (1,024).
# 7 CONCLUSION | 1511.06488#31 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 31 | ency, its ability to produce natural target-language sentences. As a proxy to sentence-level fluency, we investigate word-level fluency, specifically words produced as sequences of subword units, and whether NMT systems trained with additional monolingual data produce more natural words. For instance, the English→German systems translate the English phrase civil rights protections as a single compound, composed of three subword units: Bürger|rechts|schutzes12, and we analyze how many of these multi-unit words that the translation systems produce are well-formed German words. | 1511.06709#31 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 31 | Comparison with Prior Work. We compare our results to prior work on CUB, the most competitive ï¬ne-grained dataset, in Tab. 2. While even our baseline model using only ground truth data from Tab. 1 was at state of the art levels, by forgoing the CUB training set and only training using noisy data from the web, our models greatly outperform all prior work. On FGVC, which is more recent and fewer works have evaluated on, the best prior performing
Method Alignments [21] PDD [51] PB R-CNN [75] Weak Sup. [78] PN-DCN [5] Two-Level [66] Consensus [49] NAC [50] FG-Without [29] STN [26] Bilinear [36] Augmenting [69] Noisy Data+CNN [55] Web Training Annotations Acc. 53.6 GT 60.6 GT+BB+Parts 73.9 GT+BB+Parts 75.0 GT 75.7 GT+BB+Parts 77.9 GT 78.3 GT+BB+Parts 81.0 GT 82.0 GT+BB GT 84.1 84.1 GT GT+BB+Parts+Web 84.6 92.3 | 1511.06789#31 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 32 | # 7 CONCLUSION
We analyze the performance of fixed-point deep neural networks, an FFDNN for phoneme recognition and a CNN for image classification, while not only changing the arithmetic precision but also varying their network complexity. The low-precision networks for this analysis are obtained by using the retrain-based quantization method, and the network complexity is controlled by changing the configurations of the hidden layers or feature maps. The performance gap between the floating-point and the fixed-point neural networks with ternary weights (+1, 0, -1) almost vanishes when the DNNs are in the performance saturation region for the given training data. However, when the complexity of DNNs is reduced, by lowering either the number of units, feature maps, or hidden layers, the performance gap between them increases. In other words, a large network that may contain redundant representation capability for the given training data is not hurt by the lowered precision, but a very compact network is.
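A minimal sketch of retrain-based ternary quantization in the spirit described above (Python/NumPy; the threshold choice and update rule are illustrative assumptions, not the paper's exact algorithm): the forward/backward pass uses the quantized weights, while updates are accumulated in the full-precision copy.

```python
import numpy as np

def quantize_ternary(w, delta):
    """Map weights to {-delta, 0, +delta}; |w| <= delta/2 is treated as zero."""
    q = np.zeros_like(w)
    q[w > delta / 2] = delta
    q[w < -delta / 2] = -delta
    return q

def retrain_step(w_float, grad_fn, lr, delta):
    """One retraining step: gradients are computed with quantized weights,
    but applied to the high-precision weights."""
    w_q = quantize_ternary(w_float, delta)
    return w_float - lr * grad_fn(w_q)

# Toy usage with a quadratic loss gradient (hypothetical).
w = np.array([0.8, -0.05, 0.3])
w = retrain_step(w, grad_fn=lambda wq: 2.0 * (wq - 1.0), lr=0.1, delta=0.5)
```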
# ACKNOWLEDGMENTS
This work was supported in part by the Brain Korea 21 Plus Project and the National Re- search Foundation of Korea (NRF) grants funded by the Korea government (MSIP) (No. 2015R1A2A1A10056051).
# REFERENCES | 1511.06488#32 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 32 | We compare the number of words in the system output for the newstest2015 test set which are produced via subword units, and that do not occur in the parallel training corpus. We also count how many of them are attested in the full monolingual corpus or the reference translation, which we all consider "natural". Additionally, the main author, a native speaker of German, annotated a random subset (n = 100) of unattested words of each system according to their naturalness13, distinguishing between natural German words (or names) such as Literatur|klassen "literature classes", and nonsensical ones such as *As|best|atten (a misspelling of Astbestmatten "asbestos mats").
In the results (Table 9), we see that the systems trained with additional monolingual or synthetic data have a higher proportion of novel words attested in the non-parallel data, and a higher proportion that is deemed natural by our annotator. This supports our expectation that additional monolingual data improves the (word-level) fluency of the NMT system.
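A simplified sketch of this word-level analysis (Python; vocabularies and words are toy placeholders): merge subword units into full words, keep those unseen in the parallel training data, and measure which share is attested in the monolingual data or the reference.

```python
def novel_word_stats(output_words, parallel_vocab, attested_vocab):
    """Return the number of novel words and the fraction of them attested."""
    novel = [w for w in output_words if w not in parallel_vocab]
    attested = sum(1 for w in novel if w in attested_vocab)
    return len(novel), (attested / len(novel)) if novel else 0.0

# Toy example with '|' marking subword boundaries, as in the text above.
output = ["Bürger|rechts|schutzes".replace("|", ""), "Literatur|klassen".replace("|", "")]
n_novel, frac_attested = novel_word_stats(output,
                                           parallel_vocab={"Haus"},
                                           attested_vocab={"Literaturklassen"})
```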
12 Subword boundaries are marked with '|'. 13 For the annotation, the words were blinded regarding the
system that produced them.
# 5 Related Work | 1511.06709#32 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 32 | Table 2. Comparison with prior work on CUB-200-2011 [60]. We only include methods which use no annotations at test time. Here 'GT' refers to using ground truth category labels in the training set of CUB, 'BBox' indicates using bounding boxes, and 'Parts' uses part annotations.
method we are aware of is the Bilinear CNN model of Lin et al. [36], which has accuracy 84.1% (ours is 93.4% without FGVC training data, 95.9% with), and on Birdsnap, which is even more recent, the best performing method we are aware of that uses no extra annotations during test time is the original 66.6% by Berg et al. [4] (ours is 85.4%). On Stanford Dogs, the most competitive related work is [46], which uses an attention-based recurrent neural network to achieve 76.8% (ours is 80.8% without ground truth training data, 85.9% with). | 1511.06789#32 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 33 | # REFERENCES
Anwar, Sajid, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point optimization of deep convolutional neural networks for object recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1131–1135. IEEE, 2015.
Chen, Chenyi, Seff, Ari, Kornhauser, Alain, and Xiao, Jianxiong. Deepdriving: Learning affordance for direct perception in autonomous driving. arXiv preprint arXiv:1505.00256, 2015.
Corradini, Maria Letizia, Giantomassi, Andrea, Ippoliti, Gianluca, Longhi, Sauro, and Orlando, Giuseppe. Robust control of robot arms via quasi sliding modes and neural networks. In Advances and Applications in Sliding Mode Control systems, pp. 79â105. Springer, 2015.
Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. Binaryconnect: Training deep neu- ral networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, 2015.
| 1511.06488#33 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 33 | 12 Subword boundaries are marked with '|'. 13 For the annotation, the words were blinded regarding the
system that produced them.
# 5 Related Work
To our knowledge, the integration of monolingual data for pure neural machine translation architectures was first investigated by (Gülçehre et al., 2015), who train monolingual language models independently, and then integrate them during decoding through rescoring of the beam (shallow fusion), or by adding the recurrent hidden state of the language model to the decoder state of the encoder-decoder network, with an additional controller mechanism that controls the magnitude of the LM signal (deep fusion). In deep fusion, the controller parameters and output parameters are tuned on further parallel training data, but the language model parameters are fixed during the finetuning stage. Jean et al. (2015b) also report on experiments with reranking of NMT output with a 5-gram language model, but improvements are small (between 0.1–0.5 BLEU).
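As a concrete, hedged illustration of the rescoring idea mentioned above, the sketch below combines translation-model and language-model scores log-linearly over an n-best list; the weight beta and all scores are hypothetical, and this is not Gülçehre et al.'s exact formulation.

```python
def shallow_fusion_rescore(nbest, lm_logprob, beta=0.3):
    """Pick the hypothesis maximising TM log-prob + beta * LM log-prob."""
    return max(nbest, key=lambda item: item[1] + beta * lm_logprob(item[0]))[0]

# Toy n-best list of (hypothesis, translation-model log-probability) pairs.
nbest = [("das ist gut", -4.1), ("das ist gute", -3.9)]
best = shallow_fusion_rescore(nbest, lm_logprob=lambda h: -2.0 if h.endswith("gut") else -5.0)
```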
The production of synthetic parallel texts bears resemblance to data augmentation techniques used in computer vision, where datasets are often augmented with rotated, scaled, or otherwise distorted variants of the (limited) training set (Rowley et al., 1996). | 1511.06709#33 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 33 | We identify two key reasons for these large improvements: The first is the use of a strong generic classifier [55]. A number of prior works have identified the importance of having well-trained CNNs as components in their systems for fine-grained recognition [36,26,29,75,5], which our work provides strong evidence for. On all four evaluation datasets, our CNN of choice [55], trained on the ground truth training set alone and without any architectural modifications, performs at levels at or above the previous state-of-the-art. The second reason for improvement is the large utility of noisy web data for fine-grained recognition, which is the focus of this work.
We ï¬nally remind the reader that our work focuses on the application-level problem of recognizing a given set of ï¬ne-grained categories, which might not come with their own expert-annotated training images. The use of existing test sets serves to provide an accurate measure of performance and put our work in a larger context, but results may not be strictly comparable with prior work that operates within a single given dataset. | 1511.06789#33 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 34 |
Fiesler, Emile, Choudry, Amar, and Caulfield, H John. Weight discretization paradigm for optical neural networks. In The Hague '90, 12-16 April, pp. 164–173. International Society for Optics and Photonics, 1990.
Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. 2015.
Holt, Jordan L and Baker, Thomas E. Back propagation simulations using limited precision calcula- tions. In Neural Networks, 1991., IJCNN-91-Seattle International Joint Conference on, volume 2, pp. 121â126. IEEE, 1991.
Hussain, B Zahir M et al. Short word-length lms ï¬ltering. In Signal Processing and Its Applications, 2007. ISSPA 2007. 9th International Symposium on, pp. 1â4. IEEE, 2007.
Hwang, Kyuyeon and Sung, Wonyong. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1â6. IEEE, 2014. | 1511.06488#34 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 34 | Another similar avenue of research is self-training (McClosky et al., 2006; Schwenk, 2008). The main difference is that self-training typically refers to a scenario where the training set is enhanced with training instances with artificially produced output labels, whereas we start with human-produced output (i.e. the translation), and artificially produce an input. We expect that this is more robust towards noise in the automatic translation. Improving NMT with monolingual source data, following similar work on phrase-based SMT (Schwenk, 2008), remains possible future work. Domain
adaptation of neural networks via continued training has been shown to be effective for neural language models by (Ter-Sarkisov et al., 2015), and in parallel work, for translation models (Luong and Manning, 2015). We are the first to show that we can effectively adapt neural translation models with monolingual data.
# 6 Conclusion
In this paper, we propose two simple methods to use monolingual training data during training of NMT systems, with no changes to the network | 1511.06709#34 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 34 | Comparison with Active Learning. We compare using noisy web data with a more traditional active learning-based approach (Sec. 4) under several different settings in Tab. 3. We first verify the efficacy of active learning itself: when training the network from scratch (i.e. no fine-tuning), active learning improves performance by up to 15.6%, and when fine-tuning, results still improve by 1.5%. How does active learning compare to using web data? Purely using filtered web data compares favorably to non-fine-tuned active learning methods (4.4% better), though lags behind the fine-tuned models somewhat. To better compare
| 1511.06789#34 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 35 | Jalab, Hamid A, Omer, Herman, et al. Human computer interface using hand gesture recognition based on neural network. In Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, pp. 1â6. IEEE, 2015.
Kim, Jonghong, Hwang, Kyuyeon, and Sung, Wonyong. X1000 real-time phoneme recognition VLSI using feed-forward deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 7510–7514. IEEE, 2014.
Krizhevskey, A. CUDA-convnet, 2014.
Moerland, Perry and Fiesler, Emile. Neural network adaptations to hardware implementations. Technical report, IDIAP, 1997.
Ovtcharov, Kalin, Ruwase, Olatunji, Kim, Joo-Young, Fowers, Jeremy, Strauss, Karin, and Chung, Eric S. Accelerating deep convolutional neural networks using specialized hardware. Microsoft Research Whitepaper, 2, 2015. | 1511.06488#35 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 35 | # 6 Conclusion
In this paper, we propose two simple methods to use monolingual training data during training of NMT systems, with no changes to the network
architecture. Providing training examples with dummy source context was successful to some extent, but we achieve substantial gains in all tasks, and new SOTA results, via back-translation of monolingual target data into the source language, and treating this synthetic data as additional training data. We also show that small amounts of in-domain monolingual data, back-translated into the source language, can be effectively used for domain adaptation. In our analysis, we identified domain adaptation effects, a reduction of overfitting, and improved fluency as reasons for the effectiveness of using monolingual data for training.
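A minimal data-preparation sketch of the back-translation recipe summarised above (Python; `backtranslate_fn` stands for any target-to-source translation system and is an assumed placeholder):

```python
def build_synthetic_parallel(mono_target, backtranslate_fn):
    """Pair monolingual target sentences with automatic back-translations into
    the source language, yielding additional (source, target) training pairs."""
    return list(zip(backtranslate_fn(mono_target), mono_target))

def mix_training_data(parallel_pairs, synthetic_pairs, ratio=1.0):
    """Combine human-translated and synthetic pairs, e.g. in a 1-1 ratio."""
    return parallel_pairs + synthetic_pairs[: int(len(parallel_pairs) * ratio)]
```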
While our experiments did make use of mono- lingual training data, we only used a small ran- dom sample of the available data, especially for the experiments with synthetic parallel data. It is conceivable that larger synthetic data sets, or data sets obtained via data selection, will provide big- ger performance beneï¬ts. | 1511.06709#35 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 35 | 12 Krause et al.
Table 3. Active learning-based results [27] on Stanford Dogs, presented in terms of top-1 accuracy. Methods with "(scratch)" indicate training from scratch and "(ft)" indicates fine-tuning from a network pre-trained on ILSVRC, with web models also fine-tuned. "subsample" refers to downsampling the active learning data to be the same size as the filtered web images. Note that Stanford-GT is a subset of active learning data, which is denoted "A.L.".
Training Procedure                 Acc.
Stanford-GT (scratch)              58.4
A.L., one round (scratch)          65.8
A.L., two rounds (scratch)         74.0
Stanford-GT (ft)                   80.6
A.L., one round (ft)               81.6
A.L., one round (ft, subsample)    78.8
A.L., two rounds (ft)              82.1
Web (filtered)                     78.4
Web (filtered) + Stanford-GT       82.6 | 1511.06789#35 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 36 | Rigamonti, Roberto, Sironi, Amos, Lepetit, Vincent, and Fua, Pascal. Learning separable ï¬lters. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 2754â2761. IEEE, 2013.
Sak, Has¸im, Senior, Andrew, Rao, Kanishka, and Beaufays, Franc¸oise. Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv preprint arXiv:1507.06947, 2015.
Shin, Sungho, Hwang, Kyuyeon, and Sung, Wonyong. Fixed point performance analysis of recurrent neural networks. arXiv preprint arXiv:1512.01322, 2015.
Sung, Wonyong and Kum, Ki-II. Simulation-based word-length optimization method for ï¬xed-point digital signal processing systems. Signal Processing, IEEE Transactions on, 43(12):3087â3090, 1995.
Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012. | 1511.06488#36 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 36 | Because we do not change the neural network architecture to integrate monolingual training data, our approach can be easily applied to other NMT systems. We expect that the effectiveness of our approach not only varies with the quality of the MT system used for back-translation, but also depends on the amount (and similarity to the test set) of available parallel and monolingual data, and the extent of overfitting of the baseline model. Future work will explore the effectiveness of our approach in more settings.
# Acknowledgments
The research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland. This project received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 645452 (QT21).
# References
[Bahdanau et al.2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Trans- late. In Proceedings of the International Conference on Learning Representations (ICLR).
[Bertoldi and Federico2009] Nicola Bertoldi and Mar- cello Federico. 2009. Domain adaptation for sta- tistical machine translation with monolingual re- sources. In Proceedings of the Fourth Workshop on | 1511.06709#36 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 36 | the active learning and noisy web data, we factor out the difference in scale by performing an experiment with subsampled active learning data, setting it to be the same size as the filtered web data. Surprisingly, performance is very similar, with only a 0.4% advantage for the cleaner, annotated active learning data, highlighting the effectiveness of noisy web data despite the lack of manual annotation. If we furthermore augment the filtered web images with the Stanford Dogs training set, which the active learning method notably used both as training data and its seed set of images, performance improves to even be slightly better than the manually-annotated active learning data (0.5% improvement).
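The size-controlled comparison described above can be reproduced with a simple subsampling step; a hedged sketch (Python), where the training sets are just lists of labelled examples:

```python
import random

def subsample_to_match(larger_set, reference_set, seed=0):
    """Randomly downsample one training set to the size of another, so that
    accuracy differences reflect data quality rather than quantity."""
    rng = random.Random(seed)
    return rng.sample(larger_set, min(len(reference_set), len(larger_set)))
```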
These experiments indicate that, while more traditional active learning-based approaches towards expanding datasets are eï¬ective ways to improve recognition performance given a suitable budget, simply using noisy images retrieved from the web can be nearly as good, if not better. As web images require no manual annotation and are openly available, we believe this is strong evidence for their use in solving ï¬ne-grained recognition. | 1511.06789#36 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
Xue, Jian, Li, Jinyu, and Gong, Yifan. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365–2369, 2013.
Yu, Dong, Deng, Alex Acero, Dahl, George, Seide, Frank, and Li, Gang. More data + deeper model = better accuracy. In keynote at International Workshop on Statistical Machine Learning for Speech Processing, 2012a.
Yu, Dong, Seide, Frank, Li, Gang, and Deng, Li. Exploiting sparseness in deep neural networks for large vocabulary speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 4409–4412. IEEE, 2012b.
| 1511.06488#37 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
Statistical Machine Translation StatMT 09. Association for Computational Linguistics.
[Bojar et al.2015] Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Association for Computational Linguistics.
[Brown et al.1990] P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, F. Jelinek, J.D. Lafferty, R.L. Mercer, and P.S. Roossin. 1990. A Statistical Approach to Machine Translation. Computational Linguistics, 16(2):79–85. | 1511.06709#37 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
Very Large-Scale Fine-Grained Recognition. A key advantage of using noisy data is the ability to scale to large numbers of fine-grained classes. However, this poses a challenge for evaluation: it is infeasible to manually annotate images with one of the 10,982 categories in L-Bird, 14,553 categories in L-Butterfly, and would even be very time-consuming to annotate images with the 409 categories in L-Aircraft. Therefore, we turn to an approximate evaluation, establishing a rough estimate on true performance. Specifically, we query Flickr for up to 25 images of each category, keeping only those images whose title strictly contains the name of each category, and aggressively deduplicate these images with our training set in order to ensure a fair evaluation. Although this is not a perfect evaluation set, and is thus an area where annotation of fine-grained datasets is particularly valuable [58], we find that it is remarkably clean on the surface: based on a 1,000-image estimate, we measure the cross-domain noise of L-Bird at only 1%, L-Butterfly at 2.3%, and L-Aircraft at 4.5%. An independent evaluation [58] further measures all sources of noise combined to be only 16% when searching | 1511.06789#37 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
[Cettolo et al.2012] Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web Inventory of Transcribed and Translated Talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261–268, Trento, Italy.
[Cettolo et al.2014] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT Evaluation Campaign, IWSLT 2014. In Proceedings of the 11th Workshop on Spoken Language Translation, pages 2–16, Lake Tahoe, CA, USA.
[Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics. | 1511.06709#38 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
Fig. 10. Classification results on very large-scale fine-grained recognition. From top to bottom, depicted are examples of categories in L-Bird, L-Butterfly, and L-Aircraft, along with their category name. The first examples in each row are correctly predicted by our models, while the last two examples in each row are errors, with our prediction in grey and correct category (according to Flickr metadata) printed below.
for bird species. In total, this yields 42,115 testing images for L-Bird, 42,046 for L-Butterfly, and 3,131 for L-Aircraft. | 1511.06789#38 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
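The evaluation-set construction described in the chunk above (query Flickr for up to 25 images per category and keep only results whose title strictly contains the category name) can be illustrated with the minimal sketch below. This is not the authors' code: `flickr_search` is a hypothetical stand-in for the actual search API, assumed to yield dicts with "title" and "url" fields; the aggressive deduplication against the benchmark test sets is a separate step, sketched further below.

```python
from typing import Callable, Dict, Iterable, List


def build_eval_set(categories: List[str],
                   flickr_search: Callable[[str], Iterable[dict]],
                   max_per_category: int = 25) -> Dict[str, List[str]]:
    """Keep at most `max_per_category` images per class, and only those whose
    title strictly contains the category name (case-insensitive)."""
    eval_set: Dict[str, List[str]] = {}
    for category in categories:
        kept: List[str] = []
        for result in flickr_search(category):
            if category.lower() in result["title"].lower():
                kept.append(result["url"])
            if len(kept) >= max_per_category:
                break
        eval_set[category] = kept
    return eval_set
```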
2011. Practical Variational Inference for Neural Networks. In J. Shawe-Taylor, R.S. Zemel, P.L. Bartlett, F. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2348–2356. Curran Associates, Inc.
[Gülçehre et al.2015] Çaglar Gülçehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loïc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On Using Monolingual Corpora in Neural Machine Translation. CoRR, abs/1503.03535.
[Haddow et al.2015] Barry Haddow, Matthias Huck, Alexandra Birch, Nikolay Bogoychev, and Philipp Koehn. 2015. The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126–133, Lisbon, Portugal. Association for Computational Linguistics. | 1511.06709#39 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
Given the difficulty and noise, performance is surprisingly high: On L-Bird top-1 accuracy is 73.1%/75.8% (1/144 crops), for L-Butterfly it is 65.9%/68.1%, and for L-Aircraft it is 72.7%/77.5%. Corresponding mAP numbers, which are better suited for handling class imbalance, are 61.9, 54.8, and 70.5, reported for the single crop setting. We show qualitative results in Fig. 10. These categories span multiple continents in space (birds, butterflies) and decades in time (aircraft), demonstrating the breadth of categories in the world that can be recognized using only public sources of noisy fine-grained data. To the best of our knowledge, these results represent the largest number of fine-grained categories distinguished by any single system to date. | 1511.06789#39 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
[Hinton et al.2012] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.
[Jean et al.2015a] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On Using Very Large Target Vocabulary for Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1–10, Beijing, China. Association for Computational Linguistics.
[Jean et al.2015b] Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal Neural Machine Translation Systems for WMT'15. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 134–140, Lisbon, Portugal. Association for Computational Linguistics. | 1511.06709#40 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
How Much Data is Really Necessary? In order to better understand the utility of noisy web data for fine-grained recognition, we perform a control experiment on the web data for CUB. Using the filtered web images as a base, we train models using progressively larger subsets of the results as training data, taking the top ranked images across categories for each experiment. Performance versus the amount of training data is shown in Fig. 11. Surprisingly, relatively few web images are required to do as well as training on the CUB training set, and adding more noisy web images always helps, even when at the limit of search results. Based on this analysis, we estimate that one noisy web image for CUB categories is "worth" 0.507 ground truth training images [57].
Error Analysis. Given the high performance of these models, what room is left for improvement? In Fig. 12 we show the taxonomic distribution of the remaining
(Figure 11 plot, "Impact of Training Data Quantity": accuracy versus the number of web training images, 10k to 90k, with a CUB-GT baseline.)
(Figure 12 plot, "Portion of Errors vs. Taxonomic Rank": percentage of errors whose least common ancestor is at the Genus, Family, Order, or Class level.) | 1511.06789#40 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
[Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
[Lambert et al.2011] Patrik Lambert, Holger Schwenk, Christophe Servan, and Sadaf Abdul-Rauf. 2011. Investigations on Translation Model Adaptation Using Monolingual Data. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 284–293, Edinburgh, Scotland. Association for Computational Linguistics.
[Luong and Manning2015] Minh-Thang Luong and Christopher D. Manning. 2015. Stanford Neural Machine Translation Systems for Spoken Language Domains. In Proceedings of the International Workshop on Spoken Language Translation 2015, Da Nang, Vietnam. | 1511.06709#41 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
(Figure 12 plot, "Portion of Errors vs. Taxonomic Rank": percentage of errors whose least common ancestor is at the Genus, Family, Order, or Class level.)
Fig. 11. Number of web images used for training vs. performance on CUB-200-2011 [60]. We vary the amount of web training data in multiples of the CUB training set size (5,994 images). Also shown is performance when training on the ground truth CUB training set (CUB-GT).
Fig. 12. The errors on L-Bird that fall in each taxonomic rank, represented as a portion of all errors made. For each error made, we calculate the taxonomic rank of the least common ancestor of the predicted and test category.
errors on L-Bird. The vast majority of errors (74.3%) are made between very similar classes at the genus level, indicating that most of the remaining errors are indeed between extremely similar categories, and only very few errors (7.4%) are made between dissimilar classes, whose least common ancestor is the "Aves" (i.e. Bird) taxonomic class. This suggests that most errors still made by the models are fairly reasonable, corroborating the qualitative results of Fig. 10.
# 6 Discussion | 1511.06789#41 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
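The taxonomic breakdown of errors shown in Fig. 12 can be computed with a routine like the sketch below. The `taxonomy` mapping (species name to its genus, family, order, and class) is an assumed input; the paper does not specify how it is implemented.

```python
from collections import Counter
from typing import Dict, Iterable, Tuple

# Ranks from most to least specific; the least common ancestor of the
# predicted and true species is the first rank at which the two agree.
RANKS = ["genus", "family", "order", "class"]


def lca_rank(pred: str, true: str, taxonomy: Dict[str, Dict[str, str]]) -> str:
    for rank in RANKS:
        if taxonomy[pred][rank] == taxonomy[true][rank]:
            return rank
    return "class"  # all bird species share the class "Aves"


def error_breakdown(errors: Iterable[Tuple[str, str]],
                    taxonomy: Dict[str, Dict[str, str]]) -> Dict[str, float]:
    """errors: (predicted_species, true_species) pairs for misclassified images.
    Returns the portion of errors (in percent) attributed to each rank."""
    counts = Counter(lca_rank(p, t, taxonomy) for p, t in errors)
    total = max(sum(counts.values()), 1)
    return {rank: 100.0 * counts[rank] / total for rank in RANKS}
```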
[Luong et al.2015] Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics.
[McClosky et al.2006] David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective Self-training for Parsing. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06, pages 152–159, New York. Association for Computational Linguistics.
[Rowley et al.1996] Henry Rowley, Shumeet Baluja, and Takeo Kanade. 1996. Neural Network-Based Face Detection. In Computer Vision and Pattern Recognition '96.
[Sak et al.2007] Haşim Sak, Tunga Güngör, and Murat Saraçlar. 2007. Morphological Disambiguation of Turkish Text with Perceptron Algorithm. In CICLing 2007, pages 107–118. | 1511.06709#42 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 42 | # 6 Discussion
In this work we have demonstrated the utility of noisy data toward solving the problem of fine-grained recognition. We found that the combination of a generic classification model and web data, filtered with a simple strategy, was surprisingly effective at discriminating fine-grained categories. This approach performs favorably when compared to a more traditional active learning method for expanding datasets, but is even more scalable, which we demonstrated experimentally on up to 14,553 fine-grained categories. One potential limitation of the approach is the availability of imagery for categories either not found or not described in the public domain, for which an alternative method such as active learning may be better suited. Another limitation is the current focus on classification, which may be problematic if applications arise where multiple objects are present or localization is otherwise required. Nonetheless, with these insights on the unreasonable effectiveness of noisy data, we are optimistic for applications of fine-grained recognition in the near future.
# 7 Acknowledgments
We thank Gal Chechik, Chuck Rosenberg, Zhen Li, Timnit Gebru, Vignesh Ramanathan, Oliver Groth, and the anonymous reviewers for valuable feedback.
| 1511.06789#42 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
[Schwenk2008] Holger Schwenk. 2008. Investigations on Large-Scale Lightly-Supervised Training for Statistical Machine Translation. In International Workshop on Spoken Language Translation, pages 182–189.
[Sennrich and Haddow2015] Rico Sennrich and Barry Haddow. 2015. A Joint Dependency Model of Morphological and Syntactic Structure for Statistical Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2081–2087, Lisbon, Portugal. Association for Computational Linguistics.
[Sennrich et al.2016] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112, Montreal, Quebec, Canada. | 1511.06709#43 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
# Appendix
# A Active Learning Details
Here we provide additional details for our active learning baseline, including further description of the interface, improvements in rater quality as a result of this interface, statistics of the number of positives obtained per class in each round of active learning, and qualitative examples of images obtained.
# A.1 Interface
Designing an effective rater tool is of critical importance when getting non-experts to rate fine-grained categories. We seek to give the raters simple decisions and to provide them with as much information as possible to make the correct decision in a generic and scalable way. Fig. 13 shows our rater interface, which includes the following components to serve this purpose:
Instructional positive images inform the rater of within-class variation. These images are obtained from the seed dataset input to active learning. Many rater tools only provide this (e.g. [35]), which does not provide a clear class boundary concept on its own. We also provide links to Google Image Search and encourage raters to research the full space of examples of the class concept. | 1511.06789#43 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
[Ter-Sarkisov et al.2015] Alex Ter-Sarkisov, Holger Schwenk, Fethi Bougares, and Loïc Barrault. 2015. Incremental Adaptation Strategies for Neural Network Language Models. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 48–56, Beijing, China. Association for Computational Linguistics.
[Tyers and Alperen2010] Francis M. Tyers and Murat S. Alperen. 2010. SETimes: A parallel corpus of Balkan languages. In Workshop on Exploitation of multilingual resources and tools for Central and (South) Eastern European Languages at the Language Resources and Evaluation Conference, pages 1–5. | 1511.06709#44 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
Instructional negative images help raters define the decision boundary between the right class and easily confused other classes. We show the top two most confused categories, determined by the active learning's current model. This aids in classification: in Fig. 13, if the rater studies the positive class "Bernese mountain dog", they may form a mental decision rule based on fur color pattern alone. However, when studying the negative, easily confused classes "Entlebucher" and "Appenzeller", the rater can refine the decision on more appropriate fine-grained distinctions; in this case, hair length is a key discriminative attribute.
Batching questions by class has the benefit of allowing raters to learn about and focus on one fine-grained category at a time. Batching questions may also allow raters to build a better mental model of the class via a human form of semi-supervised learning, although this phenomenon is more difficult to isolate and measure. | 1511.06789#44 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
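For the "instructional negative images" described above, the two most-confused categories for a target class could be read off a confusion matrix of the active learner's current model, roughly as in this sketch; the production system's exact selection rule is not specified in the text.

```python
import numpy as np


def most_confused_negatives(confusion: np.ndarray, target: int, k: int = 2) -> np.ndarray:
    """confusion[i, j] counts images of class i predicted as class j.
    Returns the indices of the k classes most confused with `target`."""
    row = confusion[target].astype(float).copy()
    row[target] = -1.0  # exclude the target class itself
    return np.argsort(row)[::-1][:k]
```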
Golden questions for rater feedback and quality control. We use the original supervised seed dataset to add a number of known correct and incorrect images in the batch to be rated, which we use to give short- and long-term feedback to raters. Short-term feedback comes in the form of a pop-up window informing the rater the moment they make an incorrect judgment, allowing
(Figure 13 screenshot: panels labeled "Correct examples" and "Unknowns, please rate:".)
Fig. 13. Our tool for binary annotation of ï¬ne-grained categories. Instructional posi- tive images are provided in the upper left and negatives are provided in the lower left. This is a higher-resolution version of the ï¬gure in the main text.
them to update their mental model while working on the task. Long-term feedback summarizes a day's worth of rating to give the rater a summary of overall performance.
# A.2 Rater Quality Improvements | 1511.06789#45 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 46 | # A.2 Rater Quality Improvements
To determine the impact of our annotation framework improvements for fine-grained categories, we performed a control experiment with a more standard crowdsourcing interface, which provides only a category name, description, and image search link. Annotation quality is determined on a set of difficult binary questions (images mistaken by a classifier on the Stanford Dogs test set). Using our interface, annotators were both more accurate and faster, with a 16.5% relative reduction in error (from 28.5% to 23.8%) and a 2.4× improvement in speed (4.1 to 1.68 seconds per image).
# A.3 Annotation Statistics and Examples | 1511.06789#46 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
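As a quick check of the figures in the rater-quality experiment above: the relative error reduction is (28.5 - 23.8) / 28.5 = 4.7 / 28.5 ≈ 0.165, i.e. 16.5%, and the speedup is 4.1 / 1.68 ≈ 2.44, consistent with the reported 2.4×.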
1511.06789 | 47 | # A.3 Annotation Statistics and Examples
In Fig. 14 we show the distribution of images judged correct by human annotators after active learning selection of 1000 images per class for Stanford Dogs classes. The categories are sorted by the number of positive training examples collected in the first iteration of active learning. The 10 categories with the most positive training examples collected after both rounds of mining are: Pug, Golden Retriever, Boston Terrier, West Highland White Terrier, Labrador Retriever, Boxer, Maltese, German Shepherd, Pembroke Welsh Corgi, and Beagle. The 10 categories with the fewest positive training examples are: Kerry Blue Terrier, Komondor, Irish Water Spaniel, Curly Coated Retriever, Bouvier des Flandres, Clumber Spaniel, Bedlington Terrier, Afghan Hound, Affenpinscher,
(Figure 14 plot: number of positive images per class versus class id, with separate series for active learning rounds 1 and 2.)
Fig. 14. Counts of positive training examples obtained per category from active learning, for the Stanford Dogs dataset. | 1511.06789#47 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
Fig. 14. Counts of positive training examples obtained per category from active learning, for the Stanford Dogs dataset.
and Sealyham Terrier. These counts are influenced by the true counts of categories in the YFCC100M [56] dataset and our active learner's ability to find them.
In Fig. 15, we show positive training examples obtained from active learning for select categories, comparing examples obtained in iterations 1 and 2.
# B Deduplication Details | 1511.06789#48 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
Here we provide more details on our method for removing any ground truth images from web search results, which we took great care in doing. Our general approach follows Wang et al. [64], which is a state of the art method for learning a similarity metric between images. To scale [64] to the millions of images considered in this work, we binarize the output for an efficient hashing-based exact search. Hamming distance corresponds to dissimilarity: identical images have distance 0, images with different resolutions, aspect ratios, or slightly different crops tend to have distances of up to roughly 4 and 8, and more substantial variations, e.g. images of different views from the same photographer, or very different crops, roughly have distances up to 10, beyond which the vast majority of image pairs are actually distinct. Qualitative examples are provided in Fig. 16. We tuned our dissimilarity threshold for recall and manually verified it; the goal is to ensure that images that have even a moderate degree of similarity to test images did not appear in our training set. For example, of a sample of 183 image pairs at distance 16 | 1511.06789#49 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 50 | to ensure that images that have even a moderate degree of similarity to test images did not appear in our training set. For example, of a sample of 183 image pairs at distance 16 in the large-scale bird experiments, zero were judged by a human to be too similar, and we used a still more conservative threshold of 18. In the case of L-Bird, 2,996 images were removed as being too similar to an image in either the CUB or Birdsnap test set. | 1511.06789#50 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
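The deduplication step described in the two records above can be illustrated with the sketch below. It is not the authors' implementation: integer bit codes stand in for the binarized similarity embedding of Wang et al. [64], and a brute-force scan stands in for the hashing-based exact search used at scale; the distance-18 threshold matches the value reported above.

```python
from typing import Dict, Set

HAMMING_THRESHOLD = 18  # conservative threshold reported above


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integer bit codes."""
    return bin(a ^ b).count("1")


def flag_near_duplicates(train_codes: Dict[str, int],
                         test_codes: Dict[str, int]) -> Set[str]:
    """Return ids of training images within the Hamming threshold of any test
    image. A real system would use a hash-based lookup rather than this
    O(|train| * |test|) scan."""
    flagged: Set[str] = set()
    for train_id, train_code in train_codes.items():
        for test_code in test_codes.values():
            if hamming(train_code, test_code) <= HAMMING_THRESHOLD:
                flagged.add(train_id)
                break
    return flagged
```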
(Figure 15 examples include Pembroke Welsh Corgi, Airedale Terrier, Siberian Husky, Komondor, Pomeranian, Samoyed, Bernese Mountain Dog, French Bulldog, German Shorthaired Pointer, and Chihuahua.)
Fig. 15. Positive training examples obtained from active learning, from the YFCC100M dataset, for select categories from Stanford Dogs.
# C Remaining Errors: Qualitative
Here we highlight one type of error that our image search model made on CUB [62]: finding errors in the test set. We show an example in Fig. 17, where the true species for each image is actually a bird species not in the 200 CUB bird species. This highlights one potential advantage of our approach: by relying on category names, web training data is tied more strongly to the semantic meaning of a category instead of simply a 1-of-K label. This also provides evidence for the "domain shift" hypothesis when fine-tuning on ground truth datasets, as irregularities like this can be learned, resulting in higher performance on the benchmark dataset under consideration.
# D Network Visualization | 1511.06789#51 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 52 | # D Network Visualization
In order to examine the impact of web-trained models of fine-grained recognition from another vantage point, here we present one visualization of network internals. Specifically, in Fig. 18 we visualize gradients with respect to the square of the norm of the last convolutional layer in the network, backpropagated into the input image, and visualized as a function of training data. This provides some indication of the importance of each pixel with respect to the overall network activation. Though these examples are only qualitative, we observe that the gradients for the network trained on L-Bird are generally more focused on the bird when compared to gradients for the network trained on CUB, indicating that the network has learned a better representation of which parts of an image are discriminative.
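A minimal sketch of the Fig. 18 visualization described above: backpropagate the squared norm of the last convolutional feature map into the input pixels and rescale the result for display. The torchvision ResNet-18 backbone and the exact rescaling are illustrative stand-ins; substitute the network actually being inspected.

```python
import torch
from torchvision import models

def conv_norm_gradient(image: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    """image: (1, 3, H, W). Returns per-pixel gradient magnitudes scaled to [0, 255]."""
    x = image.clone().requires_grad_(True)
    trunk = torch.nn.Sequential(*list(model.children())[:-2])  # drop pooling + classifier
    feats = trunk(x)                      # last convolutional feature map
    score = (feats ** 2).sum()            # squared norm of the activation
    score.backward()                      # gradients w.r.t. the input image
    g = x.grad.detach().abs()
    return 255.0 * (g - g.min()) / (g.max() - g.min() + 1e-8)

# saliency = conv_norm_gradient(torch.rand(1, 3, 224, 224), models.resnet18(weights=None).eval())
```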
Distance Distance 0 7 1 8 2 9 3 10 4 11 5 12 6 | 1511.06789#52 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 53 | Fig. 16. Example pairs of images and their distance according to our deduplication method. Distances 1-3 have slight pixel-level diï¬erences due to compression and the image pair at distance 4 have diï¬erent scales. At distances 5 and 6 the images are of diï¬erent crops, with distance 6 additionally exhibiting slight lighting diï¬erences. The images at distance 7 have slightly diï¬erent scales and compression, at distance 8 there are cropping and lighting diï¬erences, and distance 9 features diï¬erent crops and additional text in the corner of one photo. At distance 10 and higher we have image pairs which have high-level visual similarities but are distinct.
Fig. 17. Examples of mistakes made by a web-trained model on the CUB-200-2011 [62] test set, whose ground truth label is "Hooded Oriole", but which are actually of another species not in CUB, "Black-Hooded Oriole."
Image CUB-200 L-Bird Image CUB-200 L-Bird | 1511.06789#53 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 54 | Fig. 18. Gradients with respect to the squared norm of the last convolutional layer on ten random CUB test set images. Each row contains, in order, an input image, gradients for a model trained on the CUB-200 [62] training set, and gradients for a model trained on the larger L-Bird. Gradients have been scaled to ï¬t in [0,255]. Figure best viewed in high resolution on a monitor.
# References
1. Angelova, A., Zhu, S., Lin, Y.: Image segmentation for large-scale subcategory ï¬ower recognition. In: Workshop on Applications of Computer Vision (WACV). pp. 39â45. IEEE (2013)
2. Balcan, M.F., Broder, A., Zhang, T.: Margin based active learning. In: Learning Theory, pp. 35â50. Springer (2007)
3. Berg, T., Belhumeur, P.N.: Poof: Part-based one-vs.-one features for ï¬ne-grained categorization, face veriï¬cation, and attribute estimation. In: Computer Vision and Pattern Recognition (CVPR). pp. 955â962. IEEE (2013) | 1511.06789#54 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 55 | 4. Berg, T., Liu, J., Lee, S.W., Alexander, M.L., Jacobs, D.W., Belhumeur, P.N.: Birdsnap: Large-scale ï¬ne-grained visual categorization of birds. In: Computer Vi- sion and Pattern Recognition (CVPR) (June 2014)
5. Branson, S., Van Horn, G., Perona, P., Belongie, S.: Improved bird species recog- nition using pose normalized deep convolutional nets. In: British Machine Vision Conference (BMVC) (2014)
6. Branson, S., Van Horn, G., Wah, C., Perona, P., Belongie, S.: The ignorant led by the blind: A hybrid humanâmachine vision system for ï¬ne-grained categorization. International Journal of Computer Vision (IJCV) pp. 1â27 (2014)
7. Chai, Y., Lempitsky, V., Zisserman, A.: Bicos: A bi-level co-segmentation method for image classiï¬cation. In: International Conference on Computer Vision (ICCV). IEEE (2011) | 1511.06789#55 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 56 | 8. Chai, Y., Lempitsky, V., Zisserman, A.: Symbiotic segmentation and part local- ization for ï¬ne-grained categorization. In: International Conference on Computer Vision (ICCV). pp. 321â328. IEEE (2013)
9. Chai, Y., Rahtu, E., Lempitsky, V., Van Gool, L., Zisserman, A.: Tricos: A tri-level class-discriminative co-segmentation method for image classiï¬cation. In: European Conference on Computer Vision (ECCV), pp. 794â807. Springer (2012)
10. Chen, X., Gupta, A.: Webly supervised learning of convolutional networks. In: International Conference on Computer Vision (ICCV). IEEE (2015)
11. Chen, X., Shrivastava, A., Gupta, A.: Neil: Extracting visual knowledge from web data. In: International Conference on Computer Vision (ICCV). pp. 1409â1416. IEEE (2013)
12. Collins, B., Deng, J., Li, K., Fei-Fei, L.: Towards scalable dataset construction: An active learning approach. In: European Conference on Computer Vision (ECCV), pp. 86â98. Springer (2008) | 1511.06789#56 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 57 | 13. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large- Scale Hierarchical Image Database. In: Computer Vision and Pattern Recognition (CVPR) (2009)
14. Deng, J., Krause, J., Fei-Fei, L.: Fine-grained crowdsourcing for ï¬ne-grained recog- nition. In: Computer Vision and Pattern Recognition (CVPR). pp. 580â587 (2013) 15. Divvala, S.K., Farhadi, A., Guestrin, C.: Learning everything about anything: Webly-supervised visual concept learning. In: Computer Vision and Pattern Recog- nition (CVPR). pp. 3270â3277. IEEE (2014)
16. Duan, K., Parikh, D., Crandall, D., Grauman, K.: Discovering localized at- tributes for ï¬ne-grained recognition. In: Computer Vision and Pattern Recognition (CVPR). pp. 3474â3481. IEEE
17. Erkan, A.N.: Semi-supervised learning via generalized maximum entropy. Ph.D. thesis, New York University (2010)
The Unreasonable Eï¬ectiveness of Noisy Data for Fine-Grained Recognition | 1511.06789#57 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 58 | The Unreasonable Eï¬ectiveness of Noisy Data for Fine-Grained Recognition
18. Farrell, R., Oza, O., Zhang, N., Morariu, V.I., Darrell, T., Davis, L.S.: Birdlets: Subordinate categorization using volumetric primitives and pose-normalized ap- pearance. In: International Conference on Computer Vision (ICCV). pp. 161â168. IEEE (2011)
19. Fergus, R., Fei-Fei, L., Perona, P., Zisserman, A.: Learning object categories from internet image searches. Proceedings of the IEEE 98(8), 1453â1466 (2010)
20. Gavves, E., Fernando, B., Snoek, C.G., Smeulders, A.W., Tuytelaars, T.: Fine- grained categorization by alignments. In: International Conference on Computer Vision (ICCV). pp. 1713â1720. IEEE
21. Gavves, E., Fernando, B., Snoek, C.G., Smeulders, A.W., Tuytelaars, T.: Local alignments for ï¬ne-grained categorization. International Journal of Computer Vi- sion (IJCV) pp. 1â22 (2014) | 1511.06789#58 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 60 | 24. Hinchliï¬, C.E., Smith, S.A., Allman, J.F., Burleigh, J.G., Chaudhary, R., Coghill, L.M., Crandall, K.A., Deng, J., Drew, B.T., Gazis, R., Gude, K., Hibbett, D.S., Katz, L.A., Laughinghouse, H.D., McTavish, E.J., Midford, P.E., Owen, C.L., Ree, R.H., Rees, J.A., Soltis, D.E., Williams, T., Cranston, K.A.: Synthesis of phy- logeny and taxonomy into a comprehensive tree of life. Proceedings of the National Academy of Sciences (2015), http://www.pnas.org/content/early/2015/09/16/ 1423041112.abstract
25. Ioï¬e, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning (ICML) (2015)
26. Jaderberg, M., Simonyan, K., Zisserman, A., Kavukcuoglu, K.: Spatial transformer networks. In: Neural Information Processing Systems (NIPS) (2015) | 1511.06789#60 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 61 | 27. Khosla, A., Jayadevaprakash, N., Yao, B., Fei-Fei, L.: Novel dataset for ï¬ne- grained image categorization. In: First Workshop on Fine-Grained Visual Cat- egorization, Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs, CO (June 2011)
28. Krause, J., Gebru, T., Deng, J., Li, L.J., Fei-Fei, L.: Learning features and parts for ï¬ne-grained recognition. In: International Conference on Pattern Recognition (ICPR). Stockholm, Sweden (August 2014)
29. Krause, J., Jin, H., Yang, J., Fei-Fei, L.: Fine-grained recognition without part annotations. In: Conference on Computer Vision and Pattern Recognition (CVPR). IEEE
30. Krause, J., Stark, M., Deng, J., Fei-Fei, L.: 3d object representations for ï¬ne- grained categorization. In: 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13). IEEE (2013) | 1511.06789#61 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 62 | 31. Kumar, N., Belhumeur, P.N., Biswas, A., Jacobs, D.W., Kress, W.J., Lopez, I.C., Soares, J.V.: Leafsnap: A computer vision system for automatic plant species iden- tiï¬cation. In: European Conference on Computer Vision (ECCV), pp. 502â516. Springer (2012)
32. LeCun, Y., Bottou, L., Bengio, Y., Haï¬ner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278â2324 (1998)
33. Lewis, D.D., Catlett, J.: Heterogeneous uncertainty sampling for supervised learn- ing. In: International Conference on Machine Learning (ICML). pp. 148â156 (1994)
34. Li, L.J., Fei-Fei, L.: Optimol: automatic online picture collection via incremental model learning. International Journal of Computer Vision (IJCV) 88(2), 147â168 (2010) | 1511.06789#62 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 63 | 35. Lin, T., Maire, M., Belongie, S., Bourdev, L.D., Girshick, R.B., Hays, J., Perona, P., Ramanan, D., Doll´ar, P., Zitnick, C.L.: Microsoft COCO: common objects in context. CoRR abs/1405.0312 (2014), http://arxiv.org/abs/1405.0312
36. Lin, T.Y., RoyChowdhury, A., Maji, S.: Bilinear cnn models for ï¬ne-grained visual recognition. In: International Conference on Computer Vision (ICCV). IEEE 37. Liu, J., Kanazawa, A., Jacobs, D., Belhumeur, P.: Dog breed classiï¬cation using part localization. In: European Conference on Computer Vision (ECCV), pp. 172â 185. Springer (2012)
38. Maji, S., Kannala, J., Rahtu, E., Blaschko, M., Vedaldi, A.: Fine-grained visual classiï¬cation of aircraft. Tech. rep. (2013) | 1511.06789#63 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 64 | 39. Mnih, V., Hinton, G.E.: Learning to label aerial images from noisy data. In: Inter- national Conference on Machine Learning (ICML). pp. 567â574 (2012)
40. Mozafari, B., Sarkar, P., Franklin, M., Jordan, M., Madden, S.: Scaling up crowd- sourcing to very large datasets: a case for active learning. Proceedings of the VLDB Endowment 8(2), 125â136 (2014)
41. Nilsback, M.E., Zisserman, A.: A visual vocabulary for ï¬ower classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). vol. 2, pp. 1447â1454. IEEE (2006)
42. Pu, J., Jiang, Y.G., Wang, J., Xue, X.: Which looks like which: Exploring inter- class relationships in ï¬ne-grained visual categorization. In: European Conference on Computer Vision (ECCV), pp. 425â440. Springer (2014)
43. Reed, S., Lee, H., Anguelov, D., Szegedy, C., Erhan, D., Rabinovich, A.: Train- ing deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596 (2014) | 1511.06789#64 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 65 | 44. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) pp. 1â42 (April 2015)
45. Schroï¬, F., Criminisi, A., Zisserman, A.: Harvesting image databases from the web. Pattern Analysis and Machine Intelligence (PAMI) 33(4), 754â766 (2011)
46. Sermanet, P., Frome, A., Real, E.: Attention for ï¬ne-grained categorization. arXiv preprint arXiv:1412.7054 (2014)
47. Settles, B.: Active learning literature survey. University of Wisconsin, Madison 52(55-66), 11 (2010)
48. Settles, B., Craven, M., Ray, S.: Multiple-instance active learning. In: Advances in Neural Information Processing Systems (NIPS). pp. 1289â1296 (2008) | 1511.06789#65 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 66 | 49. Shih, K.J., Mallya, A., Singh, S., Hoiem, D.: Part localization using multi-proposal consensus for ï¬ne-grained categorization. In: British Machine Vision Conference (BMVC) (2015)
50. Simon, M., Rodner, E.: Neural activation constellations: Unsupervised part model discovery with convolutional networks. In: ICCV (2015)
51. Simon, M., Rodner, E., Denzler, J.: Part detector discovery in deep convolutional neural networks. In: Asian Conference on Computer Vision (ACCV). vol. 2, pp. 162â177 (2014)
52. Sukhbaatar, S., Fergus, R.: Learning from noisy labels with deep neural networks. arXiv preprint arXiv:1406.2080 (2014)
53. Szegedy, C., Ioï¬e, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261 (2016)
The Unreasonable Eï¬ectiveness of Noisy Data for Fine-Grained Recognition | 1511.06789#66 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 67 | The Unreasonable Eï¬ectiveness of Noisy Data for Fine-Grained Recognition
54. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Computer Vision and Pattern Recognition (CVPR) (2015)
55. Szegedy, C., Vanhoucke, V., Ioï¬e, S., Shlens, J., Wojna, Z.: Rethinking the incep- tion architecture for computer vision. In: Computer Vision and Pattern Recogni- tion (CVPR). IEEE (2016)
56. Thomee, B., Shamma, D.A., Friedland, G., Elizalde, B., Ni, K., Poland, D., Borth, D., Li, L.J.: The new data and new challenges in multimedia research. arXiv preprint arXiv:1503.01817 (2015)
57. Torralba, A., Efros, A., et al.: Unbiased look at dataset bias. In: Computer Vision and Pattern Recognition (CVPR). pp. 1521â1528. IEEE (2011) | 1511.06789#67 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 68 | 58. Van Horn, G., Branson, S., Farrell, R., Haber, S., Barry, J., Ipeirotis, P., Perona, P., Belongie, S.: Building a bird recognition app and large scale dataset with citizen scientists: The ï¬ne print in ï¬ne-grained dataset collection. In: Computer Vision and Pattern Recognition (CVPR). IEEE (2015)
59. Vedaldi, A., Mahendran, S., Tsogkas, S., Maji, S., Girshick, B., Kannala, J., Rahtu, E., Kokkinos, I., Blaschko, M.B., Weiss, D., Taskar, B., Simonyan, K., Saphra, N., Mohamed, S.: Understanding objects in detail with ï¬ne-grained attributes. In: Computer Vision and Pattern Recognition (CVPR) (2014)
60. Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The Caltech-UCSD Birds-200-2011 Dataset. Tech. Rep. CNS-TR-2011-001, California Institute of Technology (2011) | 1511.06789#68 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 69 | 61. Wah, C., Belongie, S.: Attribute-based detection of unfamiliar classes with humans in the loop. In: Computer Vision and Pattern Recognition (CVPR). pp. 779â786. IEEE (2013)
62. Wah, C., Branson, S., Perona, P., Belongie, S.: Multiclass recognition and part localization with humans in the loop. In: International Conference on Computer Vision (ICCV). pp. 2524â2531. IEEE (2011)
63. Wah, C., Horn, G., Branson, S., Maji, S., Perona, P., Belongie, S.: Similarity com- parisons for interactive ï¬ne-grained categorization. In: Computer Vision and Pat- tern Recognition (CVPR) (2014)
64. Wang, J., Song, Y., Leung, T., Rosenberg, C., Wang, J., Philbin, J., Chen, B., Wu, Y.: Learning ï¬ne-grained image similarity with deep ranking. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1386â1393 (2014) | 1511.06789#69 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 70 | 65. Welinder, P., Branson, S., Mita, T., Wah, C., Schroï¬, F., Belongie, S., Perona, P.: Caltech-UCSD Birds 200. Tech. Rep. CNS-TR-2010-001, California Institute of Technology (2010)
66. Xiao, T., Xu, Y., Yang, K., Zhang, J., Peng, Y., Zhang, Z.: The application of two-level attention models in deep convolutional neural network for ï¬ne-grained image classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE 67. Xiao, T., Xia, T., Yang, Y., Huang, C., Wang, X.: Learning from massive noisy labeled data for image classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE
68. Xie, S., Yang, T., Wang, X., Lin, Y.: Hyper-class augmented and regularized deep learning for ï¬ne-grained image classiï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE
69. Xu, Z., Huang, S., Zhang, Y., Tao, D.: Augmenting strong supervision using web data for ï¬ne-grained categorization. In: International Conference on Computer Vision (ICCV) (2015)
25 | 1511.06789#70 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 71 | 25
70. Yang, L., Luo, P., Loy, C.C., Tang, X.: A large-scale car dataset for ï¬ne-grained categorization and veriï¬cation. In: Computer Vision and Pattern Recognition (CVPR). IEEE
71. Yang, S., Bo, L., Wang, J., Shapiro, L.G.: Unsupervised template learning for ï¬ne- grained object recognition. In: Advances in Neural Information Processing Systems (NIPS). pp. 3122â3130 (2012)
72. Yao, B., Bradski, G., Fei-Fei, L.: A codebook-free and annotation-free approach for ï¬ne-grained image categorization. In: Computer Vision and Pattern Recognition (CVPR). pp. 3466â3473. IEEE (2012)
73. Yao, B., Khosla, A., Fei-Fei, L.: Combining randomization and discrimination for ï¬ne-grained image categorization. In: Computer Vision and Pattern Recognition (CVPR). pp. 1577â1584. IEEE (2011) | 1511.06789#71 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06789 | 72 | 74. Yu, F., Zhang, Y., Song, S., Seï¬, A., Xiao, J.: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)
75. Zhang, N., Donahue, J., Girshick, R., Darrell, T.: Part-based r-cnns for ï¬ne-grained category detection. In: European Conference on Computer Vision (ECCV), pp. 834â849. Springer (2014)
76. Zhang, N., Farrell, R., Darrell, T.: Pose pooling kernels for sub-category recogni- tion. In: Computer Vision and Pattern Recognition (CVPR). pp. 3665â3672. IEEE (2012)
77. Zhang, N., Farrell, R., Iandola, F., Darrell, T.: Deformable part descriptors for ï¬ne-grained recognition and attribute prediction. In: International Conference on Computer Vision (ICCV). pp. 729â736. IEEE (2013) | 1511.06789#72 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06434 | 0 | 6 1 0 2
arXiv:1511.06434v2 [cs.LG] 7 Jan 2016
Under review as a conference paper at ICLR 2016
UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS
Alec Radford & Luke Metz indico Research Boston, MA {alec,luke}@indico.io
# Soumith Chintala Facebook AI Research New York, NY [email protected]
# ABSTRACT
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
1
# INTRODUCTION | 1511.06434#0 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 1 | # ABSTRACT
We propose the neural programmer-interpreter (NPI): a recurrent and composi- tional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value pro- gram memory, and domain-speciï¬c encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to- sequence LSTMs. The program memory allows efï¬cient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of com- putation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these pro- grams and all 21 associated subprograms. | 1511.06279#1 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
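As a toy illustration of the "persistent key-value program memory" described in the NPI abstract above: programs are stored as (key, embedding) pairs and retrieved by comparing a query key against the stored keys. The sizes and the dot-product retrieval rule are assumptions made for this sketch, not the paper's exact formulation.

```python
import torch

# Toy key-value program memory: retrieve a program by nearest key.
# Sizes (21 programs, 32-dim keys, 64-dim embeddings) are illustrative only.

torch.manual_seed(0)
program_keys = torch.randn(21, 32)          # one key per stored program
program_embeddings = torch.randn(21, 64)    # one embedding per stored program

def lookup_program(query_key: torch.Tensor):
    """query_key: (32,) produced by the controller; returns (program_id, embedding)."""
    scores = program_keys @ query_key       # similarity to every stored key
    program_id = int(scores.argmax())
    return program_id, program_embeddings[program_id]
```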
1511.06297 | 1 | # ABSTRACT
Deep learning has become the state-of-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.
Keywords: Neural Networks, Conditional Computing, REINFORCE
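To make the policy-gradient formulation above concrete, here is a toy REINFORCE-style update for an activation-dependent policy that decides which blocks of units stay active. The single linear policy, the Bernoulli parameterization, and the reward trade-off shown in the comment are simplifying assumptions for illustration, not the paper's exact algorithm.

```python
import torch

# Toy REINFORCE update for an activation-dependent block-dropout policy.
# Simplifications (not from the paper): one linear policy over 8 blocks, and a
# user-supplied reward that trades prediction quality against active blocks.

policy = torch.nn.Linear(64, 8)                      # activations -> keep logits
optimizer = torch.optim.SGD(policy.parameters(), lr=1e-2)

def reinforce_step(h: torch.Tensor, reward_fn) -> torch.Tensor:
    """h: (batch, 64) activations entering the layer; returns the sampled masks."""
    dist = torch.distributions.Bernoulli(probs=torch.sigmoid(policy(h)))
    mask = dist.sample()                             # which blocks stay active
    reward = reward_fn(mask)                         # e.g. -task_loss - lam * mask.sum(1)
    log_prob = dist.log_prob(mask).sum(dim=1)
    loss = -(reward.detach() * log_prob).mean()      # REINFORCE surrogate loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return mask
```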
1
# INTRODUCTION | 1511.06297#1 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 1 | # ABSTRACT
The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed "Actor-Mimic", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods.
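The "guidance of several expert teachers" mentioned above can be illustrated with a generic policy-distillation loss: the multitask network is pushed toward a softened action distribution derived from each expert's outputs. The temperature, the tensor shapes, and the use of expert Q-values as logits are assumptions of this sketch; the paper defines the actual Actor-Mimic objectives.

```python
import torch
import torch.nn.functional as F

def distillation_loss(mimic_logits: torch.Tensor,
                      expert_outputs: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """Cross-entropy between a softened expert action distribution and the
    mimic network's policy. Both tensors: (batch, num_actions) for one task."""
    teacher = F.softmax(expert_outputs / temperature, dim=1)   # soft targets
    log_student = F.log_softmax(mimic_logits, dim=1)
    return -(teacher * log_student).sum(dim=1).mean()
```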
# INTRODUCTION | 1511.06342#1 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
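The Actor-Mimic chunk above describes training a single student policy to act like several expert teachers via model compression. One common way to realize this — assumed here, since the chunk does not spell out the objective — is a cross-entropy between the student's policy and a softened softmax over an expert's Q-values:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mimic_loss(student_logits, teacher_q, temperature=1.0):
    """Cross-entropy between the teacher's softened Q-values and the student policy."""
    teacher_policy = softmax(teacher_q / temperature, axis=-1)
    log_student = np.log(softmax(student_logits, axis=-1) + 1e-12)
    return float(-(teacher_policy * log_student).sum(axis=-1).mean())

rng = np.random.default_rng(0)
batch, n_actions = 32, 6                               # e.g. one Atari game's action set
teacher_q = rng.normal(size=(batch, n_actions))        # from a pretrained expert DQN
student_logits = rng.normal(size=(batch, n_actions))   # from the multitask student network
print(mimic_loss(student_logits, teacher_q, temperature=0.5))
```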
1511.06434 | 1 | # 1 INTRODUCTION
Learning reusable feature representations from large unlabeled datasets has been an area of active research. In the context of computer vision, one can leverage the practically unlimited amount of unlabeled images and videos to learn good intermediate representations, which can then be used on a variety of supervised learning tasks such as image classification. We propose that one way to build good image representations is by training Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), and later reusing parts of the generator and discriminator networks as feature extractors for supervised tasks. GANs provide an attractive alternative to maximum likelihood techniques. One can additionally argue that their learning process and the lack of a heuristic cost function (such as pixel-wise independent mean-square error) are attractive to representation learning. GANs have been known to be unstable to train, often resulting in generators that produce nonsensical outputs. There has been very limited published research in trying to understand and visualize what GANs learn, and the intermediate representations of multi-layer GANs.
In this paper, we make the following contributions
⢠We propose and evaluate a set of constraints on the architectural topology of Convolutional GANs that make them stable to train in most settings. We name this class of architectures Deep Convolutional GANs (DCGAN) | 1511.06434#1 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
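The DCGAN chunk above refers to a set of architectural constraints without listing them here; the published guidelines include replacing pooling with (fractionally-)strided convolutions, using batch normalization, and using ReLU activations in the generator with a tanh output. A sketch of a 64x64 generator in that style (the layer widths and the use of PyTorch are illustrative choices, not the paper's reference implementation):

```python
import torch
import torch.nn as nn

# Illustrative DCGAN-style generator: no pooling (fractionally-strided convolutions
# instead), batch normalization, ReLU hidden activations, tanh output.
nz, ngf, nc = 100, 64, 3  # latent size, base channel width, output channels

generator = nn.Sequential(
    nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False), nn.BatchNorm2d(ngf), nn.ReLU(True),
    nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False), nn.Tanh(),
)

z = torch.randn(16, nz, 1, 1)      # a batch of latent codes
fake_images = generator(z)         # -> (16, 3, 64, 64), values in [-1, 1]
print(fake_images.shape)
```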
1511.06279 | 2 | # INTRODUCTION
Teaching machines to learn new programs, to rapidly compose new programs from existing programs, and to conditionally execute these programs automatically so as to solve a wide variety of tasks is one of the central challenges of AI. Programs appear in many guises in various AI problems, including motor behaviours, image transformations, reinforcement learning policies, classical algorithms, and symbolic relations.
In this paper, we develop a compositional architecture that learns to represent and interpret programs. We refer to this architecture as the Neural Programmer-Interpreter (NPI). The core module is an LSTM-based sequence model that takes as input a learnable program embedding, program arguments passed on by the calling program, and a feature representation of the environment. The output of the core module is a key indicating what program to call next, arguments for the following program and a flag indicating whether the program should terminate. In addition to the recurrent core, the NPI architecture includes a learnable key-value memory of program embeddings. This program-memory is essential for learning and re-using programs in a continual manner. Figures 1 and 2 illustrate the NPI on two different tasks. | 1511.06279#2 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
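A toy rendering of the NPI core step described in the 1511.06279 chunk above: the core consumes a program embedding, arguments, and environment features, and emits a key into the program memory, next arguments, and a termination probability. The linear "core" below stands in for the paper's LSTM purely to keep the sketch short; all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the real model uses an LSTM core.
d_prog, d_args, d_env, d_key, n_programs = 32, 8, 16, 16, 5

M_key = rng.normal(size=(n_programs, d_key))     # key-value program memory: keys
M_prog = rng.normal(size=(n_programs, d_prog))   # values = program embeddings

d_in = d_prog + d_args + d_env
W_core = rng.normal(scale=0.1, size=(d_in, d_key + d_args + 1))

def npi_step(prog_emb, args, env_feats):
    """One interpreter step: propose (next-program key, next args, stop probability)."""
    h = np.concatenate([prog_emb, args, env_feats]) @ W_core
    key, next_args, stop_logit = h[:d_key], h[d_key:d_key + d_args], h[-1]
    # Retrieve the next program by nearest key in the program memory.
    next_prog_id = int(np.argmax(M_key @ key))
    stop_prob = 1.0 / (1.0 + np.exp(-stop_logit))
    return next_prog_id, M_prog[next_prog_id], next_args, stop_prob

prog_id = 0
prog, args, env = M_prog[prog_id], np.zeros(d_args), rng.normal(size=d_env)
print(npi_step(prog, args, env))
```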
1511.06297 | 2 | Keywords Neural Networks, Conditional Computing, REINFORCE
# 1 INTRODUCTION
Large-scale neural networks, and in particular deep learning architectures, have seen a surge in popularity in recent years, due to their impressive empirical performance in complex supervised learning tasks, including state-of-the-art performance in image and speech recognition (He et al., 2015). Yet the task of training such networks remains a challenging optimization problem. Several related problems arise: very long training time (several weeks on modern computers, for some problems), potential for over-fitting (whereby the learned function is too specific to the training data and generalizes poorly to unseen data), and more technically, the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), whereby the gradient information gets increasingly diffuse as it propagates from layer to layer. | 1511.06297#2 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
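A sketch of the input-dependent gating described in the 1511.06297 chunks, where only the blocks whose gate fires are actually computed — this is where the speed-up comes from. The thresholding rule and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, block, n_blocks = 64, 32, 8
W = rng.normal(scale=0.1, size=(n_blocks, d_in, block))  # main layer, one slice per block
W_gater = rng.normal(scale=0.1, size=(d_in, n_blocks))   # small gating network

def gated_layer(x, threshold=0.5):
    """Compute only the blocks whose gate fires; dropped blocks cost nothing."""
    gate_prob = 1.0 / (1.0 + np.exp(-(x @ W_gater)))
    active = np.flatnonzero(gate_prob > threshold)
    h = np.zeros(n_blocks * block)
    for b in active:                     # matrix products only for active blocks
        h[b * block:(b + 1) * block] = x @ W[b]
    return h, active

x = rng.normal(size=d_in)
h, active = gated_layer(x)
print(f"{len(active)}/{n_blocks} blocks computed")
```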
1511.06342 | 2 | # INTRODUCTION
Deep Reinforcement Learning (DRL), the combination of reinforcement learning methods and deep neural network function approximators, has recently shown considerable success in high-dimensional challenging tasks, such as robotic manipulation (Levine et al., 2015; Lillicrap et al., 2015) and arcade games (Mnih et al., 2015). These methods exploit the ability of deep networks to learn salient descriptions of raw state input, allowing the agent designer to essentially bypass the lengthy process of feature engineering. In addition, these automatically learnt descriptions often significantly outperform hand-crafted feature representations that require extensive domain knowledge. One such DRL approach, the Deep Q-Network (DQN) (Mnih et al., 2015), has achieved state-of-the-art results on the Arcade Learning Environment (ALE) (Bellemare et al., 2013), a benchmark of Atari 2600 arcade games. The DQN uses a deep convolutional neural network over pixel inputs to parameterize a state-action value function. The DQN is trained using Q-learning combined with several tricks that stabilize the training of the network, such as a replay memory to store past transitions and target networks to define a more consistent temporal difference error. | 1511.06342#2 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
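The DQN description in the chunk above (replay memory, target network, temporal-difference error) boils down to the one-step target y = r + gamma * max_a' Q_target(s', a'). A toy sketch with linear stand-ins for the networks and random placeholders for a replay minibatch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: both "networks" are just linear maps from state features to Q-values.
d_state, n_actions, gamma = 8, 4, 0.99
W_online = rng.normal(scale=0.1, size=(d_state, n_actions))
W_target = W_online.copy()      # periodically synced copy, held fixed in between updates

def td_targets(rewards, next_states, done):
    """One-step Q-learning targets: r + gamma * max_a' Q_target(s', a'), zero if terminal."""
    next_q = next_states @ W_target
    return rewards + gamma * (1.0 - done) * next_q.max(axis=1)

# Sample a minibatch of transitions from a replay memory (here: random placeholders).
batch = 32
states = rng.normal(size=(batch, d_state))
actions = rng.integers(0, n_actions, size=batch)
rewards = rng.normal(size=batch)
next_states = rng.normal(size=(batch, d_state))
done = rng.integers(0, 2, size=batch).astype(float)

q_sa = (states @ W_online)[np.arange(batch), actions]
td_error = td_targets(rewards, next_states, done) - q_sa   # drives the gradient step
print(float((td_error ** 2).mean()))
```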
1511.06434 | 2 | • We use the trained discriminators for image classification tasks, showing competitive performance with other unsupervised algorithms.
⢠We visualize the ï¬lters learnt by GANs and empirically show that speciï¬c ï¬lters have learned to draw speciï¬c objects.
⢠We show that the generators have interesting vector arithmetic properties allowing for easy manipulation of many semantic qualities of generated samples.
2 RELATED WORK
2.1 REPRESENTATION LEARNING FROM UNLABELED DATA | 1511.06434#2 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
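The vector-arithmetic bullet above refers to manipulating semantic attributes by doing arithmetic on averaged latent codes (in the style of the paper's "smiling woman - neutral woman + neutral man" example). A toy sketch with a placeholder generator and random codes standing in for the latent codes of real samples:

```python
import numpy as np

rng = np.random.default_rng(0)
nz = 100
W_gen = rng.normal(scale=0.1, size=(nz, 64 * 64))

def generate(z):
    """Placeholder for a trained generator; returns a flattened fake 'image'."""
    return np.tanh(z @ W_gen)

# Average the latent codes of a few samples that share an attribute, then do
# arithmetic on the averages. The z's here are random placeholders.
z_smiling_woman = rng.normal(size=(3, nz)).mean(axis=0)
z_neutral_woman = rng.normal(size=(3, nz)).mean(axis=0)
z_neutral_man = rng.normal(size=(3, nz)).mean(axis=0)

z_new = z_smiling_woman - z_neutral_woman + z_neutral_man
image = generate(z_new[None, :])    # ideally: a smiling man
print(image.shape)
```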
1511.06279 | 3 | We show in our experiments that the NPI architecture can learn 21 programs, including addition, sorting, and trajectory planning from image pixels. Crucially, this can be achieved using a single core model with the same parameters shared across all tasks. Different environments (for example images, text, and scratch-pads) may require specific perception modules or encoders to produce the features used by the shared core, as well as environment-specific actuators. Both perception modules and actuators can be learned from data when training the NPI architecture.
To train the NPI we use curriculum learning and supervision via example execution traces. Each program has example sequences of calls to the immediate subprograms conditioned on the input.
[Figure 1 graphic: program trace GOTO() → HGOTO() → LGOTO() → ACT(LEFT) → LGOTO() → ACT(LEFT) → GOTO() → VGOTO() → DGOTO() → ACT(DOWN) → end state]
Figure 1: Example execution of canonicalizing 3D car models. The task is to move the camera such that a target angle and elevation are reached. There is a read-only scratch pad containing the target (angle 1, elevation 2 here). The image encoder is a convnet trained from scratch on pixels. | 1511.06279#3 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
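The chunk above describes supervision via example execution traces. A minimal sketch of what one trace-level training signal could look like — a per-step cross-entropy between the model's next-program distribution and the call recorded in the trace; the tiny program vocabulary and the numeric encoding are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny program vocabulary and one supervised execution trace in the spirit of
# Figure 1 (names mirror the trace above; the encoding is an assumption).
programs = ["GOTO", "HGOTO", "LGOTO", "VGOTO", "DGOTO", "ACT"]
idx = {p: i for i, p in enumerate(programs)}
trace = [("GOTO", "HGOTO"), ("HGOTO", "LGOTO"), ("LGOTO", "ACT"),
         ("GOTO", "VGOTO"), ("VGOTO", "DGOTO"), ("DGOTO", "ACT")]

def step_loss(pred_logits, target_program):
    """Cross-entropy between the model's next-program distribution and the trace label."""
    z = pred_logits - pred_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[idx[target_program]]

# Placeholder predictions; a real model would condition on the caller, its
# arguments and the environment encoding at every step.
total = sum(step_loss(rng.normal(size=len(programs)), callee) for _, callee in trace)
print(total / len(trace))
```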
1511.06297 | 3 | Recent approaches (Bengio et al., 2013; Davis & Arel, 2013) have proposed the use of conditional computation in order to address this problem. Conditional computation refers to activating only some of the units in a network, in an input-dependent fashion. For example, if we think we're looking at a car, we only need to compute the activations of the vehicle-detecting units, not of all features that a network could possibly compute. The immediate effect of activating fewer units is that propagating information through the network will be faster, both at training as well as at test time. However, one needs to be able to decide in an intelligent fashion which units to turn on and off, depending on the input data. This is typically achieved with some form of gating structure, learned in parallel with the original network.
A secondary effect of conditional computation is that during training, information will be propagated along fewer links. Intuitively, this allows sharper gradients on the links that do get activated. Moreover, because only parts of the network are active, and fewer parameters are used in the computation,
the net effect can be viewed as a form of regularization of the main network, as the approximator has to use only a small fraction of the possible parameters in order to produce an action. | 1511.06297#3 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
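The 1511.06297 chunks also mention a regularizer that encourages a sparse and diversified dropout policy. One plausible form (an assumption, not the paper's exact terms) keeps average gate usage near a target rate both per block and per example while rewarding variance across examples, so that different inputs learn to use different blocks:

```python
import numpy as np

def gate_regularizer(gate_probs, tau=0.25):
    """Plausible sparsity/diversity penalty on a (batch, n_blocks) matrix of gate
    probabilities: keep average usage near the target rate tau per block (across
    the batch) and per example, and reward variance across examples."""
    per_block = ((gate_probs.mean(axis=0) - tau) ** 2).mean()
    per_example = ((gate_probs.mean(axis=1) - tau) ** 2).mean()
    diversity = gate_probs.var(axis=0).mean()
    return per_block + per_example - diversity

rng = np.random.default_rng(0)
gate_probs = 1.0 / (1.0 + np.exp(-rng.normal(size=(32, 8))))
print(gate_regularizer(gate_probs))
```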