doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, nullable) | journal_ref (stringlengths 8–194, nullable) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1511.06856 | 29 | # 4.1 SCALING AND LEARNING ALGORITHMS
We begin our evaluation by measuring and comparing the relative change rate Ĉk,i,j of all weights in the network (see Equation (2)) for different initializations. We estimate Ĉk,i,j using 100 images of the VOC 2007 validation set. We compare our models to an ImageNet pretrained model, initialized with random Gaussian weights (with standard deviation σ = 0.01), an unscaled k-means initialization, as well as the Gaussian initialization in Caffe (Jia et al., 2014), for which biases and standard deviations were handpicked per layer. Figure 1a visualizes the average change rate per layer. Our initialization, as well as the ImageNet pretrained model, have similar change rates for all layers (i.e., all layers learn at the same rate), while random initializations and k-means have a
6
Published as a conference paper at ICLR 2016 | 1511.06856#29 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
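The chunk in the row above measures a relative per-layer change rate to diagnose how evenly an initialization trains. The paper's Equation (2) is not reproduced in this excerpt, so the sketch below uses a simple proxy, the ratio of the SGD step magnitude to the weight magnitude per layer, to illustrate the kind of measurement being described; the model, loss, and batch are placeholders.

```python
import torch
import torch.nn as nn

def relative_change_rates(model: nn.Module, loss_fn, batch, lr: float = 0.01):
    """Proxy for a per-layer relative change rate: ||lr * grad W|| / ||W||.

    The paper's exact Equation (2) is not shown in this excerpt; this simply
    illustrates the diagnostic: layers whose weights move much faster or
    slower than others indicate a badly scaled initialization.
    """
    model.zero_grad()
    inputs, targets = batch
    loss_fn(model(inputs), targets).backward()

    rates = {}
    for name, p in model.named_parameters():
        if p.grad is None or p.dim() < 2:      # skip biases for this diagnostic
            continue
        rates[name] = (lr * p.grad.norm() / (p.norm() + 1e-12)).item()
    return rates

if __name__ == "__main__":
    # Tiny stand-in network and a random batch, just to exercise the function.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    batch = (torch.randn(8, 32), torch.randint(0, 10, (8,)))
    for layer, rate in relative_change_rates(model, nn.CrossEntropyLoss(), batch).items():
        print(f"{layer}: {rate:.4f}")
```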
1511.06939 | 29 | Table 3: Recall@20 and MRR@20 for different types of a single layer of GRU, compared to the best baseline (item-KNN). Best results per dataset are highlighted.
Loss / #Units | Recall@20 (RSC15) | MRR@20 (RSC15) | Recall@20 (VIDEO) | MRR@20 (VIDEO)
---|---|---|---|---
TOP1 100 | 0.5853 (+15.55%) | 0.2305 (+12.58%) | 0.6141 (+11.50%) | 0.3511 (+3.84%)
BPR 100 | 0.6069 (+19.82%) | 0.2407 (+17.54%) | 0.5999 (+8.92%) | 0.3260 (-3.56%)
Cross-entropy 100 | 0.6074 (+19.91%) | 0.2430 (+18.65%) | 0.6372 (+15.69%) | 0.3720 (+10.04%)
TOP1 1000 | 0.6206 (+22.53%) | 0.2693 (+31.49%) | 0.6624 (+20.27%) | 0.3891 (+15.08%)
BPR 1000 | 0.6322 (+24.82%) | 0.2467 (+20.47%) | 0.6311 (+14.58%) | 0.3136 (-7.23%)
Cross-entropy 1000 | 0.5777 (+14.06%) | 0.2153 (+5.16%) | – | –
| 1511.06939#29 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
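The table in the row above compares TOP1, BPR, and cross-entropy training of the GRU. As a reference point, here is a minimal sketch of the two pairwise ranking losses over a score matrix in which each row's diagonal entry is the target item and the other columns act as sampled negatives; the formulas follow the commonly cited GRU4Rec definitions and should be treated as an illustration rather than a verbatim reproduction of the paper's equations.

```python
import torch

def bpr_loss(scores: torch.Tensor) -> torch.Tensor:
    """Bayesian Personalized Ranking over in-batch negatives.

    scores: (B, B) matrix; scores[i, i] is the target item of example i and
    the rest of row i act as negatives (for simplicity the diagonal is not
    excluded from the mean here).
    """
    target = scores.diag().unsqueeze(1)                      # (B, 1)
    return -torch.log(torch.sigmoid(target - scores) + 1e-24).mean()

def top1_loss(scores: torch.Tensor) -> torch.Tensor:
    """TOP1 ranking loss: push negatives below the target, regularize their scores."""
    target = scores.diag().unsqueeze(1)
    return (torch.sigmoid(scores - target) + torch.sigmoid(scores ** 2)).mean()

if __name__ == "__main__":
    scores = torch.randn(4, 4, requires_grad=True)           # toy score matrix
    print("BPR :", bpr_loss(scores).item())
    print("TOP1:", top1_loss(scores).item())
```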
1511.06807 | 30 | Bousquet, Olivier and Bottou, Léon. The tradeoffs of large scale learning. In NIPS, 2008.
Cho, Kyunghyun, Van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
Choromanska, Anna, Henaff, Mikael, Mathieu, Michaël, Arous, Gérard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In AISTATS, 2015.
9
# Under review as a conference paper at ICLR 2016
Dean, Jeffrey, Corrado, Greg, Monga, Rajat, Chen, Kai, Devin, Matthieu, Mao, Mark, Senior, Andrew, Tucker, Paul, Yang, Ke, Le, Quoc V, et al. Large scale distributed deep networks. In NIPS, 2012.
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011. | 1511.06807#30 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
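The abstract in the row above describes the paper's core technique: adding noise to gradients when training very deep or complex architectures. A minimal sketch of that idea is below; the annealing schedule and constants are illustrative assumptions, not values quoted from this excerpt.

```python
import torch

def add_gradient_noise(model: torch.nn.Module, step: int,
                       eta: float = 0.3, gamma: float = 0.55) -> None:
    """Add zero-mean Gaussian noise to every gradient, in place.

    The noise variance decays as eta / (1 + step) ** gamma, a schedule of the
    annealed kind the paper advocates; eta and gamma here are assumed values
    for illustration only.
    """
    std = (eta / (1.0 + step) ** gamma) ** 0.5
    for p in model.parameters():
        if p.grad is not None:
            p.grad.add_(torch.randn_like(p.grad) * std)

# Typical placement inside a training loop (sketch):
#   loss.backward()
#   add_gradient_noise(model, step)   # noisy gradients before the update
#   optimizer.step()
```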
1511.06856 | 30 | 6
Published as a conference paper at ICLR 2016
drastically different change rates. Figure 1b measures the coefficient of variation of the change rate for each layer, defined as the standard deviation of the change rate divided by its mean value. Our coefficient of variation is low throughout all layers, despite scaling the rate of change of columns of the weight matrix instead of individual elements. Note that the low values are mirrored in the hand-tuned Caffe initialization.
Next we explore how those different initializations perform on the VOC 2007 classification task, as shown in Table 1. We train both a random Gaussian and k-means initialization using different initial scalings. Without scaling, the random Gaussian initialization fares quite well; however, the k-means initialization does poorly, due to the worse initial change rate shown in Figure 1. Correcting for the within-layer scaling alone does not improve the performance much, as it worsens the between-layer scaling for both initializations. However, in combination with the between-layer adjustment both initializations perform very well. | 1511.06856#30 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
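The row above discusses equalizing change rates within a layer (and then between layers). A minimal data-dependent sketch of the within-layer step is below: push a batch through a layer and rescale each output unit so its pre-activation standard deviation is one, which also evens out how fast the units learn under SGD. This follows the spirit of the initialization described in the abstract; the paper's exact procedure may differ.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def within_layer_rescale(layer: nn.Linear, inputs: torch.Tensor, eps: float = 1e-8):
    """Rescale each output unit of `layer` so its pre-activation std is ~1.

    The statistics come from a batch of real inputs (data-dependent), and the
    whole row of the weight matrix belonging to a unit is scaled jointly.
    """
    pre = layer(inputs)                      # (batch, out_features)
    std = pre.std(dim=0) + eps               # per-unit standard deviation
    layer.weight.div_(std.unsqueeze(1))      # one scale per output unit (row)
    if layer.bias is not None:
        layer.bias.div_(std)

if __name__ == "__main__":
    torch.manual_seed(0)
    layer = nn.Linear(128, 64)
    x = torch.randn(256, 128) * 5.0          # deliberately mis-scaled inputs
    within_layer_rescale(layer, x)
    print(layer(x).std(dim=0).mean().item())  # close to 1.0
```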
1511.06939 | 30 | lifespan of the sessions not requiring multiple time scales of different resolutions to be properly represented. However, the exact reason for this is unknown as of yet and requires further research. Using embeddings of the items gave slightly worse results, therefore we kept the 1-of-N encoding. Also, putting all previous events of the session on the input instead of only the preceding one did not result in additional accuracy gain, which is not surprising as GRU – like LSTM – has both long and short term memory. Adding additional feed-forward layers after the GRU layer did not help either. However, increasing the size of the GRU layer improved the performance. We also found that it is beneficial to use tanh as the activation function of the output layer.
4.3 RESULTS | 1511.06939#30 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
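The row above describes the configuration that worked best for session-based recommendation: 1-of-N input encoding, a single GRU layer, and a tanh output activation. Below is a minimal PyTorch sketch of such a network; the class name and hyperparameters are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SessionGRU(nn.Module):
    """One-hot item input -> single GRU layer -> tanh scores over all items."""

    def __init__(self, n_items: int, hidden: int = 100):
        super().__init__()
        self.n_items = n_items
        self.gru = nn.GRU(n_items, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_items)

    def forward(self, item_ids: torch.Tensor, h0=None):
        # item_ids: (batch, seq_len) integer indices, 1-of-N encoded on the fly.
        x = F.one_hot(item_ids, self.n_items).float()
        h, hn = self.gru(x, h0)
        return torch.tanh(self.out(h)), hn       # tanh output layer, as in the text

if __name__ == "__main__":
    model = SessionGRU(n_items=1000, hidden=100)
    clicks = torch.randint(0, 1000, (4, 5))       # 4 parallel sessions, 5 clicks each
    scores, _ = model(clicks)
    print(scores.shape)                           # torch.Size([4, 5, 1000])
```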
1511.06807 | 31 | Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proc. AISTATS, pp. 249–256, 2010.
Graves, Alex. Practical variational inference for neural networks. In NIPS, 2011.
Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arxiv:1308.0850, 2013.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing Machines. arXiv preprint arXiv:1410.5401, 2014.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. ICCV, 2015.
Hinton, Geoffrey and Roweis, Sam. Stochastic neighbor embedding. In NIPS, 2002. | 1511.06807#31 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 31 | Both the between-layer and within-layer scaling could potentially be addressed by a stronger second-order optimization method, such as ADAM (Kingma & Ba, 2015), or by batch normalization (Ioffe & Szegedy, 2015). In general, ADAM is able to slightly improve on SGD for an unscaled initialization, especially when combined with batch normalization. Neither batch-norm nor ADAM, alone or combined, performs as well as simple SGD with our k-means initialization. More interestingly, our initialization complements those stronger optimization methods and we see an improvement by combining them with our initialization.
4.2 WEIGHT INITIALIZATION
Next we compare our Gaussian, PCA and k-means based weights with initializations proposed by Glorot & Bengio (2010) (commonly known as "xavier"), He et al. (2015), and a carefully chosen Gaussian initialization of Jia et al. (2014). We followed the suggestions of He et al. and used their initialization only for the convolutional layers, while choosing a random Gaussian initialization for the fully connected layers. We compare all methods on both classification and detection performance in Table 2. | 1511.06856#31 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
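The row above compares Gaussian, PCA, and k-means based weights against the "xavier" and MSRA schemes. One common way to build k-means filters for a first convolutional layer is sketched below: cluster randomly sampled image patches and use the centroids as filters. This shows the general idea only; the paper's exact recipe (patch whitening, deeper layers, the PCA variant) may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_conv_filters(images: np.ndarray, n_filters: int = 16, k: int = 7,
                        n_patches: int = 5000) -> np.ndarray:
    """images: (N, H, W, C) floats. Returns an (n_filters, k, k, C) filter bank."""
    rng = np.random.default_rng(0)
    n, h, w, c = images.shape
    idx = rng.integers(0, n, size=n_patches)          # random source image
    ys = rng.integers(0, h - k, size=n_patches)       # random top-left corner
    xs = rng.integers(0, w - k, size=n_patches)
    patches = np.stack([images[i, y:y + k, x:x + k] for i, y, x in zip(idx, ys, xs)])
    flat = patches.reshape(n_patches, -1)
    flat = flat - flat.mean(axis=1, keepdims=True)    # simple per-patch centering
    km = KMeans(n_clusters=n_filters, n_init=4, random_state=0).fit(flat)
    return km.cluster_centers_.reshape(n_filters, k, k, c)

if __name__ == "__main__":
    fake_images = np.random.rand(20, 64, 64, 3).astype(np.float32)
    print(kmeans_conv_filters(fake_images).shape)     # (16, 7, 7, 3)
```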
1511.06939 | 31 | 4.3 RESULTS
Table 3 shows the results of the best performing networks. Cross-entropy for the VIDEO data with 1000 hidden units was numerically unstable and thus we present no results for that scenario. The results are compared to the best baseline (item-KNN). We show results with 100 and 1000 hidden units. The running time depends on the parameters and the dataset. Generally speaking the difference in runtime between the smaller and the larger variant is not too high on a GeForce GTX Titan X GPU and the training of the network can be done in a few hours2. On CPU, the smaller network can be trained in a practically acceptable timeframe. Frequent retraining is often desirable for recommender systems, because new users and items are introduced frequently. | 1511.06939#31 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 32 | Hinton, Geoffrey and Roweis, Sam. Stochastic neighbor embedding. In NIPS, 2002.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara, and Kingsbury, Brian. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.
Hochreiter, Sepp and Schmidhuber, J¨urgen. Long short-term memory. Neural Computation, 1997.
Kaiser, Lukasz and Sutskever, Ilya. Neural GPUs learn algorithms. In Arxiv, 2015.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kirkpatrick, Scott, Vecchi, Mario P, et al. Optimization by simulated annealing. Science, 1983.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. | 1511.06807#32 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 32 | The first thing to notice is that both Glorot & Bengio and He et al. perform worse than a carefully chosen random Gaussian initialization. One possible reason for the drop in performance is the additional layers, such as Pooling or LRN, used in CaffeNet. Neither Glorot & Bengio nor He et al. consider those layers; they rather focus on linear layers followed by tanh or ReLU non-linearities.
Our initialization, on the other hand, has no trouble with those additional layers and substantially improves on the random Gaussian initialization.
4.3 COMPARISON TO UNSUPERVISED PRE-TRAINING | 1511.06856#32 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
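For reference alongside the comparison in the row above, the standard "xavier" (Glorot & Bengio) and MSRA (He et al.) scaling rules for random Gaussian weights are shown below; this is textbook material, not something specific to this paper.

```python
import numpy as np

def xavier_std(fan_in: int, fan_out: int) -> float:
    """Glorot & Bengio (2010): balance forward and backward signal variance."""
    return float(np.sqrt(2.0 / (fan_in + fan_out)))

def msra_std(fan_in: int) -> float:
    """He et al. (2015): account for ReLU zeroing half of the activations."""
    return float(np.sqrt(2.0 / fan_in))

def gaussian_conv_init(k: int, c_in: int, c_out: int, rule: str = "msra") -> np.ndarray:
    """Sample conv weights (c_out, c_in, k, k) with the chosen scaling rule."""
    fan_in, fan_out = k * k * c_in, k * k * c_out
    std = msra_std(fan_in) if rule == "msra" else xavier_std(fan_in, fan_out)
    return np.random.randn(c_out, c_in, k, k) * std

if __name__ == "__main__":
    w = gaussian_conv_init(3, 64, 128, rule="xavier")
    print(round(float(w.std()), 4), round(xavier_std(3 * 3 * 64, 3 * 3 * 128), 4))
```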
1511.06939 | 32 | The GRU-based approach has substantial gain over the item-KNN in both evaluation metrics on both datasets, even if the number of units is 100 [3]. Increasing the number of units further improves the results for pairwise losses, but the accuracy decreases for cross-entropy. Even though cross-entropy gives better results with 100 hidden units, the pairwise loss variants surpass these results as the number of units increases. Although increasing the number of units increases the training times, we found that it was not too expensive to move from 100 units to 1000 on GPU. Also, the cross-entropy based loss was found to be numerically unstable as the result of the network individually trying to increase the score for the target items, while the negative push is relatively small for the other items. Therefore we suggest using either of the two pairwise losses. The TOP1 loss performs slightly better on these two datasets, resulting in a ~20–30% accuracy gain over the best performing baseline.
# 5 CONCLUSION & FUTURE WORK | 1511.06939#32 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 33 | Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random access machine. In Arxiv, 2015.
LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Nair, Vinod and Hinton, Geoffrey. Rectified linear units improve Restricted Boltzmann Machines. In ICML, 2010.
Neal, Radford M. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2011.
Neelakantan, Arvind, Le, Quoc V., and Sutskever, Ilya. Neural Programmer: Inducing latent programs with gradient descent. In Arxiv, 2015.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. Proc. ICML, 2013.
Peng, Baolin, Lu, Zhengdong, Li, Hang, and Wong, Kam-Fai. Towards neural network-based reasoning. arXiv preprint arxiv:1508.05508, 2015. | 1511.06807#33 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 33 | 4.3 COMPARISON TO UNSUPERVISED PRE-TRAINING
We now compare our simple, properly scaled initializations to the state-of-the-art unsupervised pre-training methods on VOC 2007 classification and detection. Table 3 shows a summary of the results, including the amount of pre-training time, as well as the type of supervision used. Agrawal et al. (2015) use egomotion, as measured by a moving car in a city, to pre-train a model. While this information is not always readily available, it can be read from sensors and is thus "free." We believe egomotion information does not often correlate with the kind of semantic information that is required for classification or detection, and hence the egomotion pretrained model performs worse than our random baseline. Wang & Gupta (2015) supervise their pre-training using relative motion | 1511.06856#33 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 33 | # 5 CONCLUSION & FUTURE WORK
In this paper we applied a modern recurrent neural network (GRU) to a new application domain: recommender systems. We chose the task of session-based recommendations because it is a practically important area, but not well researched. We modified the basic GRU in order to fit the task better, introducing session-parallel mini-batches, mini-batch based output sampling and a ranking loss function. We showed that our method can significantly outperform popular baselines that are used for this task. We think that our work can be the basis of both deep learning applications in recommender systems and session-based recommendations in general.
[2] Using Theano with fixes for the subtensor operators on GPU. [3] Except for using the BPR loss on the VIDEO data and evaluating for MRR.
8
Published as a conference paper at ICLR 2016
Our immediate future work will focus on a more thorough examination of the proposed network. We also plan to train the network on automatically extracted item representations that are built on the content of the item itself (e.g. thumbnail, video, text) instead of the current input.
# ACKNOWLEDGMENTS
The work leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under CrowdRec Grant Agreement n° 610594. | 1511.06939#33 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
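The conclusion in the row above lists session-parallel mini-batches as one of the modifications made to the basic GRU. A minimal sketch of that batching scheme is below: a fixed number of slots each walk through one session, one click transition per step, and a finished slot is refilled with the next session (at which point its hidden state should be reset). The data layout, a list of per-session item-id lists with at least two events each, is an assumption for illustration.

```python
from typing import Iterator, List, Tuple

def session_parallel_batches(
    sessions: List[List[int]], batch_size: int
) -> Iterator[Tuple[List[int], List[int], List[int]]]:
    """Yield (inputs, targets, finished_slots), one click transition per step.

    Assumes every session has at least two events. `finished_slots` lists the
    slot positions whose session just ended; their GRU hidden state should be
    zeroed before the next step.
    """
    active = list(range(min(batch_size, len(sessions))))   # session id per slot
    pos = [0] * len(active)                                 # cursor inside session
    next_sess = len(active)

    while active:
        inputs = [sessions[sid][pos[i]] for i, sid in enumerate(active)]
        targets = [sessions[sid][pos[i] + 1] for i, sid in enumerate(active)]
        finished = []
        for i, sid in enumerate(active):
            pos[i] += 1
            if pos[i] + 1 >= len(sessions[sid]):            # no next target left
                finished.append(i)
        yield inputs, targets, finished

        for i in reversed(finished):                        # refill or drop slots
            if next_sess < len(sessions):
                active[i], pos[i] = next_sess, 0
                next_sess += 1
            else:
                del active[i], pos[i]

if __name__ == "__main__":
    toy = [[1, 2, 3], [4, 5], [6, 7, 8, 9], [10, 11]]
    for step in session_parallel_batches(toy, batch_size=2):
        print(step)
```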
1511.06807 | 34 | Polyak, Boris Teodorovich. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964.
Robbins, Herbert and Monro, Sutton. A stochastic approximation method. The Annals of Mathematical Statistics, 1951.
10
# Under review as a conference paper at ICLR 2016
Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.
Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Training very deep networks. NIPS, 2015.
Steijvers, Mark. A recurrent network that performs a context-sensitive prediction task. In CogSci, 1996.
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory networks. In NIPS, 2015.
Sussillo, David and Abbott, L.F. Random walks: Training very deep nonlinear feed-forward networks with smart initialization. Arxiv, 2014. | 1511.06807#34 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 34 | Scaling | SGD (Gaus.) | SGD (k-mns.) | SGD+BN (Gaus.) | SGD+BN (k-mns.) | ADAM (Gaus.) | ADAM (k-mns.) | ADAM+BN (Gaus.) | ADAM+BN (k-mns.)
no scaling | 50.8% | 41.2% | 51.6% | 49.4% | 50.9% | 52.0% | 55.7% | 53.8%
Within-layer (Ours) | - | - | - | - | - | - | - | -
Between-layer (Ours) | - | - | - | - | - | - | - | -
Both (Ours) | - | - | - | - | - | - | - | -
Table 1: Classification performance of various initializations, training algorithms and with and without batch normalization (BN) on PASCAL VOC2007 for both random Gaussian (Gaus.) and k-means (k-mns.) initialized weights.
7
Published as a conference paper at ICLR 2016
Method | Classification | Detection
---|---|---
Xavier (Glorot & Bengio, 2010) | 51.1% | 40.4%
MSRA (He et al., 2015) | 43.3% | 37.2%
Random Gaussian (hand tuned) | 53.4% | 41.3%
Ours (Random Gaussian) | 53.3% | 43.4%
Ours (PCA) | 52.8% | 43.1%
Ours (k-means) | 56.6% | 45.6%
Table 2: Comparison of different initialization methods on PASCAL VOC2007 classification and detection. | 1511.06856#34 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 34 | The work leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under CrowdRec Grant Agreement n° 610594.
# REFERENCES
Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
Dauphin, Yann N, de Vries, Harm, Chung, Junyoung, and Bengio, Yoshua. Rmsprop and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.
Davidson, James, Liebald, Benjamin, Liu, Junning, et al. The YouTube video recommendation system. In Recsys'10: ACM Conf. on Recommender Systems, pp. 293–296, 2010. ISBN 978-1-60558-906-0.
Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011. | 1511.06939#34 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 35 | Sussillo, David and Abbott, L.F. Random walks: Training very deep nonlinear feed-forward networks with smart initialization. Arxiv, 2014.
Sutskever, Ilya, Martens, James, Dahl, George, and Hinton, Geoffrey. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In NIPS, 2014.
Welling, Max and Teh, Yee Whye. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.
Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
Weston, Jason, Bordes, Antoine, Chopra, Sumit, and Mikolov, Tomas. Towards AI-complete question answering: a set of prerequisite toy tasks. In ICML, 2015.
Zeiler, Matthew D. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
11 | 1511.06807#35 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 35 | Table 2: Comparison of different initialization methods on PASCAL VOC2007 classification and detection.
of objects in pre-selected YouTube videos, as obtained by a tracker. Their model is generally quite well scaled and trains well for both classification and detection. Doersch et al. (2015) predict the relative arrangement of image patches to pre-train a model. Their model is trained the longest, with 4 weeks of training. It does well on detection, but lags behind other methods in classification.
Interestingly, our k-means initialization is able to keep up with most unsupervised pre-training methods, despite containing very little semantic information. To analyze what information is actually captured, we sampled 100 random ImageNet images and found nearest neighbors for them from a pool of 50,000 other random ImageNet images, using the high-level feature spaces from different methods. Figure 2 shows the results. Overall, different unsupervised methods seem to focus on different attributes for matching. For example, ours appears to have some texture and material information, whereas the method of Doersch et al. (2015) seems to preserve more specific shape information. | 1511.06856#35 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
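The row above analyzes what each pre-training method captures by retrieving nearest neighbors in its high-level feature space. A generic sketch of that analysis step is below: given feature vectors from any frozen network, rank a pool of images by cosine similarity to each query. The feature extractor itself is left out; nothing here is specific to the compared models.

```python
import numpy as np

def cosine_nearest_neighbors(query_feats: np.ndarray, pool_feats: np.ndarray,
                             k: int = 5) -> np.ndarray:
    """Return, for each query row, the indices of the k most similar pool rows."""
    q = query_feats / (np.linalg.norm(query_feats, axis=1, keepdims=True) + 1e-12)
    p = pool_feats / (np.linalg.norm(pool_feats, axis=1, keepdims=True) + 1e-12)
    sims = q @ p.T                                   # (n_query, n_pool) cosine scores
    return np.argsort(-sims, axis=1)[:, :k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    queries = rng.normal(size=(100, 512))            # stand-in for high-level features
    pool = rng.normal(size=(5000, 512))
    print(cosine_nearest_neighbors(queries, pool, k=5).shape)   # (100, 5)
```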
1511.06939 | 35 | Hidasi, B. and Tikk, D. Fast ALS-based tensor factorization for context-aware recommendation from implicit feedback. In ECML-PKDD'12, Part II, number 7524 in LNCS, pp. 67–82. Springer, 2012.
Hidasi, Balázs and Tikk, Domonkos. General factorization framework for context-aware recommendations. Data Mining and Knowledge Discovery, pp. 1–30, 2015. ISSN 1384-5810. doi: 10.1007/s10618-015-0417-y. URL http://dx.doi.org/10.1007/s10618-015-0417-y.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.
Koren, Y. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In SIGKDD'08: ACM Int. Conf. on Knowledge Discovery and Data Mining, pp. 426–434, 2008. | 1511.06939#35 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06856 | 36 | As a final experiment we reinitialize all unsupervised pre-training methods to be properly scaled and compare with our initializations, which use no auxiliary training beyond the proposed initializations. In particular, we take their pretrained network weights and apply the between-layer adjustment described in Section 3.2. (We do not perform local scaling as we find that the activations in these models are already scaled reasonably well locally.) The bottom three rows of Table 3 give our results for our rescaled versions of these models on the VOC classification and detection tasks. We find that for two of the three models (Agrawal et al., 2015; Doersch et al., 2015) this rescaling improves results significantly; our rescaling of Wang & Gupta (2015) on the other hand does not improve its performance, indicating it was likely relatively well-scaled globally to begin with. The best-performing method with auxiliary self-supervision using our rescaled features is that of Doersch et al. (2015) – in this case our rescaling improves its results on the classification task by a relative margin of 18%. This suggests that our method nicely complements existing unsupervised and self-supervised methods and could facilitate easier future exploration of this rich space of methods. | 1511.06856#36 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
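The row above applies only the between-layer adjustment to pretrained weights. For ReLU networks there is a handy degree of freedom: multiplying one layer's weights and bias by a positive constant and dividing the next layer's weights by the same constant leaves the network function unchanged, so the relative scale of the layers can be rebalanced without retraining. The sketch below rebalances per-layer activation magnitudes this way on a toy MLP; the criterion the paper actually equalizes (the change rate of its Section 3.2) is not reproduced here, so treat this as an illustration of the mechanism only.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def between_layer_rescale(layers, x, target: float = 1.0):
    """Rebalance consecutive ReLU layers without changing the network function.

    Scaling layer k (weights and bias) by c and dividing layer k+1's weights
    by c is function-preserving because ReLU is positively homogeneous.
    Here c is chosen so the mean absolute activation after layer k hits `target`.
    """
    for k in range(len(layers) - 1):
        act = torch.relu(layers[k](x))
        c = target / (act.abs().mean() + 1e-12)
        layers[k].weight.mul_(c)
        layers[k].bias.mul_(c)
        layers[k + 1].weight.div_(c)
        x = torch.relu(layers[k](x))              # recompute with the new scale
    return layers

if __name__ == "__main__":
    torch.manual_seed(0)
    layers = [nn.Linear(64, 64) for _ in range(4)]
    with torch.no_grad():                          # deliberately mis-scale the stack
        for lyr in layers:
            lyr.weight.mul_(float(torch.rand(1)) * 3 + 0.1)
    between_layer_rescale(layers, torch.randn(128, 64))
```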
1511.06939 | 36 | Koren, Yehuda, Bell, Robert, and Volinsky, Chris. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
Linden, G., Smith, B., and York, J. Amazon.com recommendations: Item-to-item collaborative filtering. Internet Computing, IEEE, 7(1):76–80, 2003.
Liu, Qiwen, Chen, Tianjian, Cai, Jing, and Yu, Dianhai. Enlister: Baidu's recommender system for the biggest Chinese Q&A website. In RecSys-12: Proc. of the 6th ACM Conf. on Recommender Systems, pp. 285–288, 2012.
Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. In UAI'09: 25th Conf. on Uncertainty in Artificial Intelligence, pp. 452–461, 2009. ISBN 978-0-9749039-5-8. | 1511.06939#36 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06856 | 37 | 4.4 DIFFERENT ARCHITECTURES
Finally we compare our initialization across different architectures, again using PASCAL 2007 classification and detection. We train both the deep architecture of Szegedy et al. (2015) and Simonyan & Zisserman (2015) using our k-means and Gaussian initializations. Unlike prior work we are able
Method | Supervision | Pretraining time | Classification | Detection
---|---|---|---|---
Agrawal et al. (2015) | egomotion | 10 hours | 52.9% | 41.8%
Wang & Gupta (2015) [2] | motion | 1 week | 62.8% | 47.4%
Doersch et al. (2015) | unsupervised | 4 weeks | 55.3% | 46.6%
Krizhevsky et al. (2012) | 1000 class labels | 3 days | 78.2% | 56.8%
Ours (k-means) | initialization | 54 seconds | 56.6% | 45.6%
Ours + Agrawal et al. (2015) | egomotion | 10 hours | 54.2% | 43.9%
Ours + Wang & Gupta (2015) | motion | 1 week | 63.1% | 47.2%
Ours + Doersch et al. (2015) | unsupervised | 4 weeks | 65.3% | 51.1%
| 1511.06856#37 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 37 | Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael S., Berg, Alexander C., and Li, Fei-Fei. Imagenet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014. URL http://arxiv.org/abs/1409.0575.
9
Published as a conference paper at ICLR 2016
Salakhutdinov, Ruslan, Mnih, Andriy, and Hinton, Geoffrey. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pp. 791–798. ACM, 2007.
Sarwar, Badrul, Karypis, George, Konstan, Joseph, and Riedl, John. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, pp. 285–295. ACM, 2001. | 1511.06939#37 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06856 | 38 | # Pretraining time Classiï¬cation Detection
Table 3: Comparison of classification and detection results on the PASCAL VOC2007 test set. ^2 An earlier version of this paper reported 58.4% and 44.0% for the color model of Wang & Gupta; this
version uses the grayscale model, which performs better.
to train those models without any intermediate losses or stage-wise supervised pre-training. We simply add a sigmoid cross-entropy loss to the top of both networks. Unfortunately, neither network outperformed CaffeNet on the classification tasks: GoogLeNet achieves 50.0% and 55.0% mAP for the two initializations respectively, while 16-layer VGG reaches 53.8% and 56.5%. This might have to do with the limited amount of supervised training data available to the model during training. Training was 4 and 12 times slower than for CaffeNet, which made these models prohibitively slow for detection.
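The multi-label head described above is trained with a sigmoid cross-entropy loss; a minimal NumPy sketch of that loss, in its numerically stable form, is given below (illustrative only, not the training code used for these experiments):

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    """Mean multi-label sigmoid cross-entropy.

    logits: (batch, num_classes) raw scores from the classification layer.
    labels: same shape, 0/1 indicators (an image may have several classes).
    Uses the stable form max(x, 0) - x*z + log(1 + exp(-|x|)).
    """
    x, z = logits, labels
    loss = np.maximum(x, 0.0) - x * z + np.log1p(np.exp(-np.abs(x)))
    return loss.mean()

# Toy usage: 2 images, 20 PASCAL VOC classes.
logits = np.random.randn(2, 20)
labels = (np.random.rand(2, 20) > 0.9).astype(np.float64)
print(sigmoid_cross_entropy(logits, labels))
```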
IMAGENET TRAINING | 1511.06856#38 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 38 | Shani, Guy, Brafman, Ronen I, and Heckerman, David. An mdp-based recommender system. In Proceedings of the Eighteenth conference on Uncertainty in artiï¬cial intelligence, pp. 453â460. Morgan Kaufmann Publishers Inc., 2002.
Shi, Yue, Karatzoglou, Alexandros, Baltrunas, Linas, Larson, Martha, Oliver, Nuria, and Hanjalic, Alan. Climf: Learning to maximize reciprocal rank with collaborative less-is-more ï¬ltering. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys â12, pp. 139â146, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1270-7. doi: 10.1145/2365952.2365981. URL http://doi.acm.org/10.1145/2365952.2365981. | 1511.06939#38 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06856 | 39 | IMAGENET TRAINING
Finally, we test our data-dependent initializations on two well-known CNN architectures which have been successfully applied to the ImageNet LSVRC 1000-way classification task: CaffeNet (Jia et al., 2014) and GoogLeNet (Szegedy et al., 2015). We initialize the 1000-way classification layers to 0 in these experiments (except in our reproductions of the reference models), as we find this improves the initial learning velocity.
CaffeNet We train instances of CaffeNet using our initializations, with the architecture and all other hyperparameters set to those used to train the reference model: learning rate 0.01 (dropped by a factor of 0.1 every 10^5 iterations), momentum 0.9, and batch size 256. We also train a variant of the architecture with no local response normalization (LRN) layers.
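For reference, the step learning-rate schedule described above (0.01, dropped by a factor of 0.1 every 10^5 iterations) can be written out as a small helper; this is only an illustration of the schedule, not code from the paper:

```python
def caffenet_step_lr(iteration, base_lr=0.01, gamma=0.1, step_size=100_000):
    """Step schedule: lr = base_lr * gamma ** floor(iteration / step_size)."""
    return base_lr * gamma ** (iteration // step_size)

# 0.01 for the first 100,000 iterations, 0.001 for the next 100,000,
# then 0.0001 after the second drop at iteration 200,000.
for it in (0, 100_000, 200_000):
    print(it, caffenet_step_lr(it))
```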
Our CaffeNet training results are presented in Figure 3. Over the first 100,000 iterations (Figure 3, middle row), and particularly over the first 10,000 (Figure 3, top row), our initializations reduce the network's classification error on both the training and validation sets at a much faster rate than the reference initialization. | 1511.06856#39 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 39 | Steck, Harald. Gaussian ranking by matrix factorization. In Proceedings of the 9th ACM Confer- ence on Recommender Systems, RecSys â15, pp. 115â122, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3692-5. doi: 10.1145/2792838.2800185. URL http://doi.acm.org/ 10.1145/2792838.2800185.
Van den Oord, Aaron, Dieleman, Sander, and Schrauwen, Benjamin. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643-2651, 2013.

Wang, Hao, Wang, Naiyan, and Yeung, Dit-Yan. Collaborative deep learning for recommender systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pp. 1235-1244, New York, NY, USA, 2015. ACM.

Weimer, Markus, Karatzoglou, Alexandros, Le, Quoc Viet, and Smola, Alex. Maximum margin matrix factorization for collaborative ranking. Advances in neural information processing systems, 2007.
10 | 1511.06939#39 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06856 | 40 | With the full 320,000 training iterations, all initializations achieve similar accuracy on the training and validation sets; however, in these experiments the carefully chosen reference initialization pulled non-trivially ahead of our initializations' error after the second learning rate drop to a rate of 10^-4. We do not yet know why this occurs, or whether the difference is significant.
Over the first 100,000 iterations, among models initialized using our method, the k-means initialization reduces the loss slightly faster than the random initialization. Interestingly, the model variant without LRN layers seems to learn just as quickly as the directly comparable network with LRNs, suggesting such normalizations may not be necessary given a well-chosen initialization.
GoogLeNet We apply our best-performing initialization from the CaffeNet experiments, k-means, to a deeper network, GoogLeNet (Szegedy et al., 2015). We use the SGD hyperparameters from the Caffe (Jia et al., 2014) GoogLeNet implementation (specifically, the 'quick' version, which is trained for 2.4 million iterations), and also retrain our own instance of the model with the initialization used in the reference model (based on Glorot & Bengio (2010)). | 1511.06856#40 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 41 | Due to the depth of the architecture (22 layers, compared to CaffeNet's 8) and the difficulty of propagating gradient signal to the early layers of the network, GoogLeNet includes additional "auxiliary classifiers" branching off from intermediate layers of the network to amplify the gradient signal to learn these early layers. To verify that networks initialized using our proposed method should have no problem backpropagating appropriately scaled gradients through all layers of arbitrarily deep networks, we also train a variant of GoogLeNet which omits the two intermediate loss towers, otherwise keeping the rest of the architecture fixed. | 1511.06856#41 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 42 | Our GoogLeNet training results are presented in Figure 4. We plot only the loss of the final classifier for comparability with the single-classifier model. The models initialized with our method learn much faster than the model using the reference initialization strategy. Furthermore, the model trained using only a single classifier learns at roughly the same rate as the original three-loss-tower architecture, and each iteration of training in the single-classifier model is slightly faster due to the removal of layers to compute the additional losses. This result suggests that our initialization could significantly ease exploration of new, deeper CNN architectures, bypassing the need for architectural tweaks like the intermediate losses used to train GoogLeNet.
# 5 DISCUSSION | 1511.06856#42 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 43 | 9
# 5 DISCUSSION
Our method is a conceptually simple data-dependent initialization strategy for CNNs which enforces empirically identically distributed activations locally (within a layer), and roughly uniform global scaling of weight gradients across all layers of arbitrarily deep networks. Our experiments (Section 4) demonstrate that this rescaling of weights results in substantially improved CNN representations for tasks with limited labeled data (as in the PASCAL VOC classification and detection training sets), improves representations learned by existing self-supervised and unsupervised methods, and substantially accelerates the early stages of CNN training on large-scale datasets (e.g., ImageNet). We hope that our initializations will facilitate further advancement in unsupervised and self-supervised learning as well as more efficient exploration of deeper and larger CNN architectures.
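A simplified sketch of the within-layer part of this idea is shown below: given a batch of inputs to one fully connected layer, each unit's weights and bias are rescaled so that its pre-activations are empirically zero-mean and unit-variance. This is only an illustration of the data-dependent rescaling described above, not the paper's exact procedure (which also balances gradient scales across layers):

```python
import numpy as np

def rescale_layer(W, b, X, eps=1e-8):
    """Data-dependent rescaling of one fully connected layer.

    W: (n_out, n_in) weights, b: (n_out,) biases,
    X: (n_samples, n_in) batch of inputs to this layer.
    Returns W, b scaled and shifted so that each unit's pre-activation
    over X has zero mean and unit variance.
    """
    a = X @ W.T + b                      # pre-activations on the sample batch
    mu, sigma = a.mean(axis=0), a.std(axis=0) + eps
    W_new = W / sigma[:, None]           # per-unit scale
    b_new = (b - mu) / sigma             # per-unit shift and scale
    return W_new, b_new

# Example: rescale a randomly initialized layer with 256 sample inputs.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(64, 128)), np.zeros(64)
X = rng.normal(size=(256, 128))
W, b = rescale_layer(W, b, X)
print((X @ W.T + b).std(axis=0)[:5])     # each unit's std is now ~1
```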
# ACKNOWLEDGEMENTS
We thank Alyosha Efros for his input and encouragement; without his 'Gelato bet' most of this work would not have been explored. We thank NVIDIA for their generous GPU donations.
# REFERENCES
Agrawal, Pulkit, Carreira, Joao, and Malik, Jitendra. Learning to see by moving. ICCV, 2015. 7, 8 | 1511.06856#43 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 44 | Agrawal, Pulkit, Carreira, Joao, and Malik, Jitendra. Learning to see by moving. ICCV, 2015. 7, 8
Bradley, David M. Learning in modular systems. Technical report, DTIC Document, 2010. 2
Coates, Adam and Ng, Andrew Y. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pp. 561-580. Springer, 2012. 5
Doersch, Carl, Gupta, Abhinav, and Efros, Alexei A. Unsupervised visual representation learning by context prediction. ICCV, 2015. 6, 8, 11
Everingham, Mark, Eslami, SM Ali, Van Gool, Luc, Williams, Christopher KI, Winn, John, and Zisserman, Andrew. The Pascal Visual Object Classes challenge: A retrospective. IJCV, 111(1): 98-136, 2014. 5, 6
Girshick, Ross. Fast R-CNN. ICCV, 2015. 1, 6
Glorot, Xavier and Bengio, Yoshua. Understanding the difï¬culty of training deep feedforward neural networks. In AISTATS, pp. 249â256, 2010. 2, 7, 8, 9 | 1511.06856#44 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 45 | He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectiï¬ers: Surpass- ing human-level performance on ImageNet classiï¬cation. In ICCV, 2015. 2, 7, 8, 12
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 2, 7
Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross B., Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. In ACM Multimedia, MM, 2014. 5, 6, 7, 9, 12
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. ICLR, 2015. 7
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classiï¬cation with deep con- volutional neural networks. In NIPS, 2012. 2, 6, 8 | 1511.06856#45 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 46 | LeCun, Y., Bottou, L., Orr, G., and Muller, K. Efï¬cient backprop. In Neural Networks: Tricks of the trade. Springer, 1998. 2, 3
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet large scale visual recognition challenge. IJCV, 2015. 1
Figure 2: Comparison of nearest neighbors for the given input image (top row) in the feature spaces of CaffeNet-based CNNs initialized using our method, the fully supervised CaffeNet, an untrained CaffeNet using Gaussian initialization, and three unsupervised or self-supervised methods from prior work. (For Doersch et al. (2015) we display neighbors in fc6 feature space; the rest use the fc7 features.) While our initialization is clearly missing the semantics of CaffeNet, it does preserve some non-specific texture and shape information, which is often enough for meaningful matches. | 1511.06856#46 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 47 | Saxe, Andrew M, McClelland, James L, and Ganguli, Surya. Exact solutions to the nonlinear dy- namics of learning in deep linear neural networks. arXiv preprint, 2013. 2
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. ICLR, 2015. 1, 2, 6, 8
Sussillo, David and Abbot, Larry. Random walk initialization for training very deep feedforward networks. ICLR, 2015. 2
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. CVPR, 2015. 1, 2, 4, 6, 8, 9
Wang, Xiaolong and Gupta, Abhinav. Unsupervised learning of visual representations using videos. ICCV, 2015. 7, 8
Yosinski, Jason, Clune, Jeff, Bengio, Yoshua, and Lipson, Hod. How transferable are features in deep neural networks? In NIPS, 2014. 1
Published as a conference paper at ICLR 2016 | 1511.06856#47 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 48 | 11
[Figure 3 plots: (a) Training loss and (b) Validation loss for the Reference, MSRA, Random (ours), k-means (ours), and k-means no LRN (ours) initializations over 10K, 100K, and 350K iterations.] | 1511.06856#48 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 49 | Figure 3: Training and validation loss curves for the CaffeNet architecture trained for the ILSVRC-2012 classification task. The training error is unsmoothed in the topmost plot (10K); smoothed over one epoch in the others. The validation error is computed over the full validation set every 2000 iterations and is unsmoothed. Our initializations (k-means, Random) handily outperform both the carefully chosen reference initialization (Jia et al., 2014) and the MSRA initialization (He et al., 2015) over the first 100,000 iterations, but the other initializations catch up after the second learning rate drop at iteration 200,000.
[Figure 4 plots: (a) Training loss and (b) Validation loss for the Reference, k-means (ours), and k-means single-loss (ours) models over 2M iterations.] | 1511.06856#49 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06856 | 50 | Figure 4: Training and validation loss curves for the GoogLeNet architecture trained for the ILSVRC-2012 classification task. The training error plot is again smoothed over roughly the length of an epoch; the validation error (computed every 4000 iterations) is unsmoothed. Note that our k-means initializations outperform the reference initialization, and the single loss model (lacking the auxiliary classifiers) learns at roughly the same rate as the model with auxiliary classifiers. The final top-5 validation errors are 11.57% for the reference model, 10.85% for our single loss, and 10.69% for our auxiliary loss model.
12 | 1511.06856#50 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06789 | 0 | arXiv:1511.06789v3 [cs.CV] 18 Oct 2016
# The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition
Jonathan Krause1* Benjamin Sapp2** Andrew Howard3 Howard Zhou3 Alexander Toshev3 Tom Duerig3 James Philbin2** Li Fei-Fei1
# 1Stanford University
# 2Zoox
# 3Google
{jkrause,feifeili}@cs.stanford.edu {bensapp,james}@zoox.com {howarda,howardzhou,toshev,tduerig}@google.com | 1511.06789#0 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 1 | # ABSTRACT
The complexity of deep neural network algorithms for hardware implementation can be much lowered by optimizing the word-length of weights and signals. Direct quantization of floating-point weights, however, does not show good performance when the number of bits assigned is small. Retraining of quantized networks has been developed to relieve this problem. In this work, the effects of quantization are analyzed for a feedforward deep neural network (FFDNN) and a convolutional neural network (CNN) when their network complexity is changed. The complexity of the FFDNN is controlled by varying the unit size in each hidden layer and the number of layers, while that of the CNN is done by modifying the feature map configuration. We find that some performance gap exists between the floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks when the size is not large enough, but the discrepancy almost vanishes in fully complex networks whose capability is limited by the training data, rather than by the number of connections. This research shows that highly complex DNNs have the capability of absorbing the effects of severe weight quantization through retraining, but connection limited networks are less resilient. This paper also presents the effective compression ratio to guide the trade-off between the network size and the precision when the hardware resource is limited. | 1511.06488#1 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 1 | cal machine translation, and we investigate the use of monolingual data for NMT.
Neural Machine Translation (NMT) has obtained state-of-the-art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic back-translation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English->German. | 1511.06709#1 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 1 | Abstract. Current approaches for fine-grained recognition do the following: First, recruit experts to annotate a dataset of images, optionally also collecting more structured data in the form of part annotations and bounding boxes. Second, train a model utilizing this data. Toward the goal of solving fine-grained recognition, we introduce an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition. This approach has benefits in both performance and scalability. We demonstrate its efficacy on four fine-grained datasets, greatly exceeding existing state of the art without the manual collection of even a single label, and furthermore show first results at scaling to more than 10,000 fine-grained categories. Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using their annotated training sets. We compare our approach to an active learning approach for expanding fine-grained datasets.
# 1 Introduction | 1511.06789#1 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 2 | # INTRODUCTION
Deep neural networks (DNNs) are beginning to find many real-time applications, such as speech recognition, autonomous driving, gesture recognition, and robotic control (Sak et al., 2015; Chen et al., 2015; Jalab et al., 2015; Corradini et al., 2015). Although most deep neural networks are implemented using GPUs (Graphics Processing Units) these days, their implementation in hardware can give many benefits in terms of power consumption and system size (Ovtcharov et al., 2015). FPGA-based implementation examples of CNNs show more than a 10 times advantage in power consumption (Ovtcharov et al., 2015). | 1511.06488#2 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 2 | Language models trained on monolingual data have played a central role in statistical machine translation since the first IBM models (Brown et al., 1990). There are two major reasons for their importance. Firstly, word-based and phrase-based translation models make strong independence assumptions, with the probability of translation units estimated independently from context, and language models, by making different independence assumptions, can model how well these translation units fit together. Secondly, the amount of available monolingual data in the target language typically far exceeds the amount of parallel data, and models typically improve when trained on more data, or data more similar to the translation task.
In encoder-decoder architectures for translation (Sutskever et al., 2014; Bahdanau et al., 2015), the decoder is essentially an RNN language model that is also conditioned on source context, so the first rationale, adding a language model to compensate for the independence assumptions of the translation model, does not apply. However, the data argument is still valid in NMT, and we expect monolingual data to be especially helpful if parallel data is sparse, or a poor fit for the translation task, for instance because of a domain mismatch.
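The approach investigated here pairs target-side monolingual sentences with automatic back-translations and treats the result as additional parallel data. A schematic sketch of that data construction is shown below; `translate_to_source` is a hypothetical stand-in for a separately trained target-to-source translation system:

```python
def add_backtranslated_data(parallel, mono_target, translate_to_source):
    """Augment parallel training data with synthetic pairs built from
    target-side monolingual text.

    parallel:            list of (source_sentence, target_sentence) pairs
    mono_target:         list of target-language sentences
    translate_to_source: callable mapping a target sentence to a synthetic
                         source sentence (back-translation model; assumed)
    """
    synthetic = [(translate_to_source(t), t) for t in mono_target]
    # The synthetic source side is machine-generated, but the target side is
    # human-written, so the decoder is still trained on fluent target text.
    return parallel + synthetic
```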
# 1 Introduction | 1511.06709#2 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 2 | # 1 Introduction
Fine-grained recognition refers to the task of distinguishing very similar categories, such as breeds of dogs [27,37], species of birds [60,58,5,4], or models of cars [70,30]. Since its inception, great progress has been made, with accuracies on the popular CUB-200-2011 bird dataset [60] steadily increasing from 10.3% [60] to 84.6% [69].
The predominant approach in fine-grained recognition today consists of two steps. First, a dataset is collected. Since fine-grained recognition is a task inherently difficult for humans, this typically requires either recruiting a team of experts [58,38] or extensive crowd-sourcing pipelines [30,4]. Second, a method for recognition is trained using these expert-annotated labels, possibly also requiring additional annotations in the form of parts, attributes, or relationships [75,26,36,5]. While methods following this approach have shown some success [5,75,36,28], their performance and scalability is constrained by the paucity
* Work done while J. Krause was interning at Google. ** Work done while B. Sapp and J. Philbin were at Google.
2 Krause et al. | 1511.06789#2 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 3 | Neural network algorithms employ many multiply and add (MAC) operations that mimic the operations of biological neurons. This suggests that reconfigurable hardware arrays containing fairly homogeneous hardware blocks, such as MAC units, can give a very efficient solution for real-time neural network system design. Early studies on word-length determination of neural networks reported a needed precision of at least 8 bits (Holt & Baker, 1991). Our recent works show that the precision required for implementing an FFDNN, CNN or RNN need not be very high, especially when the quantized networks are trained again to learn the effects of the lowered precision. In the fixed-point optimization examples shown in Hwang & Sung (2014); Anwar et al. (2015); Shin et al. (2015), neural networks with ternary weights showed quite good performance, close to that of floating-point arithmetic.
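A minimal sketch of threshold-based ternary weight quantization of the kind referred to above is given below; the 0.7 threshold rule is an assumption for illustration, and the exact quantizer and retraining loop in the cited works may differ:

```python
import numpy as np

def quantize_ternary(W, threshold_ratio=0.7):
    """Map floating-point weights to {-delta, 0, +delta}.

    Weights with |w| below the threshold are zeroed; the rest keep their
    sign and share a common magnitude delta (mean |w| of surviving weights).
    """
    threshold = threshold_ratio * np.abs(W).mean()
    mask = np.abs(W) >= threshold
    delta = np.abs(W[mask]).mean() if mask.any() else 0.0
    return np.sign(W) * mask * delta

# In retrain-based quantization, the forward pass uses quantize_ternary(W)
# while a high-precision copy of W keeps accumulating the gradient updates.
W = np.random.randn(4, 4)
print(quantize_ternary(W))
```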
In this work, we examine whether retraining can recover the performance of an FFDNN and a CNN under quantization with only ternary (+1, 0, -1) levels or 3 bits (+3, +2, +1, 0, -1, -2, -3) for weight
# Under review as a conference paper at ICLR 2016 | 1511.06488#3 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
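The ternary (+1, 0, -1) weight representation discussed in the chunk above can be illustrated with a short NumPy sketch. This is a minimal illustration under assumptions of my own (the threshold rule delta_ratio * mean(|w|) and the per-layer scale are illustrative choices), not the exact quantizer used in the cited works.

```python
import numpy as np

def quantize_ternary(w, delta_ratio=0.7):
    """Map floating-point weights to {-1, 0, +1} plus a per-layer scale.

    delta_ratio is an assumed hyperparameter: weights whose magnitude falls
    below delta_ratio * mean(|w|) are set to zero.
    """
    delta = delta_ratio * np.abs(w).mean()
    ternary = np.sign(w) * (np.abs(w) > delta)          # entries in {-1, 0, +1}
    # Per-layer scale chosen to match the magnitude of the surviving weights
    # (an assumption for this sketch, not the cited papers' rule).
    nonzero = np.abs(w)[np.abs(w) > delta]
    scale = nonzero.mean() if nonzero.size else 1.0
    return ternary, scale

# Example: quantize a random 4x4 weight matrix.
w = np.random.randn(4, 4) * 0.1
q, s = quantize_ternary(w)
print(q)   # ternary weight matrix
print(s)   # per-layer scaling factor
```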
1511.06709 | 3 | # 1 Introduction
Neural Machine Translation (NMT) has obtained state-of-the-art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which integrates a separately trained RNN language model into the NMT model (Gülçehre et al., 2015), we explore strategies to include monolingual training data in the training process without changing the neural network architecture. This makes our approach applicable to different NMT architectures. | 1511.06709#3 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 3 | Fig. 1. There are more than 14,000 species of birds in the world. In this work we show that using noisy data from publicly-available online sources can not only improve recognition of categories in today's datasets, but also scale to very large numbers of fine-grained categories, which is extremely expensive with the traditional approach of manually collecting labels for fine-grained datasets. Here we show 4,225 of the 10,982 categories recognized in this work.
of data available due to these limitations. With this traditional approach it is prohibitive to scale up to all 14,000 species of birds in the world (Fig. 1), 278,000 species of butterflies and moths, or 941,000 species of insects [24]. | 1511.06789#3 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 4 | representation. Note that bias values are not quantized. For this study, the network complexity is changed to analyze its effect on the performance gap between floating-point and retrained low-precision fixed-point deep neural networks.
We conduct our experiments with a feed-forward deep neural network (FFDNN) for phoneme recognition and a convolutional neural network (CNN) for image classification. To control the network size, not only the number of units in each layer but also the number of hidden layers is varied in the FFDNN. For the CNN, the number of feature maps for each layer and the number of layers are both changed. The FFDNN uses the TIMIT corpus and the CNN employs the CIFAR-10 dataset. We also propose a metric called the effective compression ratio (ECR) for comparing extremely quantized bigger networks with moderately quantized or floating-point networks of smaller size. This analysis intends to give insight into the knowledge representation capability of highly quantized networks, and also provides a guideline for network size and word-length determination for efficient hardware implementation of DNNs.
# 2 RELATED WORK | 1511.06488#4 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 4 | The research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland.
The main contributions of this paper are as follows:
• we show that we can improve the machine translation quality of NMT systems by mixing monolingual target sentences into the training set.
• we investigate two different methods to fill the source side of monolingual training instances: using a dummy source sentence, and using a source sentence obtained via back-translation, which we call synthetic. We find that the latter is more effective.
• we successfully adapt NMT models to a new domain by fine-tuning with either monolingual or parallel in-domain data.
# 2 Neural Machine Translation
We follow the neural machine translation architecture by Bahdanau et al. (2015), which we will briefly summarize here. However, we note that our approach is not specific to this architecture.
The neural machine translation system is implemented as an encoder-decoder network with recurrent neural networks. | 1511.06709#4 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 4 | In this paper, we show that it is possible to train effective models of fine-grained recognition using noisy data from the web and simple, generic methods of recognition [55,54]. We demonstrate recognition abilities greatly exceeding current state of the art methods, achieving top-1 accuracies of 92.3% on CUB-200-2011 [60], 85.4% on Birdsnap [4], 93.4% on FGVC-Aircraft [38], and 80.8% on Stanford Dogs [27] without using a single manually-annotated training label from the respective datasets. On CUB, this is nearly at the level of human experts [6,58]. Building upon this, we scale up the number of fine-grained classes recognized, reporting first results on over 10,000 species of birds and 14,000 species of butterflies and moths. | 1511.06789#4 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 5 | # 2 RELATED WORK
Fixed-point implementation of signal processing algorithms has long been of interest for VLSI-based design of multimedia and communication systems. Some early works used statistical modeling of quantization noise for application to linear digital filters. The simulation-based word-length optimization method utilized simulation tools to evaluate the fixed-point performance of a system, by which non-linear algorithms can be optimized (Sung & Kum, 1995). Ternary (+1, 0, -1) coefficient based digital filters were used to eliminate multiplications at the cost of higher quantization noise. The implementation of adaptive filters with ternary weights was developed, but it demanded oversampling to remove the quantization effects (Hussain et al., 2007). | 1511.06488#5 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 5 | The neural machine translation system is implemented as an encoder-decoder network with recurrent neural networks.
The encoder is a bidirectional neural network with gated recurrent units (Cho et al., 2014) that reads an input sequence x = (x1, ..., xm) and calculates a forward sequence of hidden states (→h1, ..., →hm), and a backward sequence (←h1, ..., ←hm). The hidden states →hj and ←hj are concatenated to obtain the annotation vector hj.
The decoder is a recurrent neural network that predicts a target sequence y = (y1, ..., yn). Each word yi is predicted based on a recurrent hidden state si, the previously predicted word yi−1, and a context vector ci. ci is computed as a weighted sum of the annotations hj. The weight of each annotation hj is computed through an alignment model αij, which models the probability that yi is aligned to xj. The alignment model is a single-layer feedforward neural network that is learned jointly with the rest of the network through backpropagation.
A detailed description can be found in (Bahdanau et al., 2015). Training is performed on a parallel corpus with stochastic gradient descent. For translation, a beam search with small beam size is employed.
# 3 NMT Training with Monolingual Training Data | 1511.06709#5 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
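The context-vector computation described in the chunk above can be sketched in a few lines of NumPy. This is an illustrative sketch of the general attention mechanism of Bahdanau et al. (2015), not the Groundhog implementation used in the paper; the dimensions and the parameter names Wa, Ua, va are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def context_vector(s_prev, annotations, Wa, Ua, va):
    """Compute c_i = sum_j alpha_ij * h_j for one decoder step.

    s_prev      : previous decoder hidden state s_{i-1}, shape (d_s,)
    annotations : h_1..h_m stacked, shape (m, d_h)
    Wa, Ua, va  : parameters of the single-layer feedforward alignment model
    """
    # One alignment score per source position: e_ij = va^T tanh(Wa s_{i-1} + Ua h_j)
    scores = np.array([va @ np.tanh(Wa @ s_prev + Ua @ h) for h in annotations])
    alpha = softmax(scores)          # alignment weights alpha_ij
    return alpha @ annotations       # weighted sum of the annotations

# Tiny example with random parameters.
m, d_h, d_s, d_a = 5, 8, 6, 7
rng = np.random.default_rng(0)
h = rng.normal(size=(m, d_h))
c = context_vector(rng.normal(size=d_s), h,
                   rng.normal(size=(d_a, d_s)),
                   rng.normal(size=(d_a, d_h)),
                   rng.normal(size=d_a))
print(c.shape)  # (8,)
```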
1511.06789 | 5 | The rest of this paper proceeds as follows: After an overview of related work in Sec. 2, we provide an analysis of publicly-available noisy data for fine-grained recognition in Sec. 3, analyzing its quantity and quality. We describe a more traditional active learning approach for obtaining larger quantities of fine-grained data in Sec. 4, which serves as a comparison to purely using noisy data. We present extensive experiments in Sec. 5, and conclude with discussion in Sec. 6.
# 2 Related Work
Fine-Grained Recognition. The majority of research in fine-grained recognition has focused on developing improved models for classification [1,3,5,7,9,8,14,16,18,20,21,22,28,29,36,37,41,42,49,51,50,66,68,69,71,73,72,76,77,75,78].
| 1511.06789#5 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 6 | Fixed-point neural network design has also been studied with the same purpose of reducing the hardware implementation cost (Moerland & Fiesler, 1997). In Holt & Baker (1991), back propagation simulation with 16-bit integer arithmetic was conducted for several problems, such as NetTalk, Parity, Protein, and so on. This work conducted the experiments while changing the number of hidden units, which was, however, a relatively small number. The integer simulations showed quite good results for the NetTalk and Parity benchmarks, but not for the Protein benchmark. With direct quantization of trained weights, this work also confirmed satisfactory operation of neural networks with 8-bit precision. An implementation with ternary weights was reported for neural network design with optical fiber networks (Fiesler et al., 1990). In this ternary network design, the authors employed retraining after direct quantization to improve the performance of a shallow network. | 1511.06488#6 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
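The retraining-after-direct-quantization idea that recurs throughout these chunks can be sketched as a generic loop: floating-point "shadow" weights are kept, the forward/backward pass uses the quantized copy, and the update is applied to the float weights. The details below (the gradient callback, learning rate, and the ternary quantizer with its folded-in scale) are illustrative assumptions, not the exact procedure of the cited works.

```python
import numpy as np

def quantize_ternary(w, delta_ratio=0.7):
    """Illustrative ternary quantizer with the scale folded into the result."""
    delta = delta_ratio * np.abs(w).mean()
    ternary = np.sign(w) * (np.abs(w) > delta)
    nonzero = np.abs(w)[np.abs(w) > delta]
    scale = nonzero.mean() if nonzero.size else 1.0
    return scale * ternary

def retrain_quantized(w_float, grad_fn, lr=1e-3, steps=1000):
    """Quantization-aware retraining loop (a sketch, not the papers' code).

    grad_fn(w_quantized) is assumed to return the loss gradient evaluated with
    the quantized weights; the update is applied to the float shadow weights.
    """
    for _ in range(steps):
        w_q = quantize_ternary(w_float)   # forward/backward with quantized weights
        g = grad_fn(w_q)
        w_float = w_float - lr * g        # update the float "shadow" weights
    return quantize_ternary(w_float)      # deploy the final quantized weights

# Toy usage: fit a ternary weight vector to a least-squares target.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 8)), rng.normal(size=100)
grad = lambda w: 2 * X.T @ (X @ w - y) / len(y)
w_deployed = retrain_quantized(rng.normal(size=8) * 0.1, grad, lr=0.05, steps=500)
print(w_deployed)
```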
1511.06709 | 6 | # 3 NMT Training with Monolingual Training Data
In machine translation, more monolingual data (or monolingual data more similar to the test set)
serves to improve the estimate of the prior probability p(T) of the target sentence T, before taking the source sentence S into account. In contrast to (Gülçehre et al., 2015), who train separate language models on monolingual training data and incorporate them into the neural network through shallow or deep fusion, we propose techniques to train the main NMT model with monolingual data, exploiting the fact that encoder-decoder neural networks already condition the probability distribution of the next target word on the previous target words. We describe two strategies to do this: providing monolingual training examples with an empty (or dummy) source sentence, or providing monolingual training data with a synthetic source sentence that is obtained from automatically translating the target sentence into the source language, which we will refer to as back-translation.
# 3.1 Dummy Source Sentences | 1511.06709#6 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 6 |
While these works have made great progress in modeling ï¬ne-grained categories given the limited data available, very few works have considered the impact of that data [69,68,58]. Xu et al. [69] augment datasets annotated with category labels and parts with web images in a multiple instance learning framework, and Xie et al. [68] do multitask training, where one task uses a ground truth ï¬ne- grained dataset and the other does not require ï¬ne-grained labels. While both of these methods have shown that augmenting ï¬ne-grained datasets with addi- tional data can help, in our work we present results which completely forgo the use of any curated ground truth dataset. In one experiment hinting at the use of noisy data, Van Horn et al. [58] show the possibility of learning 40 bird classes from Flickr images. Our work validates and extends this idea, using similar intu- ition to signiï¬cantly improve performance on existing ï¬ne-grained datasets and scale ï¬ne-grained recognition to over ten thousand categories, which we believe is necessary in order to fully explore the research direction. | 1511.06789#6 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 7 | Recently, fixed-point design of DNNs has been revisited, and FFDNNs and CNNs with ternary weights show quite good performance, very close to the floating-point results. The ternary-weight-based FFDNN and CNN are used for VLSI and FPGA based implementations, by which the algorithms can operate with only on-chip memory, consuming very low power (Kim et al., 2014). Binary-weight-based deep neural network design has also been studied (Courbariaux et al., 2015). Pruned floating-point weights are also utilized for efficient GPU based implementations, where small-valued weights are forced to zero to reduce the number of arithmetic operations and the memory space for weight storage (Yu et al., 2012b; Han et al., 2015). A network restructuring technique using singular value decomposition has also been studied (Xue et al., 2013; Rigamonti et al., 2013).
# 3 FIXED-POINT FFDNN AND CNN DESIGN
This section explains the design of the FFDNN and the CNN with varying network complexity, and also the fixed-point optimization procedure.
3.1 FFDNN AND CNN DESIGN | 1511.06488#7 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
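As a concrete illustration of the SVD-based restructuring mentioned at the end of the chunk above, the sketch below factors one dense weight matrix into two thinner matrices. The rank choice and the error check are illustrative assumptions; the cited works choose ranks per layer based on accuracy/complexity trade-offs.

```python
import numpy as np

def svd_restructure(W, rank):
    """Replace a dense layer W (out x in) by two layers A (out x rank) and
    B (rank x in) so that A @ B approximates W.  The parameter count drops
    from out*in to rank*(out+in) when the rank is small enough."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]        # (out, rank), singular values folded in
    B = Vt[:rank, :]                  # (rank, in)
    return A, B

W = np.random.randn(1024, 1024)
A, B = svd_restructure(W, rank=128)
print(W.size, A.size + B.size)                        # 1048576 vs 262144 parameters
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative approximation error
```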
1511.06709 | 7 | # 3.1 Dummy Source Sentences
The first technique we employ is to treat monolingual training examples as parallel examples with empty source side, essentially adding training examples whose context vector ci is uninformative, and for which the network has to fully rely on the previous target words for its prediction. This could be conceived as a form of dropout (Hinton et al., 2012), with the difference that the training instances that have the context vector dropped out constitute novel training data. We can also conceive of this setup as multi-task learning, with the two tasks being translation when the source is known, and language modelling when it is unknown.
During training, we use both parallel and monolingual training examples in the ratio 1-to-1, and randomly shuffle them. We define an epoch as one iteration through the parallel data set, and resample from the monolingual data set for every epoch. We pair monolingual sentences with a single-word dummy source side <null> to allow processing of both parallel and monolingual training examples with the same network graph.1 For monolingual minibatches2, we freeze the network parameters of the encoder and the attention model. | 1511.06709#7 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
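A minimal sketch of the dummy-source scheme in the record above: monolingual target sentences are paired with a single <null> source token, mixed 1-to-1 with parallel data, and a flag marks minibatches for which the encoder and attention parameters would be frozen. The data structures and function names are assumptions for illustration, not the Groundhog implementation, and a batch is flagged for freezing only when it is entirely monolingual (a simplification of the paper's length-sorted grouping).

```python
import random

def build_mixed_batches(parallel, monolingual, batch_size=20):
    """parallel: list of (source, target) pairs; monolingual: list of target sentences.
    Yields (batch, freeze_encoder); freeze_encoder marks all-monolingual minibatches,
    for which only the decoder parameters would be updated."""
    mono = [("<null>", t) for t in random.sample(monolingual, len(parallel))]  # 1-to-1 ratio
    examples = [(ex, False) for ex in parallel] + [(ex, True) for ex in mono]
    random.shuffle(examples)
    for i in range(0, len(examples), batch_size):
        chunk = examples[i:i + batch_size]
        pairs = [ex for ex, _ in chunk]
        freeze_encoder = all(is_mono for _, is_mono in chunk)
        yield pairs, freeze_encoder

# Toy usage.
par = [("ein Haus", "a house"), ("ein Hund", "a dog")]
mono = ["a cat sits", "the sun shines", "it rains"]
for batch, freeze in build_mixed_batches(par, mono, batch_size=2):
    print(freeze, batch)
```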
1511.06789 | 7 | Considerable work has also gone into the challenging task of curating fine-grained datasets [4,58,27,30,31,59,65,60,70] and developing interactive methods for recognition with a human in the loop [6,62,61,63]. While these works have demonstrated effective strategies for collecting images of fine-grained categories, their scalability is ultimately limited by the requirement of manual annotation. Our work provides an alternative to these approaches.
Learning from Noisy Data. Our work is also inspired by methods that pro- pose to learn from web data [15,10,11,45,34,19] or reason about label noise [39,67,58,52,43]. Works that use web data typically focus on detection and classiï¬cation of a set of coarse-grained categories, but have not yet examined the ï¬ne-grained setting. Methods that reason about label noise have been divided in their results: some have shown that reasoning about label noise can have a substantial eï¬ect on recognition performance [66], while others demonstrate little change from re- ducing the noise level or having a noise-aware model [52,43,58]. In our work, we demonstrate that noisy data can be surprisingly eï¬ective for ï¬ne-grained recognition, providing evidence in support of the latter hypothesis. | 1511.06789#7 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 8 | This section explains the design of the FFDNN and the CNN with varying network complexity, and also the fixed-point optimization procedure.
3.1 FFDNN AND CNN DESIGN
A feedforward deep neural network with multiple hidden layers is depicted in Figure 1. Each layer k has a signal vector yk, which is propagated to the next layer by multiplying by the weight matrix Wk+1, adding the biases bk+1, and applying the activation function φk+1(·) as follows:
yk+1 = φk+1(Wk+1 yk + bk+1). (1)
Figure 1: Feed-forward deep neural network with 4 hidden layers.
Figure 2: CNN structure with 3 convolution layers and 1 fully-connected layer.
One of the most popular activation functions is the rectified linear unit, defined as
ReLU(x) = max(0, x). (2) | 1511.06488#8 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
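Equation (1) in the record above describes the layer-wise propagation yk+1 = φk+1(Wk+1 yk + bk+1), and Equation (2) defines ReLU. The NumPy sketch below applies Eq. (1) with ReLU as the activation at every layer; in the actual networks the output layer uses softmax units, so using ReLU throughout is a simplification to keep the sketch short.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def ffdnn_forward(y0, weights, biases, activation=relu):
    """Propagate a signal through the layers of Eq. (1):
    y_{k+1} = phi_{k+1}(W_{k+1} y_k + b_{k+1})."""
    y = y0
    for W, b in zip(weights, biases):
        y = activation(W @ y + b)
    return y

# Tiny example: 4 inputs -> 3 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(3, 4)), rng.normal(size=(2, 3))]
bs = [np.zeros(3), np.zeros(2)]
print(ffdnn_forward(rng.normal(size=4), Ws, bs))
```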
1511.06709 | 8 | One problem with this integration of monolingual data is that we cannot arbitrarily increase the ratio of monolingual training instances, or fine-tune a model with only monolingual training data, because different output layer parameters are optimal for the two tasks, and the network 'unlearns' its conditioning on the source context if the ratio of monolingual training instances is too high.
1 One could force the context vector ci to be 0 for monolingual training instances, but we found that this does not solve the main problem with this approach, discussed below.
2 For efficiency, Bahdanau et al. (2015) sort sets of 20 minibatches according to length. This also groups monolingual training instances together.
# 3.2 Synthetic Source Sentences
To ensure that the output layer remains sensitive to the source context, and that good parameters are not unlearned from monolingual data, we propose to pair monolingual training instances with a synthetic source sentence from which a context vector can be approximated. We obtain these through back-translation, i.e. an automatic translation of the monolingual target text into the source language. | 1511.06709#8 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
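The back-translation strategy in the record above can be sketched as a small data-preparation step: each monolingual target sentence is paired with a synthetic source produced by an existing target-to-source system, and the resulting pairs are mixed with the human-translated parallel data without distinction. The `backtranslate` callable is a stand-in for such a system (in the paper, a separately trained NMT model), not a real API.

```python
import random

def make_synthetic_parallel(mono_targets, backtranslate):
    """Pair each monolingual target sentence with a synthetic source obtained
    by back-translating it into the source language."""
    return [(backtranslate(t), t) for t in mono_targets]

def mix_training_data(parallel, synthetic):
    """Mix human-translated and synthetic pairs without distinguishing them."""
    data = list(parallel) + list(synthetic)
    random.shuffle(data)
    return data

# Toy usage with a dummy back-translator standing in for a target->source NMT system.
mono = ["das Haus ist alt", "der Hund schläft"]
synthetic = make_synthetic_parallel(mono, backtranslate=lambda t: "<backtranslation of: %s>" % t)
training = mix_training_data([("the house", "das Haus")], synthetic)
print(training)
```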
1511.06789 | 8 | # 3 Noisy Fine-Grained Data
In this section we provide an analysis of the imagery publicly available for fine-grained recognition, which we collect via web search.1 We describe its quantity, distribution, and levels of noise, reporting each on multiple fine-grained domains.
# 3.1 Categories
We consider four domains of fine-grained categories: birds, aircraft, Lepidoptera (a taxonomic order including butterflies and moths), and dogs. For birds and
1 Google image search: http://images.google.com
Fig. 2. Distributions of the number of images per category available via image search for the categories in CUB, Birdsnap, and L-Bird (far left), FGVC and L-Aircraft (mid- dle left), and L-Butterï¬y (middle right). At far right we aggregate and plot the average number of images per category in each dataset in addition to the training sets of each curated dataset we consider, denoted CUB-GT, Birdsnap-GT, and FGVC-GT. | 1511.06789#8 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 9 | In this work, an FFDNN for phoneme recognition is used. The reference DNN has four hidden layers. Each of the hidden layers has Nh units; the value of Nh is changed to control the complexity of the network. We conduct experiments with Nh sizes of 32, 64, 128, 256, 512, and 1024. The number of hidden layers is also reduced. The input layer of the network has 1,353 units to accept 11 frames of a Fourier-transform-based filter-bank with 40 coefficients (+energy) distributed on a mel-scale, together with their first and second temporal derivatives. The output layer consists of 61 softmax units which correspond to 61 target phoneme labels. Phoneme recognition experiments were performed on the TIMIT corpus. The standard 462-speaker set with all SA records removed was used for training, and a separate development set of 50 speakers was used for early stopping. Results are reported for the 24-speaker core test set. The network was trained using a backpropagation algorithm with a 128 mini-batch size. The initial learning rate was 10−5 and it was decreased to 10−7 during training. Momentum was 0.9 and RMSProp was adopted for the weight updates (Tieleman & Hinton, 2012). The dropout technique was employed with a 0.2 dropout rate in each layer. | 1511.06488#9 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
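A minimal PyTorch sketch of the reference FFDNN configuration described in the record above (1,353 inputs from 11 frames of 123 filter-bank features, four hidden layers of Nh units, 61 softmax outputs, dropout 0.2). The choice of ReLU as the hidden activation is an assumption, and the optimizer and learning-rate schedule from the text are omitted.

```python
import torch.nn as nn

def build_ffdnn(n_hidden_units=1024, n_hidden_layers=4,
                n_inputs=1353, n_outputs=61, dropout=0.2):
    """Build the phoneme-recognition FFDNN skeleton described above."""
    layers, width = [], n_inputs
    for _ in range(n_hidden_layers):
        layers += [nn.Linear(width, n_hidden_units), nn.ReLU(), nn.Dropout(dropout)]
        width = n_hidden_units
    layers += [nn.Linear(width, n_outputs)]   # softmax is applied by the loss function
    return nn.Sequential(*layers)

# Compare parameter counts for two of the hidden-layer sizes used in the experiments.
for nh in (128, 1024):
    model = build_ffdnn(n_hidden_units=nh)
    print(nh, sum(p.numel() for p in model.parameters()))
```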
1511.06709 | 9 | During training, we mix synthetic parallel text into the original (human-translated) parallel text and do not distinguish between the two: no network parameters are frozen. Importantly, only the source side of these additional training examples is synthetic, and the target side comes from the monolingual corpus.
# 4 Evaluation
We evaluate NMT training on parallel data, and with additional monolingual data, using training and test data for English↔German (WMT 15), English→German (IWSLT 15), and Turkish→English.
4.1 Data and Methods We use Groundhog3, the implementation of (Bahdanau et al., 2015; Jean et al., 2015a), for our experiments. We generally follow the settings and training procedure described by Sennrich et al. (2016).
For English→German, we report case-sensitive BLEU on detokenized text with mteval-v13a.pl for comparison to official WMT and IWSLT results. For Turkish→English, we report case-sensitive BLEU on tokenized text with multi-bleu.perl for comparison to results by Gülçehre et al. (2015).
Gülçehre et al. (2015) determine the network vocabulary based on the parallel training data,
# 3github.com/sebastien-j/LV_groundhog | 1511.06709#9 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 9 | Lepidoptera, we obtained lists of fine-grained categories from Wikipedia, resulting in 10,982 species of birds and 14,553 species of Lepidoptera, denoted L-Bird ("Large Bird") and L-Butterfly. For aircraft, we assembled a list of 409 types of aircraft by hand (including aircraft in the FGVC-Aircraft [38] dataset, abbreviated FGVC). For dogs, we combine the 120 dog breeds in Stanford Dogs [27] with 395 other categories to obtain the 515-category L-Dog. We evaluate on two other fine-grained datasets in addition to FGVC and Stanford Dogs: CUB-200-2011 [60] and Birdsnap [4], for a total of four evaluation datasets. CUB and Birdsnap include 200 and 500 species of common birds, respectively, FGVC has 100 aircraft variants, and Stanford Dogs contains 120 breeds of dogs. In this section we focus our analysis on the categories in L-Bird, L-Butterfly, and L-Aircraft in addition to the categories in their evaluation datasets.
# 3.2 Images from the Web | 1511.06789#9 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 10 | The CNN used is for the CIFAR-10 dataset. It contains a training set of 50,000 and a test set of 10,000 32×32 RGB color images representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks. We divided the training set into 40,000 images for training and 10,000 images for validation. This CNN has 3 convolution and pooling layers and a fully connected hidden layer with 64 units, and the output has 10 softmax units as shown in Figure 2. We control the number of feature maps in each convolution layer. The reference size has 32-32-64 feature maps with a 5 by 5 kernel size as used in Krizhevsky (2014). We did not perform any preprocessing or data augmentation such as ZCA whitening and global contrast normalization. To know the effects of network size variation, the number of feature maps is reduced or increased. The configurations of the feature maps used for the experiments are 8-8-16, 16-16-32, 32-32-64, 64-64-128, 96-96-192, and 128-128-256. The number of feature map layers is also changed, resulting in 32-32-64, 32-64,
| 1511.06488#10 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
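The CIFAR-10 CNN described in the record above (three 5×5 convolution-plus-pooling stages with a configurable number of feature maps, a 64-unit fully connected layer, and 10 softmax outputs) can be sketched in PyTorch as follows. Padding, the pooling type, and ReLU are assumptions made only so the sketch runs end to end.

```python
import torch
import torch.nn as nn

def build_cifar_cnn(feature_maps=(32, 32, 64), fc_units=64, n_classes=10):
    """CNN skeleton with a configurable feature-map configuration."""
    layers, in_ch = [], 3
    for out_ch in feature_maps:
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]          # spatial size 32 -> 16 -> 8 -> 4
        in_ch = out_ch
    layers += [nn.Flatten(),
               nn.Linear(feature_maps[-1] * 4 * 4, fc_units), nn.ReLU(),
               nn.Linear(fc_units, n_classes)]   # softmax applied by the loss
    return nn.Sequential(*layers)

# Instantiate a few of the feature-map configurations listed in the text.
for config in [(8, 8, 16), (16, 16, 32), (32, 32, 64), (128, 128, 256)]:
    model = build_cifar_cnn(config)
    print(config, model(torch.zeros(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```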
1511.06709 | 10 | Gülçehre et al. (2015) determine the network vocabulary based on the parallel training data,
# 3github.com/sebastien-j/LV_groundhog
dataset sentences
WMTparallel 4 200 000
WITparallel 200 000
WMTmono_de 160 000 000
WMTsynth_de 3 600 000
WMTmono_en 118 000 000
WMTsynth_en 4 200 000
Table 1: English→German training data.
and replace out-of-vocabulary words with a special UNK symbol. They remove monolingual sentences with more than 10% UNK symbols. In contrast, we represent unseen words as sequences of subword units (Sennrich et al., 2016), and can represent any additional training data with the existing network vocabulary that was learned on the parallel data. In all experiments, the network vocabulary remains fixed.
4.1.1 English→German We use all parallel training data provided by WMT 2015 (Bojar et al., 2015)4. We use the News Crawl corpora as additional training data for the experiments with monolingual data. The amount of training data is shown in Table 1. | 1511.06709#10 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06488 | 11 | 3
and 64 map configurations. Note that the fully connected layer in the CNN is not changed. The network was trained using a backpropagation algorithm with a 128 mini-batch size. The initial learning rate was 0.001 and it was decreased to 10^-8 during the training procedure. Momentum was 0.8 and RMSProp was applied for the weight updates.
3.2 FIXED-POINT OPTIMIZATION OF DNNS
Reducing the word-length of weights brings several advantages in hardware-based implementation of neural networks. First, it lowers the arithmetic precision, and thereby reduces the number of gates needed for multipliers. Second, the size of memory for storing weights is minimized, which would be a big advantage when keeping them on a chip, instead of external DRAM or NAND flash memory. Note that FFDNNs and recurrent neural networks demand a very large number of weights. Third, the reduced arithmetic precision or minimization of off-chip memory accesses leads to low power consumption. However, we need to be concerned about the quantization effects that degrade the system performance. | 1511.06488#11 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 11 | Baseline models are trained for a week. Ensembles are sampled from the last 4 saved models of training (saved at 12h-intervals). Each model is fine-tuned with fixed embeddings for 12 hours.
For the experiments with synthetic parallel data, we back-translate a random sample of 3 600 000 sentences from the German monolingual data set into English. The German→English system used for this is the baseline system (parallel). Translation took about a week on an NVIDIA Titan Black GPU. For experiments in German→English, we back-translate 4 200 000 monolingual English sentences into German, using the English→German system +synthetic. Note that we always use single models for back-translation, not ensembles. We leave it to future work to explore how sensitive NMT training with synthetic data is to the quality of the back-translation.
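A minimal sketch of how such a synthetic parallel corpus can be assembled; the back_translate callable is a placeholder for the reverse-direction NMT system and is not part of the paper:

```python
import random

def build_synthetic_corpus(monolingual_targets, back_translate, sample_size, seed=0):
    """Pair target-side monolingual sentences with automatic back-translations.

    back_translate is a placeholder callable mapping a target-language sentence
    to a synthetic source sentence (e.g. a German->English system). Returns
    (synthetic_source, target) pairs that can be mixed with the real parallel data.
    """
    rng = random.Random(seed)
    sample = rng.sample(list(monolingual_targets), min(sample_size, len(monolingual_targets)))
    return [(back_translate(t), t) for t in sample]

# Toy usage with a dummy "back-translator".
mono_de = ["Das ist ein Test .", "Wir bauen ein Modell .", "Guten Morgen ."]
dummy_bt = lambda s: "<bt> " + s  # stand-in for a German->English NMT model
for src, trg in build_synthetic_corpus(mono_de, dummy_bt, sample_size=2):
    print(src, "->", trg)
```

Mixing these pairs with the human-translated data corresponds to the +synthetic training condition described above.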
We truecase and tokenize the training data, and represent rare words via BPE (Sennrich et al., 2016). Specifically, we follow Sennrich et al. (2016) in performing BPE on the joint vocabulary with 89 500 merge operations.
# 4http://www.statmt.org/wmt15/ | 1511.06709#11 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 11 | Quantifying the Data. How much ï¬ne-grained data is available? In Fig. 2 we plot distributions of the number of images retrieved for each category and report aggregates across each set of categories. We note several trends: Cate- gories in existing datasets, which are typically common within their ï¬ne-grained domain, have more images per category than the long-tail of categories present in the larger L-Bird, L-Aircraft, or L-Butterï¬y, with the eï¬ect most pronounced in L-Bird and L-Butterï¬y. Further, domains of ï¬ne-grained categories have sub- stantially diï¬erent distributions, i.e. L-Bird and L-Aircraft have more images per category than L-Butterï¬y. This makes sense â ï¬ne-grained categories and domains of categories that are more common and have a larger enthusiast base will have more imagery since more photos are taken of them. We also note that results tend to be limited to roughly 800 images per category, even for the most common categories, which is likely a restriction placed on public search results.
The Unreasonable Eï¬ectiveness of Noisy Data for Fine-Grained Recognition | 1511.06789#11 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 12 | Direct quantization converts a floating-point value to the closest integer number, which is conventionally used in signal processing system design. However, direct quantization usually demands more than 8 bits, and does not show good performance when the number of bits is small. In fixed-point deep neural network design, retraining of quantized weights shows quite good performance.
The fixed-point DNN algorithm design consists of three steps: floating-point training, direct quantization, and retraining of weights. The floating-point training procedure can be any of the state-of-the-art techniques, which may include unsupervised learning and dropout. Note that fixed-point optimization needs to be based on the best performing floating-point weights. Thus, the floating-point weight optimization may need to be conducted several times with different initializations, and this step consumes most of the time. After the floating-point training, direct quantization follows.
For direct quantization, a uniform quantization function is employed and the function Q(·) is defined as follows:
Q(w) = sgn(w) · Δ · min(floor(|w|/Δ + 0.5), (M-1)/2) (3) | 1511.06488#12 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 12 | # 4http://www.statmt.org/wmt15/
dataset: sentences
WIT: 160 000
SETimes: 160 000
Gigawordmono: 177 000 000
Gigawordsynth: 3 200 000
Table 2: Turkish→English training data.
The network vocabulary size is 90 000.
We also perform experiments on the IWSLT 15 test sets to investigate a cross-domain setting.5 The test sets consist of TED talk transcripts. As in-domain training data, IWSLT provides the WIT3 parallel corpus (Cettolo et al., 2012), which also consists of TED talks.
4.1.2 Turkish→English We use data provided for the IWSLT 14 machine translation track (Cettolo et al., 2014), namely the WIT3 parallel corpus (Cettolo et al., 2012), which consists of TED talks, and the SETimes corpus (Tyers and Alperen, 2010).6 After removal of sentence pairs which contain empty lines or lines with a length ratio above 9, we retain 320 000 sentence pairs of training data. For the experiments with monolingual training data, we use the English LDC Gigaword corpus (Fifth Edition). The amount of training data is shown in Table 2. With only 320 000 sentences of parallel data available for training, this is a much lower-resourced translation setting than English→German. | 1511.06709#12 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 12 | Fig. 3. Examples of cross-domain noise for birds, butterflies, airplanes, and dogs. Images are generally of related categories that are outside the domain of interest, e.g. a map of a bird's typical habitat or a t-shirt containing the silhouette of a dog.
Most striking is the large difference between the number of images available via web search and in existing fine-grained datasets: even Birdsnap, which has an average of 94.8 images per category, contains only 13% as many images as can be obtained with a simple image search. Though their labels are noisy, web searches unveil an order of magnitude more data which can be used to learn fine-grained categories.
In total, for all four datasets, we obtained 9.8 million images for 26,458 categories, requiring 151.8GB of disk space.2 | 1511.06789#12 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 13 | Q(w) = sgn(w) · Δ · min(floor(|w|/Δ + 0.5), (M-1)/2) (3)
where sgn(·) is a sign function, Δ is the quantization step size, and M represents the number of quantization levels. Note that M needs to be an odd number since the weight values can be positive or negative. When M is 7, the weights are represented by -3·Δ, -2·Δ, -1·Δ, 0, +1·Δ, +2·Δ, +3·Δ, which can be represented in 3 bits. The quantization step size Δ is determined to minimize the L2 error E given in equation (4) below.
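As a concrete illustration of equation (3) above and the L2 error E of equation (4) below, a small NumPy sketch (not the authors' code) that quantizes a weight group and searches for the step size Δ; a simple grid search stands in for the iterative optimization mentioned in the text:

```python
import numpy as np

def quantize(w, delta, M):
    """Uniform quantizer: Q(w) = sgn(w) * delta * min(floor(|w|/delta + 0.5), (M-1)/2)."""
    levels = (M - 1) // 2
    return np.sign(w) * delta * np.minimum(np.floor(np.abs(w) / delta + 0.5), levels)

def search_delta(w, M, num_candidates=200):
    """Pick the step size that minimizes E = (1/N) * sum_i (Q(w_i) - w_i)^2."""
    candidates = np.linspace(1e-4, np.abs(w).max(), num_candidates)
    errors = [np.mean((quantize(w, d, M) - w) ** 2) for d in candidates]
    return candidates[int(np.argmin(errors))]

# Example: ternary weights (M = 3) for a random weight group.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)
delta = search_delta(w, M=3)
print("chosen step size:", round(float(delta), 4))
print("quantization levels used:", np.unique(quantize(w, delta, M=3)))
```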
E = (1/N) · Σ_{i=1}^{N} (Q(w_i) - w_i)^2 (4)
where N is the number of weights in each weight group, wi is the i-th weight value represented in ï¬oating-point. This process needs some iterations, but does not take much time. | 1511.06488#13 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 13 | Gülçehre et al. (2015) segment the Turkish text with the morphology tool Zemberek, followed by a disambiguation of the morphological analysis (Sak et al., 2007), and removal of non-surface tokens produced by the analysis. We use the same preprocessing.7 For both Turkish and English, we represent rare words (or morphemes in the case of Turkish) as character bigram sequences (Sennrich et al., 2016). The 20 000 most frequent words (morphemes) are left unsegmented. The networks have a vocabulary size of 23 000 symbols.
To create a synthetic parallel training set, we back-translate a random sample of 3 200 000 sentences from Gigaword. We use an English→Turkish NMT system trained with the same settings as the Turkish→English baseline system.
# 5http://workshop2015.iwslt.org/ 6http://workshop2014.iwslt.org/ 7github.com/orhanf/zemberekMorphTR | 1511.06709#13 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 13 | In total, for all four datasets, we obtained 9.8 million images for 26,458 categories, requiring 151.8GB of disk space.2
Noise. Though large amounts of imagery are freely available for ï¬ne-grained categories, focusing only on scale ignores a key issue: noise. We consider two types of label noise, which we call cross-domain noise and cross-category noise. We deï¬ne cross-domain noise to be the portion of images that are not of any category in the same ï¬ne-grained domain, i.e. for birds, it is the fraction of images that do not contain a bird (examples in Fig. 3). In contrast, cross-category noise is the portion of images that have the wrong label within a ï¬ne-grained domain, i.e. an image of a bird with the wrong species label. | 1511.06789#13 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 14 | For network retraining, we maintain both floating-point and quantized weights because the amount of weight updates in each training step is much smaller than the quantization step size Δ. The forward and backward propagation is conducted using quantized weights, but the weight update is applied to the floating-point weights and newly quantized values are generated at each iteration. This retraining procedure usually converges quickly and does not take much time when compared to the floating-point training.
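A minimal sketch of this retraining scheme, with hypothetical grad_fn and quantize_fn helpers standing in for the real forward/backward pass and the weight-group quantizer:

```python
import numpy as np

def retrain_step(w_float, grad_fn, quantize_fn, lr=0.01):
    """One retraining iteration: forward/backward with quantized weights,
    update applied to the floating-point copy, then re-quantize."""
    w_quant = quantize_fn(w_float)          # weights actually used by the network
    grad = grad_fn(w_quant)                 # placeholder forward + backward pass
    w_float = w_float - lr * grad           # update the full-precision copy
    return w_float, quantize_fn(w_float)    # newly quantized values for the next step

# Toy usage: the "network" is a 1-D quadratic loss, the quantizer is ternary (step 0.1).
quantize_fn = lambda w: np.sign(w) * 0.1 * np.minimum(np.floor(np.abs(w) / 0.1 + 0.5), 1)
grad_fn = lambda wq: 2.0 * (wq - 0.35)      # gradient of (w - 0.35)^2 at the quantized point
w = np.zeros(1)
for _ in range(20):
    w, wq = retrain_step(w, grad_fn, quantize_fn)
print("floating-point weight:", w, "quantized weight:", wq)
```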
# 4 ANALYSIS OF QUANTIZATION EFFECTS
# 4.1 DIRECT QUANTIZATION
The performance of the FFDNN and the CNN with directly quantized weights is analyzed while varying the number of units in each layer or the number of feature maps, respectively. In this analysis, the quantization is performed on each weight group, which is illustrated in Figure 1 and
Figure 2, to determine the sensitivity to word-length reduction. In this sub-section, we analyze the effects of direct quantization.
The quantized weight can be represented as follows,
w_i^q = w_i + w_i^d (5)
where w_i^d is the distortion of each weight due to quantization. In direct quantization, we can assume that the distortions w_i^d are independent of each other.
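A toy numerical check of this independence argument (illustrative only, not from the paper): because independent distortions partially cancel, the distortion accumulated at a unit grows roughly with the square root of the fan-in, so it shrinks relative to the worst case as the number of inputs increases:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in [64, 256, 1024, 4096]:               # fan-in of the unit
    ratios = []
    for _ in range(200):
        x = rng.normal(size=n)                # activations from the previous layer
        d = rng.uniform(-0.05, 0.05, size=n)  # independent quantization distortions
        summed = abs(np.dot(d, x))            # distortion actually seen at the unit
        worst = np.sum(np.abs(d) * np.abs(x)) # if all distortions added coherently
        ratios.append(summed / worst)
    print(n, "inputs -> summed / worst-case distortion:", round(float(np.mean(ratios)), 3))
```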
(a) (b) | 1511.06488#14 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 14 | # 5http://workshop2015.iwslt.org/ 6http://workshop2014.iwslt.org/ 7github.com/orhanf/zemberekMorphTR
We found overfitting to be a bigger problem than with the larger English→German data set, and follow Gülçehre et al. (2015) in using Gaussian noise (stddev 0.01) (Graves, 2011) and dropout on the output layer (p=0.5) (Hinton et al., 2012). We also use early stopping, based on BLEU measured every three hours on tst2010, which we treat as development set. For Turkish→English, we use gradient clipping with threshold 5, following Gülçehre et al. (2015), in contrast to the threshold 1 that we use for English→German, following Jean et al. (2015a). | 1511.06709#14 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 14 | To quantify levels of cross-domain noise, we manually label a 1,000 image sample from each set of search results, with results in Fig. 4. Although levels of noise are not too high for any set of categories (max. 34.2% for L-Butterï¬y), we notice an interesting correlation: cross-domain noise decreases moderately as the number of images per category (Fig. 2) increases. We hypothesize that categories with many search results have a corresponding large pool of images to draw results from, and thus actual search results will tend to be higher-precision. In contrast to cross-domain noise, cross-category noise is much harder to quantify, since doing so eï¬ectively requires ground truth ï¬ne-grained labels of query results. To examine cross-category noise from at least one vantage point, we show the confusion matrix of given versus predicted labels on 30 categories in the CUB [60] test set and their web images in Fig. 6, left and right, which we generate via a classiï¬er trained on the CUB training set, acting as a noisy
2 URLs available at https://github.com/google/goldfinch
Krause et al. | 1511.06789#14 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 15 | (a) (b)
Figure 3: Computation model for a unit in the hidden layer j ((a): floating-point, (b): distortion).
Figure 4: Sensitivity analysis of direct quantization ((a): FFDNN, (b): CNN). In figure (b), the x-axis label '8-16' means that the feature map configuration is '8-8-16'.
Consider the computation procedure for a unit in a hidden layer: the signal from the previous layer is summed up after multiplication with the weights, as illustrated in Figure 3a. We can also assemble a model for the distortion, which is shown in Figure 3b. In the distortion model, since the w_i^d are independent of each other, we can assume that the effect of the summed distortion is reduced according to random process theory. This analysis means that the quantization effects are reduced when the number of units in the anterior layer increases, but slowly. | 1511.06488#15 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 15 | 4.2 Results
4.2.1 English→German WMT 15
Table 3 shows English→German results with WMT training and test data. We find that mixing parallel training data with monolingual data with a dummy source side in a ratio of 1-1 improves quality by 0.4-0.5 BLEU for the single system, 1 BLEU for the ensemble. We train the system for twice as long as the baseline to provide the training algorithm with a similar amount of parallel training instances. To ensure that the quality improvement is due to the monolingual training instances, and not just increased training time, we also continued training our baseline system for another week, but saw no improvements in BLEU. | 1511.06709#15 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 15 | 2 URLs available at https://github.com/google/goldfinch
Fig. 4. The cross-domain noise in search results for each domain.
Fig. 5. The percentage of images retained after filtering.
proxy for ground truth labels. In these confusion matrices, cross-category noise is reï¬ected as a strong oï¬-diagonal pattern, while cross-domain noise would manifest as a diï¬use pattern of noise, since images not of the same domain are an equally bad ï¬t to all categories. Based on this interpretation, the web images show a moderate amount more cross-category noise than the clean CUB test set, though the general confusion pattern is similar. | 1511.06789#15 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 16 | Figure 4a illustrates the performance of the FFDNN with floating-point arithmetic, 2-bit direct quantization of all the weights, and 2-bit direct quantization only on the weight groups 'In-h1', 'h1-h2', and 'h4-out'. Considering the quantization performance of the 'In-h1' layer, the phone-error rate is higher than the floating-point result by an almost constant amount, about 10%. Note that the number of inputs to the 'In-h1' layer is fixed, 1353, regardless of the hidden unit size. Thus, the amount of distortion delivered to each unit of the hidden layer 1 can be considered unchanged. Figure 4a also shows the quantization performance on the 'h1-h2' and 'h4-out' layers, which informs the trend of
Figure 5: Performance of direct quantization with multiple precision ((a): FFDNN, (b): CNN). | 1511.06488#16 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 16 | Including synthetic data during training is very effective, and yields an improvement over our baseline by 2.8-3.4 BLEU. Our best ensemble system also outperforms a syntax-based baseline (Sennrich and Haddow, 2015) by 1.2-2.1 BLEU. We also substantially outperform NMT results reported by Jean et al. (2015a) and Luong et al. (2015), who previously reported the SOTA result.8 We note that the difference is particularly large for single systems, since our ensemble is not as diverse as that of Luong et al. (2015), who used 8 independently trained ensemble components, whereas we sampled 4 ensemble components from the same training run.
4.2.2 English→German IWSLT 15
Table 4 shows English→German results on IWSLT test sets. IWSLT test sets consist of TED talks, and are thus very dissimilar from the WMT
8Luong et al. (2015) report 20.9 BLEU (tokenized) on newstest2014 with a single model, and 23.0 BLEU with an ensemble of 8 models. Our best single system achieves a to- kenized BLEU (as opposed to untokenized scores reported in Table 3) of 23.8, and our ensemble reaches 25.0 BLEU. | 1511.06709#16 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 16 | We propose a simple, yet effective strategy to reduce the effects of cross-category noise: exclude images that appear in search results for more than one category. This approach, which we refer to as filtering, specifically targets images for which there is explicit ambiguity in the category label (examples in Fig. 7). As we demonstrate experimentally, filtering can improve results while reducing training time via the use of a more compact training set; we show the portion of images kept after filtering in Fig. 5. Agreeing with intuition, filtering removes more images when there are more categories. Anecdotally, we have also tried a few techniques to combat cross-domain noise, but initial experiments did not see any improvement in recognition so we do not expand upon them here. While reducing cross-domain noise should be beneficial, we believe that it is not as important as cross-category noise in fine-grained recognition due to the absence of out-of-domain classes during testing.
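A minimal sketch of this filtering step (the data structures and names are illustrative; the paper does not publish code): keep only images whose URL appears in the search results of exactly one category.

```python
from collections import defaultdict

def filter_cross_category(search_results):
    """search_results: dict mapping category name -> list of image URLs.
    Keeps only URLs that occur in a single category's results."""
    counts = defaultdict(int)
    for urls in search_results.values():
        for url in set(urls):                # count each URL once per category
            counts[url] += 1
    return {cat: [u for u in urls if counts[u] == 1]
            for cat, urls in search_results.items()}

# Toy usage: the ambiguous image is dropped from both categories.
results = {
    "Indigo Bunting": ["img1.jpg", "img2.jpg", "shared.jpg"],
    "Blue Grosbeak": ["img3.jpg", "shared.jpg"],
}
print(filter_cross_category(results))
```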
# 4 Data via Active Learning | 1511.06789#16 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 17 | (a) (b)
Figure 5: Performance of direct quantization with multiple precision ((a): FFDNN, (b): CNN).
reduced gap to the floating-point performance as the network size increases. This can be explained by the summation of an increased number of independent distortions when the network size grows. The performance when quantizing all weight groups to 2 bits also shows the similar trend of a reduced gap to the floating-point performance. But, apparently, the performance of 2-bit directly quantized networks is not satisfactory.
In Figure 4b, a similar analysis is conducted to the CNN with direct quantization when the number of feature maps increases or decreases. In the CNN, the number of input to each output is determined by the number of input feature maps and the kernel size. For example, at the ï¬rst layer C1, the number of input signal for computing one output is only 75 (=3Ã25) regardless of the network size, where the input map size is always 3 and the kernel size is 25. However, at the second layer C2, the number of input feature maps increases as the network size grows. When the feature map of 32-32-64 is considered, the number of input for the C2 layer grows to 800 (=32Ã25). Thus, we can expect a reduced distortion as the number of feature maps increases. | 1511.06488#17 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 17 | name; training instances; newstest2014 BLEU (single, ens-4); newstest2015 BLEU (single, ens-4)
syntax-based (Sennrich and Haddow, 2015): -; 22.6; 24.4
Neural MT (Jean et al., 2015b): -; -; 22.4
parallel: 37m (parallel); 19.9, 20.4; 22.8, 23.6
+monolingual: 49m (parallel) / 49m (monolingual); 20.4, 21.4; 23.2, 24.6
+synthetic: 44m (parallel) / 36m (synthetic); 22.7, 23.8; 25.7, 26.5
Table 3: English→German translation performance (BLEU) on WMT training/test sets. Ens-4: ensemble of 4 models. Number of training instances varies due to differences in training time and speed. | 1511.06709#17 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 17 | # 4 Data via Active Learning
In this section we briefly describe an active learning-based approach for collecting large quantities of fine-grained data. Active learning and other human-in-the-loop systems have previously been used to create datasets in a more cost-efficient way than manual annotation [74,12,47], and our goal is to compare this more traditional approach with simply using noisy data, particularly when considering the application of fine-grained recognition. In this paper, we apply active learning to the 120 dog breeds in the Stanford Dogs [27] dataset.
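A schematic of the human-in-the-loop procedure described here and detailed in the next paragraph; every callable below is a placeholder for the real CNN training, sample-selection, and crowd-annotation steps:

```python
def active_learning(seed_images, seed_labels, unlabeled_pool,
                    train, select, annotate, rounds=3, batch_size=100):
    """Generic active-learning loop: train, pick images, get human labels, retrain.

    train(images, labels) -> model, select(model, pool, k) -> list of images,
    and annotate(images) -> labels are placeholders for the CNN training,
    sample-selection, and human-annotation components.
    """
    images, labels = list(seed_images), list(seed_labels)
    model = train(images, labels)
    for _ in range(rounds):
        batch = select(model, unlabeled_pool, batch_size)   # e.g. most informative images
        images += batch
        labels += annotate(batch)                           # labels from human raters
        unlabeled_pool = [x for x in unlabeled_pool if x not in set(batch)]
        model = train(images, labels)                       # retrain on the enlarged set
    return model, images, labels

# Toy usage with dummy components.
dummy_train = lambda imgs, labs: "model trained on %d images" % len(imgs)
dummy_select = lambda model, pool, k: pool[:k]
dummy_annotate = lambda batch: ["some-dog-breed"] * len(batch)
model, imgs, labs = active_learning(["seed.jpg"], ["Beagle"],
                                    ["u%d.jpg" % i for i in range(500)],
                                    dummy_train, dummy_select, dummy_annotate, rounds=2)
print(model, len(imgs), len(labs))
```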
Our system for active learning begins by training a classifier on a seed set of input images and labels (i.e. the Stanford Dogs training set), then proceeds by iteratively picking a set of images to annotate, obtaining labels with human annotators, and re-training the classifier. We use a convolutional neural
CUB Web | 1511.06789#17 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 18 | Figure 5a shows the performance of direct quantization with 2, 4, 6, and 8-bit precision when the network complexity varies. In the FFDNN, 6 bit direct quantization seems enough when the network size is larger than 128. But, small FFDNNs demand 8 bits for near ï¬oating-point performance. The CNN in Figure 5b also shows the similar trend. The direct quantization requires about 6 bits when the feature map conï¬guration is 16-16-32 or larger.
# 4.2 EFFECTS OF RETRAINING ON QUANTIZED NETWORKS | 1511.06488#18 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 18 | name; fine-tuning data; fine-tuning instances; BLEU tst2013; tst2014; tst2015
NMT (Luong and Manning, 2015) (single model): -; -; 29.4; -; -
NMT (Luong and Manning, 2015) (ensemble of 8): -; -; 31.4; 27.6; 30.1
1 parallel: -; -; 25.2; 22.6; 24.0
2 +synthetic: -; -; 26.5; 23.5; 25.5
3 2+WITmono_de: WMTparallel / WITmono; 200k/200k; 26.6; 23.6; 25.4
4 2+WITsynth_de: WITsynth; 200k; 28.2; 24.4; 26.7
5 2+WITparallel: WIT; 200k; 30.4; 25.9; 28.4
Table 4: English→German translation performance (BLEU) on IWSLT test sets (TED talks). Single models.
test sets, which are news texts. We investigate if monolingual training data is especially valuable if it can be used to adapt a model to a new genre or domain, speciï¬cally adapting a system trained on WMT data to translating TED talks. | 1511.06709#18 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 18 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition
[Figure 6, panels: CUB test set (left) and web search results (right)]
Fig. 6. Confusion matrices of the predicted label (column) given the provided label (row) for 30 CUB categories on the CUB test set (left) and search results for CUB categories (right). For visualization purposes we remove the diagonal.
Fig. 7. Examples of images removed via filtering and the categories whose results they appeared in. Some share similar names (left examples), while others share similar locations (right examples).
network [32,54,25] for the classifier, and now describe the key steps of sample selection and human annotation in more detail. | 1511.06789#18 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
1511.06488 | 19 | Retraining is conducted on the directly quantized networks using the same data as for the floating-point training. The fixed-point performance of the FFDNN is shown in Figure 6a as the number of hidden units in each layer varies. The performance of direct 2-bit (ternary levels), direct 3-bit (7-level), retrain-based 2-bit, and retrain-based 3-bit quantization is compared with the floating-point simulation. We find that the performance gap between the floating-point and the retrain-based fixed-point networks shrinks very quickly as the network size grows. Although the gap between the directly quantized and the floating-point networks also shrinks, the rate of convergence is significantly different. In this figure, the performance of the floating-point network almost saturates when the network size is about 1024. Note that the TIMIT corpus used for training contains only about 3 hours of data, so the network with 1024 hidden units can be considered to be in the "training-data limited region". Here, the gap between the floating-point and fixed-point networks almost vanishes when the network is | 1511.06488#19 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
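
Retraining of a quantized network, as discussed in the excerpt above, is commonly implemented by keeping a full-precision master copy of the weights, quantizing it for the forward pass, and applying the gradient updates to the master copy. The toy loop below sketches that idea for ternary (2-bit) weights on a single linear layer; the 0.7·mean(|w|) step-size heuristic and the toy regression task are assumptions for illustration, not the authors' training setup.

```python
import numpy as np

def ternarize(w, step):
    """Map weights to the three levels {-step, 0, +step} (ternary quantization)."""
    return np.clip(np.round(w / step), -1, 1) * step

# Toy regression task standing in for a real speech model.
rng = np.random.default_rng(0)
x = rng.normal(size=(256, 32))
y = x @ rng.normal(scale=0.5, size=(32, 1))

w_float = rng.normal(scale=0.1, size=(32, 1))      # full-precision master weights
lr = 0.05
for _ in range(200):
    step = 0.7 * np.mean(np.abs(w_float)) + 1e-12  # illustrative step-size heuristic
    w_q = ternarize(w_float, step)                 # forward pass uses quantized weights
    err = x @ w_q - y
    grad = x.T @ err / len(x)                      # gradient computed with quantized weights...
    w_float -= lr * grad                           # ...applied to the float master copy

final_step = 0.7 * np.mean(np.abs(w_float))
mse = float(np.mean((x @ ternarize(w_float, final_step) - y) ** 2))
print("MSE with retrained ternary weights:", mse)
```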
Systems 1 and 2 correspond to systems in Table 3, trained only on WMT data. System 2, trained on parallel and synthetic WMT data, obtains a BLEU score of 25.5 on tst2015. We observe that even a small amount of fine-tuning⁹, i.e. continued training of an existing model, on WIT data can adapt a system trained on WMT data to the TED domain. By back-translating the monolingual WIT corpus (using a German→English system trained on WMT data, i.e. without in-domain knowledge), we obtain the synthetic data set WITsynth. A single epoch of fine-tuning on WITsynth (system 4) results in a BLEU score of 26.7 on tst2015, or an improvement of 1.2 BLEU. We observed no improvement from fine-tuning on WITmono, the monolingual TED corpus with dummy input (system 3). | 1511.06709#19 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
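
The excerpt above adapts an English→German system by fine-tuning on WITsynth, i.e. in-domain target-side sentences paired with automatic back-translations. A schematic of that data-preparation step is sketched below; `back_translate` is a placeholder for any target→source translation system (no specific toolkit or API is implied).

```python
from typing import Callable, Iterable, List, Tuple

def build_synthetic_corpus(
    mono_target_sentences: Iterable[str],
    back_translate: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Pair in-domain target-side sentences with automatic back-translations.

    The resulting (source, target) pairs can be used as additional parallel
    data when continuing training of the source->target model.
    `back_translate` stands in for a target->source MT system.
    """
    return [(back_translate(t), t) for t in mono_target_sentences]

# Illustrative usage with a dummy back-translation function (German target side).
dummy_back_translate = lambda s: "<back-translated English for: " + s + ">"
synthetic = build_synthetic_corpus(
    ["Das ist ein Beispielsatz.", "Noch ein Satz aus dem TED-Korpus."],
    dummy_back_translate,
)
for src, tgt in synthetic:
    print(src, "|||", tgt)
```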
Sample Selection. There are many possible criteria for sample selection [47]. We employ confidence-based sampling: for each category c, we select the b·P̂(c) images with the top class scores fc(x) as determined by our current model, where P̂(c) is a desired prior distribution over classes, b is a budget on the number of images to annotate, and fc(x) is the output of the classifier. The intuition is as follows: even when fc(x) is large, false positives still occur quite frequently; in Fig. 8 left, observe that the false positive rate is about 20% at the highest confidence range, which might have a large impact on the model. This contrasts with approaches that focus sampling in uncertain regions [33,2,40,17]. We find that images sampled with uncertainty criteria are typically ambiguous and difficult or even impossible for both models and humans to annotate correctly, as demonstrated in Fig. 8 bottom row: unconfident samples are often heavily occluded, at unusual viewpoints, or of mixed, ambiguous breeds, making it unlikely that they can be annotated effectively. This strategy is similar to the "expected model change" sampling criteria [48], but done for each class independently. | 1511.06789#19 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
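
The sample-selection rule in the excerpt above (for each category c, take the b·P̂(c) images with the highest classifier scores fc(x)) can be sketched as follows; the array layout and the uniform default prior are illustrative assumptions.

```python
import numpy as np

def select_for_annotation(scores, budget, prior=None):
    """Confidence-based sampling: per class c, pick the top budget * P(c) images.

    scores: (num_images, num_classes) array of classifier outputs f_c(x).
    budget: total number of images b to send to human annotators.
    prior:  desired class distribution P(c); uniform if not given.
    Returns {class_index: [image indices]} of the most confident candidates.
    """
    num_images, num_classes = scores.shape
    if prior is None:
        prior = np.full(num_classes, 1.0 / num_classes)
    selected = {}
    for c in range(num_classes):
        k = int(round(budget * prior[c]))
        # sort descending by the score for class c and keep the top k
        selected[c] = np.argsort(-scores[:, c])[:k].tolist()
    return selected

# Example: 1,000 images, 5 classes, annotate 50 images in total.
rng = np.random.default_rng(0)
scores = rng.random((1000, 5))
picks = select_for_annotation(scores, budget=50)
print({c: len(idx) for c, idx in picks.items()})
```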
1511.06488 | 20 | "training-data limited region". Here, the gap between the floating-point and fixed-point networks almost vanishes when the network is in the "training-data limited region". However, when the network size is limited, such as 32, 64, 128, or 256, there is some performance gap between the floating-point and highly quantized networks even if retraining on the quantized networks is performed. | 1511.06488#20 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |
1511.06709 | 20 |
| name | BLEU 2014 | BLEU 2015 |
| --- | --- | --- |
| PBSMT (Haddow et al., 2015) | 28.8 | 29.3 |
| NMT (Gülçehre et al., 2015) | 23.6 | - |
| +shallow fusion | 23.7 | - |
| +deep fusion | 24.0 | - |
| parallel | 25.9 | 26.7 |
| +synthetic | 29.5 | 30.4 |
| +synthetic (ensemble of 4) | 30.8 | 31.6 |
Table 5: German→English translation performance (BLEU) on WMT training/test sets (newstest2014; newstest2015).
These adaptation experiments with monolingual data are slightly artificial in that parallel training data is available. System 5, which is fine-tuned with the original WIT training data, obtains a BLEU of 28.4 on tst2015, which is an improvement of 2.9 BLEU. While it is unsurprising that in-domain parallel data is most valuable, we find it encouraging that NMT domain adaptation with monolingual data is also possible, and effective, since there are settings where only monolingual in-domain data is available.

⁹We leave the word embeddings fixed for fine-tuning. | 1511.06709#20 | Improving Neural Machine Translation Models with Monolingual Data | Neural Machine Translation (NMT) has obtained state-of-the-art performance
for several language pairs, while only using parallel data for training.
Target-side monolingual data plays an important role in boosting fluency for
phrase-based statistical machine translation, and we investigate the use of
monolingual data for NMT. In contrast to previous work, which combines NMT
models with separately trained language models, we note that encoder-decoder
NMT architectures already have the capacity to learn the same information as a
language model, and we explore strategies to train with monolingual data
without changing the neural network architecture. By pairing monolingual
training data with an automatic back-translation, we can treat it as additional
parallel training data, and we obtain substantial improvements on the WMT 15
task English<->German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task
Turkish->English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We
also show that fine-tuning on in-domain monolingual and parallel data gives
substantial improvements for the IWSLT 15 task English->German. | http://arxiv.org/pdf/1511.06709 | Rico Sennrich, Barry Haddow, Alexandra Birch | cs.CL | accepted to ACL 2016; new section on effect of back-translation
quality | null | cs.CL | 20151120 | 20160603 | [] |
1511.06789 | 20 | Human Annotation. Our interface for human annotation of the selected images is shown in Fig. 9. Careful construction of the interface, including the addition of both positive and negative examples, as well as hidden "gold standard" images for immediate feedback, improves annotation accuracy considerably (see Sec. A.2 for quantitative results). Final category decisions are made via majority vote of three annotators.
[Figure 8, left panel: false positive rate vs. 1 - confidence]
Fig. 8. Left: Classifier confidence versus false positive rate on 100,000 images randomly sampled from Flickr (YFCC100M [56]) with dog detections. Even the most confident images have a 20% false positive rate. Right: Samples from Flickr. Rectangles below images denote correct (green), incorrect (red), or ambiguous (yellow). Top row: Samples with high confidence for class "Pug" from YFCC100M. Bottom row: Samples with low confidence score for class "Pug".
Fig. 9. Our tool for binary annotation of fine-grained categories. Instructional positive images are provided in the upper left and negatives are provided in the lower left. | 1511.06789#20 | The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition | Current approaches for fine-grained recognition do the following: First,
recruit experts to annotate a dataset of images, optionally also collecting
more structured data in the form of part annotations and bounding boxes.
Second, train a model utilizing this data. Toward the goal of solving
fine-grained recognition, we introduce an alternative approach, leveraging
free, noisy data from the web and simple, generic methods of recognition. This
approach has benefits in both performance and scalability. We demonstrate its
efficacy on four fine-grained datasets, greatly exceeding existing state of the
art without the manual collection of even a single label, and furthermore show
first results at scaling to more than 10,000 fine-grained categories.
Quantitatively, we achieve top-1 accuracies of 92.3% on CUB-200-2011, 85.4% on
Birdsnap, 93.4% on FGVC-Aircraft, and 80.8% on Stanford Dogs without using
their annotated training sets. We compare our approach to an active learning
approach for expanding fine-grained datasets. | http://arxiv.org/pdf/1511.06789 | Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, Li Fei-Fei | cs.CV | ECCV 2016, data is released | null | cs.CV | 20151120 | 20161018 | [
{
"id": "1503.01817"
},
{
"id": "1602.07261"
},
{
"id": "1504.04943"
},
{
"id": "1506.03365"
}
] |
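
Final category decisions in the excerpt above are made by majority vote over three annotators. A tiny aggregation helper of the kind one might use is shown below; the handling of ties and abstentions is an assumption, since the paper only specifies a three-way majority vote.

```python
from collections import Counter

def majority_vote(votes):
    """Aggregate per-image yes/no annotator votes into a final decision.

    votes: list of booleans (True = 'image belongs to the category'),
           one entry per annotator. With three annotators a strict majority
           always exists; the tie rule below only matters for even vote
           counts and is an illustrative choice.
    """
    counts = Counter(votes)
    top, top_count = counts.most_common(1)[0]
    if top_count > len(votes) / 2:
        return top
    return None   # no majority -> leave the image unlabeled

print(majority_vote([True, True, False]))   # True
print(majority_vote([False, True]))         # None (tie)
```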
1511.06488 | 21 | Similar experiments are conducted for the CNN with varying feature map sizes, and the results are shown in Figure 6b. The configurations of the feature maps used for the experiments are 8-8-16,
[Figure 6, panels (a) and (b); y-axis: phone error rate (%)]
Figure 6: Comparison of retrain-based and direct quantization for the DNN (a) and the CNN (b). All weights are quantized to ternary or 7-level values. In panel (b), the x-axis label "8-16" denotes the feature map configuration "8-8-16". | 1511.06488#21 | Resiliency of Deep Neural Networks under Quantization | The complexity of deep neural network algorithms for hardware implementation
can be much lowered by optimizing the word-length of weights and signals.
Direct quantization of floating-point weights, however, does not show good
performance when the number of bits assigned is small. Retraining of quantized
networks has been developed to relieve this problem. In this work, the effects
of retraining are analyzed for a feedforward deep neural network (FFDNN) and a
convolutional neural network (CNN). The network complexity is controlled to
know their effects on the resiliency of quantized networks by retraining. The
complexity of the FFDNN is controlled by varying the unit size in each hidden
layer and the number of layers, while that of the CNN is done by modifying the
feature map configuration. We find that the performance gap between the
floating-point and the retrain-based ternary (+1, 0, -1) weight neural networks
exists with a fair amount in 'complexity limited' networks, but the discrepancy
almost vanishes in fully complex networks whose capability is limited by the
training data, rather than by the number of connections. This research shows
that highly complex DNNs have the capability of absorbing the effects of severe
weight quantization through retraining, but connection limited networks are
less resilient. This paper also presents the effective compression ratio to
guide the trade-off between the network size and the precision when the
hardware resource is limited. | http://arxiv.org/pdf/1511.06488 | Wonyong Sung, Sungho Shin, Kyuyeon Hwang | cs.LG, cs.NE | null | null | cs.LG | 20151120 | 20160107 | [
{
"id": "1505.00256"
},
{
"id": "1511.00363"
},
{
"id": "1507.06947"
},
{
"id": "1512.01322"
}
] |