doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1511.07289 | 37 | (a) Training loss (b) Top-5 test error (c) Top-1 test error
Figure 6: ELU networks applied to ImageNet. The x-axis gives the number of iterations and the y-axis the (a) training loss, (b) top-5 error, and (c) top-1 error of 5,000 random validation samples, evaluated on the center crop. Both activation functions, ELU (blue) and ReLU (purple), lead to convergence, but ELUs start reducing the error earlier and reach the 20% top-5 error after 160k iterations, while ReLUs need 200k iterations to reach the same error rate. | 1511.07289#37 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
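A minimal NumPy sketch of the ELU nonlinearity described in the row above: identity for positive inputs, alpha * (exp(x) - 1) for negative inputs, saturating toward -alpha. The choice alpha = 1 and the NumPy formulation are illustrative assumptions, not values taken from this dataset row.

```python
import numpy as np

def elu(x, alpha=1.0):
    """Exponential Linear Unit: x for x > 0, alpha * (exp(x) - 1) otherwise."""
    return np.where(x > 0, x, alpha * np.expm1(x))

def elu_grad(x, alpha=1.0):
    """Derivative: 1 for x > 0, alpha * exp(x) (= elu(x) + alpha) otherwise."""
    return np.where(x > 0, 1.0, alpha * np.exp(x))

x = np.linspace(-5.0, 5.0, 11)
print(elu(x))       # saturates toward -alpha for strongly negative inputs
print(elu_grad(x))  # gradient never hits a hard zero, unlike ReLU
```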
1511.07289 | 38 | closer to zero. Therefore ELUs decrease the gap between the normal gradient and the unit natural gradient and thereby speed up learning. We believe that this property is also the reason for the success of activation functions like LReLUs and PReLUs and of batch normalization. In contrast to LReLUs and PReLUs, ELUs have a clear saturation plateau in their negative regime, allowing them to learn a more robust and stable representation. Experimental results show that ELUs significantly outperform other activation functions on different vision datasets. Furthermore, ELU networks perform significantly better than ReLU networks trained with batch normalization. ELU networks achieved one of the top 10 best reported results on CIFAR-10 and set a new state of the art on CIFAR-100 without the need for multi-view test evaluation or model averaging. In addition, ELU networks produced competitive results on ImageNet in far fewer epochs than a corresponding ReLU network. Given their outstanding performance, we expect ELU networks to become a real time saver in convolutional networks, which are notably time-intensive to train from scratch otherwise.
Acknowledgment. We thank the NVIDIA Corporation for supporting this research with several Titan X GPUs and Roland Vollgraf and Martin Heusel for helpful discussions and comments on this work.
# REFERENCES | 1511.07289#38 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 39 | # REFERENCES
Amari, S.-I. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
Clevert, D.-A., Unterthiner, T., Mayr, A., and Hochreiter, S. Rectified factor networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28. Curran Associates, Inc., 2015.
Desjardins, G., Simonyan, K., Pascanu, R., and Kavukcuoglu, K. Natural neural networks. CoRR, abs/1507.00210, 2015. URL http://arxiv.org/abs/1507.00210.
Glorot, X., Bordes, A., and Bengio, Y. Deep sparse rectifier neural networks. In Gordon, G., Dunson, D., and Dudík, M. (eds.), JMLR W&CP: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011), volume 15, pp. 315–323, 2011. | 1511.07289#39 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 40 | Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A., and Bengio, Y. Maxout networks. ArXiv e-prints, 2013.
Graham, Benjamin. Fractional max-pooling. CoRR, abs/1412.6071, 2014. URL http://arxiv.org/abs/1412.6071.
Grosse, R. and Salakhudinov, R. Scaling up natural gradient by sparsely factorizing the inverse Fisher matrix. Journal of Machine Learning Research, 37:2304–2313, 2015. URL http://jmlr.org/proceedings/papers/v37/grosse15.pdf. Proceedings of the 32nd International Conference on Machine Learning (ICML15).
He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision (ICCV), 2015.
Hochreiter, S. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(2):107–116, 1998. | 1511.07289#40 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 41 |

Hochreiter, S. and Schmidhuber, J. Feature extraction through LOCOCODE. Neural Computation, 11(3):679–714, 1999.
Hochreiter, S., Bengio, Y., Frasconi, P., and Schmidhuber, J. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In Kremer and Kolen (eds.), A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Journal of Machine Learning Research, 37:448–456, 2015. URL http://jmlr.org/proceedings/papers/v37/ioffe15.pdf. Proceedings of the 32nd International Conference on Machine Learning (ICML15).
Jia, Yangqing. Learning Semantic Image Representations at a Large Scale. PhD thesis, EECS Department, University of California, Berkeley, May 2014. URL http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-93.html. | 1511.07289#41 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 42 |

Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates, Inc., 2012.
Kurita, T. Iterative weighted least squares algorithms for neural networks classifiers. In Proceedings of the Third Workshop on Algorithmic Learning Theory (ALT92), volume 743 of Lecture Notes in Computer Science, pp. 77–86. Springer, 1993.
LeCun, Y., Kanter, I., and Solla, S. A. Eigenvalues of covariance matrices: Application to neural-network learning. Physical Review Letters, 66(18):2396–2399, 1991.
LeCun, Y., Bottou, L., Orr, G. B., and Müller, K.-R. Efficient backprop. In Orr, G. B. and Müller, K.-R. (eds.), Neural Networks: Tricks of the Trade, volume 1524 of Lecture Notes in Computer Science, pp. 9–50. Springer, 1998. | 1511.07289#42 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 43 | Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick W., Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. In AISTATS, 2015.
LeRoux, N., Manzagol, P.-A., and Bengio, Y. Topmoumoute online natural gradient algorithm. In Platt, J. C., Koller, D., Singer, Y., and Roweis, S. T. (eds.), Advances in Neural Information Processing Systems 20 (NIPS), pp. 849–856, 2008.
Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. CoRR, abs/1312.4400, 2013. URL http://arxiv.org/abs/1312.4400.
Maas, A. L., Hannun, A. Y., and Ng, A. Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on Machine Learning (ICML13), 2013.
Martens, J. Deep learning via Hessian-free optimization. In Fürnkranz, J. and Joachims, T. (eds.), Proceedings of the 27th International Conference on Machine Learning (ICML10), pp. 735–742, 2010. | 1511.07289#43 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 44 |

Mayr, A., Klambauer, G., Unterthiner, T., and Hochreiter, S. DeepTox: Toxicity prediction using deep learning. Front. Environ. Sci., 3(80), 2015. doi: 10.3389/fenvs.2015.00080. URL http://journal.frontiersin.org/article/10.3389/fenvs.2015.00080.
Nair, V. and Hinton, G. E. Rectified linear units improve restricted Boltzmann machines. In Fürnkranz, J. and Joachims, T. (eds.), Proceedings of the 27th International Conference on Machine Learning (ICML10), pp. 807–814, 2010.
Olivier, Y. Riemannian metrics for neural networks I: feedforward networks. CoRR, abs/1303.0818, 2013. URL http://arxiv.org/abs/1303.0818.
Pascanu, R. and Bengio, Y. Revisiting natural gradient for deep networks. In International Conference on Learning Representations 2014, 2014. URL http://arxiv.org/abs/1301.3584. arXiv:1301.3584. | 1511.07289#44 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 45 |

Raiko, T., Valpola, H., and LeCun, Y. Deep learning made easier by linear transformations in perceptrons. In Lawrence, N. D. and Girolami, M. A. (eds.), Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS12), volume 22, pp. 924–932, 2012.
Schraudolph, N. N. Centering neural network gradient factors. In Orr, G. B. and Müller, K.-R. (eds.), Neural Networks: Tricks of the Trade, volume 1524 of Lecture Notes in Computer Science, pp. 207–226. Springer, 1998.
Schraudolph, Nicol N. A Fast, Compact Approximation of the Exponential Function. Neural Computation, 11:853–862, 1999.
Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin A. Striving for simplicity: The all convolutional net. CoRR, abs/1412.6806, 2014. URL http://arxiv.org/abs/1412.6806. | 1511.07289#45 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 46 |

Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Training very deep networks. CoRR, abs/1507.06228, 2015. URL http://arxiv.org/abs/1507.06228.
Unterthiner, T., Mayr, A., Klambauer, G., and Hochreiter, S. Toxicity prediction using deep learning. CoRR, abs/1503.01445, 2015. URL http://arxiv.org/abs/1503.01445.
Vinyals, O. and Povey, D. Krylov subspace descent for deep learning. In AISTATS, 2012. URL http://arxiv.org/pdf/1111.4259v1. arXiv:1111.4259.
Xu, B., Wang, N., Chen, T., and Li, M. Empirical evaluation of rectified activations in convolutional network. CoRR, abs/1505.00853, 2015. URL http://arxiv.org/abs/1505.00853. | 1511.07289#46 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 47 | Yang, H. H. and Amari, S.-I. Complexity issues in natural gradient descent method for training multilayer perceptrons. Neural Computation, 10(8), 1998.
# A INVERSE OF BLOCK MATRICES
Lemma 1. The positive definite matrix M is in block format with matrix A, vector b, and scalar c. The inverse of M is
$$M^{-1} = \begin{pmatrix} K & u \\ u^T & s \end{pmatrix}, \quad (16)$$
where
$$K = A^{-1} + u\, s^{-1} u^T \quad (17)$$
$$u = -s\, A^{-1} b \quad (18)$$
$$s = \left(c - b^T A^{-1} b\right)^{-1}. \quad (19)$$
Proof. For block matrices the inverse is
$$\begin{pmatrix} A & B \\ B^T & C \end{pmatrix}^{-1} = \begin{pmatrix} K & U \\ U^T & S \end{pmatrix}, \quad (20)$$
where the matrices on the right hand side are:
$$K = A^{-1} + A^{-1} B \left(C - B^T A^{-1} B\right)^{-1} B^T A^{-1} \quad (21)$$
$$U = -A^{-1} B \left(C - B^T A^{-1} B\right)^{-1} \quad (22)$$
$$U^T = -\left(C - B^T A^{-1} B\right)^{-1} B^T A^{-1} \quad (23)$$
$$S = \left(C - B^T A^{-1} B\right)^{-1} \quad (24)$$
Further it follows that
$$K = A^{-1} + U S^{-1} U^T. \quad (25)$$
We now use this formula for B = b being a vector and C = c a scalar. We obtain | 1511.07289#47 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
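A quick numerical sanity check of Lemma 1 from the chunk above, written as a NumPy sketch (the random test matrix and its size are arbitrary assumptions): it builds a positive definite matrix, applies Eqs. (17)–(19), and compares against a direct inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive definite matrix M, split into blocks A, b, c
n = 5
X = rng.normal(size=(n, n))
M = X @ X.T + n * np.eye(n)
A, b, c = M[:-1, :-1], M[:-1, -1], M[-1, -1]

# Blockwise inverse following Lemma 1
A_inv = np.linalg.inv(A)
s = 1.0 / (c - b @ A_inv @ b)            # Eq. (19)
u = -s * (A_inv @ b)                     # Eq. (18)
K = A_inv + np.outer(u, u) / s           # Eq. (17): K = A^{-1} + u s^{-1} u^T
M_inv_block = np.block([[K, u[:, None]],
                        [u[None, :], np.array([[s]])]])

assert np.allclose(M_inv_block, np.linalg.inv(M))  # matches the direct inverse
```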
1511.07289 | 48 |

$$K = A^{-1} + U S^{-1} U^T. \quad (25)$$
We now use this formula for B = b being a vector and C = c a scalar. We obtain
$$\begin{pmatrix} A & b \\ b^T & c \end{pmatrix}^{-1} = \begin{pmatrix} K & u \\ u^T & s \end{pmatrix}, \quad (26)$$
where the right hand side matrices, vectors, and the scalar s are:
$$K = A^{-1} + A^{-1} b \left(c - b^T A^{-1} b\right)^{-1} b^T A^{-1} \quad (27)$$
$$u = -A^{-1} b \left(c - b^T A^{-1} b\right)^{-1} \quad (28)$$
$$u^T = -\left(c - b^T A^{-1} b\right)^{-1} b^T A^{-1} \quad (29)$$
$$s = \left(c - b^T A^{-1} b\right)^{-1}. \quad (30)$$
Again it follows that
$$K = A^{-1} + u\, s^{-1} u^T. \quad (31)$$
A reformulation using u gives
$$K = A^{-1} + u\, s^{-1} u^T \quad (32)$$
$$u = -s\, A^{-1} b \quad (33)$$
$$u^T = -s\, b^T A^{-1} \quad (34)$$
$$s = \left(c - b^T A^{-1} b\right)^{-1}. \quad (35)$$
# B QUADRATIC FORM OF MEAN AND INVERSE SECOND MOMENT
Lemma 2. For a random variable a holds
$$E^T(a)\, E^{-1}(a a^T)\, E(a) \le 1 \quad (36)$$
and | 1511.07289#48 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 49 | Lemma 2. For a random variable a holds
$$E^T(a)\, E^{-1}(a a^T)\, E(a) \le 1 \quad (36)$$
and
$$\left(1 - E^T(a)\, E^{-1}(a a^T)\, E(a)\right)^{-1} = 1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a). \quad (37)$$
Furthermore holds
$$\left(1 - E^T(a)\, E^{-1}(a a^T)\, E(a)\right)^{-1} \left(1 - E_p^T(a)\, E^{-1}(a a^T)\, E(a)\right) = 1 + (E(a) - E_p(a))^T\, \mathrm{Var}^{-1}(a)\, E(a). \quad (38)$$
Proof. The Sherman-Morrison Theorem states
$$\left(A + b c^T\right)^{-1} = A^{-1} - \frac{A^{-1} b\, c^T A^{-1}}{1 + c^T A^{-1} b}. \quad (39)$$
Therefore we have
$$c^T \left(A + b b^T\right)^{-1} b = c^T A^{-1} b - \frac{c^T A^{-1} b\; b^T A^{-1} b}{1 + b^T A^{-1} b} = \frac{c^T A^{-1} b\,(1 + b^T A^{-1} b) - (c^T A^{-1} b)(b^T A^{-1} b)}{1 + b^T A^{-1} b} = \frac{c^T A^{-1} b}{1 + b^T A^{-1} b}. \quad (40)$$
Using the identity
E(a aT ) = Var(a) + E(a) ET (a) (41) | 1511.07289#49 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
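Lemma 2 and the Sherman–Morrison step above can also be verified empirically; the NumPy sketch below estimates the moments of a random vector (the particular distribution is an arbitrary assumption) and checks that the quadratic form of Eq. (36) stays below one and matches the closed form of Eq. (42).

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample a correlated random vector a with nonzero mean and estimate its moments
a = rng.normal(size=(100_000, 4)) @ rng.normal(size=(4, 4)) + rng.normal(size=4)
mean = a.mean(axis=0)                       # E(a)
second = a.T @ a / len(a)                   # E(a a^T)
cov = np.cov(a, rowvar=False, bias=True)    # Var(a) = E(a a^T) - E(a) E(a)^T

q = mean @ np.linalg.inv(second) @ mean     # E^T(a) E^{-1}(a a^T) E(a), Eq. (36)
r = mean @ np.linalg.inv(cov) @ mean        # E^T(a) Var^{-1}(a) E(a)
print(q <= 1.0, np.isclose(q, r / (1.0 + r)))   # both True, as in Eq. (42)
```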
1511.07289 | 50 | Using the identity
E(a aT ) = Var(a) + E(a) ET (a) (41)
for the second moment and Eq. (40), we get
$$E^T(a)\, E^{-1}(a a^T)\, E(a) = E^T(a) \left(\mathrm{Var}(a) + E(a) E^T(a)\right)^{-1} E(a) = \frac{E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)}{1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)} \le 1. \quad (42)$$
The last inequality follows from the fact that Var(a) is positive definite. From the last equation, we obtain further
$$\left(1 - E^T(a)\, E^{-1}(a a^T)\, E(a)\right)^{-1} = 1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a). \quad (43)$$
For the mixed quadratic form we get from Eq. (40)
$$E_p^T(a)\, E^{-1}(a a^T)\, E(a) = E_p^T(a) \left(\mathrm{Var}(a) + E(a) E^T(a)\right)^{-1} E(a) = \frac{E_p^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)}{1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)}. \quad (44)$$
From this equation follows | 1511.07289#50 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 51 |
From this equation follows
$$1 - E_p^T(a)\, E^{-1}(a a^T)\, E(a) = 1 - \frac{E_p^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)}{1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)} \quad (45)$$
$$= \frac{1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a) - E_p^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)}{1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)} = \frac{1 + (E(a) - E_p(a))^T\, \mathrm{Var}^{-1}(a)\, E(a)}{1 + E^T(a)\, \mathrm{Var}^{-1}(a)\, E(a)}$$
Therefore we get
$$\left(1 - E^T(a)\, E^{-1}(a a^T)\, E(a)\right)^{-1} \left(1 - E_p^T(a)\, E^{-1}(a a^T)\, E(a)\right) = 1 + (E(a) - E_p(a))^T\, \mathrm{Var}^{-1}(a)\, E(a). \quad (46)$$
# C VARIANCE OF MEAN ACTIVATIONS IN ELU AND RELU NETWORKS | 1511.07289#51 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
1511.07289 | 52 | # C VARIANCE OF MEAN ACTIVATIONS IN ELU AND RELU NETWORKS
To compare the variance of median activations in ReLU and ELU networks, we trained a neural network with 5 hidden layers of 256 hidden units for 200 epochs using a learning rate of 0.01, once using ReLU and once using ELU activation functions on the MNIST dataset. After each epoch, we calculated the median activation of each hidden unit on the whole training set. We then calculated the variance of these changes, which is depicted in Figure 7. The median varies much more in ReLU networks. This indicates that ReLU networks continuously try to correct the bias shift introduced by previous weight updates, while this effect is much less prominent in ELU networks.
[Figure 7: per-layer histograms (Layer 1–Layer 5) of the variance of the median unit activations for ReLU and ELU networks.] | 1511.07289#52 | Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | We introduce the "exponential linear unit" (ELU) which speeds up learning in
deep neural networks and leads to higher classification accuracies. Like
rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs
(PReLUs), ELUs alleviate the vanishing gradient problem via the identity for
positive values. However, ELUs have improved learning characteristics compared
to the units with other activation functions. In contrast to ReLUs, ELUs have
negative values which allows them to push mean unit activations closer to zero
like batch normalization but with lower computational complexity. Mean shifts
toward zero speed up learning by bringing the normal gradient closer to the
unit natural gradient because of a reduced bias shift effect. While LReLUs and
PReLUs have negative values, too, they do not ensure a noise-robust
deactivation state. ELUs saturate to a negative value with smaller inputs and
thereby decrease the forward propagated variation and information. Therefore,
ELUs code the degree of presence of particular phenomena in the input, while
they do not quantitatively model the degree of their absence. In experiments,
ELUs lead not only to faster learning, but also to significantly better
generalization performance than ReLUs and LReLUs on networks with more than 5
layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with
batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best
published result on CIFAR-100, without resorting to multi-view evaluation or
model averaging. On ImageNet, ELU networks considerably speed up learning
compared to a ReLU network with the same architecture, obtaining less than 10%
classification error for a single crop, single model network. | http://arxiv.org/pdf/1511.07289 | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | cs.LG | Published as a conference paper at ICLR 2016 | null | cs.LG | 20151123 | 20160222 | [] |
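The measurement described in the appendix chunk above reduces to a small amount of bookkeeping; the sketch below shows one way to compute it, with synthetic activations standing in for the recorded MNIST hidden-layer activations (shapes and data are assumptions of this illustration).

```python
import numpy as np

def median_drift_variance(activations_per_epoch):
    """activations_per_epoch: array (n_epochs, n_samples, n_units) of hidden
    activations recorded on the training set after each epoch. Returns, per
    unit, the variance of its median activation across epochs."""
    medians = np.median(activations_per_epoch, axis=1)   # (n_epochs, n_units)
    return medians.var(axis=0)

# Synthetic stand-in for one 256-unit hidden layer over 200 epochs
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 1_000, 256))
print(median_drift_variance(acts).mean())
```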
1511.06807 | 0 | arXiv:1511.06807v1 [stat.ML] 21 Nov 2015
# Under review as a conference paper at ICLR 2016
# ADDING GRADIENT NOISE IMPROVES LEARNING FOR VERY DEEP NETWORKS
Arvind Neelakantan*, Luke Vilnis* College of Information and Computer Sciences University of Massachusetts Amherst {arvind,luke}@cs.umass.edu
# Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach Google Brain {qvl,ilyasu,lukaszkaiser,kkurach}@google.com
# James Martens University of Toronto [email protected]
# ABSTRACT | 1511.06807#0 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
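A minimal sketch of the technique named in the abstract above: perturb each gradient with zero-mean Gaussian noise before the SGD update. The decaying variance schedule sigma_t^2 = eta / (1 + t)^gamma and its constants are assumptions of this sketch, not values stated in the abstract.

```python
import numpy as np

def noisy_sgd_step(params, grads, lr, step, eta=0.01, gamma=0.55,
                   rng=np.random.default_rng(0)):
    """One SGD update with annealed Gaussian noise added to every gradient."""
    sigma = np.sqrt(eta / (1.0 + step) ** gamma)   # assumed annealing schedule
    return [p - lr * (g + rng.normal(scale=sigma, size=g.shape))
            for p, g in zip(params, grads)]

# Hypothetical usage with two parameter tensors and their gradients
params = [np.zeros((20, 20)), np.zeros(20)]
grads = [np.ones((20, 20)), np.ones(20)]
params = noisy_sgd_step(params, grads, lr=0.1, step=0)
```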
1511.06856 | 0 | arXiv:1511.06856v3 [cs.CV] 22 Sep 2016
Published as a conference paper at ICLR 2016
# DATA-DEPENDENT INITIALIZATIONS OF CONVOLUTIONAL NEURAL NETWORKS
Philipp Krähenbühl1, Carl Doersch1,2, Jeff Donahue1, Trevor Darrell1 1Department of Electrical Engineering and Computer Science, UC Berkeley 2Machine Learning Department, Carnegie Mellon {philkr,jdonahue,trevor}@eecs.berkeley.edu; [email protected]
# ABSTRACT | 1511.06856#0 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
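In the spirit of the idea summarized above, the sketch below rescales randomly initialized weights using activation statistics from a batch of real data so that every unit's pre-activation has roughly unit variance; this simplified per-layer variant is an assumption of the illustration, not the paper's exact procedure.

```python
import numpy as np

def data_dependent_init(layer_sizes, batch, rng=np.random.default_rng(0)):
    """Scale each layer's random weights with a real data batch so that every
    unit's pre-activation has roughly unit variance (simplified sketch)."""
    weights, x = [], batch
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(size=(n_in, n_out))
        z = x @ W
        W = W / (z.std(axis=0, keepdims=True) + 1e-8)   # per-unit rescaling
        x = np.maximum(x @ W, 0.0)                      # ReLU forward pass
        weights.append(W)
    return weights

batch = np.random.default_rng(1).normal(size=(128, 64))   # stand-in data batch
weights = data_dependent_init([64, 256, 256, 10], batch)
print([round(float(W.std()), 3) for W in weights])
```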
1511.06939 | 0 | arXiv:1511.06939v4 [cs.LG] 29 Mar 2016
Published as a conference paper at ICLR 2016
# SESSION-BASED RECOMMENDATIONS WITH RECURRENT NEURAL NETWORKS
Balázs Hidasi* Gravity R&D Inc. Budapest, Hungary [email protected]
# Alexandros Karatzoglou Telefonica Research Barcelona, Spain [email protected]
Linas Baltrunas* Netflix Los Gatos, CA, USA [email protected]
Domonkos Tikk Gravity R&D Inc. Budapest, Hungary [email protected]
# ABSTRACT | 1511.06939#0 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
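To make the setup above concrete, here is a toy NumPy sketch of a session-based recommender: a single GRU reads the item sequence of one session and produces a score per catalog item for the next click. All sizes, the plain dot-product output layer, and the absence of the paper's ranking loss are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinySessionGRU:
    """Toy GRU that turns a session's item sequence into next-item scores."""
    def __init__(self, n_items, d=32):
        self.E = rng.normal(0.0, 0.1, (n_items, d))    # item embeddings
        self.Wz, self.Wr, self.Wh = (rng.normal(0.0, 0.1, (2 * d, d)) for _ in range(3))
        self.V = rng.normal(0.0, 0.1, (d, n_items))    # output projection

    def scores(self, session_items):
        h = np.zeros(self.E.shape[1])
        for i in session_items:                        # one GRU step per click
            x = np.concatenate([self.E[i], h])
            z, r = sigmoid(x @ self.Wz), sigmoid(x @ self.Wr)
            h_new = np.tanh(np.concatenate([self.E[i], r * h]) @ self.Wh)
            h = (1.0 - z) * h + z * h_new
        return h @ self.V                              # one score per item

model = TinySessionGRU(n_items=1000)
print(np.argsort(-model.scores([3, 17, 42]))[:5])      # top-5 recommendations
```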
1511.06807 | 1 | # James Martens University of Toronto [email protected]
# ABSTRACT
Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures.
# INTRODUCTION | 1511.06807#1 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 1 | # ABSTRACT
Convolutional Neural Networks spread through computer vision like a wildfire, impacting almost all visual tasks imaginable. Despite this, few researchers dare to train their models from scratch. Most work builds on one of a handful of ImageNet pre-trained models, and fine-tunes or adapts these for specific tasks. This is in large part due to the difficulty of properly initializing these networks from scratch. A small miscalibration of the initial weights leads to vanishing or exploding gradients, as well as poor convergence properties. In this work we present a fast and simple data-dependent initialization procedure, that sets the weights of a network such that all units in the network train at roughly the same rate, avoiding vanishing or exploding gradients. Our initialization matches the current state-of-the-art unsupervised or self-supervised pre-training methods on standard computer vision tasks, such as image classification and object detection, while reducing the pre-training time by three orders of magnitude. When combined with pre-training methods, our initialization significantly outperforms prior work, narrowing the gap between supervised and unsupervised pre-training.
# INTRODUCTION | 1511.06856#1 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
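To make the initialization idea above concrete, here is a minimal sketch of one way a data-dependent initialization can be set up: randomly initialized weights are rescaled so that each layer's pre-activations on a batch of real training data have roughly unit standard deviation. The layer sizes, the random batch standing in for real data, and the single global scale per layer are illustrative simplifications, not the exact procedure of 1511.06856.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative fully-connected net; in practice X would be a batch of real training data.
X = rng.normal(size=(256, 64))
dims = [64, 128, 128, 10]
weights = [rng.normal(size=(m, n)) for m, n in zip(dims[:-1], dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]

# Walk through the layers and rescale each weight matrix so that its
# pre-activations on the data batch have unit standard deviation.  This keeps
# signal (and therefore gradient) magnitudes comparable across layers.
h = X
for i in range(len(weights)):
    z = h @ weights[i] + biases[i]
    weights[i] /= z.std() + 1e-8          # data-dependent scale for this layer
    z = h @ weights[i] + biases[i]
    print(f"layer {i}: pre-activation std = {z.std():.3f}")  # ~1.0 after rescaling
    h = relu(z)
```

Because every layer then sees comparably scaled signals on real data, no single layer's gradients dominate at the start of training.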
1511.06939 | 1 | Domonkos Tikk Gravity R&D Inc. Budapest, Hungary [email protected]
# ABSTRACT
We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.
# INTRODUCTION | 1511.06939#1 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 2 | 1
# INTRODUCTION
Deep neural networks have shown remarkable success in diverse domains including image recognition (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012) and language processing applications (Sutskever et al., 2014; Bahdanau et al., 2014). This broad success comes from a confluence of several factors. First, the creation of massive labeled datasets has allowed deep networks to demonstrate their advantages in expressiveness and scalability. The increase in computing power has also enabled training of far larger networks with more forgiving optimization dynamics (Choromanska et al., 2015). Additionally, architectures such as convolutional networks (LeCun et al., 1998) and long short-term memory networks (Hochreiter & Schmidhuber, 1997) have proven to be easier to optimize than classical feedforward and recurrent models. Finally, the success of deep networks is also a result of the development of simple and broadly applicable learning techniques such as dropout (Srivastava et al., 2014), ReLUs (Nair & Hinton, 2010), gradient clipping (Pascanu
*First two authors contributed equally. Work was done when all authors were at Google, Inc.
# Under review as a conference paper at ICLR 2016 | 1511.06807#2 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 2 | In recent years, Convolutional Neural Networks (CNNs) have improved performance across a wide variety of computer vision tasks (Szegedy et al., 2015; Simonyan & Zisserman, 2015; Girshick, 2015). Much of this improvement stems from the ability of CNNs to use large datasets better than previous methods. In fact, good performance seems to require large datasets: the best-performing methods usually begin by "pre-training" CNNs to solve the million-image ImageNet classification challenge (Russakovsky et al., 2015). This "pre-trained" representation is then "fine-tuned" on a smaller dataset where the target labels may be more expensive to obtain. These fine-tuning datasets generally do not fully constrain the CNN learning: different initializations can be trained until they achieve equally high training-set performance, but they will often perform very differently at test time. For example, initialization via ImageNet pre-training is known to produce a better-performing network at test time across many problems. However, little else is known about which other factors affect a CNN's generalization performance when trained on small datasets. There | 1511.06856#2 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 2 | # INTRODUCTION
Session-based recommendation is a relatively unappreciated problem in the machine learning and recommender systems community. Many e-commerce recommender systems (particularly those of small retailers) and most news and media sites do not typically track the user-ids of the users that visit their sites over a long period of time. While cookies and browser fingerprinting can provide some level of user recognizability, those technologies are often not reliable enough and moreover raise privacy concerns. Even if tracking is possible, lots of users have only one or two sessions on a smaller e-commerce site, and in certain domains (e.g. classified sites) the behavior of users often shows session-based traits. Thus subsequent sessions of the same user should be handled independently. Consequently, most session-based recommendation systems deployed for e-commerce are based on relatively simple methods that do not make use of a user profile, e.g. item-to-item similarity, co-occurrence, or transition probabilities. While effective, those methods often take only the last click or selection of the user into account, ignoring the information of past clicks. | 1511.06939#2 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 3 | *First two authors contributed equally. Work was done when all authors were at Google, Inc.
et al., 2013; Graves, 2013), optimization and weight initialization strategies (Glorot & Bengio, 2010; Sutskever et al., 2013; He et al., 2015).
Recent work has aimed to push neural network learning into more challenging domains, such as question answering or program induction. These more complicated problems demand more complicated architectures (e.g., Graves et al. (2014); Sukhbaatar et al. (2015)) thereby posing new optimization challenges. In order to achieve good performance, researchers have reported the necessity of additional techniques such as supervision in intermediate steps (Weston et al., 2014), warm-starts (Peng et al., 2015), random restarts, and the removal of certain activation functions in early stages of training (Sukhbaatar et al., 2015). | 1511.06807#3 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 3 | network at test time across many problems. However, little else is known about which other factors affect a CNN's generalization performance when trained on small datasets. There is a pressing need to understand these factors, first because we can potentially exploit them to improve performance on tasks where few labels are available. Second, they may already be confounding our attempts to evaluate pre-training methods. A pre-trained network which extracts useful semantic information but cannot be fine-tuned for spurious reasons can be easily overlooked. Hence, this work aims to explore how to better fine-tune CNNs. We show that simple statistical properties of the network, which can be easily measured using training data, can have a significant impact on test time performance. Surprisingly, we show that controlling for these statistical properties leads to a fast and general way to improve performance when training on relatively little data. | 1511.06856#3 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 3 | The most common methods used in recommender systems are factor models (Koren et al., 2009; Weimer et al., 2007; Hidasi & Tikk, 2012) and neighborhood methods (Sarwar et al., 2001; Koren, 2008). Factor models work by decomposing the sparse user-item interactions matrix into a set of d-dimensional vectors, one for each item and user in the dataset. The recommendation problem is then treated as a matrix completion/reconstruction problem whereby the latent factor vectors are then used to fill the missing entries by e.g. taking the dot product of the corresponding user-item latent factors. Factor models are hard to apply in session-based recommendation due to the absence
*The author spent 3 months at Telefonica Research during the research of this topic. †This work was done while the author was a member of the Telefonica Research group in Barcelona, Spain
of a user profile. On the other hand, neighborhood methods, which rely on computing similarities between items (or users), are based on co-occurrences of items in sessions (or user profiles). Neighborhood methods have been used extensively in session-based recommendations. | 1511.06939#3 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
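As a small illustration of the factor-model scoring described in chunk 1511.06939#3 above, the sketch below fills a missing user-item entry with the dot product of the corresponding latent vectors. The factor dimensionality, the random factors (which would normally be learned, e.g. by alternating least squares or SGD), and the helper names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_users, n_items, d = 5, 8, 4             # illustrative sizes
user_factors = rng.normal(scale=0.1, size=(n_users, d))
item_factors = rng.normal(scale=0.1, size=(n_items, d))

def score(user_id, item_id):
    """Predicted preference = dot product of the user and item latent vectors."""
    return float(user_factors[user_id] @ item_factors[item_id])

def recommend(user_id, k=3):
    """Rank all items for one user by the reconstructed matrix entry."""
    scores = item_factors @ user_factors[user_id]
    return list(np.argsort(-scores)[:k])

print(recommend(user_id=0))
```

This is exactly the kind of model that breaks down in the session-based setting: without a persistent user id there is no user factor to look up.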
1511.06807 | 4 | A recurring theme in recent works is that commonly-used optimization techniques are not always sufficient to robustly optimize the models. In this work, we explore a simple technique of adding annealed Gaussian noise to the gradient, which we find to be surprisingly effective in training deep neural networks with stochastic gradient descent. While there is a long tradition of adding random weight noise in classical neural networks, it has been under-explored in the optimization of modern deep architectures. In contrast to theoretical and empirical results on the regularizing effects of conventional stochastic gradient descent, we find that in practice the added noise can actually help us achieve lower training loss by encouraging active exploration of parameter space. This exploration proves especially necessary and fruitful when optimizing neural network models containing many layers or complex latent structures. | 1511.06807#4 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 4 | Empirical evaluations have found that when transferring deep features across tasks, freezing weights of some layers during fine-tuning generally harms performance (Yosinski et al., 2014). These results suggest that, given a small dataset, it is better to adjust all of the layers a little rather than to adjust just a few layers a large amount, and so perhaps the ideal setting will adjust all of the layers the
Code available: https://github.com/philkr/magic_init
Published as a conference paper at ICLR 2016 | 1511.06856#4 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 4 | The past few years have seen the tremendous success of deep neural networks in a number of tasks such as image and speech recognition (Russakovsky et al., 2014; Hinton et al., 2012) where unstructured data is processed through several convolutional and standard layers of (usually rectified linear) units. Sequential data modeling has recently also attracted a lot of attention with various flavors of RNNs being the model of choice for this type of data. Applications of sequence modeling range from text translation to conversation modeling to image captioning. | 1511.06939#4 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 5 | The main contribution of this work is to demonstrate the broad applicability of this simple method to the training of many complex modern neural architectures. Furthermore, to the best of our knowledge, our added noise schedule has not been used before in the training of deep networks. We consistently see improvement from injected gradient noise when optimizing a wide variety of models, including very deep fully-connected networks, and special-purpose architectures for question answering and algorithm learning. For example, this method allows us to escape a poor initialization and successfully train a 20-layer rectifier network on MNIST with standard gradient descent. It also enables a 72% relative reduction in error in question-answering, and doubles the number of accurate binary multiplication models learned across 7,000 random restarts. We hope that practitioners will see similar improvements in their own research by adding this simple technique, implementable in a single line of code, to their repertoire.
# 2 RELATED WORK
Adding random noise to the weights, gradient, or the hidden units has been a known technique amongst neural network practitioners for many years (e.g., An (1996)). However, the use of gradient noise has been rare and its benefits have not been fully documented with modern deep networks. | 1511.06807#5 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 5 | same amount. While these studies did indeed set the learning rate to be the same for all layers, somewhat counterintuitively this does not actually enforce that all layers learn at the same rate. To see this, say we have a network where there are two convolution layers separated by a ReLU. Multiplying the weights and bias term of the first layer by a scalar α > 0, and then dividing the weights (but not bias) of the next (higher) layer by the same constant α will result in a network which computes exactly the same function. However, note that the gradients of the two layers are not the same: they will be divided by α for the first layer, and multiplied by α for the second. Worse, an update of a given magnitude will have a smaller effect on the lower layer than the higher layer, simply because the lower layer's norm is now larger. Using this kind of reparameterization, it is easy to make the gradients for certain layers vanish during fine-tuning, or even to make them explode, resulting in a network that is impossible to fine-tune despite representing exactly the same function. Conversely, this sort of re-parameterization gives us a | 1511.06856#5 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
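The reparameterization argument in chunk 1511.06856#5 above is easy to verify numerically. The sketch below builds a tiny two-layer ReLU network with a squared loss, multiplies the first layer's weights and bias by α and divides the second layer's weights (but not its bias) by α, and checks that the output is unchanged while the first layer's gradient shrinks by α and the second layer's grows by α. The layer sizes and the loss are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3,))                 # a single input, illustrative sizes
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=(4,))
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=(2,))
target = rng.normal(size=(2,))

def forward_and_grads(W1, b1, W2, b2):
    """Tiny ReLU MLP with squared loss; returns output and gradients of W1, W2."""
    z1 = W1 @ x + b1
    h = np.maximum(z1, 0.0)
    y = W2 @ h + b2
    g_y = 2.0 * (y - target)              # dL/dy for L = ||y - target||^2
    gW2 = np.outer(g_y, h)
    g_h = W2.T @ g_y
    g_z1 = g_h * (z1 > 0)
    gW1 = np.outer(g_z1, x)
    return y, gW1, gW2

alpha = 10.0
y_a, gW1_a, gW2_a = forward_and_grads(W1, b1, W2, b2)
y_b, gW1_b, gW2_b = forward_and_grads(alpha * W1, alpha * b1, W2 / alpha, b2)

print(np.allclose(y_a, y_b))              # True: the function is unchanged
print(np.allclose(gW1_b, gW1_a / alpha))  # True: first-layer gradient divided by alpha
print(np.allclose(gW2_b, gW2_a * alpha))  # True: second-layer gradient multiplied by alpha
```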
1511.06939 | 5 | While RNNs have been applied to the aforementioned domains with remarkable success, little attention has been paid to the area of recommender systems. In this work we argue that RNNs can be applied to session-based recommendation with remarkable results; we deal with the issues that arise when modeling such sparse sequential data and also adapt the RNN models to the recommender setting by introducing a new ranking loss function suited to the task of training these models. The session-based recommendation problem shares some similarities with some NLP-related problems in terms of modeling as long as they both deal with sequences. In session-based recommendation we can consider the first item a user clicks when entering a web-site as the initial input of the RNN; we then query the model based on this initial input for a recommendation. Each consecutive click of the user will then produce an output (a recommendation) that depends on all the previous clicks. Typically the item set to choose from in recommender systems can be in the tens of thousands or even hundreds of thousands. Apart from the large size of the item set, another challenge is that click-stream datasets are typically quite large thus training time and scalability are really | 1511.06939#5 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
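To illustrate the session modeling described in chunk 1511.06939#5 — the first click initializes the recurrent state and every further click yields scores over the whole item set — here is a minimal Elman-style sketch. The paper's actual model uses gated recurrent units and a ranking loss; the sizes, parameter names, and random weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, emb, hidden = 1000, 32, 64       # illustrative sizes, not the paper's

# Parameters of a minimal (Elman-style) session RNN.
E = rng.normal(scale=0.1, size=(n_items, emb))         # item embeddings
W_in = rng.normal(scale=0.1, size=(hidden, emb))
W_h = rng.normal(scale=0.1, size=(hidden, hidden))
W_out = rng.normal(scale=0.1, size=(n_items, hidden))  # one score per candidate item

def session_scores(clicked_items):
    """Run one session through the RNN; after each click, return scores over all items."""
    h = np.zeros(hidden)
    outputs = []
    for item in clicked_items:
        h = np.tanh(W_in @ E[item] + W_h @ h)          # state summarizes all clicks so far
        outputs.append(W_out @ h)
    return outputs

scores = session_scores([42, 7, 256])
print(np.argsort(-scores[-1])[:5])        # top-5 recommendations after the 3rd click
```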
1511.06807 | 6 | Weight noise (Steijvers, 1996) and adaptive weight noise (Graves, 2011; Blundell et al., 2015), which usually maintains a Gaussian variational posterior over network weights, similarly aim to improve learning by added noise during training. They normally differ slightly from our proposed method in that the noise is not annealed and at convergence will be non-zero. Additionally, in adaptive weight noise, an extra set of parameters for the variance must be maintained.
Similarly, the technique of dropout (Srivastava et al., 2014) randomly sets groups of hidden units to zero at train time to improve generalization in a manner similar to ensembling.
An annealed Gaussian gradient noise schedule was used to train the highly non-convex Stochastic Neighbor Embedding model in Hinton & Roweis (2002). The gradient noise schedule that we found to be most effective is very similar to the Stochastic Gradient Langevin Dynamics algorithm of Welling & Teh (2011), who use gradients with added noise to accelerate MCMC inference for logistic regression and independent component analysis models. This use of gradient information in MCMC sampling for machine learning to allow faster exploration of state space was previously proposed by Neal (2011). | 1511.06807#6 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06939 | 6 | hundreds of thousands. Apart from the large size of the item set, another challenge is that click-stream datasets are typically quite large, thus training time and scalability are really important. As in most information retrieval and recommendation settings, we are interested in focusing the modeling power on the top items that the user might be interested in; to this end we use a ranking loss function to train the RNNs. | 1511.06939#6 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
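Chunk 1511.06939#6 above motivates training with a ranking loss that concentrates on the top items rather than normalizing over a huge catalogue. As one concrete example of such a loss, the sketch below computes a standard pairwise BPR-style objective over a handful of sampled negative items; the exact loss used in the paper may differ, and the scores and sampling here are placeholders.

```python
import numpy as np

def bpr_loss(scores, positive, negatives):
    """Pairwise (BPR-style) ranking loss: push the clicked item's score above
    the scores of a few sampled negative items, instead of normalizing over
    the full (huge) item catalogue."""
    diff = scores[positive] - scores[negatives]
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-diff)))))

rng = np.random.default_rng(0)
scores = rng.normal(size=1000)               # e.g. the RNN's output after a click
positive = 42                                # the item actually clicked next
negatives = rng.integers(0, 1000, size=16)   # sampled negative items
print(bpr_loss(scores, positive, negatives))
```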
1511.06807 | 7 | Various optimization techniques have been proposed to improve the training of neural networks. Most notable is the use of Momentum (Polyak, 1964; Sutskever et al., 2013; Kingma & Ba, 2014) or adaptive learning rates (Duchi et al., 2011; Dean et al., 2012; Zeiler, 2012). These methods are normally developed to provide good convergence rates for the convex setting, and then heuristically
# Under review as a conference paper at ICLR 2016 | 1511.06807#7 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 7 | Where can we look to find such a principle? A number of works have already suggested that statistical properties of network activations can impact network performance. Many focus on initializations which control the variance of network activations. Krizhevsky et al. (2012) carefully designed their architecture to ensure gradients neither vanish nor explode. However, this is no longer possible for deeper architectures such as VGG (Simonyan & Zisserman, 2015) or GoogLeNet (Szegedy et al., 2015). Glorot & Bengio (2010); Saxe et al. (2013); Sussillo & Abbot (2015); He et al. (2015); Bradley (2010) show that properly scaled random initialization can deal with the vanishing gradient problem, if the architectures are limited to linear transformations, followed by very specific non-linearities. Saxe et al. (2013) focus on linear networks, Glorot & Bengio (2010) derive an initialization for networks with tanh non-linearities, while He et al. (2015) focus on the more commonly used ReLUs. However, none of the above papers consider more general networks including pooling, dropout, | 1511.06856#7 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
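For reference, the variance-controlling initializations cited in chunk 1511.06856#7 amount to choosing the standard deviation of random weights from the layer's fan-in/fan-out. A minimal sketch, with illustrative layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_init(fan_in, fan_out):
    """Glorot & Bengio (2010)-style scaling, derived for tanh-like units."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(scale=std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    """He et al. (2015)-style scaling, derived for ReLU units."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(scale=std, size=(fan_in, fan_out))

# With the right scale, activation magnitude stays roughly constant across layers.
h = rng.normal(size=(512, 256))
for _ in range(10):
    h = np.maximum(h @ he_init(256, 256), 0.0)
print(f"activation std after 10 ReLU layers: {h.std():.3f}")  # stays O(1), no vanishing
```

These derivations assume plain linear-plus-nonlinearity stacks, which is exactly the limitation the text points out for networks with pooling, dropout, or DAG structure.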
1511.06939 | 7 | 2 RELATED WORK
2.1 SESSION-BASED RECOMMENDATION
Much of the work in the area of recommender systems has focused on models that work when a user identifier is available and a clear user profile can be built. In this setting, matrix factorization methods and neighborhood models have dominated the literature and are also employed on-line. One of the main approaches employed in session-based recommendation, and a natural solution to the problem of a missing user profile, is the item-to-item recommendation approach (Sarwar et al., 2001; Linden et al., 2003). In this setting an item-to-item similarity matrix is precomputed from the available session data, that is, items that are often clicked together in sessions are deemed to be similar. This similarity matrix is then simply used during the session to recommend the most similar items to the one the user has currently clicked. While simple, this method has been proven to be effective and is widely employed. While effective, these methods take only the last click of the user into account, in effect ignoring the information of the past clicks. | 1511.06939#7 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
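The item-to-item scheme described in chunk 1511.06939#7 — precompute a similarity matrix from items that co-occur in sessions, then recommend the items most similar to the current click — can be sketched in a few lines. The toy sessions and the raw co-occurrence count standing in for a proper similarity measure are illustrative assumptions.

```python
from collections import defaultdict
from itertools import combinations

# Toy session logs; in practice these come from click-stream data.
sessions = [[1, 2, 3], [2, 3, 4], [1, 3], [4, 2]]

# Precompute co-occurrence counts: items clicked together in a session are "similar".
cooc = defaultdict(lambda: defaultdict(int))
for session in sessions:
    for a, b in combinations(set(session), 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def recommend(current_item, k=2):
    """Item-to-item recommendation: most similar items to the one just clicked."""
    ranked = sorted(cooc[current_item].items(), key=lambda kv: -kv[1])
    return [item for item, _ in ranked[:k]]

print(recommend(current_item=3))   # the two items most often co-clicked with item 3
```

Note how the recommendation depends only on the current item, which is precisely the "last click only" limitation the text describes.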
1511.06807 | 8 | 2
applied to nonconvex problems. On the other hand, injecting noise in the gradient is more suitable for nonconvex problems. By adding even more stochasticity, this technique gives the model more chances to escape local minima (see a similar argument in Bottou (1992)), or to traverse quickly through the "transient" plateau phase of early learning (see a similar analysis for momentum in Sutskever et al. (2013)). This is borne out empirically in our observation that adding gradient noise can actually result in lower training loss. In this sense, we suspect adding gradient noise is similar to simulated annealing (Kirkpatrick et al., 1983) which exploits random noise to explore complex optimization landscapes. This can be contrasted with well-known benefits of stochastic gradient descent as a learning algorithm (Robbins & Monro, 1951; Bousquet & Bottou, 2008), where both theory and practice have shown that the noise induced by the stochastic process aids generalization by reducing overfitting.
# 3 METHOD
We consider a simple technique of adding time-dependent Gaussian noise to the gradient g at every training step t: | 1511.06807#8 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 8 | while He et al. (2015) focus on the more commonly used ReLUs. However, none of the above papers consider more general networks including pooling, dropout, LRN layers (Krizhevsky et al., 2012), or DAG-structured networks (Szegedy et al., 2015). We argue that initializing the network with real training data improves these approximations and achieves a better performance. Early approaches to data-driven initializations showed that whitening the activations at all layers can mitigate the vanishing gradient problem (LeCun et al., 1998), but it does not ensure all layers train at an equal rate. More recently, batch normalization (Ioffe & Szegedy, 2015) enforces that the output of each convolution and fully-connected layer are zero mean with unit variance for every batch. In practice, however, this means that the network's behavior on a single example depends on the other members of the batch, and removing this dependency at test-time relies on approximating batch statistics. The fact that these methods show improved convergence speed at training time suggests we are justified in investigating the statistics of activations. However, the main goal of our work differs in two important respects. | 1511.06856#8 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
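A minimal sketch of the batch normalization behavior referred to in chunk 1511.06856#8: each feature is normalized to zero mean and unit variance within the current batch, which is why a single example's output depends on the rest of the batch and why test time needs approximate (running) statistics. The learnable scale and shift are reduced to scalars here for brevity; sizes are illustrative.

```python
import numpy as np

def batch_norm(z, gamma=1.0, beta=0.0, eps=1e-5):
    """Training-mode batch normalization for a fully-connected layer:
    normalize every feature to zero mean / unit variance *within the batch*."""
    mean = z.mean(axis=0)
    var = z.var(axis=0)
    return gamma * (z - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
z = rng.normal(loc=3.0, scale=10.0, size=(128, 32))   # badly scaled pre-activations
z_bn = batch_norm(z)
print(z_bn.mean(axis=0)[:3], z_bn.std(axis=0)[:3])    # ~0 and ~1 per feature
# At test time the batch statistics are replaced by running averages collected
# during training, which is the approximation the text refers to.
```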
1511.06939 | 8 | A somewhat different approach to session-based recommendation is the use of Markov Decision Processes (MDPs) (2002). MDPs are models of sequential stochastic decision problems. An MDP is defined as a four-tuple (S, A, Rwd, tr) where S is the set of states, A is a set of actions, Rwd is a reward function and tr is the state-transition function. In recommender systems actions can be equated with recommendations, and the simplest MDPs are essentially first-order Markov chains where the next recommendation can be simply computed on the basis of the transition probability between items. The main issue with applying Markov chains in session-based recommendation is that the state space quickly becomes unmanageable when trying to include all possible sequences of user selections.
The extended version of the General Factorization Framework (GFF) (Hidasi & Tikk, 2015) is capable of using session data for recommendations. It models a session by the sum of its events. It uses two kinds of latent representations for items: one represents the item itself, the other represents the item as part of a session. The session is then represented as the average of the feature vectors of the part-of-a-session item representations. However, this approach does not consider any ordering within the session.
2.2 DEEP LEARNING IN RECOMMENDERS | 1511.06939#8 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
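The "simplest MDP" case mentioned in chunk 1511.06939#8 — a first-order Markov chain over items — reduces to counting consecutive clicks and recommending the most probable next item. The toy sessions below are illustrative; a real system would need smoothing, and it would still face the state-space explosion the text describes once longer histories are included in the state.

```python
from collections import defaultdict

# Toy sessions; a first-order Markov chain only looks at consecutive clicks.
sessions = [[1, 2, 3], [1, 2, 4], [2, 3], [3, 4]]

counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for prev, nxt in zip(s, s[1:]):
        counts[prev][nxt] += 1

def next_item_probs(current_item):
    """Empirical transition probabilities P(next | current)."""
    total = sum(counts[current_item].values())
    return {item: c / total for item, c in counts[current_item].items()}

print(next_item_probs(2))   # P(3|2) = 2/3, P(4|2) = 1/3: recommend item 3 after item 2
```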
1511.06807 | 9 | # 3 METHOD
We consider a simple technique of adding time-dependent Gaussian noise to the gradient g at every training step t:
g_t ← g_t + N(0, σ_t^2)
Our experiments indicate that adding annealed Gaussian noise by decaying the variance works better than using fixed Gaussian noise. We use a schedule inspired from Welling & Teh (2011) for most of our experiments and take:
σ_t^2 = η / (1 + t)^γ    (1)
with η selected from {0.01, 0.3, 1.0} and γ = 0.55. Higher gradient noise at the beginning of training forces the gradient away from 0 in the early stages.
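A minimal sketch of how the schedule in Eq. (1) plugs into plain SGD: at step t, Gaussian noise with standard deviation sqrt(η / (1 + t)^γ) is added to every gradient before the update. The learning rate, dummy parameters, and helper names are illustrative, not the paper's training setup.

```python
import numpy as np

def gradient_noise_std(t, eta=0.3, gamma=0.55):
    """Noise standard deviation from Eq. (1): sigma_t^2 = eta / (1 + t)^gamma."""
    return np.sqrt(eta / (1.0 + t) ** gamma)

def noisy_sgd_step(params, grads, t, lr=0.01, rng=np.random.default_rng(0)):
    """One SGD step with annealed Gaussian gradient noise added to every gradient."""
    sigma = gradient_noise_std(t)
    return [p - lr * (g + rng.normal(scale=sigma, size=g.shape))
            for p, g in zip(params, grads)]

# Illustrative usage on dummy parameters/gradients (not a full training loop).
params = [np.zeros((4, 4)), np.zeros(4)]
grads = [np.ones((4, 4)), np.ones(4)]
for t in range(3):
    params = noisy_sgd_step(params, grads, t)
print(gradient_noise_std(0), gradient_noise_std(10_000))  # noise decays as training proceeds
```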
# 4 EXPERIMENTS
In the following experiments, we consider a variety of complex neural network architectures: Deep networks for MNIST digit classification, End-To-End Memory Networks (Sukhbaatar et al., 2015) and Neural Programmer (Neelakantan et al., 2015) for question answering, Neural Random Access Machines (Kurach et al., 2015) and Neural GPUs (Kaiser & Sutskever, 2015) for algorithm learning. The models and results are described as follows.
4.1 DEEP FULLY-CONNECTED NETWORKS | 1511.06807#9 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 9 | speed at training time suggests we are justiï¬ed in investigating the statistics of activations. However, the main goal of our work differs in two important respects. First, these previous works pay relatively little attention to the behavior on smaller training sets, instead focusing on training speed. Second, while all above initializations require a random initialization, our approach aims to handle structured initialization, and even improve pre-trained networks. | 1511.06856#9 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 9 | 2.2 DEEP LEARNING IN RECOMMENDERS
One of the first related methods in the neural networks literature was the use of Restricted Boltzmann Machines (RBM) for Collaborative Filtering (Salakhutdinov et al., 2007). In this work an RBM is used to model user-item interaction and perform recommendations. This model has been shown to be one of the best performing Collaborative Filtering models. Deep models have been used to extract features from unstructured content such as music or images that are then used together with more conventional collaborative filtering models. In Van den Oord et al. (2013) a convolutional deep network is used to extract features from music files that are then used in a factor model. More recently Wang et al. (2015) introduced a more generic approach whereby a deep network is used to extract generic content features from any type of item; these features are then incorporated in a standard collaborative filtering model to enhance the recommendation performance. This approach seems to be particularly useful in settings where there is not sufficient user-item interaction information.
# 3 RECOMMENDATIONS WITH RNNS | 1511.06939#9 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 10 | 4.1 DEEP FULLY-CONNECTED NETWORKS
For our first set of experiments, we examine the impact of adding gradient noise when training a very deep fully-connected network on the MNIST handwritten digit classification dataset (LeCun et al., 1998). Our network is deep: it has 20 hidden layers, with each layer containing 50 hidden units. We use the ReLU activation function (Nair & Hinton, 2010).
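A sketch of this architecture and of the "Simple Init" scheme as plain NumPy (data loading and the SGD loop are left out; shapes follow MNIST's 784 inputs and 10 classes, and the helper names are assumptions of this example):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784] + [50] * 20 + [10]   # 20 hidden layers of 50 units each

# Simple Init: every weight drawn from a zero-mean Gaussian with std 0.1.
weights = [rng.normal(0.0, 0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)       # ReLU hidden layers
    return h @ weights[-1] + biases[-1]      # linear output layer (logits)

logits = forward(rng.normal(size=(4, 784)))  # a fake batch of four inputs
print(logits.shape)                          # (4, 10)
```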
In this experiment, we add gradient noise sampled from a Gaussian distribution with mean 0, and decaying variance according to the schedule in Equation (1) with η = 0.01. We train with SGD without momentum, using the fixed learning rates of 0.1 and 0.01. Unless otherwise specified, the weights of the network are initialized from a Gaussian with mean zero, and standard deviation of 0.1, which we call Simple Init. | 1511.06807#10 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06939 | 10 | # 3 RECOMMENDATIONS WITH RNNS
Recurrent Neural Networks have been devised to model variable-length sequence data. The main difference between RNNs and conventional feedforward deep models is the existence of an internal hidden state in the units that compose the network. Standard RNNs update their hidden state h using the following update function:
$h_t = g(W x_t + U h_{t-1})$ (1), where $g$ is a smooth and bounded function such as a logistic sigmoid function and $x_t$ is the input of the unit at time $t$. An RNN outputs a probability distribution over the next element of the sequence, given its current state $h_t$.
A Gated Recurrent Unit (GRU) (Cho et al., 2014) is a more elaborate model of an RNN unit that aims at dealing with the vanishing gradient problem. GRU gates essentially learn when and by how much to update the hidden state of the unit. The activation of the GRU is a linear interpolation between the previous activation and the candidate activation $\hat{h}_t$: $h_t = (1 - z_t) h_{t-1} + z_t \hat{h}_t$ (2)
where the update gate is given by:
$z_t = \sigma(W_z x_t + U_z h_{t-1})$ (3)
while the candidate activation function $\hat{h}_t$ is computed in a similar manner: | 1511.06939#10 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 11 | The results of our experiment are in Table 1. When trained from Simple Init we can see that adding noise to the gradient helps in achieving higher average and best accuracy over 20 runs using each learning rate for a total of 40 runs (Table 1, Experiment 1). We note that the average is closer to 50% because the small learning rate of 0.01 usually gives very slow convergence. We also try our approach on a more shallow network of 5 layers, but adding noise does not improve the training in that case.
Next, we experiment with clipping the gradients with two threshold values: 100 and 10 (Table 1, Experiment 2, and 3). Here, we find training with gradient noise is insensitive to the gradient clipping values. By tuning the clipping threshold, it is possible to get comparable accuracy without noise for this problem. | 1511.06807#11 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 11 | zk = fk(zkâ1; θk),
where $z_k$ is a vector of hidden activations of the network, and $f_k$ is a transformation with parameters $\theta_k$. $f_k$ may be a linear transformation $f_k(z_{k-1}; \theta_k) = W_k z_{k-1} + b_k$, or it may be a non-linearity $f_{k+1}(z_k; \theta_{k+1}) = \sigma_{k+1}(z_k)$, such as a rectified linear unit (ReLU) $\sigma(x) = \max(x, 0)$. Other common non-linearities include local response normalization or pooling (Krizhevsky et al., 2012; Szegedy et al., 2015; Simonyan & Zisserman, 2015). However, as is common in neural networks, we assume these nonlinearities are not parametrized and are kept fixed during training. Hence, $\theta_k$ contains only $(W_k, b_k)$ for each affine layer $k$. To deal with spatially-structured inputs like images, most hidden activations $z_k \in \mathbb{R}^{C_k \times A_k \times B_k}$ are arranged in a two-dimensional grid of size $A_k \times B_k$ (for image width $A_k$ and height $B_k$) with $C_k$ | 1511.06856#11 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06807 | 12 | In our fourth and fifth experiment (Table 1, Experiment 4, and 5), we use two analytically-derived ReLU initialization techniques (which we term Good Init) recently-proposed by Sussillo & Abbott (2014) and He et al. (2015), and find that adding gradient noise does not help. Previous work has found that stochastic gradient descent with carefully tuned initialization, momentum, learning rate, and learning rate decay can optimize such extremely deep fully-connected ReLU networks (Srivastava et al., 2015). It would be harder to find such a robust initialization technique for the more complex heterogeneous architectures considered in later sections. Accordingly, we find in later experiments (e.g., Section 4.3) that random restarts and the use of a momentum-based optimizer like Adam are not sufficient to achieve the best results in the absence of added gradient noise.
To understand how sensitive the methods are to poor initialization, in addition to the sub-optimal Simple Init, we run an experiment where all the weights in the neural network are initialized at zero. The results (Table 1, Experiment 5) show that if we do not add noise to the gradient, the networks fail to learn. If we add some noise, the networks can learn and reach 94.5% accuracy. | 1511.06807#12 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 12 | channels per grid cell. We let $z_0$ denote the input image. The final output, however, is generally not spatial, and so later layers are reduced to the form $z_N \in \mathbb{R}^{C_N \times 1 \times 1}$, where $C_N$ is the number of output units. The last of these outputs is converted into a loss with respect to some label; for classification, the approach is to convert the final output into a probability distribution over labels via a Softmax function. Learning aims to minimize the expected loss over the training dataset. Despite the non-convexity of this learning problem, backpropagation and Stochastic Gradient Descent often find good local minima if initialized properly (LeCun et al., 1998).
Given an arbitrary neural network, we next aim for a good parameterization. A good parameterization should be able to learn all weights of a network equally well. We measure how well a certain weight in the network learns by how much the gradient of a loss function would change it. A large change means it learns more quickly, while a small change implies it learns more slowly. We initialize our network such that all weights in all layers learn equally fast.
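A rough sketch of this measurement on a toy fully-connected ReLU network: backpropagate a random stand-in for the loss gradient and compare per-layer gradient norms against the corresponding weight norms (the network, shapes, and helper names are assumptions of this example, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [64, 32, 32, 16]
Ws = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def relative_gradient_norms(x):
    # Forward pass, keeping the input of every affine layer.
    acts = [x]
    for W in Ws[:-1]:
        acts.append(np.maximum(acts[-1] @ W, 0.0))
    out = acts[-1] @ Ws[-1]
    # Backward pass from a random top gradient (a stand-in for a real loss).
    y = rng.normal(size=out.shape)
    grads = [None] * len(Ws)
    for i in range(len(Ws) - 1, -1, -1):
        grads[i] = acts[i].T @ y                   # gradient w.r.t. W_i
        if i > 0:
            y = (y @ Ws[i].T) * (acts[i] > 0)      # backprop through the ReLU below
    # How fast each layer would change, relative to its current weight magnitude.
    return [np.linalg.norm(g) / np.linalg.norm(W) for g, W in zip(grads, Ws)]

print(relative_gradient_norms(rng.normal(size=(128, 64))))
```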
# 3 DATA-DEPENDENT INITIALIZATION | 1511.06856#12 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 12 | $r_t = \sigma(W_r x_t + U_r h_{t-1})$ (5)
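Putting the gate equations together, a minimal NumPy sketch of one GRU step (parameter shapes and the toy initialization are placeholders; the candidate activation uses the usual formulation with the reset gate applied to the previous hidden state):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    z = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev)           # update gate, Eq. (3)
    r = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev)           # reset gate, Eq. (5)
    h_cand = np.tanh(p["W"] @ x_t + p["U"] @ (r * h_prev))  # candidate activation
    return (1.0 - z) * h_prev + z * h_cand                  # interpolation of old and new state

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
p = {name: rng.normal(0.0, 0.1, size=(d_h, d_in if name.startswith("W") else d_h))
     for name in ["Wz", "Uz", "Wr", "Ur", "W", "U"]}
h = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):   # a toy sequence of five inputs
    h = gru_step(x_t, h, p)
print(h.shape)  # (16,)
```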
3.1 CUSTOMIZING THE GRU MODEL
We used the GRU-based RNN in our models for session-based recommendations. The input of the network is the actual state of the session while the output is the item of the next event in the session. The state of the session can either be the item of the actual event or the events in the session so far. In the former case 1-of-N encoding is used, i.e. the input vector's length equals to the number of items and only the coordinate corresponding to the active item is one, the others are zeros. The latter setting uses a weighted sum of these representations, in which events are discounted if they have occurred earlier. For the sake of stability, the input vector is then normalized. We expect this to help because it reinforces the memory effect: the reinforcement of very local ordering constraints which are not well captured by the longer memory of RNN. We also experimented with adding an additional embedding layer, but the 1-of-N encoding always performed better. | 1511.06939#12 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 13 | Experiment 1: Simple Init, No Gradient Clipping
No Noise: Best Test Accuracy 89.9%, Average Test Accuracy 43.1%; With Noise: Best 96.7%, Average 52.7%; No Noise + Dropout: Best 11.3%, Average 10.8%
Experiment 2: Simple Init, Gradient Clipping Threshold = 100
No Noise: Best 90.0%, Average 46.3%; With Noise: Best 96.7%, Average 52.3%
Experiment 3: Simple Init, Gradient Clipping Threshold = 10
No Noise: Best 95.7%, Average 51.6%; With Noise: Best 97.0%, Average 53.6%
Experiment 4: Good Init (Sussillo & Abbott, 2014) + Gradient Clipping Threshold = 10
No Noise; With Noise
Experiment 5: Good Init (He et al., 2015) + Gradient Clipping Threshold = 10
No Noise: Best 97.4%, Average 91.7%; With Noise: Best 97.2%, Average 91.7%
Zero Init
No Noise: Best 11.4%, Average 10.1%; With Noise: Best 94.5%, Average 49.7%
Table 1: Average and best test accuracy percentages on MNIST over 40 runs. Higher values are better.
In summary, these experiments show that if we are careful with initialization and gradient clipping values, it is possible to train a very deep fully-connected network without adding gradient noise. However, if the initialization is poor, optimization can be difficult, and adding noise to the gradient is a good mechanism to overcome the optimization difficulty. | 1511.06807#13 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 13 | # 3 DATA-DEPENDENT INITIALIZATION
Given an N-layer neural network with loss function $\ell(z_N)$, we first define $C^2_{i,j,k}$ to be the expected norm of the gradient with respect to weights $W_k(i, j)$ in layer $k$:
$$C^2_{i,j,k} = \mathbb{E}_{z_0 \sim D}\!\left[\left(\frac{\partial \ell(z_N)}{\partial W_k(i,j)}\right)^{\!2}\right] = \mathbb{E}_{z_0 \sim D}\!\left[\big(z_{k-1}(j)\, y_k(i)\big)^2\right] \qquad (1)$$
where $D$ is a set of input images and $y_k$ is the backpropagated error. Similar reasoning can be applied to the biases $b_k$, but where the activations are replaced by the constant 1. To not rely on any labels during initialization, we use a random linear loss function $\ell(z_N) = \eta^\top z_N$, where $\eta \sim N(0, I)$ is sampled from a unit Gaussian distribution. In other words, we initialize the top gradient to a random Gaussian noise vector $\eta$ during backpropagation. We sample a different random loss $\eta$ for each image.
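A small sketch of this random-loss trick for a single affine layer: draw a Gaussian η as the top gradient, so the weight gradient is the outer product of η and the layer input, and average its square over samples to estimate Eq. (1) (shapes and the stand-in activations are assumptions of the example, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_samples = 32, 16, 256
W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))

C_sq = np.zeros_like(W)
for _ in range(n_samples):
    z_prev = np.maximum(rng.normal(size=n_in), 0.0)  # stand-in for activations of the layer below
    eta = rng.normal(size=n_out)                     # random top gradient, eta ~ N(0, I)
    grad_W = np.outer(eta, z_prev)                   # dL/dW(i, j) = y_k(i) * z_{k-1}(j)
    C_sq += grad_W ** 2
C_sq /= n_samples                                    # Monte Carlo estimate of Eq. (1)
print(C_sq.shape, float(C_sq.mean()))
```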
In order for all parameters to learn at the same "rate," we require the change in Eq. (1) to be proportional to the magnitude of the weights $\|W_k\|_2^2$ of the current layer; i.e.,
$$\frac{C^2_{i,j,k}}{\|W_k\|_2^2} \qquad (2)$$ | 1511.06856#13 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 13 | The core of the network is the GRU layer(s) and additional feedforward layers can be added between the last layer and the output. The output is the predicted preference of the items, i.e. the likelihood of being the next in the session for each item. When multiple GRU layers are used, the hidden state of the previous layer is the input of the next one. The input can also be optionally connected
to GRU layers deeper in the network, as we found that this improves performance. See the whole architecture on Figure 1, which depicts the representation of a single event within a time series of events.
Figure 1: General architecture of the network. Processing of one event of the event stream at once.
Since recommender systems are not the primary application area of recurrent neural networks, we modified the base network to better suit the task. We also considered practical points so that our solution could be possibly applied in a live environment.
3.1.1 SESSION-PARALLEL MINI-BATCHES | 1511.06939#13 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 14 | The implication of this set of results is that added gradient noise can be an effective mechanism for training very complex networks. This is because it is more difficult to initialize the weights properly for complex networks. In the following, we explore the training of more complex networks such as End-To-End Memory Networks and Neural Programmer, whose initialization is less well studied.
4.2 END-TO-END MEMORY NETWORKS
We test added gradient noise for training End-To-End Memory Networks (Sukhbaatar et al., 2015), a new approach for Q&A using deep networks.1 Memory Networks have been demonstrated to perform well on a relatively challenging toy Q&A problem (Weston et al., 2015). | 1511.06807#14 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 14 | $$\frac{C^2_{i,j,k}}{\|W_k\|_2^2} \qquad (2)$$
is constant for all weights. However this is hard to enforce, because for non-linear networks the backpropagated error $y_k$ is a function of the activations $z_{k-1}$. A change in weights that affects the activations $z_{k-1}$ will indirectly change $y_k$. This effect is often non-linear and hard to control or predict.
We thus simplify Equation (2): rather than enforce that the individual weights all learn at the same rate, we enforce that the columns of weight matrix Wk do so, i.e.:
$$\bar{C}^2_{j,k} = \frac{1}{N}\sum_i C^2_{i,j,k} = \frac{1}{N}\,\mathbb{E}_{z_0 \sim D}\!\left[z_{k-1}(j)^2\, \|y_k\|_2^2\right] \qquad (3)$$ | 1511.06856#14 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 14 | 3.1.1 SESSION-PARALLEL MINI-BATCHES
RNNs for natural language processing tasks usually use in-sequence mini-batches. For example it is common to use a sliding window over the words of sentences and put these windowed fragments next to each other to form mini-batches. This does not fit our task, because (1) the length of sessions can be very different, even more so than that of sentences: some sessions consist of only 2 events, while others may range over a few hundreds; (2) our goal is to capture how a session evolves over time, so breaking down into fragments would make no sense. Therefore we use session-parallel mini-batches. First, we create an order for the sessions. Then, we use the first event of the first X sessions to form the input of the first mini-batch (the desired output is the second events of our active sessions). The second mini-batch is formed from the second events and so on. If any of the sessions end, the next available session is put in its place. Sessions are assumed to be independent, thus we reset the appropriate hidden state when this switch occurs. See Figure 2 for more details. | 1511.06939#14 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 15 | In Memory Networks, the model has access to a context, a question, and is asked to predict an answer. Internally, the model has an attention mechanism which focuses on the right clue to answer the question. In the original formulation (Weston et al., 2015), Memory Networks were provided with additional supervision as to what pieces of context were necessary to answer the question. This was replaced in the End-To-End formulation by a latent attention mechanism implemented by a softmax over contexts. As this greatly complicates the learning problem, the authors implement a two-stage training procedure: First train the networks with a linear attention, then use those weights to warmstart the model with softmax attention. | 1511.06807#15 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 15 | $$\bar{C}^2_{j,k} = \frac{1}{N}\sum_i C^2_{i,j,k} = \frac{1}{N}\,\mathbb{E}_{z_0 \sim D}\!\left[z_{k-1}(j)^2\, \|y_k\|_2^2\right] \qquad (3)$$
should be approximately constant, where $N$ is the number of rows of the weight matrix. As we will show in Section 4.1, all weights tend to train at roughly the same rate even though the objective does not enforce this. Looking at Equation (3), the relative change of a column of the weight matrix is a function of 1) the magnitude of a single activation of the bottom layer, and 2) the norm of the backpropagated gradient. The value of a single input to a layer will generally have a relatively small impact on the norm of the gradient to the entire layer. Hence, we assume $z_{k-1}(j)$ and $\|y_k\|$ are independent, leading to the following simplification of the objective:
$$\bar{C}^2_{j,k} \approx \mathbb{E}_{z_0 \sim D}\!\left[z_{k-1}(j)^2\right] \frac{\mathbb{E}_{z_0 \sim D}\!\left[\|y_k\|_2^2\right]}{N} \qquad (4)$$
This approximation conveniently decouples the change rate per column, which depends on $z_{k-1}(j)^2$, from the global change rate per layer, which depends on the gradient magnitude $\|y_k\|_2^2$, allowing us to correct them in two separate steps.
Algorithm 1 Within-layer initialization. | 1511.06856#15 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 15 | Figure 2: Session-parallel mini-batch creation
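A simplified sketch of this batching scheme (toy sessions and a small batch size; in the real model the hidden state of a slot would also be reset whenever its session is replaced, and the generator name here is an assumption of the example):

```python
def session_parallel_batches(sessions, batch_size):
    """Yield (inputs, targets, session_ended) for one time step at a time.
    `sessions` is a list of item-id lists, each with at least two events."""
    next_session = min(batch_size, len(sessions))
    active = list(range(next_session))       # which session each batch slot holds
    pos = [0] * len(active)                  # current position inside each session
    while active:
        inputs, targets, ended = [], [], []
        for slot, sess_idx in enumerate(active):
            sess = sessions[sess_idx]
            inputs.append(sess[pos[slot]])
            targets.append(sess[pos[slot] + 1])
            pos[slot] += 1
            ended.append(pos[slot] + 1 >= len(sess))
        yield inputs, targets, ended
        # Replace finished sessions with the next available one (reset hidden state here).
        for slot in range(len(active) - 1, -1, -1):
            if ended[slot]:
                if next_session < len(sessions):
                    active[slot], pos[slot] = next_session, 0
                    next_session += 1
                else:
                    del active[slot], pos[slot]

sessions = [[1, 2, 3], [4, 5], [6, 7, 8, 9], [10, 11]]
for batch in session_parallel_batches(sessions, batch_size=2):
    print(batch)
```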
3.1.2 SAMPLING ON THE OUTPUT
Recommender systems are especially useful when the number of items is large. Even for a medium-sized webshop this is in the range of tens of thousands, but on larger sites it is not rare to have
hundreds of thousands of items or even a few millions. Calculating a score for each item in each step would make the algorithm scale with the product of the number of items and the number of events. This would be unusable in practice. Therefore we have to sample the output and only compute the score for a small subset of the items. This also entails that only some of the weights will be updated. Besides the desired output, we need to compute scores for some negative examples and modify the weights so that the desired output is highly ranked. | 1511.06939#15 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 16 | In our experiments with Memory Networks, we use our standard noise schedule, using noise sampled from a Gaussian distribution with mean 0, and decaying variance according to Equation (1) with η = 1.0. This noise is added to the gradient after clipping. We also find for these experiments that a fixed standard deviation also works, but its value has to be tuned, and works best at 0.001. We set the number of training epochs to 200 because we would like to understand the behaviors of Memory Networks near convergence. The rest of the training is identical to the experimental setup proposed by the original authors. We test this approach with the published two-stage training approach, and additionally with a one-stage training approach where we train the networks with softmax attention and without warmstarting. Results are reported in Table 2. We find some fluctuations during each run of the training, but the reported results reflect the typical gains obtained by adding random noise.
Setting One-stage training No Noise 9.6% Training error: Validation error: 19.5% Validation error: 16.6% 5.9% Validation error: 10.9% Validation error: 10.8% With Noise 10.5% Training error: Two-stage training Training error: 6.2% Training error: | 1511.06807#16 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 16 | Algorithm 1 Within-layer initialization.
for each affine layer $k$ do
    Initialize weights from a zero-mean Gaussian $W_k \sim N(0, I)$ and biases $b_k = 0$
    Draw samples $z_0 \in \tilde{D} \subset D$ and pass them through the first $k$ layers of the network
    Compute the per-channel sample mean $\hat{\mu}_k(i)$ and variance $\hat{\sigma}_k(i)^2$ of $z_k(i)$
    Rescale the weights by $W_k(i, :) \leftarrow W_k(i, :) / \hat{\sigma}_k(i)$
    Set the bias $b_k(i) \leftarrow \beta - \hat{\mu}_k(i)/\hat{\sigma}_k(i)$ to center activations around $\beta$
end for
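A NumPy sketch of one iteration of this loop for a single fully-connected layer (the sample activations, shapes, and the target mean beta are placeholders of this example):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_samples, beta = 64, 32, 512, 0.0

W = rng.normal(size=(n_out, n_in))   # zero-mean Gaussian initialization
b = np.zeros(n_out)
z_prev = np.maximum(rng.normal(size=(n_samples, n_in)), 0.0)  # stand-in for layer k-1 activations

z = z_prev @ W.T + b                 # pass the samples through the layer
mu_hat = z.mean(axis=0)              # per-channel sample mean
sigma_hat = z.std(axis=0)            # per-channel sample standard deviation

W = W / sigma_hat[:, None]           # rescale each output channel to unit variance
b = beta - mu_hat / sigma_hat        # center activations around beta

z_check = z_prev @ W.T + b
print(z_check.mean(axis=0)[:3], z_check.std(axis=0)[:3])  # roughly beta and 1 per channel
```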
In Section 3.1, we show how to satisfy $\mathbb{E}_{z_0 \sim D}[z_{k-1}(i)^2] = c_k$ for a layer-wise constant $c_k$. In Section 3.2, we then adjust this layer-wise constant $c_k$ to ensure that all gradients are properly calibrated between layers, in a way that can be applied to pre-initialized networks. Finally, in Section 3.3 we present multiple data-driven weight initializations.
3.1 WITHIN-LAYER WEIGHT NORMALIZATION | 1511.06856#16 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 16 | The natural interpretation of an arbitrary missing event is that the user did not know about the existence of the item and thus there was no interaction. However there is a low probability that the user did know about the item and chose not to interact, because she disliked the item. The more popular the item, the more probable it is that the user knows about it, thus it is more likely that a missing event expresses dislike. Therefore we should sample items in proportion to their popularity. Instead of generating separate samples for each training example, we use the items from the other training examples of the mini-batch as negative examples. The benefit of this approach is that we can further reduce computational times by skipping the sampling. Additionally, there are also benefits on the implementation side, from making the code less complex to faster matrix operations. Meanwhile, this approach is also a popularity-based sampling, because the likelihood of an item being in the other training examples of the mini-batch is proportional to its popularity.
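A small sketch of this mini-batch-based negative sampling: scoring only the target items of the current batch makes every other example's target act as a negative sample, with a frequency roughly proportional to its popularity (the session vectors, item embeddings, and shapes are toy placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, n_items, d = 4, 100, 8
hidden = rng.normal(size=(batch_size, d))    # session representations from the network
item_emb = rng.normal(size=(n_items, d))     # output-side item representations
targets = np.array([7, 42, 42, 99])          # desired next item of each example

scores = hidden @ item_emb[targets].T        # (batch_size, batch_size) instead of (batch_size, n_items)
positives = np.diag(scores)                  # score of each example's own target
mask = ~np.eye(batch_size, dtype=bool)
negatives = scores[mask].reshape(batch_size, batch_size - 1)  # other examples' targets as negatives
print(positives.shape, negatives.shape)      # (4,) (4, 3)
```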
# 3.1.3 RANKING LOSS | 1511.06939#16 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 17 | Table 2: The effects of adding gradient noise to End-To-End Memory Networks. Lower values are better.
We find that warmstarting does indeed help the networks. In both cases, adding random noise to the gradient also helps the network both in terms of training errors and validation errors. Added noise, however, is especially helpful for the training of End-To-End Memory Networks without the warmstarting stage.
4.3 NEURAL PROGRAMMER
Neural Programmer is a neural network architecture augmented with a small set of built-in arithmetic and logic operations that learns to induce latent programs. It is proposed for the task of question answering from tables (Neelakantan et al., 2015). Examples of operations on a table include the sum of a set of numbers, or the list of numbers greater than a particular value. Key to Neural Programmer is the use of "soft selection" to assign a probability distribution over the list of operations. This probability distribution weighs the result of each operation, and the cost function compares this weighted result to the ground truth. This soft selection, inspired by the soft attention mechanism of Bahdanau et al. (2014), allows for full differentiability of the model. Running the model for several steps of selection allows the model to induce a complex program by chaining the operations, one after the other. Figure 1 shows the architecture of Neural Programmer at a high level. | 1511.06807#17 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 17 | 3.1 WITHIN-LAYER WEIGHT NORMALIZATION
We aim to ensure that each channel that layer $k+1$ receives has a similarly distributed input. It is straightforward to initialize weights in affine layers such that the units have outputs following similar distributions. E.g., we could enforce that layer $k$ activations $z_k(i, a, b)$ have $\mathbb{E}_{z_0 \sim D, a, b}[z_k(i, a, b)] = \beta$ and $\mathbb{E}_{z_0 \sim D, a, b}[(z_k(i, a, b) - \beta)^2] = 1$ simply via properly-scaled random projections, where $a$ and $b$ index over the 2D spatial extent of the feature map. However, we next have to contend with the nonlinearity $\sigma(\cdot)$. Thankfully, most nonlinearities (such as sigmoid or ReLU) operate independently on different channels. Hence, the different channels will undergo the same transformation, and the output channels will follow the same distribution if the input channels do (though the outputs will generally not be the same distribution as the inputs). In fact, most common CNN layers that apply a homogeneous operation to uniformly-sized windows of the input with regular stride, such as local response normalization and pooling, empirically preserve this identical distribution requirement as well, making it broadly applicable. | 1511.06856#17 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 17 | # 3.1.3 RANKING LOSS
The core of recommender systems is the relevance-based ranking of items. Although the task can also be interpreted as a classification task, learning-to-rank approaches (Rendle et al., 2009; Shi et al., 2012; Steck, 2015) generally outperform other approaches. Ranking can be pointwise, pairwise or listwise. Pointwise ranking estimates the score or the rank of items independently of each other and the loss is defined such that the rank of relevant items should be low. Pairwise ranking compares the score or the rank of pairs of a positive and a negative item and the loss enforces that the rank of the positive item should be lower than that of the negative one. Listwise ranking uses the scores and ranks of all items and compares them to the perfect ordering. As it includes sorting, it is usually computationally more expensive and thus not used often. Also, if there is only one relevant item, as in our case, listwise ranking can be solved via pairwise ranking.
We included several pointwise and pairwise ranking losses in our solution. We found that pointwise ranking was unstable with this network (see Section 4 for more comments). Pairwise ranking losses, on the other hand, performed well. We use the following two. | 1511.06939#17 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 18 | In a synthetic table comprehension task, Neural Programmer takes a question and a table (or database) as input and the goal is to predict the correct answer. To solve this task, the model has to induce a program and execute it on the table. A major challenge is that the supervision signal is
# 1Code available at: https://github.com/facebook/MemNN
[Figure 1 schematic: at each timestep t = 1, 2, ..., T, the controller performs a soft selection over the built-in arithmetic and logic operations and over the data/memory segments, and the applied operation's result feeds the output.]
Figure 1: Neural Programmer, a neural network with built-in arithmetic and logic operations. At every time step, the controller selects an operation and a data segment. Figure reproduced with permission from Neelakantan et al. (2015).
in the form of the correct answer and not the program itself. The model runs for a ï¬xed number of steps, and at each step selects a data segment and an operation to apply to the selected data segment. Soft selection is performed at training time so that the model is differentiable, while at test time hard selection is employed. Table 3 shows examples of programs induced by the model. | 1511.06807#18 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 18 | We normalize the network activations using empirical estimates of activation statistics obtained from actual data samples $z_0 \sim D$. In particular, for each affine layer $k \in \{1, 2, \ldots, N\}$ in a topological ordering of the network graph, we compute the empirical means and standard deviations of all outgoing activations and normalize the weights $W_k$ such that all activations have unit variance and mean $\beta$. This procedure is summarized in Algorithm 1. The variance of our estimate of the sample statistics falls with the size of the sample $|\tilde{D}|$. In practice, for CNN initialization, we find that on the order of just dozens of samples is typically sufficient.
Note that this simple empirical initialization strategy guarantees afï¬ne layer activations with a par- ticular center and scale while making no assumptions (beyond non-zero variance) about the inputs to the layer, making it robust to any exotic choice of non-linearity or other intermediate operation. This is in contrast with existing approaches designed for particular non-linearities and with archi- tectural constraints. Extending these methods to handle operations for which they werenât designed while maintaining the desired scaling properties may be possible, but it would at least require careful thought, while our simple empirical initialization strategy generalizes to any operations and DAG architecture with no additional implementation effort. | 1511.06856#18 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
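A minimal NumPy sketch of the within-layer, data-dependent normalization summarized in the 1511.06856 chunks above (the Algorithm 1 idea): propagate a small sample to the layer, measure the empirical per-channel mean and standard deviation of its outputs, and rescale the weights and biases so every output channel has unit variance and mean beta. This is an illustrative fully-connected version under assumed shapes, not the paper's released implementation.

```python
import numpy as np

def normalize_affine_layer(W, b, X, beta=0.0, eps=1e-8):
    """Rescale (W, b) so the layer's outputs on sample X have std 1 and mean beta.

    W: (out, in) weights, b: (out,) biases, X: (n_samples, in) activations
    that reach this layer (e.g. a few dozen data samples propagated this far).
    """
    Z = X @ W.T + b                      # empirical pre-activations, (n, out)
    mu = Z.mean(axis=0)                  # per-channel empirical mean
    sigma = Z.std(axis=0) + eps          # per-channel empirical std
    W_new = W / sigma[:, None]           # unit output variance per channel
    b_new = (b - mu) / sigma + beta      # shift every channel's mean to beta
    return W_new, b_new

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 100))
W, b = rng.normal(size=(32, 100)), np.zeros(32)
W, b = normalize_affine_layer(W, b, X, beta=0.5)
Z = X @ W.T + b
print(Z.mean(axis=0)[:3], Z.std(axis=0)[:3])   # approximately 0.5 and 1.0
```

Because only the sample statistics of the layer's own outputs are used, the same routine applies regardless of which nonlinearity or pooling follows, which is the robustness property the excerpt emphasizes.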
1511.06939 | 18 | • BPR: Bayesian Personalized Ranking (Rendle et al., 2009) is a matrix factorization method that uses pairwise ranking loss. It compares the score of a positive and a sampled negative item. Here we compare the score of the positive item with several sampled items and use their average as the loss. The loss at a given point in one session is defined as: $L_s = -\frac{1}{N_S} \sum_{j=1}^{N_S} \log\left(\sigma\left(\hat{r}_{s,i} - \hat{r}_{s,j}\right)\right)$, where $N_S$ is the sample size, $\hat{r}_{s,k}$ is the score on item $k$ at the given point of the session, $i$ is the desired item (next item in the session) and $j$ are the negative samples. | 1511.06939#18 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
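A small NumPy sketch of the BPR-style loss written out above, for one session position: the positive item's score is compared against each sampled negative score and the negative log-sigmoids are averaged. This is illustrative only, not the paper's batched GPU implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(pos_score, neg_scores):
    """L_s = -1/N_S * sum_j log(sigmoid(r_pos - r_neg_j))."""
    neg_scores = np.asarray(neg_scores, dtype=float)
    return -np.mean(np.log(sigmoid(pos_score - neg_scores)))

# Loss shrinks as the positive item outranks the sampled negatives.
print(bpr_loss(2.0, [0.1, -0.3, 1.5]))
```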
1511.06807 | 19 | Question: "greater 17.27 A and lesser -19.21 D count" (What are the number of elements whose field in column A is greater than 17.27 and field in Column D is lesser than -19.21.)
t   Selected Op   Selected Column
1   Greater       A
2   Lesser        D
3   And           -
4   Count         -
Table 3: Example program induced by the model using T = 4 time steps. We show the selected columns in cases in which the selected operation acts on a particular column.
Similar to the above experiments with Memory Networks, in our experiments with Neural Programmer, we add noise sampled from a Gaussian distribution with mean 0, and decaying variance according to Equation (1) with η = 1.0, to the gradient after clipping. The model is optimized with Adam (Kingma & Ba, 2014), which combines momentum and adaptive learning rates. | 1511.06807#19 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
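The recipe described above — clip the gradient, then add zero-mean Gaussian noise whose variance decays over training — can be sketched as follows. Equation (1) itself is not reproduced in this excerpt, so the schedule sigma_t^2 = eta / (1 + t)^gamma with gamma = 0.55 is an assumption used only for illustration.

```python
import numpy as np

def clip_then_noise(grad, step, eta=1.0, gamma=0.55, clip_norm=10.0, rng=None):
    """Clip the gradient by global norm, then add annealed Gaussian noise.

    Assumed schedule: sigma_t^2 = eta / (1 + step)**gamma (stand-in for Eq. (1)).
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)    # clip first, so the noise is not drowned out
    sigma = np.sqrt(eta / (1.0 + step) ** gamma)
    return grad + rng.normal(0.0, sigma, size=grad.shape)

g = np.array([3.0, -40.0, 7.0])
print(clip_then_noise(g, step=100))
```

Clipping before adding noise matters: as the excerpts note, very large gradients would otherwise wash out the perturbation entirely.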
1511.06856 | 19 | On the other hand, note that for architectures which are not purely feed-forward, the assumption of identically distributed afï¬ne layer inputs may not hold. GoogLeNet (Szegedy et al., 2015), for example, concatenates layers which are computed via different operations on the same input, and hence may not be identically distributed, before feeding the result into a convolution. Our method cannot guarantee identically distributed inputs for arbitrary DAG-structured networks, so it should be applied to non-feed-forward networks with care.
# 3.2 BETWEEN-LAYER SCALE ADJUSTMENT
Because the initialization given in Section 3.1 results in activations $z_k(i)$ with unit variance, the expected change rate $C^2_{k,i}$ of a column $i$ of the weight matrix $W_k$ is constant across all columns $i$,
Algorithm 2 Between-layer normalization. | 1511.06856#19 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 19 | • TOP1: This ranking loss was devised by us for this task. It is the regularized approximation of the relative rank of the relevant item. The relative rank of the relevant item is given by $\frac{1}{N_S} \sum_{j=1}^{N_S} I\{\hat{r}_{s,j} > \hat{r}_{s,i}\}$. We approximate $I\{\cdot\}$ with a sigmoid. Optimizing for this would modify parameters so that the score for $i$ would be high. However this is unstable as certain positive items also act as negative examples and thus scores tend to become increasingly higher. To avoid this, we want to force the scores of the negative examples to be around zero. This is a natural expectation towards the scores of negative items. Thus we added a regularization term to the loss. It is important that this term is in the same range as the relative rank and acts similarly to it. The final loss function is as follows: $L_s = \frac{1}{N_S} \sum_{j=1}^{N_S} \sigma\left(\hat{r}_{s,j} - \hat{r}_{s,i}\right) + \sigma\left(\hat{r}_{s,j}^2\right)$
# 4 EXPERIMENTS
We evaluate the proposed recursive neural network against popular baselines on two datasets.
Published as a conference paper at ICLR 2016 | 1511.06939#19 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
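A minimal NumPy version of the TOP1 loss defined above: the sigmoid-relaxed relative rank of the positive item plus the regularization term that pushes negative scores toward zero. Again an illustrative sketch rather than the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def top1_loss(pos_score, neg_scores):
    """L_s = 1/N_S * sum_j [ sigmoid(r_neg_j - r_pos) + sigmoid(r_neg_j**2) ]."""
    neg = np.asarray(neg_scores, dtype=float)
    return np.mean(sigmoid(neg - pos_score) + sigmoid(neg ** 2))

print(top1_loss(2.0, [0.1, -0.3, 1.5]))
```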
1511.06807 | 20 | For our ï¬rst experiment, we train Neural Programmer to answer questions involving a single column of numbers. We use 72 different hyper-parameter conï¬gurations with and without adding annealed random noise to the gradients. We also run each of these experiments for 3 different random ini- tializations of the model parameters and we ï¬nd that only 1/216 runs achieve 100% test accuracy without adding noise while 9/216 runs achieve 100% accuracy when random noise is added. The 9 successful runs consisted of models initialized with all the three different random seeds, demon- strating robustness to initialization. We ï¬nd that when using dropout (Srivastava et al., 2014) none of the 216 runs give 100% accuracy.
We consider a more difï¬cult question answering task where tables have up to ï¬ve columns contain- ing numbers. We also experiment on a task containing one column of numbers and another column of text entries. Table 4 shows the performance of adding noise vs. no noise on Neural Programmer.
Figure 2 shows an example of the effect of adding random noise to the gradients in our experiment with 5 columns. The differences between the two models are much more pronounced than in Table 4 because Table 4 shows the results after careful hyperparameter selection. | 1511.06807#20 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 20 | Algorithm 2 Between-layer normalization.
Draw samples $z_0 \in \tilde{D} \subset D$
repeat
    Compute the per-layer ratio $\tilde{C}_k = \mathbb{E}_i[C_{k,i}]$
    Compute the average ratio $\bar{C} = \left(\prod_k \tilde{C}_k\right)^{1/N}$ (geometric mean)
    Compute a scale correction $r_k = \left(\tilde{C}_k / \bar{C}\right)^{\alpha/2}$ with a damping factor $\alpha < 1$
    Correct the weights and biases of layer $k$: $b_k \leftarrow r_k b_k$, $W_k \leftarrow r_k W_k$
    Undo the scaling $r_k$ in the layer above
until convergence (roughly 10 iterations) | 1511.06856#20 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 20 | # 4 EXPERIMENTS
We evaluate the proposed recursive neural network against popular baselines on two datasets.
The first dataset is that of the RecSys Challenge 2015¹. This dataset contains click-streams of an e-commerce site that sometimes end in purchase events. We work with the training set of the challenge and keep only the click events. We filter out sessions of length 1. The network is trained on ∼ 6 months of data, containing 7,966,257 sessions of 31,637,239 clicks on 37,483 items. We use the sessions of the subsequent day for testing. Each session is assigned to either the training or the test set, we do not split the data mid-session. Because of the nature of collaborative filtering methods, we filter out clicks from the test set where the item clicked is not in the train set. Sessions of length one are also removed from the test set. After the preprocessing we are left with 15,324 sessions of 71,222 events for the test set. This dataset will be referred to as RSC15. | 1511.06939#20 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
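The preprocessing described above (keep click events, drop length-1 sessions, hold out the last day, and remove test clicks on items unseen in training) can be sketched with pandas roughly as follows. The column names, the datetime `Time` column, and assigning a session by the time of its last event are assumptions for illustration, not the challenge's exact schema.

```python
import pandas as pd

def split_sessions(df):
    """df has columns SessionId, ItemId, Time (one row per click event)."""
    # Drop sessions with a single event.
    df = df[df.groupby('SessionId')['ItemId'].transform('size') > 1]
    # Sessions whose last event falls on the final day go to the test set.
    session_end = df.groupby('SessionId')['Time'].transform('max')
    cutoff = df['Time'].max() - pd.Timedelta(days=1)
    train = df[session_end <= cutoff]
    test = df[session_end > cutoff]
    # Keep only test clicks on items that occur in the training set,
    # then drop test sessions that became shorter than two events.
    test = test[test['ItemId'].isin(train['ItemId'].unique())]
    test = test[test.groupby('SessionId')['ItemId'].transform('size') > 1]
    return train, test
```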
1511.06807 | 21 | In all cases, we see that added gradient noise improves performance of Neural Programmer. Its performance when combined with or used instead of dropout is mixed depending on the problem, but the positive results indicate that it is worth attempting on a case-by-case basis.
Setting              Five columns   Text entries
No Noise             97.4%          95.3%
With Noise           99.1%          97.6%
Dropout              98.7%          98.8%
Dropout With Noise   99.2%          97.3%
Table 4: The effects of adding random noise to the gradient on Neural Programmer. Higher values are better. Adding random noise to the gradient always helps the model. When the models are applied to these more complicated tasks than the single column experiment, using dropout and noise together seems to be beneï¬cial in one case while using only one of them achieves the best result in the other case. | 1511.06807#21 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 21 | under the approximation given in Equation (4). However, this does not provide any guarantee of the scaling of the change rates between layers. We use an iterative procedure to obtain roughly constant parameter change rates $C^2_{k,i}$ across all layers $k$ (as well as all columns $i$ within a layer), given previously-initialized weights. At each iteration we estimate the average change ratio ($\tilde{C}_{k,i,j}$) per layer. We also estimate a global change ratio, as the geometric mean of all layer-wise change ratios. The geometric mean ensures that the output remains unchanged in completely homogeneous networks. We then scale the parameters for each layer to be closer to this global change ratio. We simultaneously undo this scaling in the layer above, such that the function that the entire network computes is unchanged. This scaling can be undone by inserting an auxiliary scaling layer after each affine layer. However, for homogeneous non-linearities, such as ReLU, Pooling or LRN, this scaling can be undone in the next affine layer without the need of a special scaling layer. The between-layer scale adjustment procedure is summarized in Algorithm 2. Adjusting the scale of all layers simultaneously can lead to an oscillatory behavior. To prevent this we add a small damping factor α (usually α = 0.25). | 1511.06856#21 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
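A schematic NumPy rendering of the between-layer adjustment in Algorithm 2 above. It assumes per-layer change-rate estimates are already available through a `change_rates` callable (how they are measured follows Equation (4), which is outside this excerpt): each layer is scaled toward the geometric-mean change rate with damping alpha, and the scaling is undone in the layer above so the network's function is preserved for homogeneous nonlinearities.

```python
import numpy as np

def between_layer_adjust(weights, biases, change_rates, alpha=0.25, iters=10):
    """weights/biases: lists of per-layer arrays; change_rates(w, b) -> per-layer C_k."""
    for _ in range(iters):
        C = np.asarray(change_rates(weights, biases), dtype=float)
        C_bar = np.exp(np.mean(np.log(C)))            # geometric mean over layers
        r = (C / C_bar) ** (alpha / 2.0)              # damped per-layer correction
        for k in range(len(weights)):
            weights[k] *= r[k]
            biases[k] *= r[k]
            if k + 1 < len(weights):                  # undo the scaling in the layer
                weights[k + 1] /= r[k]                # above, keeping outputs unchanged
    return weights, biases
```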
1511.06939 | 21 | The second dataset is collected from a Youtube-like OTT video service platform. Events of watching a video for at least a certain amount of time were collected. Only certain regions were subject to this collection that lasted for somewhat shorter than 2 months. During this time item-to-item recommendations were provided after each video at the left side of the screen. These were provided by a selection of different algorithms and inï¬uenced the behavior of the users. Preprocessing steps are similar to that of the other dataset with the addition of ï¬ltering out very long sessions as they were probably generated by bots. The training data consists of all but the last day of the aforementioned period and has â¼ 3 million sessions of â¼ 13 million watch events on 330 thousand videos. The test set contains the sessions of the last day of the collection period and has â¼ 37 thousand sessions with â¼ 180 thousand watch events. This dataset will be referred to as VIDEO. | 1511.06939#21 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
1511.06807 | 22 | [Figure 2 plots: "Train Loss: Noise Vs. No Noise" and "Test Accuracy: Noise Vs. No Noise", comparing runs with and without gradient noise over the number of epochs.]
Figure 2: Noise vs. No Noise in our experiment with tables containing 5 columns. The models trained with noise almost always generalize better.
4.4 NEURAL RANDOM ACCESS MACHINES | 1511.06807#22 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06939 | 22 | The evaluation is done by providing the events of a session one-by-one and checking the rank of the item of the next event. The hidden state of the GRU is reset to zero after a session ï¬nishes. Items are ordered in descending order by their score and their position in this list is their rank. With RSC15, all of the 37,483 items of the train set were ranked. However, this would have been impractical with VIDEO, due to the large number of items. There we ranked the desired item against the most popular 30,000 items. This has negligible effect on the evaluations as rarely visited items often get low scores. Also, popularity based pre-ï¬ltering is common in practical recommender systems. | 1511.06939#22 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
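The evaluation protocol above — feed each session's events one by one, score the candidate items, record the rank of the actual next item, and reset the hidden state between sessions — can be sketched as follows. `model.init_state()` and `model.score_next(item, state)` are a hypothetical interface standing in for the trained GRU, not an API from the paper.

```python
import numpy as np

def evaluate_sessions(model, sessions):
    """sessions: list of item-index lists. Returns the rank of each next item.

    Assumed interface: model.init_state() -> state;
    model.score_next(item, state) -> (scores over all candidate items, new state).
    """
    ranks = []
    for session in sessions:
        state = model.init_state()                 # reset hidden state per session
        for current, target in zip(session[:-1], session[1:]):
            scores, state = model.score_next(current, state)
            # Rank = 1 + number of items scored higher than the target item.
            ranks.append(1 + int(np.sum(scores > scores[target])))
    return ranks
```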
1511.06807 | 23 | 4.4 NEURAL RANDOM ACCESS MACHINES
We now conduct experiments with Neural Random-Access Machines (NRAM) (Kurach et al., 2015). NRAM is a model for algorithm learning that can store data, and explicitly manipulate and dereference pointers. NRAM consists of a neural network controller, memory, registers and a set of built-in operations. This is similar to the Neural Programmer in that it uses a controller network to compose built-in operations, but it both reads from and writes to an external memory. An operation can either read (a subset of) contents from the memory, write content to the memory or perform an arithmetic operation on either input registers or outputs from other operations. The controller runs for a fixed number of time steps. At every step, the model selects both the operation to be executed and its inputs. These selections are made using soft attention (Bahdanau et al., 2014), making the model end-to-end differentiable. NRAM uses an LSTM (Hochreiter & Schmidhuber, 1997) controller. Figure 3 gives an overview of the model. | 1511.06807#23 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
1511.06856 | 23 | 3.3 WEIGHT INITIALIZATIONS
Until now, we used a random Gaussian initialization of the weights, but our procedure does not require this. Hence, we explored two data-driven initializations: a PCA-based initialization and a k-means based initialization. For the PCA-based initialization, we set the weights such that the layer outputs are white and decorrelated. For each layer k we record the features activations zkâ1 of each channel c across all spatial locations for all images in D. Then then use the ï¬rst M principal components of those activations as our weight matrix Wk. For the k-means based initialization, we follow Coates & Ng (2012) and apply spherical k-means on whitened feature activations. We use the cluster centers of k-means as initial weights for our layers, such that each output unit corresponds to one centroid of k-means. k-means usually does a better job than PCA, as it captures the modes of the input data, instead of merely decorrelating it. We use both k-means and PCA on just the convo- lutional layers of the architecture, as we donât have enough data to estimate the required number of weights for fully connected layers. | 1511.06856#23 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
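A rough NumPy sketch of the PCA-based option described above: collect the activations feeding the layer, project onto the leading principal components, and use those components as the initial weight matrix so the layer's outputs start out decorrelated (the k-means variant would replace the components with spherical k-means centroids on whitened activations). The exact patch extraction and whitening details of the paper are not reproduced here, and the within-layer normalization sketched earlier would still rescale the result.

```python
import numpy as np

def pca_init(activations, num_filters):
    """activations: (n_samples, in_dim) inputs to the layer being initialized.

    Returns a (num_filters, in_dim) weight matrix whose rows are the leading
    principal components of the recorded activations.
    """
    X = activations - activations.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :num_filters]         # leading components
    return top.T

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 75))    # e.g. flattened 5x5x3 input patches
W = pca_init(acts, num_filters=32)
print(W.shape)                       # (32, 75)
```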
1511.06939 | 23 | As recommender systems can only recommend a few items at once, the actual item a user might pick should be amongst the ï¬rst few items of the list. Therefore, our primary evaluation metric is recall@20 that is the proportion of cases having the desired item amongst the top-20 items in all test cases. Recall does not consider the actual rank of the item as long as it is amongst the top-N. This models certain practical scenarios well where there is no highlighting of recommendations and the absolute order does not matter. Recall also usually correlates well with important online KPIs, such as click-through rate (CTR)(Liu et al., 2012; Hidasi & Tikk, 2012). The second metric used in the experiments is MRR@20 (Mean Reciprocal Rank). That is the average of reciprocal ranks of the desired items. The reciprocal rank is set to zero if the rank is above 20. MRR takes into account the rank of the item, which is important in cases where the order of recommendations matter (e.g. the lower ranked items are only visible after scrolling).
4.1 BASELINES
We compare the proposed network to a set of commonly used baselines.
⢠POP: Popularity predictor that always recommends the most popular items of the training set. Despite its simplicity it is often a strong baseline in certain domains. | 1511.06939#23 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
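Given the per-event ranks of the desired items (e.g. from the evaluation loop sketched earlier), recall@20 and MRR@20 as defined above reduce to a few lines:

```python
import numpy as np

def recall_at_k(ranks, k=20):
    ranks = np.asarray(ranks)
    return float(np.mean(ranks <= k))

def mrr_at_k(ranks, k=20):
    ranks = np.asarray(ranks, dtype=float)
    rr = np.where(ranks <= k, 1.0 / ranks, 0.0)   # reciprocal rank, 0 beyond top-k
    return float(np.mean(rr))

ranks = [1, 3, 25, 7, 120]
print(recall_at_k(ranks), mrr_at_k(ranks))        # 0.6 and (1 + 1/3 + 1/7) / 5
```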
1511.06807 | 24 | For our experiment, we consider a problem of searching k-th elementâs value on a linked list. The network is given a pointer to the head of the linked list, and has to ï¬nd the value of the k-th element. Note that this is highly nontrivial because pointers and their values are stored at random locations in memory, so the model must learn to traverse a complex graph for k steps.
Because of this complexity, training the NRAM architecture can be unstable, especially when the number of steps and operations is large. We once again experiment with the decaying noise schedule from Equation (1), setting η = 0.3. We run a large grid search over the model hyperparameters (detailed in Kurach et al. (2015)), and use the top 3 for our experiments. For each of these 3 settings, we try 100 different random initializations and look at the percentage of runs that give 100% accuracy, for training both with and without noise.
As in our experiments with Neural Programmer, we find that gradient clipping is crucial when training with noise. This is likely because the effect of random noise is washed away when gradients become too large. For models trained with noise we observed much higher rates of successful runs, which are presented in Table 5. Although it is possible to train the model to achieve 100% accuracy without
# Under review as a conference paper at ICLR 2016 | 1511.06807#24 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
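To make the pointer-chasing task above concrete, here is a small data-generation sketch: list nodes are scattered across random memory cells, each cell holding a (value, next-pointer) pair, and the target is obtained by following the head pointer k times. This is purely illustrative of the task setup, not the NRAM memory encoding.

```python
import numpy as np

def make_linked_list_task(n_nodes, mem_size, k, rng):
    """Return (memory, head, target): follow `next` pointers k times from head."""
    cells = rng.choice(mem_size, size=n_nodes, replace=False)  # random locations
    values = rng.integers(0, 100, size=n_nodes)
    memory = np.zeros((mem_size, 2), dtype=int)                # [value, next_ptr]
    for i, cell in enumerate(cells):
        nxt = cells[i + 1] if i + 1 < n_nodes else cell        # last node self-links
        memory[cell] = (values[i], nxt)
    head = cells[0]
    ptr = head
    for _ in range(k):
        ptr = memory[ptr, 1]                                   # dereference `next`
    return memory, head, memory[ptr, 0]

rng = np.random.default_rng(0)
mem, head, target = make_linked_list_task(n_nodes=6, mem_size=16, k=3, rng=rng)
print(head, target)
```

Because the nodes sit at arbitrary addresses, the model cannot shortcut the task with positional patterns; it has to learn the k-step dereferencing itself.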
1511.06856 | 24 | In summary, we initialize the weights of all filters (§ 3.3), then normalize those weights such that all activations are equally distributed (§ 3.1), and finally rescale each layer such that the gradient ratio is constant across layers (§ 3.2). This initialization ensures that all weights learn at approximately the same rate, leading to better convergence and more accurate models, as we will show next.
# 4 EVALUATION
We implement our initialization and all experiments in the open-source deep learning framework Caffe (Jia et al., 2014). To assess how easily a network can be ï¬ne-tuned with limited data, we use the classiï¬cation and detection challenges in PASCAL VOC 2007 (Everingham et al., 2014), which contains 5011 images for training and 4952 for testing.
| 1511.06856#24 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 24 | • POP: Popularity predictor that always recommends the most popular items of the training set. Despite its simplicity it is often a strong baseline in certain domains.
S-POP: This baseline recommends the most popular items of the current session. The recommendation list changes during the session as items gain more events. Ties are broken using global popularity values. This baseline is strong in domains with high repetitiveness.
• Item-KNN: Items similar to the actual item are recommended by this baseline, and similarity is defined as the cosine similarity between the vector of their sessions, i.e. it is the number of co-occurrences of two items in sessions divided by the square root of the product of the numbers of sessions in which the individual items occur. Regularization is also included to avoid coincidental high similarities of rarely visited items. This baseline is one of the most common item-to-item solutions in practical systems, providing recommendations in the "others who viewed this item also viewed these ones" setting. Despite its simplicity it is usually a strong baseline (Linden et al., 2003; Davidson et al., 2010).
# 1http://2015.recsyschallenge.com/
Table 1: Recall@20 and MRR@20 using the baseline methods | 1511.06939#24 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
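The regularized item-to-item similarity used by the Item-KNN baseline above can be sketched directly from session/item co-occurrence counts. The additive constant `lam` is an assumed form of the "regularization" the text mentions, not a value from the paper.

```python
import numpy as np
from collections import defaultdict

def item_knn_similarities(sessions, lam=20.0):
    """sessions: iterable of item-id lists. Returns {(i, j): similarity} for i < j."""
    item_count = defaultdict(int)          # number of sessions containing each item
    co_count = defaultdict(int)            # number of sessions containing both i and j
    for session in sessions:
        items = set(session)
        for i in items:
            item_count[i] += 1
        for i in items:
            for j in items:
                if i < j:
                    co_count[(i, j)] += 1
    sims = {}
    for (i, j), c in co_count.items():
        # Cosine-style similarity, damped against coincidental matches on rare items.
        sims[(i, j)] = c / (np.sqrt(item_count[i] * item_count[j]) + lam)
    return sims

print(item_knn_similarities([[1, 2, 3], [1, 2], [2, 3]]))
```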
1511.06807 | 25 | [Figure 3 schematic: a binarized LSTM controller with registers r1–r4, example modules m1–m3, and a memory tape.]
Figure 3: One timestep of the NRAM architecture with R = 4 registers and a memory tape. m1, m2 and m3 are example operations built into the model. The operations can read from and write to memory. At every time step, the LSTM controller softly selects the operation and its inputs. Figure reproduced with permission from Kurach et al. (2015).
noise, it is less robust across multiple random restarts, with over 10x as many initializations leading to a correct answer when using noise.
Setting      Hyperparameter-1   Hyperparameter-2   Hyperparameter-3   Average
No Noise     1%                 0%                 3%                 1.3%
With Noise   5%                 22%                7%                 11.3%
Table 5: Percentage of successful runs on k-th element task. Higher values are better. All tests were performed with the same set of 100 random initializations (seeds).
4.5 CONVOLUTIONAL GATED RECURRENT NETWORKS (NEURAL GPUS) | 1511.06807#25 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
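A minimal sketch of the soft operation selection described in the Figure 3 caption above (1511.06807#25). This is illustrative code, not from the paper: the register values, the three toy operations, and the fixed controller logits are assumptions standing in for the LSTM controller's outputs.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array of logits
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def soft_select(logits, candidates):
    # differentiable "choice": blend the candidates with softmax weights
    p = softmax(logits)
    return sum(w * c for w, c in zip(p, candidates))

# toy state: 4 register values and 3 built-in operations (m1, m2, m3)
registers = [np.float64(2.0), np.float64(3.0), np.float64(5.0), np.float64(7.0)]
operations = [np.add, np.multiply, np.maximum]

# in NRAM these logits come from the LSTM controller at each timestep;
# here they are fixed numbers purely for illustration
logits_arg_a = np.array([0.1, 2.0, 0.0, -1.0])
logits_arg_b = np.array([0.0, 0.0, 3.0, 0.0])
logits_op    = np.array([0.5, 2.5, 0.0])

a = soft_select(logits_arg_a, registers)
b = soft_select(logits_arg_b, registers)
output = soft_select(logits_op, [op(a, b) for op in operations])
print("softly selected output:", float(output))
```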
1511.06856 | 25 | [Figure 1 graphic: (a) average change rate (log scale) and (b) coefficient of variation, plotted per layer (conv1-conv5, fc6-fc8) for the Gaussian, Gaussian (caffe), Gaussian (ours), K-Means, K-Means (ours), and ImageNet initializations]
Figure 1: Visualization of the relative change rate Ĉ_k,i,j in CaffeNet for various initializations, estimated on 100 images. (a) shows the average change rate per layer; a flat curve is better, as all layers learn at the same rate. (b) shows the coefficient of variation of the change rate within each layer; lower is better, as weights within a layer train more uniformly. (A minimal sketch of these per-layer statistics follows this record.) | 1511.06856#25 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
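A minimal sketch of the per-layer statistics reported in Figure 1 above (1511.06856#25). The exact definition of the change rate Ĉ_k,i,j is given in the paper, not in this chunk, so the code below uses an assumed proxy, the element-wise relative weight change |dW|/|W| after one SGD step, and summarizes it per layer by its mean and its coefficient of variation (std/mean).

```python
import numpy as np

def relative_change(w_before, w_after, eps=1e-8):
    # assumed proxy for the per-weight change rate: |dW| / |W|
    return np.abs(w_after - w_before) / (np.abs(w_before) + eps)

def layer_stats(w_before, w_after):
    c = relative_change(w_before, w_after)
    mean = c.mean()
    cv = c.std() / (mean + 1e-8)  # coefficient of variation: std / mean
    return mean, cv

rng = np.random.default_rng(0)
layers = {"conv1": (96, 3 * 11 * 11), "fc8": (1000, 4096)}
for name, shape in layers.items():
    w = rng.normal(0.0, 0.01, size=shape)     # initial weights
    grad = rng.normal(0.0, 1.0, size=shape)   # stand-in gradient
    w_new = w - 1e-3 * grad                   # one SGD update
    mean, cv = layer_stats(w, w_new)
    print(f"{name}: average change rate {mean:.2f}, coefficient of variation {cv:.2f}")
```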
1511.06939 | 25 | # 1http://2015.recsyschallenge.com/
Table 1: Recall@20 and MRR@20 using the baseline methods (a short sketch of how these metrics are computed follows this record)
Baseline    RSC15 Recall@20   RSC15 MRR@20   VIDEO Recall@20   VIDEO MRR@20
POP         0.0050            0.0012         0.0499            0.0117
S-POP       0.2672            0.1775         0.1301            0.0863
Item-KNN    0.5065            0.2048         0.5508            0.3381
BPR-MF      0.2574            0.0618         0.0692            0.0374
# Table 2: Best parametrizations for datasets/loss functions
Dataset   Loss            Mini-batch   Dropout   Learning rate   Momentum
RSC15     TOP1            50           0.5       0.01            0
RSC15     BPR             50           0.2       0.05            0.2
RSC15     Cross-entropy   500          0         0.01            0
VIDEO     TOP1            50           0.4       0.05            0
VIDEO     BPR             50           0.3       0.1             0
VIDEO     Cross-entropy   200          0.1       0.05            0.3 | 1511.06939#25 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
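A short sketch of the evaluation metrics used in Table 1 above (1511.06939#25), Recall@20 and MRR@20, computed from the 1-based rank of the true next item in each test event; the example ranks below are made up for illustration.

```python
import numpy as np

def recall_at_k(ranks, k=20):
    # fraction of test events whose target item appears in the top-k list
    ranks = np.asarray(ranks)
    return float(np.mean(ranks <= k))

def mrr_at_k(ranks, k=20):
    # mean reciprocal rank, with the reciprocal set to 0 when the rank exceeds k
    ranks = np.asarray(ranks, dtype=float)
    return float(np.mean(np.where(ranks <= k, 1.0 / ranks, 0.0)))

ranks = [1, 3, 25, 7, 2, 100]   # rank of the actual next item in six test events
print("Recall@20:", recall_at_k(ranks))   # 4 of the 6 targets are in the top 20
print("MRR@20:", round(mrr_at_k(ranks), 4))
```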
1511.06807 | 26 | 4.5 CONVOLUTIONAL GATED RECURRENT NETWORKS (NEURAL GPUS)
Convolutional Gated Recurrent Networks (CGRN) or Neural GPUs (Kaiser & Sutskever, 2015) are a recently proposed model that is capable of learning arbitrary algorithms. CGRNs use a stack of convolution layers, unfolded with tied parameters like a recurrent network. The input data (usually a list of symbols) is first converted to a three-dimensional tensor representation containing a sequence of embedded symbols in the first two dimensions, and zeros padding the next dimension. Then, multiple layers of modified convolution kernels are applied at each step. The modified kernel is a combination of convolution and Gated Recurrent Units (GRU) (Cho et al., 2014). The use of convolution kernels allows computation to be applied in parallel across the input data, while the gating mechanism helps the gradient flow. The additional dimension of the tensor serves as a working memory while the repeated operations are applied at each layer. The output at the final layer is the predicted answer. (A minimal sketch of one such convolutional GRU step follows this record.) | 1511.06807#26 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
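A minimal 1-D illustration of the gated convolution described in the chunk above (1511.06807#26): each step combines a convolution with GRU-style update and reset gates and is repeated with tied kernels. The single channel, the 1-D state, the "same" padding, and the random kernels are simplifying assumptions, not the paper's exact parametrization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_gru_step(s, k_update, k_reset, k_cand):
    # one gated-convolution step on a 1-D, single-channel state s
    u = sigmoid(np.convolve(s, k_update, mode="same"))    # update gate
    r = sigmoid(np.convolve(s, k_reset, mode="same"))     # reset gate
    c = np.tanh(np.convolve(r * s, k_cand, mode="same"))  # candidate state
    return u * s + (1.0 - u) * c                          # gated state update

rng = np.random.default_rng(0)
state = rng.normal(size=16)                    # embedded input symbols as initial state
kernels = [rng.normal(scale=0.5, size=3) for _ in range(3)]

# the same (tied) kernels are applied for several steps, like an unrolled RNN
for _ in range(8):
    state = conv_gru_step(state, *kernels)
print("final state:", np.round(state, 3))
```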
Architectures Most of our experiments are performed on the 8-layer CaffeNet architecture, a small modification of AlexNet (Krizhevsky et al., 2012). We use the default architecture for all comparisons, except for Doersch et al. (2015), which removed groups in the convolutional layers. We also show results on the much deeper GoogLeNet (Szegedy et al., 2015) and VGG (Simonyan & Zisserman, 2015) architectures. | 1511.06856#26 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
1511.06939 | 26 | ⢠BPR-MF: BPR-MF (Rendle et al., 2009) is one of the commonly used matrix factorization methods. It optimizes for a pairwise ranking objective function (see Section 3) via SGD. Matrix factorization cannot be applied directly to session-based recommendations, because the new sessions do not have feature vectors precomputed. However we can overcome this by using the average of item feature vectors of the items that had occurred in the session so far as the user feature vector. In other words we average the similarities of the feature vectors between a recommendable item and the items of the session so far.
Table 1 shows the results for the baselines. The item-KNN approach clearly dominates the other methods.
4.2 PARAMETER & STRUCTURE OPTIMIZATION
We optimized the hyperparameters by running 100 experiments at randomly selected points of the parameter space for each dataset and loss function. The best parametrization was further tuned by individually optimizing each parameter. The number of hidden units was set to 100 in all cases. The best performing parameters were then used with hidden layers of different sizes. The optimization was done on a separate validation set. Then the networks were retrained on the training plus the validation set and evaluated on the final test set. | 1511.06939#26 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
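A minimal sketch of the session-based BPR-MF scoring rule described in the chunk above (1511.06939#26): the average of the factors of the items seen so far stands in for the user vector. The factor matrix here is random; in practice it would come from a trained BPR-MF model.

```python
import numpy as np

def session_scores(item_factors, session_items):
    # average the factors of the items seen so far and use it as the "user" vector
    session_vec = item_factors[session_items].mean(axis=0)
    return item_factors @ session_vec           # one score per recommendable item

rng = np.random.default_rng(0)
n_items, n_factors = 1000, 32
item_factors = rng.normal(scale=0.1, size=(n_items, n_factors))  # stand-in for trained factors

session = [42, 7, 256]                          # items clicked so far in the session
scores = session_scores(item_factors, session)
top20 = np.argsort(-scores)[:20]                # recommend the 20 highest-scoring items
print("top-5 of the recommended items:", top20[:5])
```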
1511.06807 | 27 | The key difference between Neural GPUs and other architectures for algorithmic tasks (e.g., Neural Turing Machines (Graves et al., 2014)) is that instead of using sequential data access, convolution kernels are applied in parallel across the input, enabling the use of very deep and wide models. The model is referred to as Neural GPUs because the input data is accessed in parallel. Neural GPUs were shown to outperform previous sequential architectures for algorithm learning on tasks such as binary addition and multiplication, by being able to generalize from much shorter to longer data cases.
In our experiments, we use Neural GPUs for the task of binary multiplication. The input consists of two concatenated sequences of binary digits separated by an operator token, and the goal is to multiply
the given numbers. During training, the model is trained on 20-digit binary numbers, while at test time the task is to multiply 200-digit numbers. Once again, we add noise sampled from a Gaussian distribution with mean 0 and decaying variance, according to the schedule in Equation (1) with η = 1.0, to the gradient after clipping. The model is optimized using Adam (Kingma & Ba, 2014). (A minimal sketch of this gradient-noise schedule follows this record.) | 1511.06807#27 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
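A minimal sketch of the training step described in the chunk above (1511.06807#27): zero-mean Gaussian noise with decaying variance is added to the gradient after clipping. Equation (1) is not reproduced in this chunk, so the schedule sigma_t^2 = eta / (1 + t)**gamma with gamma = 0.55 is taken as an assumption from the paper's earlier sections; the clipping threshold, the toy objective, and the plain SGD update are also illustrative choices (the paper optimizes this task with Adam).

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_by_norm(grad, max_norm=10.0):
    # rescale the gradient if its L2 norm exceeds max_norm
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

def add_gradient_noise(grad, step, eta=1.0, gamma=0.55):
    # zero-mean Gaussian noise with variance eta / (1 + step)**gamma
    sigma = np.sqrt(eta / (1.0 + step) ** gamma)
    return grad + rng.normal(0.0, sigma, size=grad.shape)

theta = np.zeros(5)                       # toy parameter vector
for t in range(100):
    grad = 2.0 * (theta - 1.0)            # gradient of the toy loss ||theta - 1||^2
    noisy_grad = add_gradient_noise(clip_by_norm(grad), t, eta=1.0)
    theta -= 0.05 * noisy_grad            # plain SGD step for illustration
print("theta after 100 noisy steps:", np.round(theta, 2))
```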
Image classification The VOC image classification task is to predict the presence or absence of each of 20 object classes in an image. For this task we fine-tune all networks using a sigmoid cross-entropy loss on random crops of each image. We optimize each network via Stochastic Gradient Descent (SGD) for 80,000 iterations with an initial learning rate of 0.001 (dropped by 0.5 every 10,000 iterations), batch size of 10, and momentum of 0.9. The total training takes one hour on a Titan X GPU for CaffeNet. We tried different settings for various methods, but found these settings to work best for all initializations. At test time we average 10 random crops of the image to determine the presence or absence of an object. The CNN estimates the likelihood that each object is present, which we use as a score to compute a precision-recall curve per class. We evaluate all algorithms using mean average precision (mAP) (Everingham et al., 2014). (A minimal sketch of this loss and the test-time crop averaging follows this record.) | 1511.06856#27 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
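A minimal numpy sketch of two pieces of the protocol in the chunk above (1511.06856#27): the sigmoid cross-entropy loss over 20 independent presence/absence labels, and the test-time averaging of class scores over 10 random crops. The network is replaced by random stand-in logits.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_cross_entropy(logits, labels, eps=1e-7):
    # mean binary cross-entropy over independent class labels
    p = sigmoid(logits)
    return float(-np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps)))

rng = np.random.default_rng(0)
n_classes = 20

# training side: logits for one crop and the image's multi-hot label vector
logits = rng.normal(size=n_classes)
labels = (rng.random(n_classes) < 0.15).astype(float)
print("loss on one crop:", round(sigmoid_cross_entropy(logits, labels), 4))

# test side: average class probabilities over 10 random crops of the same image
crop_logits = rng.normal(size=(10, n_classes))    # stand-in network outputs
class_scores = sigmoid(crop_logits).mean(axis=0)  # scores fed into the per-class PR curves
print("averaged class scores (first 5):", np.round(class_scores[:5], 3))
```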
The best performing parametrizations are summarized in Table 2. Weight matrices were initialized by random numbers drawn uniformly from [−x, x], where x depends on the number of rows and columns of the matrix. We experimented with both rmsprop (Dauphin et al., 2015) and adagrad (Duchi et al., 2011). We found adagrad to give better results.
We briefly experimented with units other than the GRU. We found both the classic RNN unit and the LSTM to perform worse.
We tried out several loss functions. Pointwise ranking-based losses, such as cross-entropy and MRR optimization (as in Steck (2015)), were usually unstable, even with regularization. For example, cross-entropy yielded only 10 and 6 numerically stable networks out of the 100 random runs for RSC15 and VIDEO, respectively. We assume that this is because these losses independently try to achieve high scores for the desired items while exerting only a small negative push on the negative samples. On the other hand, pairwise ranking-based losses performed well. We found the ones introduced in Section 3 (BPR and TOP1) to perform the best (minimal sketches of these two losses follow this record). | 1511.06939#27 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
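Minimal sketches of the two pairwise ranking losses named in the chunk above (1511.06939#27). Section 3, where BPR and TOP1 are defined, is not part of this chunk, so the formulas below are the commonly cited forms for one positive score and a set of sampled negative scores and should be treated as assumptions to check against the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(pos_score, neg_scores, eps=1e-12):
    # pairwise BPR: -mean(log sigmoid(r_pos - r_neg)) over the sampled negatives
    return float(-np.mean(np.log(sigmoid(pos_score - neg_scores) + eps)))

def top1_loss(pos_score, neg_scores):
    # TOP1: mean(sigmoid(r_neg - r_pos) + sigmoid(r_neg**2));
    # the second term pushes negative scores toward zero (assumed form)
    return float(np.mean(sigmoid(neg_scores - pos_score) + sigmoid(neg_scores ** 2)))

rng = np.random.default_rng(0)
pos = 2.0                      # score of the actual next item
negs = rng.normal(size=50)     # scores of 50 sampled negative items
print("BPR loss :", round(bpr_loss(pos, negs), 4))
print("TOP1 loss:", round(top1_loss(pos, negs), 4))
```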
Table 6 gives the results of a large-scale experiment using Neural GPUs over 7290 experimental runs. The experiment shows that models trained with added gradient noise are more robust across many random initializations and parameter settings. Adding gradient noise not only yields the best performance, with more than twice as many models reaching < 1% error as without noise, but also helps throughout, improving the robustness of training so that more models reach lower error rates as well. This experiment shows that the simple technique of added gradient noise is effective even in regimes where we can afford a very large number of random restarts.
Setting      Error < 1%   Error < 2%   Error < 3%   Error < 5%
No Noise     28           90           172          387
With Noise   58           159          282          570
Table 6: Number of successful runs on 7290 random trials. Higher values are better. The models are trained on length 20 and tested on length 200.
# 5 CONCLUSION
In this paper, we discussed a set of experiments that show the effectiveness of adding noise to the gradient. We found that injecting this noise during training helps both optimization and generalization of complicated neural networks. We suspect that the effects are pronounced for complex models because they have many local minima. | 1511.06807#28 | Adding Gradient Noise Improves Learning for Very Deep Networks | Deep feedforward and recurrent networks have achieved impressive results in
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |
Object detection In addition to predicting the presence or absence of an object in a scene, object detection requires the precise localization of each object using a bounding box. We again evaluate mean average precision (Everingham et al., 2014). We fine-tune all our models using Fast R-CNN (Girshick, 2015). For a fair comparison we varied the parameters of the fine-tuning for each of the different initializations. We tried three different learning rates (0.01, 0.002 and 0.001) dropped by 0.1 every 50,000 iterations, with a total of 150,000 training iterations. We used multi-scale training and fine-tuned all layers. We evaluate all models at a single scale. All other settings were kept at their default values. Training and evaluation took roughly 8 hours on a Titan X GPU for CaffeNet. All models are trained from scratch unless otherwise stated.
For both experiments we use 160 images of the VOC2007 training set for our initialization. 160 images are sufficient to robustly estimate activation statistics, as each unit usually sees tens of thousands of activations across all spatial locations in an image. At the same time, this relatively small set of images keeps the computational cost low. (A minimal sketch of estimating and using such activation statistics follows this record.)
# 4.1 SCALING AND LEARNING ALGORITHMS | 1511.06856#28 | Data-dependent Initializations of Convolutional Neural Networks | Convolutional Neural Networks spread through computer vision like a wildfire,
impacting almost all visual tasks imaginable. Despite this, few researchers
dare to train their models from scratch. Most work builds on one of a handful
of ImageNet pre-trained models, and fine-tunes or adapts these for specific
tasks. This is in large part due to the difficulty of properly initializing
these networks from scratch. A small miscalibration of the initial weights
leads to vanishing or exploding gradients, as well as poor convergence
properties. In this work we present a fast and simple data-dependent
initialization procedure, that sets the weights of a network such that all
units in the network train at roughly the same rate, avoiding vanishing or
exploding gradients. Our initialization matches the current state-of-the-art
unsupervised or self-supervised pre-training methods on standard computer
vision tasks, such as image classification and object detection, while being
roughly three orders of magnitude faster. When combined with pre-training
methods, our initialization significantly outperforms prior work, narrowing the
gap between supervised and unsupervised pre-training. | http://arxiv.org/pdf/1511.06856 | Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell | cs.CV, cs.LG | ICLR 2016 | null | cs.CV | 20151121 | 20160922 | [] |
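An illustration of the idea in the chunk above (1511.06856#28) that activation statistics estimated on a small image sample can drive the initialization. This is not the paper's exact procedure: the sketch simply rescales a randomly initialized fully connected layer so that its pre-activations have roughly unit standard deviation on 160 stand-in feature vectors, one simple way to keep layers training at comparable rates.

```python
import numpy as np

def rescale_layer(weights, bias, inputs, target_std=1.0, eps=1e-8):
    # scale weights and bias so pre-activations have ~target_std on the sample
    pre = inputs @ weights.T + bias
    scale = target_std / (pre.std() + eps)
    return weights * scale, bias * scale

rng = np.random.default_rng(0)
features = rng.normal(size=(160, 4096))         # features of 160 sample images
W = rng.normal(scale=1.0, size=(1000, 4096))    # badly scaled random initialization
b = np.zeros(1000)

W, b = rescale_layer(W, b, features)
pre = features @ W.T + b
print("pre-activation std after rescaling:", round(float(pre.std()), 3))
```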
Several architectures were examined and a single layer of GRU units was found to be the best performer. Adding additional layers always resulted in worse performance w.r.t. both training loss and recall and MRR measured on the test set. We assume that this is due to the generally short
Table 3: Recall@20 and MRR@20 for different types of a single layer of GRU, compared to the best baseline (item-KNN). Best results per dataset are highlighted. | 1511.06939#28 | Session-based Recommendations with Recurrent Neural Networks | We apply recurrent neural networks (RNN) on a new domain, namely recommender
systems. Real-life recommender systems often face the problem of having to base
recommendations only on short session-based data (e.g. a small sportsware
website) instead of long user histories (as in the case of Netflix). In this
situation the frequently praised matrix factorization approaches are not
accurate. This problem is usually overcome in practice by resorting to
item-to-item recommendations, i.e. recommending similar items. We argue that by
modeling the whole session, more accurate recommendations can be provided. We
therefore propose an RNN-based approach for session-based recommendations. Our
approach also considers practical aspects of the task and introduces several
modifications to classic RNNs such as a ranking loss function that make it more
viable for this specific problem. Experimental results on two data-sets show
marked improvements over widely used approaches. | http://arxiv.org/pdf/1511.06939 | Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk | cs.LG, cs.IR, cs.NE | Camera ready version (17th February, 2016) Affiliation update (29th
March, 2016) | null | cs.LG | 20151121 | 20160329 | [
{
"id": "1502.04390"
}
] |
We believe that this surprisingly simple yet effective idea, essentially a single line of code, should be in the toolset of neural network practitioners facing issues with training their networks. We also believe that this set of empirical results can give rise to further formal analysis of why adding noise is so effective for very deep neural networks.
Acknowledgements We sincerely thank Marcin Andrychowicz, Dmitry Bahdanau, Samy Bengio, Oriol Vinyals for suggestions and the Google Brain team for help with the project.
# REFERENCES
An, Guozhong. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 1996.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. ICLR, 2014.
Blundell, Charles, Cornebise, Julien, Kavukcuoglu, Koray, and Wierstra, Daan. Weight uncertainty in neural networks. ICML, 2015.
Bottou, Léon. Stochastic gradient learning in neural networks. In Neuro-Nîmes, 1992.
Bousquet, Olivier and Bottou, Léon. The tradeoffs of large scale learning. In NIPS, 2008.
many perception and language processing applications. This success is partially
attributed to architectural innovations such as convolutional and long
short-term memory networks. The main motivation for these architectural
innovations is that they capture better domain knowledge, and importantly are
easier to optimize than more basic architectures. Recently, more complex
architectures such as Neural Turing Machines and Memory Networks have been
proposed for tasks including question answering and general computation,
creating a new set of optimization challenges. In this paper, we discuss a
low-overhead and easy-to-implement technique of adding gradient noise which we
find to be surprisingly effective when training these very deep architectures.
The technique not only helps to avoid overfitting, but also can result in lower
training loss. This method alone allows a fully-connected 20-layer deep network
to be trained with standard gradient descent, even starting from a poor
initialization. We see consistent improvements for many complex models,
including a 72% relative reduction in error rate over a carefully-tuned
baseline on a challenging question-answering task, and a doubling of the number
of accurate binary multiplication models learned across 7,000 random restarts.
We encourage further application of this technique to additional complex modern
architectures. | http://arxiv.org/pdf/1511.06807 | Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens | stat.ML, cs.LG | null | null | stat.ML | 20151121 | 20151121 | [
{
"id": "1508.05508"
}
] |