doi: string (length 10)
chunk-id: int64 (range 0–936)
chunk: string (401–2.02k chars)
id: string (12–14 chars)
title: string (8–162 chars)
summary: string (228–1.92k chars)
source: string (length 31)
authors: string (7–6.97k chars)
categories: string (5–107 chars)
comment: string (4–398 chars)
journal_ref: string (8–194 chars)
primary_category: string (5–17 chars)
published: string (length 8)
updated: string (length 8)
references: list
1703.04933
40
(c) Loss function with another reparametrization

Figure 5: A one-dimensional example of how much the geometry of the loss function depends on the chosen parameter space. The x-axis is the parameter value and the y-axis is the loss. The points correspond to a regular grid in the default parametrization. In the default parametrization, all minima have roughly the same curvature, but with a careful choice of reparametrization it is possible to make a minimum significantly flatter or sharper than the others. The reparametrizations in this figure are of the form η = (|θ − θ̂|² + b)^a (θ − θ̂), where b ≥ 0, a > −1/2, and θ̂ is shown with the red vertical line.

but the intuition with batch normalization will be similar. Weight normalization reparametrizes a nonzero weight w as w = s v/‖v‖₂, with the new parameters being the scale s and the unnormalized weight v ≠ 0.
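The effect of the reparametrization family above can be checked numerically. The sketch below uses a toy quadratic loss (an illustrative choice, not the paper's experiments) and measures, by finite differences, the curvature of the loss at its minimum after mapping the parameter through η = (|θ − θ̂|² + b)^a (θ − θ̂): a > 0 sharpens the minimum, a < 0 flattens it.

```python
import numpy as np

def loss(theta):
    # Toy 1-D loss with a minimum at theta = 0 (illustrative, not the paper's loss).
    return theta ** 2

def eta(theta, a, b, theta_hat=0.0):
    # Reparametrization family from Figure 5:
    #   eta = (|theta - theta_hat|^2 + b)^a * (theta - theta_hat)
    d = theta - theta_hat
    return (d ** 2 + b) ** a * d

thetas = np.linspace(-1.0, 1.0, 20001)   # regular grid in the default parametrization
b = 1e-3
curvs = {}
for a in (0.0, 1.0, -0.4):
    etas = eta(thetas, a=a, b=b)
    i = np.argmin(np.abs(etas))          # grid point at the minimum (theta = 0)
    h = etas[i + 1] - etas[i]            # grid spacing in eta-coordinates near the minimum
    # Finite-difference curvature of the loss viewed as a function of eta:
    curvs[a] = (loss(thetas[i + 1]) - 2 * loss(thetas[i]) + loss(thetas[i - 1])) / h ** 2
    print(f"a = {a:+.1f}: curvature at the minimum ≈ {curvs[a]:.3g}")
```

With small b, the identity case a = 0 recovers curvature 2, while a = 1 makes the same minimum dramatically sharper and a = −0.4 makes it flatter, without changing the function the parameter represents.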
1703.04933#40
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
40
Model       Dist.    Shot  Query  Way | 5-way 1-shot  5-way 5-shot | 20-way 1-shot  20-way 5-shot
PROTONETS   Euclid.  1     15     5   | 97.4%         99.3%        | 92.0%          97.8%
PROTONETS   Euclid.  1     15     20  | 98.7%         99.6%        | 95.4%          98.8%
PROTONETS   Euclid.  1     5      60  | 98.8%         99.7%        | 96.0%          99.0%
PROTONETS   Euclid.  5     15     5   | 96.9%         99.3%        | 90.7%          97.8%
PROTONETS   Euclid.  5     15     20  | 98.1%         99.6%        | 94.1%          98.7%
PROTONETS   Euclid.  5     5      60  | 98.5%         99.7%        | 94.7%          98.9%

Figure 3 shows a sample t-SNE visualization [18] of the embeddings learned by prototypical networks. We visualize a subset of test characters from the same alphabet in order to gain better insight, despite the fact that classes in actual test episodes are likely to come from different alphabets. Even though the visualized characters are minor variations of each other, the network is able to cluster the hand-drawn characters closely around the class prototypes.

# B Additional miniImageNet Results
1703.05175#40
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
41
Frank, M. C.; Jurafsky, D.; and Potts, C. 2014. Learning to reason pragmatically with cognitive limitations. In Proceedings of the 36th Annual Meeting of the Cognitive Science Society, 3055–3060. Wheat Ridge, CO: Cognitive Science Society.

[Wang, Liang, and Manning 2016] Wang, S. I.; Liang, P.; and Manning, C. 2016. Learning language games through interaction. In Association for Computational Linguistics (ACL).
1703.04908#41
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
41
But this infinitesimal reasoning is once again very dependent on the local geometry of the input space. For an invertible preprocessing ξ⁻¹, e.g. feature standardization, whitening or gaussianization (Chen & Gopinath, 2001), we will call fξ = f ◦ ξ the prediction function on the preprocessed input u = ξ⁻¹(x). We can reproduce the derivation in Section 5 to obtain

∂fξ/∂uᵀ (u) = ∂f/∂xᵀ (ξ(u)) · ∂ξ/∂uᵀ (u).

Since we can significantly alter the relative magnitude of the gradient at each point, analyzing the amplitude of the gradient of the prediction function might prove problematic if the choice of input space has not been specified beforehand. This remark applies in applications involving images, sound or other signals with invariances (Larsen et al., 2015). For example, Theis et al. (2016) show for images how a small drift of one to four pixels can incur a large difference in terms of L2 norm.

Since we can observe that w is invariant to scaling of v, reasoning similar to Section 3 can be applied with the simpler transformations T_a : v ↦ a v for a ≠ 0. Moreover, since this transformation is a simple isotropic scaling, the conclusions that we can draw are actually more powerful with respect to v:

# 6 Discussion
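The chain rule above can be made concrete with a one-dimensional toy: take f = sin and an invertible preprocessing ξ(u) = c·u (an illustrative stand-in for standardization or whitening; not the paper's setup). The function values at corresponding points agree, but the gradient magnitude is rescaled by c, i.e. arbitrarily, depending on the chosen input space.

```python
import numpy as np

# Toy prediction function f and invertible preprocessing xi(u) = c * u,
# so f_xi = f o xi acts on the preprocessed input u = xi^{-1}(x) = x / c.
def f(x):
    return np.sin(x)

def grad_f(x):
    return np.cos(x)

c = 100.0            # scale of the (invertible) change of input coordinates
x = 0.3              # a point in the original input space
u = x / c            # the same point in preprocessed coordinates

# Chain rule: d f_xi / du = f'(xi(u)) * xi'(u) = f'(x) * c
grad_f_xi = grad_f(c * u) * c

print(f(x), f(c * u))            # identical function values
print(grad_f(x), grad_f_xi)      # gradients differ by the factor c
```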
1703.04933#41
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
41
# B Additional miniImageNet Results

In Table 5 we show the full results for the comparison of training episode configuration in Figure 2 of the main paper. We also compared Euclidean-distance prototypical networks trained with a different number of classes per episode. Here we vary the classes per training episode from 5 up to 30 while keeping the number of query points per class fixed at 15. The results are shown in Figure 4. Our findings indicate that the construction of training episodes is an important consideration in order to achieve good results for few-shot classification. Table 6 contains the full results for this set of experiments.
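The episode mechanics being compared can be sketched in a few lines. The snippet below builds one hypothetical 5-way, 5-shot episode with 15 query points per class (synthetic Gaussian embeddings stand in for the conv-net encoder of the paper), computes class prototypes as support-set means, and classifies queries by squared Euclidean distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings for one 5-way, 5-shot episode with 15 query points
# per class (shapes follow the episode configuration discussed above; a real
# prototypical network would produce these with a learned encoder).
n_way, n_shot, n_query, dim = 5, 5, 15, 64
offsets = 5.0 * np.arange(n_way)[:, None, None]          # separate the classes
support = rng.normal(size=(n_way, n_shot, dim)) + offsets
query = rng.normal(size=(n_way, n_query, dim)) + offsets

# Class prototypes: the mean of each class's support embeddings.
prototypes = support.mean(axis=1)                        # (n_way, dim)

# Classify queries by squared Euclidean distance to the prototypes.
q = query.reshape(-1, dim)                               # (n_way * n_query, dim)
dists = ((q[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
pred = dists.argmin(axis=1)
truth = np.repeat(np.arange(n_way), n_query)
accuracy = (pred == truth).mean()
print(f"episode accuracy: {accuracy:.2%}")
```

Changing `n_way` at training time, as in the experiments above, only changes the episode shapes; the prototype and distance computations are unchanged.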
1703.05175#41
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04908
42
[Winograd 1973] Winograd, T. 1973. A procedural model of language understanding.

# Appendix: Physical State and Dynamics

The physical state of the agent is specified by x = [ p ṗ v d ], where ṗ is the velocity of p, v is the gaze, and d ∈ R³ is the color associated with the agent. Landmarks have similar state, but without gaze and velocity components. The physical state transition dynamics for a single agent i are given by:

ṗ′ = γ ṗ + (u_p + f(x₁, …, x_N)) Δt
p′ = p + ṗ′ Δt

where f(x₁, …, x_N) are the physical interaction forces (such as collision) between all agents in the environment and any obstacles, Δt is the simulation timestep (we use 0.1), and (1 − γ) is a damping coefficient (we use 0.5). The action space of the agent is a = [ u_p u_v c ]. The observation of any location p_j in the reference frame of agent i is ⁱp_j = R_i(p_j − p_i), where R_i is the random rotation matrix of agent i. Giving each agent a private random orientation prevents identifying landmarks in a shared coordinate frame (using words such as top-most or left-most).
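A minimal sketch of the damped Euler update above, reconstructed under the stated constants (Δt = 0.1, damping 1 − γ = 0.5); the interaction force f is a placeholder set to zero here, and the control force `u_p` is an arbitrary test input:

```python
import numpy as np

def step(p, p_dot, u_p, f=0.0, dt=0.1, gamma=0.5):
    # Damped velocity update followed by a position update:
    #   p_dot' = gamma * p_dot + (u_p + f) * dt
    #   p'     = p + p_dot' * dt
    p_dot_new = gamma * p_dot + (u_p + f) * dt
    return p + p_dot_new * dt, p_dot_new

p, p_dot = np.zeros(2), np.zeros(2)
for _ in range(100):
    p, p_dot = step(p, p_dot, u_p=np.array([1.0, 0.0]))

# Under a constant control force, the velocity converges geometrically to the
# fixed point u_p * dt / (1 - gamma) = 0.2 in the driven coordinate.
print(p_dot)
```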
1703.04908#42
Emergence of Grounded Compositional Language in Multi-Agent Populations
By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
http://arxiv.org/pdf/1703.04908
Igor Mordatch, Pieter Abbeel
cs.AI, cs.CL
null
null
cs.AI
20170315
20180724
[ { "id": "1603.08887" }, { "id": "1611.01779" }, { "id": "1612.07182" }, { "id": "1609.00777" }, { "id": "1612.08810" } ]
1703.04933
42
# 6 Discussion

It has been observed empirically that minima found by standard deep learning algorithms that generalize well tend to be flatter than found minima that did not generalize well (Chaudhari et al., 2017; Keskar et al., 2017). However, when following several definitions of flatness, we have shown that the conclusion that flat minima should generalize better than sharp ones cannot be applied as is without further context. Previously used definitions fail to account for the complex geometry of some commonly used deep architectures. In particular, the non-identifiability of the model induced by symmetries allows one to alter the flatness of a minimum without affecting the function it represents. Additionally, the whole geometry of the error surface with respect to the parameters can be changed arbitrarily under different parametrizations. In the spirit of Swirszcz et al. (2016), our work indicates that more care is needed to define flatness in order to avoid the degeneracies of the geometry of the model under study. Moreover, such a concept cannot be divorced from the particular parametrization of the model or the input space.

# Acknowledgements
1703.04933#42
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
42
Figure 3: A t-SNE visualization of the embeddings learned by prototypical networks on the Omniglot dataset. A subset of the Tengwar script is shown (an alphabet in the test set). Class prototypes are indicated in black. Several misclassified characters are highlighted in red along with arrows pointing to the correct prototype.

[Figure 4 plots: 1-shot test accuracy (roughly 45–51%) and 5-shot test accuracy (roughly 65–69%) as a function of training classes per episode.]

Figure 4: Comparison of the effect of training “way” (number of classes per episode) for prototypical networks trained on miniImageNet. Each training episode contains 15 query points per class. Error bars indicate 95% confidence intervals as computed over 600 test episodes.
1703.05175#42
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04933
43
# Acknowledgements

The authors would like to thank Grzegorz Świrszcz for an insightful discussion of the paper, Harm De Vries, Yann Dauphin, Jascha Sohl-Dickstein and César Laurent for useful discussions about optimization, Danilo Rezende for explaining universal approximation using normalizing flows and Kyle Kastner, Adriana Romero, Junyoung Chung, Nicolas Ballas, Aaron Courville, George Dahl, Yaroslav Ganin, Prajit Ramachandran, Çağlar Gülçehre, Ahmed Touati and the ICML reviewers for useful feedback.

# References

Roweis, S. (eds.), Advances in Neural Information Processing Systems, volume 20, pp. 161–168. NIPS Foundation (http://books.nips.cc), 2008. URL http://leon.bottou.org/papers/bottou-bousquet-2008.

Bottou, Léon and LeCun, Yann. On-line learning for very large datasets. Applied Stochastic Models in Business and Industry, 21(2):137–151, 2005. URL http://leon.bottou.org/papers/bottou-lecun-2004a.
1703.04933#43
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
43
Table 5: Comparison of matching and prototypical networks on miniImageNet under cosine vs. Euclidean distance, 5-way vs. 20-way, and 1-shot vs. 5-shot. All experiments use a shared encoder for both support and query points with embedding dimension 1,600 (architecture and training details are provided in Section 3.2 of the main paper). Classification accuracy is averaged over 600 randomly generated episodes from the test set and 95% confidence intervals are shown.
1703.05175#43
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04933
44
Bottou, Léon, Curtis, Frank E, and Nocedal, Jorge. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.

Bousquet, Olivier and Elisseeff, André. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499–526, 2002.

Chan, William, Jaitly, Navdeep, Le, Quoc V., and Vinyals, Oriol. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, pp. 4960–4964. IEEE, 2016. ISBN 978-1-4799-9988-0. doi: 10.1109/ICASSP.2016.7472621. URL http://dx.doi.org/10.1109/ICASSP.2016.7472621.
1703.04933#44
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
44
Model                       Dist.    Shot  Query  Way | 5-way 1-shot     5-way 5-shot
MATCHING NETS / PROTONETS   Cosine   1     15     5   | 38.82 ± 0.69%    44.54 ± 0.56%
MATCHING NETS / PROTONETS   Euclid.  1     15     5   | 46.61 ± 0.78%    59.84 ± 0.64%
MATCHING NETS / PROTONETS   Cosine   1     15     20  | 43.63 ± 0.76%    51.34 ± 0.64%
MATCHING NETS / PROTONETS   Euclid.  1     15     20  | 49.17 ± 0.83%    62.66 ± 0.71%
MATCHING NETS               Cosine   5     15     5   | 46.43 ± 0.74%    54.60 ± 0.62%
MATCHING NETS               Euclid.  5     15     5   | 46.43 ± 0.78%    60.97 ± 0.67%
MATCHING NETS               Cosine   5     15     20  | 46.46 ± 0.79%    55.77 ± 0.69%
MATCHING NETS               Euclid.  5     15     20  | 47.99 ± 0.79%    63.66 ± 0.68%
PROTONETS                   Cosine   5     15     5   | 42.48
1703.05175#44
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04933
45
Chaudhari, Pratik, Choromanska, Anna, Soatto, Stefano, LeCun, Yann, Baldassi, Carlo, Borgs, Christian, Chayes, Jennifer, Sagun, Levent, and Zecchina, Riccardo. Entropy-SGD: Biasing gradient descent into wide valleys. In ICLR’2017, arXiv:1611.01838, 2017.

Chen, Scott Saobing and Gopinath, Ramesh A. Gaussianization. In Leen, T. K., Dietterich, T. G., and Tresp, V. (eds.), Advances in Neural Information Processing Systems 13, pp. 423–429. MIT Press, 2001. URL http://papers.nips.cc/paper/1856-gaussianization.pdf.

Amari, Shun-Ichi. Natural gradient works efficiently in learning. Neural Comput., 10(2), 1998.

Arpit, Devansh, Zhou, Yingbo, Kota, Bhargava U, and Govindaraju, Venu. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. arXiv preprint arXiv:1603.01431, 2016.
1703.04933#45
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
46
Bach, Francis R. and Blei, David M. (eds.). Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, 2015. JMLR.org. URL http://jmlr.org/proceedings/papers/v37/.

Badrinarayanan, Vijay, Mishra, Bamdev, and Cipolla, Roberto. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015.

Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. In ICLR’2015, arXiv:1409.0473, 2015.

Bottou, Léon. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pp. 177–186. Springer, 2010.

Bottou, Léon and Bousquet, Olivier. The tradeoffs of large scale learning. In Platt, J.C., Koller, D., Singer, Y., and
1703.04933#46
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
47
Bottou, Léon and Bousquet, Olivier. The tradeoffs of large scale learning. In Platt, J.C., Koller, D., Singer, Y., and

Cho, Kyunghyun, van Merrienboer, Bart, Gülçehre, Çaglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Moschitti, Alessandro, Pang, Bo, and Daelemans, Walter (eds.), Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1724–1734. ACL, 2014. ISBN 978-1-937284-96-1. URL http://aclweb.org/anthology/D/D14/D14-1179.pdf.

Choromanska, Anna, Henaff, Mikael, Mathieu, Michaël, Arous, Gérard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In AISTATS, 2015.
1703.04933#47
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.05175
47
Model | Dist. | Train episodes (Shot/Query/Way) | 5-way Acc. (1-shot, 5-shot): PROTONETS Euclid. 1/15/5: 46.14 ± 0.77%, 61.36 ± 0.68%; PROTONETS Euclid. 1/15/10: 48.27 ± 0.79%, 64.18 ± 0.68%; PROTONETS Euclid. 1/15/15: 48.60 ± 0.76%, 64.62 ± 0.66%; PROTONETS Euclid. 1/15/20: 48.57 ± 0.79%, 65.04 ± 0.69%; PROTONETS Euclid. 1/15/25: 48.51 ± 0.83%, 64.63 ± 0.69%; PROTONETS Euclid. 1/15/30: 49.42 ± 0.78%, 65.38 ± 0.68%; PROTONETS Euclid. 5/15/5: 44.53 ± 0.76%, 65.77 ± 0.70%; PROTONETS Euclid. 5/15/10: 45.09 ± 0.79%, 67.49 ± 0.70%; PROTONETS Euclid. 5/15/15: 44.07 ± 0.80%, 68.03 ± 0.66%; PROTONETS Euclid. 5/15/20: 43.57
1703.05175#47
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
http://arxiv.org/pdf/1703.05175
Jake Snell, Kevin Swersky, Richard S. Zemel
cs.LG, stat.ML
null
null
cs.LG
20170315
20170619
[ { "id": "1605.05395" }, { "id": "1502.03167" } ]
1703.04933
48
Chorowski, Jan K, Bahdanau, Dzmitry, Serdyuk, Dmitriy, Cho, Kyunghyun, and Bengio, Yoshua. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577–585, 2015. Collobert, Ronan, Puhrsch, Christian, and Synnaeve, Gabriel. Wav2letter: an end-to-end convnet-based speech recognition system. arXiv preprint arXiv:1609.03193, 2016. Dauphin, Yann N., Pascanu, Razvan, Gülçehre, Çaglar, Cho, KyungHyun, Ganguli, Surya, and Bengio, Yoshua. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. NIPS, 2014. Desjardins, Guillaume, Simonyan, Karen, Pascanu, Razvan, and Kavukcuoglu, Koray. Natural neural networks. NIPS, 2015. Dinh, Laurent, Krueger, David, and Bengio, Yoshua. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
1703.04933#48
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
49
Hinton, Geoffrey E and Van Camp, Drew. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5–13. ACM, 1993. Hochreiter, Sepp and Schmidhuber, Jürgen. Flat minima. Neural Computation, 9(1):1–42, 1997. Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. Density estimation using real NVP. In ICLR'2017, arXiv:1605.08803, 2016. Hyvärinen, Aapo and Pajunen, Petteri. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429–439, 1999. Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011. Im, Daniel Jiwoong, Tao, Michael, and Branson, Kristin. An empirical analysis of deep network loss surfaces. arXiv preprint arXiv:1612.04010, 2016.
1703.04933#49
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
50
Gehring, Jonas, Auli, Michael, Grangier, David, and Dauphin, Yann N. A convolutional encoder model for neural machine translation. arXiv preprint arXiv:1611.02344, 2016. Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In AISTATS, volume 15, pp. 275, 2011. Gonen, Alon and Shalev-Shwartz, Shai. Fast rates for empirical risk minimization of strict saddle problems. arXiv preprint arXiv:1701.04271, 2017. Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Bach & Blei (2015), pp. 448–456. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html. Jarrett, Kevin, Kavukcuoglu, Koray, LeCun, Yann, et al. What is the best multi-stage architecture for object recognition? In Computer Vision, 2009 IEEE 12th International Conference on, pp. 2146–2153. IEEE, 2009.
1703.04933#50
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
51
Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron C, and Bengio, Yoshua. Maxout networks. ICML (3), 28:1319–1327, 2013. Keskar, Nitish Shirish, Mudigere, Dheevatsa, Nocedal, Jorge, Smelyanskiy, Mikhail, and Tang, Ping Tak Peter. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR'2017, arXiv:1609.04836, 2017. Goodfellow, Ian J, Shlens, Jonathon, and Szegedy, Christian. Explaining and harnessing adversarial examples. In ICLR'2015, arXiv:1412.6572, 2015. Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645–6649. IEEE, 2013.
1703.04933#51
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
52
Hannun, Awni Y., Case, Carl, Casper, Jared, Catanzaro, Bryan, Diamos, Greg, Elsen, Erich, Prenger, Ryan, Satheesh, Sanjeev, Sengupta, Shubho, Coates, Adam, and Ng, Andrew Y. Deep speech: Scaling up end-to-end speech recognition. CoRR, abs/1412.5567, 2014. URL http://arxiv.org/abs/1412.5567. Kingma, Diederik P, Salimans, Tim, Jozefowicz, Rafal, Chen, Xi, Sutskever, Ilya, and Welling, Max. Improved variational inference with inverse autoregressive flow. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 4743–4751. Curran Associates, Inc., 2016. Klyachko, Alexander A. Random walks on symmetric spaces and inequalities for matrix spectra. Linear Algebra and its Applications, 319(1-3):37–59, 2000.
1703.04933#52
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
53
Klyachko, Alexander A. Random walks on symmetric spaces and inequalities for matrix spectra. Linear Algebra and its Applications, 319(1-3):37–59, 2000. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012. Hardt, Moritz, Recht, Ben, and Singer, Yoram. Train faster, generalize better: Stability of stochastic gradient descent. In Balcan, Maria-Florina and Weinberger, Kilian Q. (eds.), Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pp. 1225–1234. JMLR.org, 2016. URL http://jmlr.org/proceedings/papers/v48/hardt16.html. Lafond, Jean, Vasilache, Nicolas, and Bottou, Léon. About diagonal rescaling applied to neural nets. ICML Workshop on Optimization Methods for the Next Generation of Machine Learning, 2016.
1703.04933#53
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
54
Larsen, Anders Boesen Lindbo, Sønderby, Søren Kaae, and Winther, Ole. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015. URL http://arxiv.org/abs/1512.09300. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015. Montufar, Guido F, Pascanu, Razvan, Cho, Kyunghyun, and Bengio, Yoshua. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, pp. 2924–2932, 2014. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
1703.04933#54
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
55
Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010. Nesterov, Yurii and Vial, Jean-Philippe. Confidence level solutions for stochastic programming. Automatica, 44(6):1559–1568, 2008. Neyshabur, Behnam, Salakhutdinov, Ruslan R, and Srebro, Nati. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2422–2430, 2015. Pascanu, Razvan and Bengio, Yoshua. Revisiting natural gradient for deep networks. ICLR, 2014. Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V, Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
1703.04933#55
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
56
Zhang, Chiyuan, Bengio, Samy, Hardt, Moritz, Recht, Benjamin, and Vinyals, Oriol. Understanding deep learning requires rethinking generalization. In ICLR'2017, arXiv:1611.03530, 2017. Raghu, Maithra, Poole, Ben, Kleinberg, Jon, Ganguli, Surya, and Sohl-Dickstein, Jascha. On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016. # A Radial transformations Rezende, Danilo Jimenez and Mohamed, Shakir. Variational inference with normalizing flows. In Bach & Blei (2015), pp. 1530–1538. URL http://jmlr.org/proceedings/papers/v37/rezende15.html. We show an elementary transformation that locally perturbs the geometry of a finite-dimensional vector space and can therefore affect the relative flatness between a finite number of minima, at least in terms of the spectral norm of the Hessian. We define the function:
1703.04933#56
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
57
Sagun, Levent, Bottou, Léon, and LeCun, Yann. Singularity of the hessian in deep learning. arXiv preprint arXiv:1611.07476, 2016. Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–901, 2016. Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen, Xi. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016. ∀δ > 0, ∀ρ ∈ ]0, δ[, ∀(r, r̂) ∈ ℝ₊ × ]0, δ[: ψ(r, r̂, δ, ρ) = 1(r ∉ [0, δ]) r + 1(r ∈ [0, r̂]) (ρ/r̂) r + 1(r ∈ ]r̂, δ]) ((r − r̂)(δ − ρ)/(δ − r̂) + ρ), with derivative ψ′(r, r̂, δ, ρ) = 1(r ∉ [0, δ]) + 1(r ∈ [0, r̂]) ρ/r̂ + 1(r ∈ ]r̂, δ]) (δ − ρ)/(δ − r̂).
1703.04933#57
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
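The chunk above defines the piecewise radial profile ψ from the paper's Appendix A. A minimal sketch of one plausible reading of the extracted formula (the function name `psi` and the argument names are ours, not the paper's):

```python
def psi(r, r_hat, delta, rho):
    """Plausible reconstruction of the radial profile psi(r, r_hat, delta, rho).

    Identity outside [0, delta]; rescales the inner ball [0, r_hat] onto
    [0, rho]; linearly stretches the shell ]r_hat, delta] back out to delta.
    Assumes 0 < rho < delta and 0 < r_hat < delta.
    """
    if r > delta:          # 1(r not in [0, delta]) * r
        return r
    if r <= r_hat:         # 1(r in [0, r_hat]) * (rho / r_hat) * r
        return (rho / r_hat) * r
    # 1(r in ]r_hat, delta]) * ((r - r_hat)(delta - rho)/(delta - r_hat) + rho)
    return (r - r_hat) * (delta - rho) / (delta - r_hat) + rho
```

The continuity at the boundaries (ψ(r̂) = ρ and ψ(δ) = δ) is what makes the induced reparametrization continuous across the ball.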
1703.04933
58
Saxe, Andrew M., McClelland, James L., and Ganguli, Surya. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013. URL http://arxiv.org/abs/1312.6120. For a parameter θ̂ ∈ Θ and δ > 0, ρ ∈ ]0, δ[, r̂ ∈ ]0, δ[, inspired by the radial flows (Rezende & Mohamed, 2015), we can define the radial transformations Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. In ICLR'2015, arXiv:1409.1556, 2015. ∀θ ∈ Θ, g⁻¹(θ) = (ψ(‖θ − θ̂‖₂, r̂, δ, ρ)/‖θ − θ̂‖₂)(θ − θ̂) + θ̂ Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
1703.04933#58
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
59
Swirszcz, Grzegorz, Czarnecki, Wojciech Marian, and Pascanu, Razvan. Local minima in training of deep networks. CoRR, abs/1611.06310, 2016. Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian, and Fergus, Rob. Intriguing properties of neural networks. In ICLR'2014, arXiv:1312.6199, 2014. with Jacobian ∀θ ∈ Θ, (∇g⁻¹)(θ) = (ψ(r, r̂, δ, ρ)/r) Iₙ + 1(r ∈ ]r̂, δ]) (ψ′(r, r̂, δ, ρ) − ψ(r, r̂, δ, ρ)/r) ((θ − θ̂)(θ − θ̂)ᵀ)/r², with r = ‖θ − θ̂‖₂. Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
1703.04933#59
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
60
First, we can observe in Figure 6 that these transformations are purely local: they only have an effect inside the ball B2(θ̂, δ). Through these transformations, one can arbitrarily perturb the ranking between several minima in terms of flatness, as described in Subsection 5.1. Theis, Lucas, Oord, Aäron van den, and Bethge, Matthias. A note on the evaluation of generative models. In ICLR'2016, arXiv:1511.01844, 2016.
1703.04933#60
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
62
Relative to recent works (Hardt et al., 2016; Gonen & Shalev-Shwartz, 2017) that assume Lipschitz continuity of the loss function to derive uniform stability bounds, we make the following observation: Theorem 6. For a one-hidden-layer rectified neural network of the form y = φrect(x · θ1) · θ2, if L is not constant, then it is not Lipschitz continuous. Proof. Since a Lipschitz function is necessarily absolutely continuous, we only consider the case where L is absolutely continuous. First, if L has zero gradient almost everywhere, then L is constant. Now, if there is a point (θ1, θ2) with non-zero gradient, then by writing (∇L)(θ1, θ2) = [(∇θ1 L)(θ1, θ2)  (∇θ2 L)(θ1, θ2)], Figure 6: An example of a radial transformation on a 2-dimensional space; panel (b) shows g⁻¹(θ). We can see that only the areas in blue and red, i.e. inside B2(θ̂, δ), are affected. Best seen with colors. # B Considering the bias parameter
1703.04933#62
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
63
we have (∇L)(αθ1, α⁻¹θ2) = [α⁻¹(∇θ1 L)(θ1, θ2)  α(∇θ2 L)(θ1, θ2)]. Without loss of generality, assume (∇θ1 L)(θ1, θ2) ≠ 0. Then the norm of the gradient, ‖(∇L)(αθ1, α⁻¹θ2)‖₂² = α⁻²‖(∇θ1 L)(θ1, θ2)‖₂² + α²‖(∇θ2 L)(θ1, θ2)‖₂², goes to +∞ as α goes to 0. Therefore, L is not Lipschitz continuous. This result can be generalized to several other models containing a one-hidden-layer rectified neural network, including deeper rectified networks. When we consider the bias parameter for a one-hidden-layer neural network, the non-negative homogeneity property translates into y = φrect(x · θ1 + b1) · θ2 + b2 = φrect(x · αθ1 + αb1) · α⁻¹θ2 + b2, which results in conclusions similar to Section 4. For a deeper rectified neural network, this property results
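The scaling argument above can be checked numerically. The sketch below (assuming a mean-squared-error loss and randomly drawn weights, both illustrative choices not taken from the paper) verifies that the reparametrization (αθ1, α⁻¹θ2) leaves the loss unchanged while the gradient norm grows without bound as α → 0.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 5))           # inputs
y = rng.normal(size=(32, 1))           # targets
th1 = rng.normal(size=(5, 8))          # first-layer weights theta_1
th2 = rng.normal(size=(8, 1))          # second-layer weights theta_2

def loss_and_grad_norm(th1, th2):
    h = np.maximum(x @ th1, 0.0)       # phi_rect(x . theta_1)
    e = h @ th2 - y
    loss = np.mean(e ** 2)
    # manual backprop for the mean squared error
    g_out = 2.0 * e / len(x)
    g2 = h.T @ g_out                          # gradient w.r.t. theta_2
    g1 = x.T @ ((g_out @ th2.T) * (h > 0))    # gradient w.r.t. theta_1
    return loss, float(np.sqrt((g1 ** 2).sum() + (g2 ** 2).sum()))

base_loss, base_norm = loss_and_grad_norm(th1, th2)
norms = []
for alpha in (1.0, 0.1, 0.01):
    l, n = loss_and_grad_norm(alpha * th1, th2 / alpha)
    assert np.isclose(l, base_loss)    # same function, so the loss is unchanged
    norms.append(n)
assert norms[0] < norms[1] < norms[2]  # gradient norm blows up as alpha -> 0
```

The two assertions are exactly Theorem 6's mechanism: non-negative homogeneity of the rectifier keeps L constant along the α-path while the gradient norm diverges, so no Lipschitz constant can exist.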
1703.04933#63
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04933
64
# D Euclidean distance and input representation A natural consequence of Subsection 5.2 is that metrics relying on the Euclidean metric, like mean squared error or earth-mover distance, will rank models very differently depending on the input representation chosen. Therefore, the choice of input representation is critical when ranking different models based on these metrics. Indeed, bijective transformations as simple as feature standardization or whitening can change the metric significantly. On the contrary, rankings resulting from metrics like f-divergences and log-likelihood are not perturbed by bijective transformations because of the change of variables formula.
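A toy numerical illustration of this sensitivity (the prediction errors are invented for the example): under the original representation model B has the lower mean squared error, and after bijectively rescaling a single feature, as a standardization step could, the ranking flips.

```python
import numpy as np

target = np.zeros((1, 2))
pred_a = np.array([[1.0, 0.0]])    # model A errs only on feature 1
pred_b = np.array([[0.0, 0.9]])    # model B errs only on feature 2

def mse(pred, t):
    return float(np.mean((pred - t) ** 2))

# in the original representation, B looks better
assert mse(pred_b, target) < mse(pred_a, target)

# bijectively rescale feature 2 (e.g. a change of units or standardization constant)
scale = np.array([1.0, 2.0])
assert mse(pred_a * scale, target * scale) < mse(pred_b * scale, target * scale)
```

Both representations encode the same information, yet the Euclidean-based ranking of the two models reverses, which is exactly why such metrics must be interpreted relative to a chosen input representation.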
1703.04933#64
Sharp Minima Can Generalize For Deep Nets
Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
http://arxiv.org/pdf/1703.04933
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
cs.LG
8.5 pages of main content, 2.5 of bibliography and 1 page of appendix
null
cs.LG
20170315
20170515
[ { "id": "1609.03193" }, { "id": "1701.04271" }, { "id": "1609.04836" }, { "id": "1606.04838" }, { "id": "1611.03530" }, { "id": "1605.08803" }, { "id": "1511.01029" }, { "id": "1609.08144" }, { "id": "1611.01838" }, { "id": "1606.05336" }, { "id": "1603.01431" }, { "id": "1511.01844" }, { "id": "1612.04010" }, { "id": "1611.07476" }, { "id": "1611.02344" } ]
1703.04247
0
arXiv:1703.04247v1 [cs.IR] 13 Mar 2017 # DeepFM: A Factorization-Machine based Neural Network for CTR Prediction Huifeng Guo∗1, Ruiming Tang2, Yunming Ye†1, Zhenguo Li2, Xiuqiang He2 1Shenzhen Graduate School, Harbin Institute of Technology, China 2Noah's Ark Research Lab, Huawei, China [email protected], [email protected] 2{tangruiming, li.zhenguo, hexiuqiang}@huawei.com # Abstract
1703.04247#0
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
1
# Abstract Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
1703.04247#1
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
2
1 Introduction The prediction of click-through rate (CTR) is critical in recommender systems, where the task is to estimate the probability that a user will click on a recommended item. In many recommender systems the goal is to maximize the number of clicks, and so the items returned to a user can be ranked by estimated CTR; while in other application scenarios such as online advertising it is also important to improve revenue, and so the ranking strategy can be adjusted as CTR×bid across all candidates, where "bid" is the benefit the system receives if the item is clicked by a user. In either case, it is clear that the key is in estimating CTR correctly. Figure 1: Wide & deep architecture of DeepFM. The wide and deep component share the same input raw feature vector, which enables DeepFM to learn low- and high-order feature interactions simultaneously from the input raw features.
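The two ranking strategies described above can be sketched as follows (the candidate items, CTR estimates, and bids are invented for illustration):

```python
candidates = [
    {"item": "A", "ctr": 0.10, "bid": 0.50},
    {"item": "B", "ctr": 0.04, "bid": 2.00},
    {"item": "C", "ctr": 0.07, "bid": 0.60},
]

# goal: maximize clicks -> rank by estimated CTR alone
by_clicks = sorted(candidates, key=lambda c: c["ctr"], reverse=True)

# goal: maximize revenue -> rank by CTR x bid (expected benefit per impression)
by_revenue = sorted(candidates, key=lambda c: c["ctr"] * c["bid"], reverse=True)

assert [c["item"] for c in by_clicks] == ["A", "C", "B"]
assert [c["item"] for c in by_revenue] == ["B", "A", "C"]
```

Item B has the lowest CTR but the highest expected revenue per impression, showing why the two objectives can produce different orderings while both depend on estimating CTR correctly.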
1703.04247#2
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
3
can be used as a signal for CTR. As a second observation, male teenagers like shooting games and RPG games, which means that the (order-3) interaction of app category, user gender and age is another signal for CTR. In general, such interactions of features behind user click behaviors can be highly sophisticated, where both low- and high-order feature interactions should play important roles. According to the insights of the Wide & Deep model [Cheng et al., 2016] from Google, considering low- and high-order feature interactions simultaneously brings additional improvement over the cases of considering either alone. The key challenge is in effectively modeling feature interactions. Some feature interactions can be easily understood, thus can be designed by experts (like the instances above). However, most other feature interactions are hidden in data and difficult to identify a priori (for instance, the classic association rule "diaper and beer" is mined from data, instead of being discovered by experts), and can only be captured automatically by machine learning. Even for easy-to-understand interactions, it seems unlikely for experts to model them exhaustively, especially when the number of features is large.
1703.04247#3
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
4
It is important for CTR prediction to learn implicit feature interactions behind user click behaviors. By our study in a mainstream apps market, we found that people often download apps for food delivery at meal-time, suggesting that the (order-2) interaction between app category and time-stamp ∗This work was done while Huifeng Guo was an intern at Noah's Ark Research Lab, Huawei. †Corresponding Author. Despite their simplicity, generalized linear models, such as FTRL [McMahan et al., 2013], have shown decent performance in practice. However, a linear model lacks the ability to learn feature interactions, and a common practice is to manually include pairwise feature interactions in its feature vector. Such a method is hard to generalize to model high-order feature interactions or those that never or rarely appear in the training data [Rendle, 2010]. Factorization Machines
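A linear model with manually engineered pairwise crosses, as described above, might look like the following sketch (the feature layout and weight values are hypothetical):

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])           # raw binary features x1, x2, x3
# manual feature engineering: append all pairwise products x_i * x_j
crosses = [x[i] * x[j] for i in range(len(x)) for j in range(i + 1, len(x))]
x_aug = np.concatenate([x, crosses])    # [x1, x2, x3, x1x2, x1x3, x2x3]

w = np.array([0.2, -0.1, 0.3, 0.0, 0.5, -0.2])  # learned weights (illustrative)
logit = float(w @ x_aug)
ctr = 1.0 / (1.0 + np.exp(-logit))      # predicted click probability
assert 0.0 < ctr < 1.0
```

The limitation is visible in the construction: every interaction must be enumerated explicitly, the augmented vector grows quadratically, and a cross whose features never co-occur in training gets no useful weight.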
1703.04247#4
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
5
(FM) [Rendle, 2010] model pairwise feature interactions as inner products of latent vectors between features and show very promising results. While in principle FM can model high-order feature interactions, in practice usually only order-2 feature interactions are considered due to high complexity. As a powerful approach to learning feature representations, deep neural networks have the potential to learn sophisticated feature interactions. Some ideas extend CNN and RNN for CTR prediction [Liu et al., 2015; Zhang et al., 2014], but CNN-based models are biased to the interactions between neighboring features while RNN-based models are more suitable for click data with sequential dependency. [Zhang et al., 2016] studies feature representations and proposes the Factorization-machine supported Neural Network (FNN). This model pre-trains FM before applying DNN, and is thus limited by the capability of FM. Feature interaction is studied in [Qu et al., 2016], by introducing a product layer between the embedding layer and the fully-connected layer, and proposing the Product-based Neural Network (PNN). As noted in [Cheng et al., 2016], PNN and FNN, like other deep models, capture little low-order
1703.04247#5
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
6
the Product-based Neural Network (PNN). As noted in [Cheng et al., 2016], PNN and FNN, like other deep models, capture little low-order feature interactions, which are also essential for CTR prediction. To model both low- and high-order feature interactions, [Cheng et al., 2016] proposes an interesting hybrid network structure (Wide & Deep) that combines a linear ("wide") model and a deep model. In this model, two different inputs are required for the "wide part" and "deep part", respectively, and the input of the "wide part" still relies on expertise feature engineering.
1703.04247#6
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
7
One can see that existing models are biased towards low- or high-order feature interactions, or rely on feature engineering. In this paper, we show it is possible to derive a learning model that is able to learn feature interactions of all orders in an end-to-end manner, without any feature engineering besides raw features. Our main contributions are summarized as follows: • We propose a new neural network model DeepFM (Figure 1) that integrates the architectures of FM and deep neural networks (DNN). It models low-order feature interactions like FM and models high-order feature interactions like DNN. Unlike the Wide & Deep model [Cheng et al., 2016], DeepFM can be trained end-to-end without any feature engineering. • DeepFM can be trained efficiently because its wide part and deep part, unlike [Cheng et al., 2016], share the same input and also the embedding vector. In [Cheng et al., 2016], the input vector can be of huge size as it includes manually designed pairwise feature interactions in the input vector of its wide part, which also greatly increases its complexity. • We evaluate DeepFM on both benchmark data and commercial data, which shows consistent improvement over existing models for CTR prediction.
1703.04247#7
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
8
• We evaluate DeepFM on both benchmark data and commercial data, which shows consistent improvement over existing models for CTR prediction. 2 Our Approach Suppose the data set for training consists of n instances (χ, y), where χ is an m-fields data record usually involving a pair of user and item, and y ∈ {0, 1} is the associated label indicating user click behaviors (y = 1 means the user
1703.04247#8
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
9
clicked the item, and y = 0 otherwise). χ may include categorical fields (e.g., gender, location) and continuous fields (e.g., age). Each categorical field is represented as a vector of one-hot encoding, and each continuous field is represented as the value itself, or as a vector of one-hot encoding after discretization. Then, each instance is converted to (x, y) where x = [x_field_1, x_field_2, ..., x_field_j, ..., x_field_m] is a d-dimensional vector, with x_field_j being the vector representation of the j-th field of χ. Normally, x is high-dimensional and extremely sparse. The task of CTR prediction is to build a prediction model ŷ = CTR_model(x) to estimate the probability of a user clicking a specific app in a given context.
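The field-to-vector conversion described above can be sketched as follows (the field names, vocabularies, and values are hypothetical):

```python
# categorical fields: (name, vocabulary, observed value); order fixes the layout of x
fields = [
    ("gender",   ["male", "female"], "female"),
    ("location", ["US", "CN", "FR"], "CN"),
]
age = 0.23  # continuous field, kept as its (normalized) value

x = []
for _name, vocab, value in fields:
    x.extend(1.0 if v == value else 0.0 for v in vocab)  # one-hot encoding
x.append(age)

# x = [0.0, 1.0, 0.0, 1.0, 0.0, 0.23]
assert x == [0.0, 1.0, 0.0, 1.0, 0.0, 0.23]
```

With realistic vocabularies (thousands of locations, millions of item ids), almost every entry of x is zero, which is the high-dimensional sparsity the text refers to.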
1703.04247#9
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
10
2.1 DeepFM We aim to learn both low- and high-order feature interactions. To this end, we propose a Factorization-Machine based neural network (DeepFM). As depicted in Figure 11, DeepFM consists of two components, the FM component and the deep component, which share the same input. For feature i, a scalar wi is used to weigh its order-1 importance, and a latent vector Vi is used to measure the impact of its interactions with other features. Vi is fed into the FM component to model order-2 feature interactions, and into the deep component to model high-order feature interactions. All parameters, including wi, Vi, and the network parameters (W(l), b(l) below) are trained jointly for the combined prediction model: ˆy = sigmoid(yFM + yDNN), (1) where ˆy ∈ (0, 1) is the predicted CTR, yFM is the output of the FM component, and yDNN is the output of the deep component. Figure 2: The architecture of FM.
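Equation (1) and the shared-embedding idea can be sketched numerically. The layer sizes, the single hidden layer, and the assumption of exactly one active (one-hot) feature per field are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 10, 4, 3            # d features, m fields, k-dim embeddings
active = [2, 7, 1, 5]         # index of the active feature in each field

w = rng.normal(size=d)        # order-1 weights w_i
V = rng.normal(size=(d, k))   # latent vectors V_i, shared by both components
W1, b1 = rng.normal(size=(m * k, 8)), np.zeros(8)   # deep layer 1
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)       # deep output layer

emb = V[active]               # (m, k) embeddings of the active features

# FM component: order-1 term plus pairwise inner products of embeddings
y_fm = w[active].sum() + sum(
    emb[i] @ emb[j] for i in range(m) for j in range(i + 1, m))

# deep component: concatenated embeddings through one ReLU layer
h = np.maximum(emb.reshape(-1) @ W1 + b1, 0.0)
y_dnn = float(h @ W2 + b2)

y_hat = 1.0 / (1.0 + np.exp(-(y_fm + y_dnn)))   # Eq. (1)
assert 0.0 < y_hat < 1.0
```

The key design point is visible in the code: the same table V feeds both components, so no separate hand-engineered input is needed for the wide part.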
1703.04247#10
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
cs.IR
20170313
20170313
1703.04247
11
Figure 2: The architecture of FM.

The FM component is a factorization machine, which is proposed in [Rendle, 2010] to learn feature interactions for recommendation. Besides linear (order-1) interactions among features, FM models pairwise (order-2) feature interactions as the inner product of the respective feature latent vectors.

1 In all figures of this paper, a Normal Connection in black refers to a connection with a weight to be learned; a Weight-1 Connection, a red arrow, is a connection with weight 1 by default; Embedding, a blue dashed arrow, means a latent vector to be learned; Addition means adding all inputs together; Product, including Inner- and Outer-Product, means the output of this unit is the product of two input vectors; the Sigmoid Function is used as the output function in CTR prediction; Activation Functions, such as relu and tanh, are used for non-linearly transforming the signal.
It can capture order-2 feature interactions much more effectively than previous approaches, especially when the dataset is sparse. In previous approaches, the parameter of an interaction between features i and j can be trained only when feature i and feature j both appear in the same data record, while in FM it is measured via the inner product of their latent vectors V_i and V_j. Thanks to this flexible design, FM can train the latent vector V_i (V_j) whenever i (or j) appears in a data record. Therefore, feature interactions that never or rarely appear in the training data are better learnt by FM. As Figure 2 shows, the output of FM is the summation of an Addition unit and a number of Inner Product units:

y_FM = ⟨w, x⟩ + Σ_{j1=1}^{d} Σ_{j2=j1+1}^{d} ⟨V_{j1}, V_{j2}⟩ x_{j1} · x_{j2},   (2)

where w ∈ R^d and V_i ∈ R^k (k is given)2. The Addition unit (⟨w, x⟩) reflects the importance of order-1 features, and the Inner Product units represent the impact of order-2 feature interactions.
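A small numpy sketch of Eq. (2), with a direct pairwise sum and the well-known linear-time reformulation from [Rendle, 2010] for comparison (shapes and names are our own conventions):

```python
import numpy as np

def fm_output(w, V, x):
    """Eq. (2): y_FM = <w, x> + sum over pairs j1 < j2 of
    <V_j1, V_j2> * x_j1 * x_j2.  Shapes: w (d,), V (d, k), x (d,)."""
    d = x.shape[0]
    order2 = sum((V[j1] @ V[j2]) * x[j1] * x[j2]
                 for j1 in range(d) for j2 in range(j1 + 1, d))
    return w @ x + order2

def fm_output_fast(w, V, x):
    # Equivalent O(kd) form [Rendle, 2010]: the pairwise term equals
    # 0.5 * sum_f [(sum_j V_jf x_j)^2 - sum_j V_jf^2 x_j^2].
    s = V.T @ x
    return w @ x + 0.5 * float(np.sum(s * s - (V * V).T @ (x * x)))

rng = np.random.default_rng(0)
d, k = 6, 3
w, V, x = rng.normal(size=d), rng.normal(size=(d, k)), rng.normal(size=d)
assert np.isclose(fm_output(w, V, x), fm_output_fast(w, V, x))
```

The fast form is why FM remains practical on the super high-dimensional inputs discussed later in this section.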
Figure 3: The architecture of DNN.

The deep component is a feed-forward neural network, which is used to learn high-order feature interactions. As shown in Figure 3, a data record (a vector) is fed into the neural network. Compared to neural networks with image [He et al., 2016] or audio [Boulanger-Lewandowski et al., 2013] data as input, which is purely continuous and dense, the input of CTR prediction is quite different and requires a new network architecture design. Specifically, the raw feature input vector for CTR prediction is usually highly sparse3, super high-dimensional4, categorical-continuous-mixed, and grouped in fields (e.g., gender, location, age). This suggests an embedding layer to compress the input vector to a low-dimensional, dense real-valued vector before further feeding it into the first hidden layer; otherwise the network can be overwhelming to train.
Figure 4 highlights the sub-network structure from the input layer to the embedding layer. We would like to point out two interesting features of this network structure: 1) while the lengths of different input field vectors can be different, their embeddings are of the same size (k); 2) the latent feature vectors (V) in FM now serve as network weights which are learned and used to compress the input field vectors to the embedding vectors. In [Zhang et al., 2016], V is pre-trained by FM and used as initialization. In this work, rather than using the latent feature vectors of FM to initialize the networks as in [Zhang et al., 2016], we include the FM model as part of our overall learning architecture, in addition to the other DNN model. As such, we eliminate the need for pre-training by FM and instead jointly train the overall network in an end-to-end manner.

2We omit a constant offset for simplicity.
3Only one entry is non-zero for each field vector.
4E.g., in an app store of a billion users, the one field vector for user ID is already of a billion dimensions.

Figure 4: The structure of the embedding layer.

Denote the output of the embedding layer as:
a(0) = [e_1, e_2, ..., e_m],   (3)

where e_i is the embedding of the i-th field and m is the number of fields. Then, a(0) is fed into the deep neural network, and the forward process is:

a(l+1) = σ(W(l) a(l) + b(l)),   (4)

where l is the layer depth and σ is an activation function. a(l), W(l), b(l) are the output, model weight, and bias of the l-th layer. After that, a dense real-valued feature vector is generated, which is finally fed into the sigmoid function for CTR prediction:

y_DNN = σ(W(|H|+1) · a(|H|) + b(|H|+1)),   (5)

where |H| is the number of hidden layers.

It is worth pointing out that the FM component and deep component share the same feature embedding, which brings two important benefits: 1) it learns both low- and high-order feature interactions from raw features; 2) there is no need for expert feature engineering of the input, as required in Wide & Deep [Cheng et al., 2016].
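A minimal numpy sketch of Eqs. (3)–(5), from the per-field embedding lookup to y_DNN. The field sizes, layer widths, and initialization here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4                        # shared embedding size for every field
field_sizes = [3, 5, 2]      # vocabulary size per field (hypothetical)
m = len(field_sizes)

# One latent matrix per field; its rows double as the FM latent vectors V_i.
V = [rng.normal(scale=0.1, size=(n, k)) for n in field_sizes]

def embedding_layer(field_indices):
    """Eq. (3): each field vector is one-hot, so its embedding is a
    plain row lookup; a(0) = [e_1, ..., e_m], concatenated here."""
    return np.concatenate([V[f][i] for f, i in enumerate(field_indices)])

def deep_component(a0, weights, biases):
    """Eq. (4) for the hidden layers (relu), then Eq. (5): a sigmoid
    unit on top of the last hidden layer produces y_DNN."""
    a = a0
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(W @ a + b, 0.0)          # a(l+1) = relu(W a + b)
    z = (weights[-1] @ a + biases[-1])[0]       # final scalar logit
    return 1.0 / (1.0 + np.exp(-z))

a0 = embedding_layer([1, 4, 0])                 # shape (m * k,) = (12,)
shapes = [(8, m * k), (8, 8), (1, 8)]           # two hidden layers of width 8
Ws = [rng.normal(scale=0.1, size=s) for s in shapes]
bs = [np.zeros(s[0]) for s in shapes]
y_dnn = deep_component(a0, Ws, bs)
assert 0.0 < y_dnn < 1.0
```

Because the rows of V are the same vectors the FM component consumes, this sketch also illustrates the shared-embedding design noted above.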
2.2 Relationship with the Other Neural Networks

Inspired by the enormous success of deep learning in various applications, several deep models for CTR prediction have been developed recently. This section compares the proposed DeepFM with existing deep models for CTR prediction.

FNN: As Figure 5 (left) shows, FNN is an FM-initialized feed-forward neural network [Zhang et al., 2016]. The FM pre-training strategy results in two limitations: 1) the embedding parameters might be over-affected by FM; 2) the efficiency is reduced by the overhead introduced by the pre-training stage. In addition, FNN captures only high-order feature interactions. In contrast, DeepFM needs no pre-training and learns both high- and low-order feature interactions.

PNN: For the purpose of capturing high-order feature interactions, PNN imposes a product layer between the embedding layer and the first hidden layer [Qu et al., 2016]. According to the type of product operation, there are three variants: IPNN, OPNN, and PNN∗, where IPNN is based on the inner product of vectors, OPNN is based on the outer product, and PNN∗ is based on both inner and outer products.
Figure 5: The architectures of existing deep models for CTR prediction: FNN, PNN, Wide & Deep Model.

Table 1: Comparison of deep models for CTR prediction

             No Pre-training  High-order Features  Low-order Features  No Feature Engineering
FNN          ×                √                    ×                   √
PNN          √                √                    ×                   √
Wide & Deep  √                √                    √                   ×
DeepFM       √                √                    √                   √

3 Experiments

In this section, we compare our proposed DeepFM and the other state-of-the-art models empirically. The evaluation result indicates that our proposed DeepFM is more effective than any other state-of-the-art model and that the efficiency of DeepFM is comparable to the best among the others.
To make the computation more efficient, the authors proposed approximate computations of both the inner and outer products: 1) the inner product is approximately computed by eliminating some neurons; 2) the outer product is approximately computed by compressing m k-dimensional feature vectors into one k-dimensional vector. However, we find that the outer product is less reliable than the inner product, since the approximated computation of the outer product loses much information, which makes the result unstable. Although the inner product is more reliable, it still suffers from high computational complexity, because the output of the product layer is connected to all neurons of the first hidden layer. Different from PNN, the output of the product layer in DeepFM only connects to the final output layer (one neuron). Like FNN, all PNNs ignore low-order feature interactions.
3.1 Experiment Setup

Datasets
We evaluate the effectiveness and efficiency of our proposed DeepFM on the following two datasets.
1) Criteo Dataset: The Criteo dataset5 includes 45 million users' click records. There are 13 continuous features and 26 categorical ones. We split the dataset randomly into two parts: 90% for training and the remaining 10% for testing.
2) Company∗ Dataset: In order to verify the performance of DeepFM in real industrial CTR prediction, we conduct an experiment on the Company∗ dataset. We collect 7 consecutive days of users' click records from the game center of the Company∗ App Store for training, and the next 1 day for testing. There are around 1 billion records in the whole collected dataset. In this dataset, there are app features (e.g., identification, category, etc.), user features (e.g., the user's downloaded apps, etc.), and context features (e.g., operation time, etc.).
Wide & Deep: Wide & Deep (Figure 5 (right)) is proposed by Google to model low- and high-order feature interactions simultaneously. As shown in [Cheng et al., 2016], there is a need for expert feature engineering on the input to the "wide" part (for instance, the cross-product of users' installed apps and impression apps in app recommendation). In contrast, DeepFM needs no such expert knowledge to handle the input, learning directly from the raw input features.

A straightforward extension to this model is replacing LR by FM (we also evaluate this extension in Section 3). This extension is similar to DeepFM, but DeepFM shares the feature embedding between the FM and deep component. The sharing strategy of feature embedding influences (through back-propagation) the feature representation by both low- and high-order feature interactions, which models the representation more precisely.

Evaluation Metrics
We use two evaluation metrics in our experiments: AUC (Area Under ROC) and Logloss (cross entropy).
Summarizations: To summarize, the relationship between DeepFM and the other deep models in four aspects is presented in Table 1. As can be seen, DeepFM is the only model that requires no pre-training and no feature engineering, and captures both low- and high-order feature interactions.

Model Comparison
We compare 9 models in our experiments: LR, FM, FNN, PNN (three variants), Wide & Deep, and DeepFM. In the Wide & Deep model, for the purpose of eliminating the feature engineering effort, we also adapt the original Wide & Deep model by replacing LR with FM as the wide part. To distinguish these two variants of Wide & Deep, we name them LR & DNN and FM & DNN, respectively.6

Parameter Settings
To evaluate the models on the Criteo dataset, we follow the parameter settings in [Qu et al., 2016] for FNN and PNN: (1)

5http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/
dropout: 0.5; (2) network structure: 400-400-400; (3) optimizer: Adam; (4) activation function: tanh for IPNN, relu for the other deep models. To be fair, our proposed DeepFM uses the same settings. The optimizers of LR and FM are FTRL and Adam respectively, and the latent dimension of FM is 10. To achieve the best performance for each individual model on the Company∗ dataset, we conducted a careful parameter study, which is discussed in Section 3.3.

6We do not use the Wide & Deep API released by Google, as the efficiency of that implementation is very low. We implement Wide & Deep ourselves, simplifying it with a shared optimizer for both the deep and wide parts.

3.2 Performance Evaluation
In this section, we evaluate the models listed in Section 3.1 on the two datasets to compare their effectiveness and efficiency.
Efficiency Comparison
The efficiency of deep learning models is important to real-world applications. We compare the efficiency of different models on the Criteo dataset by the ratio |training time of deep CTR model| / |training time of LR|. The results are shown in Figure 6, including the tests on CPU (left) and GPU (right), where we have the following observations: 1) the pre-training of FNN makes it less efficient; 2) although the speed-up of IPNN and PNN∗ on GPU is higher than that of the other models, they are still computationally expensive because of the inefficient inner product operations; 3) DeepFM is almost the most efficient model in both tests.
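The ratio above is a plain quotient; as a concrete illustration (the timings here are hypothetical placeholders, not the measured values behind Figure 6):

```python
def efficiency_ratio(deep_train_seconds, lr_train_seconds):
    """Figure 6's metric: |training time of deep CTR model| /
    |training time of LR| on the same hardware and dataset."""
    return deep_train_seconds / lr_train_seconds

# Hypothetical: a deep model trained in 540 s vs LR in 300 s.
print(efficiency_ratio(540.0, 300.0))  # -> 1.8
```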
Figure 6: Time comparison.

Effectiveness Comparison
The performance for CTR prediction of different models on the Criteo dataset and Company∗ dataset is shown in Table 2, where we have the following observations:
• Learning feature interactions improves the performance of a CTR prediction model. This observation follows from the fact that LR (the only model that does not consider feature interactions) performs worse than the other models. As the best model, DeepFM outperforms LR by 0.86% and 4.18% in terms of AUC (1.15% and 5.60% in terms of Logloss) on the Company∗ and Criteo datasets, respectively.
• Learning high- and low-order feature interactions simultaneously and properly improves the performance of a CTR prediction model. DeepFM outperforms the models that learn only low-order feature interactions (namely, FM) or only high-order feature interactions (namely, FNN, IPNN, OPNN, PNN∗). Compared to the second best model, DeepFM achieves more than 0.37% and 0.25% in terms of AUC (0.42% and 0.29% in terms of Logloss) on the Company∗ and Criteo datasets.
• Learning high- and low-order feature interactions simultaneously while sharing the same feature embedding for both improves the performance of a CTR prediction model.
1703.04247#25
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
cs.IR
20170313
20170313
DeepFM outperforms the models that learn high- and low-order feature interactions using separate feature embeddings (namely, LR & DNN and FM & DNN). Compared to these two models, DeepFM achieves more than 0.48% and 0.33% in terms of AUC (0.61% and 0.66% in terms of Logloss) on the Company∗ and Criteo datasets.

# Table 2: Performance on CTR prediction.

| Model    | Company∗ AUC | Company∗ LogLoss | Criteo AUC | Criteo LogLoss |
|----------|--------------|------------------|------------|----------------|
| LR       | 0.8640       | 0.02648          | 0.7686     | 0.47762        |
| FM       | 0.8678       | 0.02633          | 0.7892     | 0.46077        |
| FNN      | 0.8683       | 0.02629          | 0.7963     | 0.45738        |
| IPNN     | 0.8664       | 0.02637          | 0.7972     | 0.45323        |
| OPNN     | 0.8658       | 0.02641          | 0.7982     | 0.45256        |
| PNN∗     | 0.8672       | 0.02636          | 0.7987     | 0.45214        |
| LR & DNN | 0.8673       | 0.02634          | 0.7981     | 0.46772        |
| FM & DNN | 0.8661       | 0.02640          | 0.7850     | 0.45382        |
| DeepFM   | 0.8715       | 0.02618          | 0.8007     | 0.45083        |
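For concreteness, the two evaluation metrics in Table 2 can be computed as in the following minimal pure-Python sketch; the toy click labels and predicted CTRs are made up for illustration, not taken from the paper's data:

```python
import math

def auc(y_true, y_score):
    """AUC via its rank statistic: the probability that a random
    positive example is scored above a random negative one
    (ties count as 1/2)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def logloss(y_true, y_pred, eps=1e-15):
    """Average negative log-likelihood of the predicted CTRs."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(y_true)

# Toy data: 1 = click, 0 = no click; scores are predicted CTRs.
clicks = [1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.4, 0.7, 0.1]
print(auc(clicks, scores))                # → 1.0 (every click outranks every non-click)
print(round(logloss(clicks, scores), 4))  # → 0.2603
```

Note that AUC only depends on the ranking of scores, while Logloss also rewards well-calibrated probabilities, which is why the paper reports both.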
Overall, our proposed DeepFM model beats the competitors by more than 0.37% and 0.42% in terms of AUC and Logloss on the Company∗ dataset, respectively. In fact, a small improvement in offline AUC evaluation is likely to lead to a significant increase in online CTR. As reported in [Cheng et al., 2016], compared with LR, Wide & Deep improves AUC by 0.275% (offline) and the improvement of online CTR is 3.9%. The daily turnover of Company∗’s App Store is millions of dollars, so even a lift of several percent in CTR brings extra millions of dollars each year.

3.3 Hyper-Parameter Study
We study the impact of different hyper-parameters of the deep models on the Company∗ dataset, in the following order: 1) activation functions; 2) dropout rate; 3) number of neurons per layer; 4) number of hidden layers; 5) network shape.
Activation Function
According to [Qu et al., 2016], relu and tanh are more suitable for deep models than sigmoid. In this paper, we compare the performance of the deep models when applying relu and tanh. As shown in Figure 7, relu is more appropriate than tanh for all the deep models, except for IPNN. A possible reason is that relu induces sparsity.

Figure 7: AUC and Logloss comparison of activation functions.
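The sparsity argument can be checked numerically: with zero-mean pre-activations, relu zeroes out roughly half of the units while tanh leaves essentially all of them active. A small sketch (the standard-normal pre-activations are an assumption for illustration, not the paper's setup):

```python
import math
import random

random.seed(0)
relu = lambda x: max(0.0, x)

# Pre-activations drawn from a zero-mean Gaussian as a stand-in
# for one hidden layer's inputs.
z = [random.gauss(0.0, 1.0) for _ in range(10_000)]

relu_zero_frac = sum(relu(v) == 0.0 for v in z) / len(z)
tanh_zero_frac = sum(abs(math.tanh(v)) < 1e-12 for v in z) / len(z)

print(round(relu_zero_frac, 2))  # roughly 0.5: relu silences about half the units
print(tanh_zero_frac)            # 0.0: tanh keeps every unit active
```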
Dropout
Dropout [Srivastava et al., 2014] refers to the probability that a neuron is kept in the network. Dropout is a regularization technique that trades off the precision and the complexity of the neural network. We set the dropout rate to 1.0, 0.9, 0.8, 0.7, 0.6, and 0.5. As shown in Figure 8, all the models are able to reach their own best performance when the dropout rate is properly set (from 0.6 to 0.9). The result shows that adding reasonable randomness to the model can strengthen its robustness.
Figure 8: AUC and Logloss comparison of dropout.
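The dropout values above are keep probabilities, i.e. the chance that a neuron survives. A minimal sketch of the standard "inverted dropout" formulation (our own illustration, not the exact training code of the paper):

```python
import random

def dropout(activations, keep_prob, training=True):
    """Inverted dropout: at train time keep each unit with
    probability keep_prob and rescale the survivors by
    1/keep_prob so the expected activation is unchanged;
    at test time the layer is a pass-through."""
    if not training or keep_prob >= 1.0:
        return list(activations)
    return [a / keep_prob if random.random() < keep_prob else 0.0
            for a in activations]

random.seed(42)
h = [0.5] * 100_000                      # a layer of identical activations
out = dropout(h, keep_prob=0.9)
kept = sum(a != 0.0 for a in out) / len(out)
mean = sum(out) / len(out)
print(round(kept, 2))   # close to the keep probability 0.9
print(round(mean, 2))   # close to the original mean 0.5
```

The rescaling by 1/keep_prob is what makes a dropout of 1.0 (keep everything) and the test-time network consistent with the dropped training-time network.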
Number of Neurons per Layer
When other factors remain the same, increasing the number of neurons per layer introduces complexity. As we can observe from Figure 9, increasing the number of neurons does not always bring benefit. For instance, DeepFM performs stably when the number of neurons per layer is increased from 400 to 800; even worse, OPNN performs worse when we increase the number of neurons from 400 to 800. This is because an over-complicated model is easy to overfit. On our dataset, 200 or 400 neurons per layer is a good choice.
Figure 9: AUC and Logloss comparison of number of neurons.

Number of Hidden Layers
As presented in Figure 10, increasing the number of hidden layers improves the performance of the models at the beginning; however, their performance degrades if the number of hidden layers keeps increasing. This phenomenon is also caused by overfitting.
Figure 10: AUC and Logloss comparison of number of layers.
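One way to see why wider and deeper networks overfit more easily is to count their parameters, which grow quadratically with width and linearly with depth. A small sketch; the 1000-dimensional input is a made-up placeholder for the concatenated field embeddings, and 200 neurons per layer follows the sweep above:

```python
def mlp_param_count(input_dim, hidden_sizes, output_dim=1):
    """Total weights + biases of a plain fully connected network."""
    sizes = [input_dim] + list(hidden_sizes) + [output_dim]
    return sum(fan_in * fan_out + fan_out
               for fan_in, fan_out in zip(sizes, sizes[1:]))

# Capacity grows steadily with depth, one reason performance
# eventually degrades as layers keep being added.
for depth in (1, 3, 5, 7):
    print(depth, mlp_param_count(1000, [200] * depth))
```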
Network Shape
We test four different network shapes: constant, increasing, decreasing, and diamond. When we change the network shape, we fix the number of hidden layers and the total number of neurons. For instance, when the number of hidden layers is 3 and the total number of neurons is 600, the four different shapes are: constant (200-200-200), increasing (100-200-300), decreasing (300-200-100), and diamond (150-300-150). As we can see from Figure 11, the "constant" network shape is empirically better than the other three options, which is consistent with previous studies [Larochelle et al., 2009].
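The four shapes in the running example (3 hidden layers, 600 neurons in total) can be written down as follows; the splits are exactly the ones quoted above, hard-coded for illustration:

```python
def layer_sizes(shape):
    """Hidden-layer widths for the paper's running example:
    3 hidden layers, 600 neurons in total."""
    shapes = {
        "constant":   [200, 200, 200],
        "increasing": [100, 200, 300],
        "decreasing": [300, 200, 100],
        "diamond":    [150, 300, 150],
    }
    return shapes[shape]

for name in ("constant", "increasing", "decreasing", "diamond"):
    widths = layer_sizes(name)
    print(name, widths, sum(widths))  # each shape spends the same 600-neuron budget
```

Holding the depth and the total neuron budget fixed is what makes the comparison fair: only the distribution of capacity across layers differs.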
# Figure 11: AUC and Logloss comparison of network shape.

# 4 Related Work
In this paper, a new deep neural network is proposed for CTR prediction. The most related domains are CTR prediction and deep learning in recommender systems. In this section, we discuss related work in these two domains.

CTR prediction plays an important role in recommender systems [Richardson et al., 2007; Juan et al., 2016; McMahan et al., 2013]. Besides generalized linear models and FM, a few other models have been proposed for CTR prediction, such as tree-based models [He et al., 2014], tensor-based models [Rendle and Schmidt-Thieme, 2010], support vector machines [Chang et al., 2010], and bayesian models [Graepel et al., 2010].
The other related domain is deep learning in recommender systems. In Section 1 and Section 2.2, several deep learning models for CTR prediction are already mentioned, thus we do not discuss them here. Several deep learning models have been proposed for recommendation tasks other than CTR prediction (e.g., [Covington et al., 2016; Salakhutdinov et al., 2007; van den Oord et al., 2013; Wu et al., 2016; Zheng et al., 2016; Wu et al., 2017; Zheng et al., 2017]). [Salakhutdinov et al., 2007; Sedhain et al., 2015; Wang et al., 2015] propose to improve Collaborative Filtering via deep learning. The authors of [Wang and Wang, 2014; van den Oord et al., 2013] extract content features by deep learning to improve the performance of music recommendation. [Chen et al., 2016] devises a deep learning network to consider both image features and basic features of display advertising. [Covington et al., 2016] develops a two-stage deep learning framework for YouTube video recommendation.
# 5 Conclusions
In this paper, we proposed DeepFM, a factorization-machine based neural network for CTR prediction, to overcome the shortcomings of the state-of-the-art models and to achieve better performance. DeepFM trains a deep component and an FM component jointly. It gains performance improvements from these advantages: 1) it does not need any pre-training; 2) it learns both high- and low-order feature interactions; 3) it introduces a sharing strategy of feature embedding to avoid feature engineering. We conducted extensive experiments on two real-world datasets (the Criteo dataset and a commercial App Store dataset) to compare the effectiveness and efficiency of DeepFM and the state-of-the-art models. Our experimental results demonstrate that 1) DeepFM outperforms the state-of-the-art models in terms of AUC and Logloss on both datasets; 2) the efficiency of DeepFM is comparable to that of the most efficient deep model among the state-of-the-art. There are two interesting directions for future study. One is exploring strategies (such as introducing pooling layers) to strengthen the ability to learn the most useful high-order feature interactions. The other is to train DeepFM on a GPU cluster for large-scale problems.
# References
[Boulanger-Lewandowski et al., 2013] Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Audio chord recognition with recurrent neural networks. In ISMIR, pages 335–340, 2013.
[Chang et al., 2010] Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard, and Chih-Jen Lin. Training and testing low-degree polynomial data mappings via linear SVM. JMLR, 11:1471–1490, 2010.
[Chen et al., 2016] Junxuan Chen, Baigui Sun, Hao Li, Hongtao Lu, and Xian-Sheng Hua. Deep CTR prediction in display advertising. In MM, 2016.
[Cheng et al., 2016] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. Wide & deep learning for recommender systems. CoRR, abs/1606.07792, 2016.
[Covington et al., 2016] Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In RecSys, pages 191–198, 2016.
[Graepel et al., 2010] Thore Graepel, Joaquin Quiñonero Candela, Thomas Borchert, and Ralf Herbrich. Web-scale bayesian click-through rate prediction for sponsored search advertising in microsoft’s bing search engine. In ICML, pages 13–20, 2010.
[He et al., 2014] Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers, and Joaquin Quiñonero Candela. Practical lessons from predicting clicks on ads at facebook. In ADKDD, pages 5:1–5:9, 2014.
[He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
[Juan et al., 2016] Yu-Chin Juan, Yong Zhuang, Wei-Sheng Chin, and Chih-Jen Lin. Field-aware factorization machines for CTR prediction. In RecSys, pages 43–50, 2016.
[Larochelle et al., 2009] Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, and Pascal Lamblin. Exploring strategies for training deep neural networks. JMLR, 10:1–40, 2009.
[Liu et al., 2015] Qiang Liu, Feng Yu, Shu Wu, and Liang Wang. A convolutional click prediction model. In CIKM, 2015.
[McMahan et al., 2013] H. Brendan McMahan, Gary Holt, David Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, and Jeremy Kubica. Ad click prediction: a view from the trenches. In KDD, 2013.
[Qu et al., 2016] Yanru Qu, Han Cai, Kan Ren, Weinan Zhang, Yong Yu, Ying Wen, and Jun Wang. Product-based neural networks for user response prediction. CoRR, abs/1611.00144, 2016.
[Rendle and Schmidt-Thieme, 2010] Steffen Rendle and Lars Schmidt-Thieme. Pairwise interaction tensor factorization for personalized tag recommendation. In WSDM, pages 81–90, 2010.
[Rendle, 2010] Steffen Rendle. Factorization machines. In ICDM, 2010.
[Richardson et al., 2007] Matthew Richardson, Ewa Dominowska, and Robert Ragno. Predicting clicks: estimating the click-through rate for new ads. In WWW, pages 521–530, 2007.
[Salakhutdinov et al., 2007] Ruslan Salakhutdinov, Andriy Mnih, and Geoffrey E. Hinton. Restricted boltzmann machines for collaborative filtering. In ICML, pages 791–798, 2007.
[Sedhain et al., 2015] Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Lexing Xie. Autorec: Autoencoders meet collaborative filtering. In WWW, pages 111–112, 2015.
[Srivastava et al., 2014] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014. [van den Oord et al., 2013] Aäron van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In NIPS, pages 2643–2651, 2013. [Wang and Wang, 2014] Xinxi Wang and Ye Wang. Improving content-based and hybrid music recommendation using deep learning. In ACM MM, pages 627–636, 2014. [Wang et al., 2015] Hao Wang, Naiyan Wang, and Dit-Yan Yeung. Collaborative deep learning for recommender systems. In ACM SIGKDD, pages 1235–1244, 2015. [Wu et al., 2016] Yao Wu, Christopher DuBois, Alice X. Zheng, and Martin Ester. Collaborative denoising auto-encoders for top-n recommender systems. In ACM WSDM, pages 153–162, 2016.
1703.04247#42
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.04247
43
[Wu et al., 2017] Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J. Smola, and How Jing. Recurrent recommender networks. In WSDM, pages 495–503, 2017. [Zhang et al., 2014] Yuyu Zhang, Hanjun Dai, Chang Xu, Jun Feng, Taifeng Wang, Jiang Bian, Bin Wang, and Tie-Yan Liu. Sequential click prediction for sponsored search with recurrent neural networks. In AAAI, 2014. [Zhang et al., 2016] Weinan Zhang, Tianming Du, and Jun Wang. Deep learning over multi-field categorical data - A case study on user response prediction. In ECIR, 2016. [Zheng et al., 2016] Yin Zheng, Yu-Jin Zhang, and Hugo Larochelle. A deep and autoregressive approach for topic modeling of multimodal data. IEEE Trans. Pattern Anal. Mach. Intell., 38(6):1056–1069, 2016. [Zheng et al., 2017] Lei Zheng, Vahid Noroozi, and Philip S. Yu. Joint deep modeling of users and items using reviews for recommendation. In WSDM, pages 425–434, 2017.
1703.04247#43
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide \& Deep model from Google, DeepFM has a shared input to its "wide" and "deep" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.
http://arxiv.org/pdf/1703.04247
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, Xiuqiang He
cs.IR, cs.CL
null
null
cs.IR
20170313
20170313
[]
1703.03664
0
# Parallel Multiscale Autoregressive Density Estimation # Scott Reed 1 Aäron van den Oord 1 Nal Kalchbrenner 1 Sergio Gómez Colmenarejo 1 Ziyu Wang 1 Dan Belov 1 Nando de Freitas 1 # Abstract PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512 × 512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
1703.03664#0
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
1
“A yellow bird with a black head, orange eyes and an orange bill.” Figure 1. Samples from our model at resolutions from 4 × 4 to 256 × 256, conditioned on text and bird part locations in the CUB data set. See Fig. 4 and the supplement for more examples. # 1. Introduction Many autoregressive image models factorize the joint distribution of images into per-pixel factors: p(x^{1:T}) = ∏_{t=1}^{T} p(x^t | x^{1:t−1}) (1). Caching activations can reduce the computation per pixel, as is the case for WaveNet (Oord et al., 2016; Ramachandran et al., 2017). However, even with this optimization, generation is still in serial order by pixel. Ideally we would generate multiple pixels in parallel, which could greatly accelerate sampling. In the autoregressive framework this only works if the pixels are modeled as independent. Thus we need a way to judiciously break weak dependencies among pixels; for example immediately neighboring pixels should not be modeled as independent since they tend to be highly correlated.
1703.03664#1
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
2
For example PixelCNN (van den Oord et al., 2016b) uses a deep convolutional network with carefully designed filter masking to preserve causal structure, so that all factors in equation 1 can be learned in parallel for a given image. However, a remaining difficulty is that due to the learned causal structure, inference proceeds sequentially pixel-by-pixel in raster order. In the naive case, this requires a full network evaluation per pixel. Caching hidden unit activations can be used to reduce the amount of computation per pixel, as in the 1D case of WaveNet. Multiscale image generation provides one such way to break weak dependencies. In particular, we can model certain groups of pixels as conditionally independent given a lower resolution image and various types of context information, such as preceding frames in a video. The basic idea is obvious, but nontrivial design problems stand between the idea and a workable implementation. First, what is the right way to transmit global information from a low-resolution image to each generated pixel of the high-resolution image? Second, which pixels can we generate in parallel? And given that choice, how can we avoid border artifacts when merging sets of pixels that were generated in parallel, blind to one another? 1DeepMind. Correspondence to: Scott Reed <[email protected]>.
1703.03664#2
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
3
In this work we show how a very substantial portion of the spatial dependencies in PixelCNN can be cut, with only modest degradation in performance. Our formulation allows sampling in O(log N) time for N pixels, instead of O(N) as in the original PixelCNN, resulting in orders of magnitude speedup in practice. In the case of video, in which we have access to high-resolution previous frames, we can even sample in O(1) time, with much better performance than comparably-fast baselines. GANs have been used in conditional image generation schemes such as text and spatial structure to image (Mansimov et al., 2015; Reed et al., 2016b;a; Wang & Gupta, 2016). The addition of multiscale structure has also been shown to be useful in adversarial networks. Denton et al. (2015) used a Laplacian pyramid to generate images in a coarse-to-fine manner. Zhang et al. (2016) composed a low-resolution and high-resolution text-conditional GAN, yielding higher quality 256 × 256 bird and flower images.
1703.03664#3
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
4
At a high level, the proposed approach can be viewed as a way to merge per-pixel factors in equation 1. If we merge the factors for, e.g. x_i and x_j, then that dependency is “cut”, so the model becomes slightly less expressive. However, we get the benefit of now being able to sample x_i and x_j in parallel. If we divide the N pixels into G groups of T pixels each, the joint distribution can be written as a product of the corresponding G factors: p(x^{1:G}) = ∏_{g=1}^{G} p(x^g | x^{1:g−1}) (2). Above we assumed that each of the G groups contains exactly T pixels, but in practice the number can vary. In this work, we form pixel groups from successively higher-resolution views of an image, arranged into a sub-sampling pyramid, such that G ∈ O(log N). Generator networks can be combined with a trained model, such as an image classifier or captioning network, to generate high-resolution images via optimization and sampling procedures (Nguyen et al., 2016). Wu et al. (2017) state that it is difficult to quantify GAN performance, and propose Monte Carlo methods to approximate the log-likelihood of GANs on MNIST images.
1703.03664#4
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
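The grouped factorization in the chunk above, p(x^{1:G}) = ∏_g p(x^g | x^{1:g−1}) with G ∈ O(log N), can be sketched in a few lines of Python. This is an illustrative assumption-laden sketch, not the paper's code: `sample_group` is a hypothetical stand-in for one upscaling-network evaluation, and the group count assumes three new corner groups per resolution doubling plus one base-resolution group.

```python
# Sketch of sampling under the grouped factorization: groups are
# sampled sequentially, but all pixels inside a group come out in
# parallel, so the sequential cost is G rather than N.
import math

def num_groups(n_pixels, base=4 * 4, groups_per_scale=3):
    """G grows as O(log N): each doubling of side length (4x pixels)
    adds a fixed number of corner groups on top of the base image."""
    scales = int(math.log2(n_pixels / base)) // 2
    return 1 + groups_per_scale * scales

def sample_image(n_pixels, sample_group):
    """sample_group(previous_groups) -> pixels of the next group."""
    groups = []
    for _ in range(num_groups(n_pixels)):
        groups.append(sample_group(groups))  # parallel within a group
    return groups

# A 256x256 image from a 4x4 base needs 1 + 3*6 = 19 sequential
# steps instead of 65536 per-pixel evaluations.
print(num_groups(256 * 256))  # -> 19
```

The same counting shows why the speedup compounds: each extra doubling of resolution adds only a constant number of sequential steps.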
1703.03664
5
Both auto-regressive and non auto-regressive deep networks have recently been applied successfully to image super-resolution. Shi et al. (2016) developed a sub-pixel convolutional network well-suited to this problem. Dahl et al. (2017) use a PixelCNN as a prior for image super-resolution with a convolutional neural network. Johnson et al. (2016) developed a perceptual loss function useful for both style transfer and super-resolution. GAN variants have also been successful in this domain (Ledig et al., 2016; Sønderby et al., 2017). In section 3 we describe this group structure implemented as a deep convolutional network. In section 4 we show that the model excels in density estimation and can produce quality high-resolution samples at high speed. # 2. Related work Deep neural autoregressive models have been applied to image generation for many years, showing promise as a tractable yet expressive density model (Larochelle & Murray, 2011; Uria et al., 2013). Autoregressive LSTMs have been shown to produce state-of-the-art performance in density estimation on large-scale datasets such as ImageNet (Theis & Bethge, 2015; van den Oord et al., 2016a).
1703.03664#5
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
6
Causally-structured convolutional networks such as PixelCNN (van den Oord et al., 2016b) and WaveNet (Oord et al., 2016) improved the speed and scalability of training. These led to improved autoregressive models for video generation (Kalchbrenner et al., 2016b) and machine translation (Kalchbrenner et al., 2016a). Several other deep, tractable density models have recently been developed. Real NVP (Dinh et al., 2016) learns a mapping from images to a simple noise distribution, which is by construction trivially invertible. It is built from smaller invertible blocks called coupling layers whose Jacobian is lower-triangular, and also has a multiscale structure. Inverse Autoregressive Flows (Kingma & Salimans, 2016) use autoregressive structures in the latent space to learn more flexible posteriors for variational auto-encoders. Autoregressive models have also been combined with VAEs as decoder models (Gulrajani et al., 2016).
1703.03664#6
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
7
The original PixelRNN paper (van den Oord et al., 2016a) actually included a multiscale autoregressive version, in which PixelRNNs or PixelCNNs were trained at multiple resolutions. The network producing a given resolution image was conditioned on the image at the next lower resolution. This work is similarly motivated by the usefulness of multiscale image structure (and the very long history of coarse-to-fine modeling). Non-autoregressive convolutional generator networks have been successful and widely adopted for image generation as well. Instead of maximizing likelihood, Generative Adversarial Networks (GANs) train a generator network to fool a discriminator network adversary (Goodfellow et al., 2014). These networks have been used in a wide variety of conditional image generation schemes. Our novel contributions in this work are (1) asymptotically and empirically faster inference by modeling conditional independence structure, (2) scaling to much higher resolution, (3) evaluating the model on a diverse set of challenging benchmarks including class-, text- and structure-conditional image generation and video generation.
1703.03664#7
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
8
Figure 2. Example pixel grouping and ordering for a 4 × 4 image. The upper-left corners form group 1, the upper-right group 2, and so on. For clarity we only use arrows to indicate immediately-neighboring dependencies, but note that all pixels in preceding groups can be used to predict all pixels in a given group. For example all pixels in group 2 can be used to predict pixels in group 4. In our image experiments pixels in group 1 originate from a lower-resolution image. For video, they are generated given the previous frames.
1703.03664#8
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
9
Figure 3. A simple form of causal upscaling network, mapping from a K × K image to K × 2K. The same procedure can be applied in the vertical direction to produce a 2K × 2K image. In reference to figure 2, the leftmost images could be considered “group 1” pixels; i.e. the upper-left corners. The network shown here produces “group 2” pixels; i.e. the upper-right corners, completing the top-corners half of the image. (A) In the simplest version, a deep convolutional network (in our case ResNet) directly produces the right image from the left image, and merges column-wise. (B) A more sophisticated version extracts features from a convolutional net, splits the feature map into spatially contiguous blocks, and feeds these in parallel through a shallow PixelCNN. The result is then merged as in (A). # 3. Model
1703.03664#9
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
1703.03664
10
# 3. Model The main design principle that we follow in building the model is a coarse-to-fine ordering of pixels. Successively higher-resolution frames are generated conditioned on the previous resolution (see for example Figure 1). Pixels are grouped so as to exploit spatial locality at each resolution, which we describe in detail below. Figure 2 shows how we divide an image into disjoint groups of pixels, with autoregressive structure among the groups. The key property to notice is that no two adjacent pixels of the high-resolution image are in the same group. Also, pixels can depend on other pixels below and to the right, which would have been inaccessible in the standard PixelCNN. Each group of pixels corresponds to a factor in the joint distribution of equation 2. Concretely, to create groups we tile the image with 2 × 2 blocks. The corners of these 2 × 2 blocks form the four pixel groups at a given scale; i.e. upper-left, upper-right, lower-left, lower-right. Note that some pairs of pixels both within each block and also across blocks can still be dependent. These additional dependencies are important for capturing local textures and avoiding border artifacts. The training objective is to maximize log P(x; θ). Since the joint distribution factorizes over pixel groups and scales, the training can be trivially parallelized. # 3.1. Network architecture
1703.03664#10
Parallel Multiscale Autoregressive Density Estimation
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup - O(log N) sampling instead of O(N) - enabling the practical generation of 512x512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.
http://arxiv.org/pdf/1703.03664
Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas
cs.CV, cs.NE
null
null
cs.CV
20170310
20170310
[ { "id": "1701.05517" }, { "id": "1612.00005" }, { "id": "1612.03242" }, { "id": "1610.00527" }, { "id": "1610.10099" }, { "id": "1702.00783" }, { "id": "1609.03499" }, { "id": "1611.05013" } ]
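The 2 × 2 tiling described in the chunk above can be made concrete with a small sketch (not from the paper): each pixel of an even-sized grid is assigned to one of the four corner groups by its row/column parity, and the groups interleave back into the original image.

```python
# Sketch: split an image (list of lists) into the four interleaved
# "corner" groups formed by tiling with 2x2 blocks: group 1 =
# upper-left corners, 2 = upper-right, 3 = lower-left, 4 = lower-right.
def split_groups(img):
    return {
        1: [row[0::2] for row in img[0::2]],
        2: [row[1::2] for row in img[0::2]],
        3: [row[0::2] for row in img[1::2]],
        4: [row[1::2] for row in img[1::2]],
    }

def merge_groups(groups):
    """Interleave the four half-resolution groups back into the
    full-resolution image (the 'merge' marshalling of Figure 3)."""
    h, w = 2 * len(groups[1]), 2 * len(groups[1][0])
    img = [[None] * w for _ in range(h)]
    offsets = {1: (0, 0), 2: (0, 1), 3: (1, 0), 4: (1, 1)}
    for g, (dy, dx) in offsets.items():
        for y, row in enumerate(groups[g]):
            for x, v in enumerate(row):
                img[2 * y + dy][2 * x + dx] = v
    return img

img = [[4 * r + c for c in range(4)] for r in range(4)]
assert merge_groups(split_groups(img)) == img
```

Note the key property the chunk states: any two adjacent pixels differ in row or column parity, so they never land in the same group.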
1703.03664
11
Figure 3 shows an instantiation of one of these factors as a neural network. Similar to the case of PixelCNN, at training time losses and gradients for all of the pixels within a group can be computed in parallel. At test time, inference proceeds sequentially over pixel groups, in parallel within each group. Also as in PixelCNN, we model the color channel dependencies - i.e. green sees red, blue sees red and green - using channel masking. In the case of type-A upscaling networks (see Figure 3A), sampling each pixel group thus requires 3 network evaluations [1]. In the case of type-B upscaling, the spatial feature map for predicting a group of pixels is divided into contiguous M × M patches for input to a shallow PixelCNN (see figure 3B). This entails M^2 very small network evaluations, for each color channel. We used M = 4, and the shallow PixelCNN weights are shared across patches. [1] However, one could also use a discretized mixture of logistics as output instead of a softmax as in Salimans et al. (2017), in which case only one network evaluation is needed.
Parallel Multiscale Autoregressive Density Estimation
The division into non-overlapping patches may appear to risk border artifacts when merging. However, this does not occur, for several reasons. First, each predicted pixel is directly adjacent to several context pixels fed into the upscaling network. Second, the generated patches are not directly adjacent in the 2K × 2K output image; there is always a row or column of pixels on the border of any pair.

Note that the only learnable portions of the upscaling module are (1) the ResNet encoder of context pixels, and (2) the shallow PixelCNN weights in the case of type-B upscaling. The “merge” and “split” operations shown in Figure 3 only marshal data and are not associated with parameters.

Given the first group of pixels, the rest of the groups at a given scale can be generated autoregressively. The first group of pixels can be modeled using the same approach as detailed above, recursively, down to a base resolution at which we use a standard PixelCNN. At each scale the number of evaluations is O(1), and the resolution doubles after each upscaling, so the overall complexity is O(log N) to produce images with N pixels.

# 3.2. Conditional image modeling
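A toy accounting of the sampling cost described above (one base-resolution PixelCNN pass, then a constant number of group evaluations per resolution doubling) makes the O(log N) growth concrete. The function and its constants are illustrative assumptions, not the paper's implementation:

```python
def sampling_steps(base=4, target=256, groups_per_scale=4):
    """Count group-sampling steps for the multiscale scheme, assuming
    the stated structure: one base-resolution PixelCNN pass, then at
    each doubling the first group comes from the lower resolution and
    the remaining groups are sampled autoregressively."""
    assert target >= base and target % base == 0
    steps, res = 1, base            # one pass for the base PixelCNN
    while res < target:
        steps += groups_per_scale - 1   # remaining groups at this scale
        res *= 2
    return steps

# Doubling the side length adds only a constant number of steps,
# i.e. O(log N) overall for N pixels.
print(sampling_steps(4, 256))  # 19
```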
across 200 bird species, with 10 captions per image. As conditioning information we used a 32 × 32 spatial encoding of the 15 annotated bird part locations.

• MPII (Andriluka et al., 2014) has around 25K images of 410 human activities, with 3 captions per image. We kept only the images depicting a single person, and cropped the image centered around the person, leaving us about 14K images. We used a 32 × 32 encoding of the 17 annotated human part locations.

• MS-COCO (Lin et al., 2014) has 80K training images with 5 captions per image. As conditioning we used the 80-class segmentation scaled to 32 × 32.

• Robot Pushing (Finn et al., 2016) contains sequences of 20 frames of size 64 × 64 showing a robotic arm pushing objects in a basket. There are 50,000 training sequences and a validation set with the same objects but different arm trajectories. One test set involves a subset of the objects seen during training and another involves novel objects, both captured on an arm and camera viewpoint not seen during training.
All models for ImageNet, CUB, MPII and MS-COCO were trained using RMSprop with hyperparameter ε = 1e-8, with batch size 128 for 200K steps. The learning rate was set initially to 1e-4 and decayed to 1e-5.

Given some context information c, such as a text description, a segmentation, or previous video frames, we maximize the conditional likelihood log P(x|c; θ). Each factor in equation 2 simply adds c as an additional conditioning variable. The upscaling neural network corresponding to each factor takes c as an additional input.

For encoding text we used a character-CNN-GRU as in (Reed et al., 2016a). For spatially structured data such as segmentation masks we used a standard convolutional network. For encoding previous frames in a video we used a ConvLSTM as in (Kalchbrenner et al., 2016b).

For all of the samples we show, the queries are drawn from the validation split of the corresponding data set. That is, the captions, keypoints, segmentation masks, and low-resolution images for super-resolution have not been seen by the model during training.
When we evaluate negative log-likelihood, we only quantize pixel values to [0, ..., 255] at the target resolution, not separately at each scale. The lower-resolution images are then created by sub-sampling this quantized image.

# 4. Experiments

# 4.1. Datasets

We evaluate our model on ImageNet, Caltech-UCSD Birds (CUB), the MPII Human Pose dataset (MPII), the Microsoft Common Objects in Context dataset (MS-COCO), and the Google Robot Pushing dataset.

• For ImageNet (Deng et al., 2009), we trained a class-conditional model using the 1000 leaf node classes.

• CUB (Wah et al., 2011) contains 11,788 images

# 4.2. Text and location-conditional generation

In this section we show results for CUB, MPII and MS-COCO. For each dataset we trained type-B upscaling networks with 12 ResNet layers and 4 PixelCNN layers, with 128 hidden units per layer. The base resolution at which we train a standard PixelCNN was set to 4 × 4.
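The evaluation protocol described in this section - quantize pixel values to [0, ..., 255] only at the target resolution, then create each lower resolution by sub-sampling the quantized image - can be sketched as follows (stride-2 sub-sampling is an assumption about the exact down-sampling used):

```python
import numpy as np

def build_pyramid(img, base=4):
    """Quantize only at the target resolution, then derive every
    lower resolution by sub-sampling the quantized image, so no
    re-quantization happens at intermediate scales."""
    img = np.clip(np.round(img), 0, 255).astype(np.uint8)
    pyramid = [img]
    while pyramid[-1].shape[0] > base:
        pyramid.append(pyramid[-1][::2, ::2])   # stride-2 sub-sampling
    return pyramid[::-1]                        # lowest resolution first

img = (np.arange(32 * 32, dtype=np.float64).reshape(32, 32) % 256)
pyr = build_pyramid(img, base=4)
print([p.shape for p in pyr])  # [(4, 4), (8, 8), (16, 16), (32, 32)]
```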
To encode the captions we padded to 201 characters, then fed them into a character-level CNN with three convolutional layers, followed by a GRU and average pooling over time. Upscaling networks to 8 × 8, 16 × 16 and 32 × 32 shared a single text encoder. For higher-resolution upscaling networks we trained separate text encoders. In principle all upscalers could share an encoder, but we trained them separately to save memory and time.

Figure 4. Text-to-image bird synthesis. The leftmost column shows the entire sampling process starting by generating 4 × 4 images, followed by six upscaling steps, to produce a 256 × 256 image. The right column shows the final sampled images for several other queries. For each query the associated part keypoints and caption are shown to the left of the samples. (Example caption query: “This is a large brown bird with a bright green head, yellow bill and orange feet.”)
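The caption preprocessing described above - pad to a fixed length of 201 characters before the character-level encoder - can be sketched with a hypothetical helper; the integer byte-value vocabulary and pad id are illustrative assumptions, not the paper's exact tokenization:

```python
def encode_caption(caption, max_len=201, pad_id=0):
    """Pad (or truncate) a caption to exactly max_len characters and
    map each character to an integer id, producing fixed-length input
    for a character-level CNN-GRU encoder."""
    ids = [ord(ch) % 256 for ch in caption[:max_len]]
    ids += [pad_id] * (max_len - len(ids))
    return ids

ids = encode_caption("A white large bird with orange legs.")
print(len(ids))  # 201
```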
Figure 5. Text-to-image human synthesis. The leftmost column again shows the sampling process, and the right column shows the final frame for several more examples. We find that the samples are diverse and usually match the color and position constraints. (Example caption query: “A fisherman sitting along the edge of a creek preparing his equipment to cast.”)

For CUB and MPII, we have body part keypoints for birds and humans, respectively. We encode these into a 32 × 32 × P binary feature map, where P is the number of parts; 17 for MPII and 15 for CUB. A 1 indicates the part is visible, and a 0 indicates the part is not visible. For MS-COCO, we resize the class segmentation mask to 32 × 32 × 80.
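The keypoint conditioning just described - a 32 × 32 × P binary feature map with a 1 at each visible part's location - can be sketched as follows (the `(part, row, col, visible)` input format is an assumption for illustration):

```python
import numpy as np

def keypoint_map(keypoints, size=32, num_parts=15):
    """Encode body-part keypoints as a size x size x P binary feature
    map: 1 at a visible part's spatial location, 0 everywhere else
    (including all channels of parts marked not visible)."""
    fmap = np.zeros((size, size, num_parts), dtype=np.uint8)
    for part, r, c, visible in keypoints:
        if visible:
            fmap[r, c, part] = 1
    return fmap
```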
If the target resolution for an upscaler network is higher than 32 × 32, these conditioning features are randomly cropped along with the target image to a 32 × 32 patch. Because the network is fully convolutional, it can still generate the full resolution at test time, but we can massively save on memory and computation during training.

For all datasets, we then encode these spatial features using a 12-layer ResNet. These features are then depth-concatenated with the text encoding and resized with bilinear interpolation to the spatial size of the image.

Figure 4 shows examples of text- and keypoint-to-bird image synthesis. Figure 5 shows examples of text- and keypoint-to-human image synthesis. Figure 6 shows examples of text- and segmentation-to-image synthesis. (Figure 6 caption queries include “A young man riding on the back of a brown horse.” and “A large passenger jet taxis on an airport tarmac.”)
Figure 6. Text and segmentation-to-image synthesis. The left column shows the full sampling trajectory from 4 × 4 to 256 × 256. The caption queries are shown beneath the samples. Beneath each image we show the image masked with the largest object in each scene; i.e. only the foreground pixels in the sample are shown. More samples with all categories masked are included in the supplement.

          Model                 Train  Val   Test
CUB       PixelCNN              2.93   2.91  2.92
          Multiscale PixelCNN   2.99   2.98  2.98
MPII      PixelCNN              2.92   2.90  2.92
          Multiscale PixelCNN   2.91   3.03  3.03
MS-COCO   PixelCNN              3.08   3.07  -
          Multiscale PixelCNN   3.16   3.14  -

The motivation for training the O(T) model is that previous frames in a video provide very detailed cues for predicting the next frame, so that our pixel groups could be conditionally independent even without access to a low-resolution image. Without the need to upscale from a low-resolution image, we can produce “group 1” pixels - i.e. the upper-left corner group - directly by conditioning on previous frames. Then a constant number of network evaluations are needed to sample the next three pixel groups at the final scale.
Table 1. Text- and structure-to-image negative conditional log-likelihood in nats per sub-pixel.

Quantitatively, the Multiscale PixelCNN results are not far from those obtained using the original PixelCNN (Reed et al., 2016c), as shown in Table 1. In addition, we increased the sample resolution by 8×. Qualitatively, the sample quality appears to be on par, but with much greater realism due to the higher resolution.

# 4.3. Action-conditional video generation

The second version is our multi-step upscaler used in previous experiments, conditioned on both previous frames and robot arm state and actions. The complexity of sampling from this model is O(T log N), because at every time step the upscaling procedure must be run, taking O(log N) time.

The models were trained for 200K steps with batch size 64, using the RMSprop optimizer with centering and ε = 1e-8. The learning rate was initialized to 1e-4 and decayed by a factor of 0.3 after 83K steps and after 113K steps. For the O(T) model we used a mixture of discretized logistic outputs (Salimans et al., 2017) and for the O(T log N) model we used a softmax output.
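The stated learning-rate schedule for the video models can be written as a small piecewise-constant function (a minimal sketch of the schedule only, not the training code):

```python
def learning_rate(step, base_lr=1e-4, decay=0.3, milestones=(83_000, 113_000)):
    """Piecewise-constant schedule: start at 1e-4 and multiply by 0.3
    after 83K steps and again after 113K steps."""
    lr = base_lr
    for m in milestones:
        if step > m:
            lr *= decay
    return lr
```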
In this section we present results on Robot Pushing videos. All models were trained to perform future frame prediction conditioned on 2 starting frames and also on the robot arm actions and state, which are each 5-dimensional vectors.

We trained two versions of the model, both using type-A upscaling networks (see Fig. 3). The first is designed to sample in O(T) time, for T video frames. That is, the number of network evaluations per frame is constant with respect to the number of pixels.

Table 2 compares two variants of our model with the original VPN. Compared to the O(T) baseline - a convolutional LSTM model without spatial dependencies - our O(T) model performs dramatically better. On the validation set, in which the model needs to generalize to novel combinations of objects and arm trajectories, the O(T log N) model does much better than our O(T) model, although not as well as the original O(T N) model.
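The growth rates contrasted above can be made concrete with a toy per-frame accounting (the constants here are illustrative assumptions; only the asymptotic classes come from the text):

```python
import math

def evals_per_frame(mode, n_pixels, groups=4):
    """Rough per-frame network-evaluation counts for the three video
    model variants: fully pixel-autoregressive O(TN), constant-cost
    O(T), and the multiscale O(T log N) upscaler run at every step."""
    if mode == "O(TN)":        # one evaluation per pixel
        return n_pixels
    if mode == "O(T)":         # constant number of group evaluations
        return groups
    if mode == "O(T log N)":   # constant work per resolution doubling
        return groups * int(math.log2(n_pixels))
    raise ValueError(mode)
```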
Figure 7. Upscaling low-resolution images to 128 × 128 and 512 × 512 (panels: 8×8 → 128×128, 8×8 → 512×512, 16×16 → 128×128, 32×32 → 128×128). In each group of images, the left column is made of real images, and the right columns of samples from the model.

Figure 8. Class-conditional 128 × 128 samples from a model trained on ImageNet. (Panel classes shown include Monastery and Cardoon.)

On the test sets, we observed that the O(T) model performed as well as on the validation set, but the O(T log N) model showed a drop in performance. However, this drop does not occur due to the presence of novel objects (in fact this setting actually yields better results), but due to the novel arm and camera configuration used during testing². It appears that the O(T log N) model may have overfit to the background details and camera position of the 10 training arms, but not necessarily to the actual arm and object motions. It should be possible to overcome this effect with better regularization, and perhaps data augmentation such as mirroring and jittering frames, or simply training on data with more diverse camera positions.

²From communication with the Robot Pushing dataset author.
The supplement contains example videos generated on the validation set arm trajectories from our O(T log N) model. We also trained 64 → 128 and 128 → 256 upscalers conditioned on low-resolution and a previous high-resolution frame, so that we can produce 256 × 256 videos.

# 4.4. Class-conditional generation

To compare against other image density models, we trained our Multiscale PixelCNN on ImageNet. We used type-B upscaling networks (see Figure 3) with 12 ResNet (He et al., 2016) layers and 4 PixelCNN layers, with 256 hidden units per layer. For all PixelCNNs in the model, we used the same architecture as in (van den Oord et al., 2016b). We generated images with a base resolution of 8 × 8 and

Table 2. Robot Pushing results (Tr = train, Val = validation, Ts-seen / Ts-novel = test sets with seen / novel objects).

Model            Tr    Val   Ts-seen  Ts-novel
O(T) baseline    -     2.06  2.08     2.07
O(TN) VPN        -     0.62  0.64     0.64
O(T) VPN         1.03  1.04  1.04     1.04
O(T log N) VPN   0.74  0.74  1.06     0.97