| doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, nullable) | journal_ref (string, 8–194, nullable) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1706.08098 | 34 | [14] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, "Self-normalizing neural networks," arXiv preprint arXiv:1706.02515, 2017. [15] R. Duggal and A. Gupta, "P-TELU: Parametric tan hyperbolic linear unit activation for deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 974–978.
[16] B. Xu, R. Huang, and M. Li, "Revise saturated activation functions," arXiv preprint arXiv:1602.05980, 2016.
activation that can use negative values better and also has better learning property?
# REFERENCES
[17] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.
[18] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256. | 1706.08098#34 | FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks | Rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits from negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective without
exponential functions to maintain low cost computation. For being able to
easily used in various network architectures, FReLU does not rely on strict
assumptions by self-adaption. We evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performances on both plain and residual networks. | http://arxiv.org/pdf/1706.08098 | Suo Qiu, Xiangmin Xu, Bolun Cai | cs.CV | null | null | cs.CV | 20170625 | 20180129 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1605.09332"
},
{
"id": "1706.02515"
},
{
"id": "1602.05980"
},
{
"id": "1710.09967"
},
{
"id": "1511.06422"
},
{
"id": "1606.00305"
},
{
"id": "1604.04112"
}
] |
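The FReLU rows above describe an activation whose rectification point is a learnable parameter. A minimal sketch under that reading (assuming the form relu(x) + b with a learnable bias b, as the abstract's "redesigning the rectified point of ReLU as a learnable parameter" suggests; not taken verbatim from the paper):

```python
import numpy as np

def frelu(x, b):
    """Flexible ReLU as summarized above: the ReLU output is shifted by a
    learnable bias b, so the activation can emit negative values once b < 0."""
    return np.maximum(x, 0.0) + b

x = np.linspace(-3.0, 3.0, 7)
print(frelu(x, b=-0.4))  # outputs are bounded below by b, unlike plain ReLU
```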
1706.08098 | 35 | [1] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[3] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," arXiv preprint arXiv:1602.07261, 2016.
[4] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778. | 1706.08098#35 | FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks | Rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits from negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective without
exponential functions to maintain low cost computation. For being able to
easily used in various network architectures, FReLU does not rely on strict
assumptions by self-adaption. We evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performances on both plain and residual networks. | http://arxiv.org/pdf/1706.08098 | Suo Qiu, Xiangmin Xu, Bolun Cai | cs.CV | null | null | cs.CV | 20170625 | 20180129 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1605.09332"
},
{
"id": "1706.02515"
},
{
"id": "1602.05980"
},
{
"id": "1710.09967"
},
{
"id": "1511.06422"
},
{
"id": "1606.00305"
},
{
"id": "1604.04112"
}
] |
1706.08098 | 36 | [5] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807–814.
[19] D. Mishkin and J. Matas, "All you need is a good init," arXiv preprint arXiv:1511.06422, Nov. 2015.
[20] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," 2009.
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[22] S. Gross and M. Wilber, "Training and investigating residual nets," Facebook AI Research, CA. [Online]. Available: http://torch.ch/blog/2016/02/04/resnets.html, 2016. | 1706.08098#36 | FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks | Rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits from negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective without
exponential functions to maintain low cost computation. For being able to
easily used in various network architectures, FReLU does not rely on strict
assumptions by self-adaption. We evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performances on both plain and residual networks. | http://arxiv.org/pdf/1706.08098 | Suo Qiu, Xiangmin Xu, Bolun Cai | cs.CV | null | null | cs.CV | 20170625 | 20180129 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1605.09332"
},
{
"id": "1706.02515"
},
{
"id": "1602.05980"
},
{
"id": "1710.09967"
},
{
"id": "1511.06422"
},
{
"id": "1606.00305"
},
{
"id": "1604.04112"
}
] |
1706.08098 | 37 | [23] Y. LeCun, C. Cortes, and C. J. Burges, "The MNIST database of handwritten digits," http://yann.lecun.com/exdb/mnist/, 1998.
[24] M. Lin, Q. Chen, and S. Yan, "Network in network," arXiv preprint arXiv:1312.4400, 2013.
[6] X. Glorot, A. Bordes, and Y. Bengio, "Deep sparse rectifier neural networks," in AISTATS, vol. 15, no. 106, 2011, p. 275.
[25] A. Shah, E. Kadam, H. Shah, and S. Shinde, "Deep residual networks with exponential linear unit," arXiv preprint arXiv:1604.04112, 2016.
[7] A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML, vol. 30, no. 1, 2013. | 1706.08098#37 | FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks | Rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits from negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective without
exponential functions to maintain low cost computation. For being able to
easily used in various network architectures, FReLU does not rely on strict
assumptions by self-adaption. We evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performances on both plain and residual networks. | http://arxiv.org/pdf/1706.08098 | Suo Qiu, Xiangmin Xu, Bolun Cai | cs.CV | null | null | cs.CV | 20170625 | 20180129 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1605.09332"
},
{
"id": "1706.02515"
},
{
"id": "1602.05980"
},
{
"id": "1710.09967"
},
{
"id": "1511.06422"
},
{
"id": "1606.00305"
},
{
"id": "1604.04112"
}
] |
1706.08098 | 38 | [8] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.
[26] D. Mishkin, N. Sergievskiy, and J. Matas, "Systematic evaluation of convolution neural network advances on the ImageNet," Computer Vision and Image Understanding, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1077314217300814
[9] B. Xu, N. Wang, T. Chen, and M. Li, "Empirical evaluation of rectified activations in convolutional network," arXiv preprint arXiv:1505.00853, 2015.
[10] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, "Fast and accurate deep network learning by exponential linear units (ELUs)," arXiv preprint arXiv:1511.07289, 2015. | 1706.08098#38 | FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks | Rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits from negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective without
exponential functions to maintain low cost computation. For being able to
easily used in various network architectures, FReLU does not rely on strict
assumptions by self-adaption. We evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performances on both plain and residual networks. | http://arxiv.org/pdf/1706.08098 | Suo Qiu, Xiangmin Xu, Bolun Cai | cs.CV | null | null | cs.CV | 20170625 | 20180129 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1511.07289"
},
{
"id": "1605.09332"
},
{
"id": "1706.02515"
},
{
"id": "1602.05980"
},
{
"id": "1710.09967"
},
{
"id": "1511.06422"
},
{
"id": "1606.00305"
},
{
"id": "1604.04112"
}
] |
1706.07881 | 0 | arXiv:1706.07881v1 [cs.LG] 23 Jun 2017
# On Sampling Strategies for Neural Network-based Collaborative Filtering
# Ting Chen University of California, Los Angeles Los Angeles, CA 90095 [email protected]
Yizhou Sun University of California, Los Angeles Los Angeles, CA 90095 [email protected]
Yue Shi∗ Yahoo! Research Sunnyvale, CA 94089 [email protected]
Liangjie Hong Etsy Inc. Brooklyn, NY 11201 [email protected] | 1706.07881#0 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 1 | ABSTRACT Recent advances in neural networks have inspired people to design hybrid recommendation algorithms that can incorporate both (1) user-item interaction information and (2) content information including image, audio, and text. Despite their promising results, neural network-based recommendation algorithms pose extensive computational costs, making it challenging to scale and improve upon. In this paper, we propose a general neural network-based recommendation framework, which subsumes several existing state-of-the-art recommendation algorithms, and address the efficiency issue by investigating sampling strategies in the stochastic gradient descent training for the framework. We tackle this issue by first establishing a connection between the loss functions and the user-item interaction bipartite graph, where the loss function terms are defined on links while major computation burdens are located at nodes. We call this type of loss functions "graph-based" loss functions, for which varied mini-batch sampling strategies can have different computational costs. Based on the insight, three novel sampling strategies are proposed, which can significantly improve the training efficiency of the proposed framework (up to ×30 times speedup in our experiments), as well as improving the recommendation performance. Theoretical analysis is also provided for both the computational cost and the convergence. We believe the study of sampling strategies has further implications on general graph-based loss functions, and would also enable more research under the neural network-based recommendation framework. | 1706.07881#1 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 2 | ACM Reference format: Ting Chen, Yizhou Sun, Yue Shi, and Liangjie Hong. 2017. On Sampling Strategies for Neural Network-based Collaborative Filtering. In Proceedings of KDD '17, Halifax, NS, Canada, August 13-17, 2017, 14 pages. https://doi.org/10.1145/3097983.3098202
∗Now at Facebook.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. KDD '17, August 13-17, 2017, Halifax, NS, Canada. © 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery. ACM ISBN 978-1-4503-4887-4/17/08...$15.00 https://doi.org/10.1145/3097983.3098202 | 1706.07881#2 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 3 | 1 INTRODUCTION Collaborative Filtering (CF) has been one of the most effective methods in recommender systems, and methods like matrix factorization [17, 18, 27] are widely adopted. However, one of its limitations is dealing with the "cold-start" problem, where there are few or no observed interactions for new users or items, such as in news recommendation. To overcome this problem, hybrid methods are proposed to incorporate side information [7, 25, 28], or item content information [11, 31], into the recommendation algorithm. Although these methods can deal with side information to some extent, they are not effective for extracting features in complicated data, such as image, audio and text. On the contrary, deep neural networks have been shown very powerful at extracting complicated features from those data automatically [15, 19]. Hence, it is natural to combine deep learning with traditional collaborative filtering for recommendation tasks, as seen in recent studies [1, 4, 32, 37]. | 1706.07881#3 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 4 | In this work, we generalize several state-of-the-art neural network-based recommendation algorithms [1, 4, 30], and propose a more general framework that combines both collaborative filtering and deep neural networks in a unified fashion. The framework inherits the best of two worlds: (1) the power of collaborative filtering at capturing user preference via their interaction with items, and (2) that of deep neural networks at automatically extracting high-level features from content data. However, it also comes with a price. Traditional CF methods, such as sparse matrix factorization [17, 27], are usually fast to train, while deep neural networks in general are much more computationally expensive [19]. Combining these two models in a new recommendation framework can easily increase computational cost by hundreds of times, thus requiring a new design of the training algorithm to make it more efficient. | 1706.07881#4 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 5 | We tackle the computational challenges by first establishing a connection between the loss functions and the user-item interaction bipartite graph. We realize the key issue when combining CF and deep neural networks is that the loss function terms are defined over the links, and thus sampling is on links for the stochastic gradient training, while the main computational burdens are located at nodes (e.g., Convolutional Neural Network computation for the image of an item). For this type of loss functions, varied mini-batch sampling strategies can lead to different computational costs, depending on how many node computations are required in a mini-batch. The existing stochastic sampling techniques, such as IID sampling, are
inefficient, as they do not take into account the node computations that can be potentially shared across links/data points. | 1706.07881#5 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 6 | inefficient, as they do not take into account the node computations that can be potentially shared across links/data points.
Inspired by the connection established, we propose three novel sampling strategies for the general framework that can take coupled computation costs across user-item interactions into consideration. The first strategy is Stratified Sampling, which tries to amortize costly node computation by partitioning the links into different groups based on nodes (each group is called a stratum), and sampling links based on these groups. The second strategy is Negative Sharing, which is based on the observation that interaction/link computation is fast, so once a mini-batch of user-item tuples is sampled, we share the nodes for more links by creating additional negative links between nodes in the same batch. Both strategies have their pros and cons, and to keep their advantages while avoiding their weaknesses, we form the third strategy by combining the above two strategies. Theoretical analysis of computational cost and convergence is also provided.
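A minimal NumPy sketch of the two ideas just described, under the dot-product interaction used in this paper; all data, names, and group sizes are illustrative, not the authors' implementation:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Observed positive links (user, item): loss terms live on these links, while
# the expensive computation (the item encoder g) lives on the item nodes.
links = [(0, 5), (1, 5), (2, 5), (0, 7), (3, 7), (4, 9)]

# --- Stratified Sampling (by item node) ------------------------------------
# Group links into strata keyed by their item, then sample whole strata, so
# each sampled item is encoded once and shared by all of its links.
strata = defaultdict(list)
for u, v in links:
    strata[v].append((u, v))

sampled_items = rng.choice(list(strata.keys()), size=2, replace=False)
stratified_batch = [link for v in sampled_items for link in strata[v]]

# --- Negative Sharing -------------------------------------------------------
# Once a mini-batch of B positive pairs is encoded, one matrix product scores
# every user against every item in the batch; the off-diagonal entries act as
# negative links without any additional node computation.
B, d = 4, 8
f_batch = rng.normal(size=(B, d))      # f(x_u) for B sampled users
g_batch = rng.normal(size=(B, d))      # g(x_v) for their positive items
scores = f_batch @ g_batch.T           # (B, B) score matrix
pos = np.diag(scores)                  # B positive scores
neg = scores[~np.eye(B, dtype=bool)]   # B*(B-1) shared negative scores
```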
⢠We propose a general hybrid recommendation framework (Neural Network-based Collaborative Filtering) combining CF and content-based methods with deep neural networks, which generalize several state-of-the-art approaches.
⢠We establish a connection between the loss functions and the user-item interaction graph, based on which, we propose sampling strategies that can significantly improve training efficiency (up to Ã30 times faster in our experiments) as well as the recommendation performance of the proposed framework. | 1706.07881#6 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 7 | • We provide both theoretical analysis and empirical experiments to demonstrate the superiority of the proposed methods.
# 2 A GENERAL FRAMEWORK FOR NEURAL NETWORK-BASED COLLABORATIVE FILTERING
In this section, we propose a general framework for neural network-based Collaborative Filtering that incorporates both interaction and content information.
2.1 Text Recommendation Problem In this work, we use the text recommendation task [1, 4, 31, 32] as an illustrative application for the proposed framework. However, the proposed framework can be applied to more scenarios such as music and video recommendations. | 1706.07881#7 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 8 | We use x_u and x_v to denote features of user u and item v, respectively. In the text recommendation setting, we set x_u to a one-hot vector indicating u's user id (i.e. a binary vector with only one at the u-th position)1, and x_v as the text sequence, i.e. x_v = (w_1, w_2, ..., w_t). A response matrix R̂ is used to denote the historical interactions between users and articles, where r̂_uv indicates the interaction between a user u and an article v, such as "click-or-not" and "like-or-not". Furthermore, we consider R̂ as implicit feedback in this work, which means only positive interactions are provided, and non-interactions are treated as negative feedback implicitly.
1Other user profile features can be included, if available.
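A toy version of this setup and of the functional embedding framework of Figure 1 (illustrative only; the text encoder below is a stand-in mean-of-word-embeddings for the CNN/RNN item functions the framework actually allows):

```python
import numpy as np

rng = np.random.default_rng(0)
num_users, vocab_size, d = 3, 10, 8

def one_hot(uid, n=num_users):
    x = np.zeros(n)
    x[uid] = 1.0
    return x

# x_u: one-hot user ids; x_v: word-id sequences; R_hat: implicit feedback.
x_items = [[1, 4, 2], [3, 3, 7], [9, 0, 6], [5, 2, 8]]
R_hat = np.zeros((num_users, len(x_items)))
for u, v in [(0, 1), (0, 3), (1, 0), (2, 2)]:   # observed positive interactions
    R_hat[u, v] = 1.0

W_user = 0.1 * rng.normal(size=(num_users, d))  # user-id embedding table
W_word = 0.1 * rng.normal(size=(vocab_size, d)) # word embedding table

def f(uid):
    """User function f(x_u) = W^T x_u (an embedding lookup)."""
    return W_user.T @ one_hot(uid)

def g(token_ids):
    """Item function g(x_v): toy text encoder (mean of word embeddings)."""
    return W_word[np.asarray(token_ids)].mean(axis=0)

def r(uid, token_ids):
    """Score function r(u, v) = f(x_u)^T g(x_v)."""
    return float(f(uid) @ g(token_ids))

print(r(0, x_items[1]))
```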
Figure 1: The functional embedding framework. | 1706.07881#8 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 9 | r(u,v) fQ | |g t t Xu Xy
Figure 1: The functional embedding framework.
Given user/item features {x_u}, {x_v} and their historical interaction R̂, the goal is to learn a model which can rank new articles for an existing user u based on this user's interests and an article's text content.
2.2 Functional Embedding In most of existing matrix factorization techniques [17, 18, 27], each user/item ID is associated with a latent vector u or v (i.e., embedding), which can be considered as a simple linear trans- formation from the one-hot vector represented by their IDs, i.e. uu = f(xu ) = WT xu (W is the embedding/weight matrix). Al- though simple, this direct association of user/item ID with repre- sentation make it less flexible and unable to incorporate features such as text and image. | 1706.07881#9 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 10 | In order to effectively incorporate user and item features such as content information, it has been proposed to replace embedding vectors u or v with functions such as decision trees [38] and some specific neural networks [1, 4]. Generalizing the existing work, we propose to replace the original embedding vectors u and v with general differentiable functions f(·) â Rd and g(·) â Rd that take user/item features xu , xv as their inputs. Since the user/item embed- dings are the output vectors of functions, we call this approach Func- tional Embedding. After embeddings are computed, a score function r (u, v) can be defined based on these embeddings for a user/item pair (u, v), such as vector dot product r (u, v) = f(xu )T g(xv ) (used in this work), or a general neural network. The model framework is shown in Figure 1. It is easy to see that our framework is very general, as it does not explicitly specify the feature extraction func- tions, as long as the functions are differentiable. In practice, these function can be specified with neural networks such as CNN or RNN, for extracting high-level information from image, | 1706.07881#10 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 12 | For simplicity, we will denote the output of f(xu ) and g(xv ) by fu and gv , which are the embedding vectors for user u and item v.
2.3 Loss Functions for Implicit Feedback In many real-world applications, users only provide positive signals according to their preferences, while negative signals are usually implicit. This is usually referred as âimplicit feedbackâ [13, 23, 26].
On Sampling Strategies for Neural Network-based Collaborative Filtering
# Table 1: Examples of loss functions for recommendation.
Pointwise loss
SG-loss [22]: $-\sum_{(u,v)\in\mathcal{D}} \big( \log\sigma(\mathbf{f}_u^\top\mathbf{g}_v) + \lambda\,\mathbb{E}_{v'\sim P_n} \log\sigma(-\mathbf{f}_u^\top\mathbf{g}_{v'}) \big)$
MSE-loss [30]: $\sum_{(u,v)\in\mathcal{D}} \big( (\hat r_{uv} - \mathbf{f}_u^\top\mathbf{g}_v)^2 + \lambda\,\mathbb{E}_{v'\sim P_n} (\hat r_{uv'} - \mathbf{f}_u^\top\mathbf{g}_{v'})^2 \big)$
Pairwise loss
Log-loss [26]: $-\sum_{(u,v)\in\mathcal{D}} \mathbb{E}_{v'\sim P_n} \log\sigma(\mathbf{f}_u^\top\mathbf{g}_v - \mathbf{f}_u^\top\mathbf{g}_{v'})$
Hinge-loss [33]: $\sum_{(u,v)\in\mathcal{D}} \mathbb{E}_{v'\sim P_n} \max(\mathbf{f}_u^\top\mathbf{g}_{v'} - \mathbf{f}_u^\top\mathbf{g}_v + \gamma,\ 0)$
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 13 | In this work, we consider two types of loss functions that can handle recommendation tasks with implicit feedback, namely, pointwise loss functions and pairwise loss functions. Pointwise loss functions have been applied to such problems in many existing work. In [1, 30, 32], mean square loss (MSE) has been applied where âneg- ative termsâ are weighted less. And skip-gram (SG) loss has been successfully utilized to learn robust word embedding [22].
These two loss functions are summarized in Table 1. Note that we use a weighted expectation term over all negative samples, which can be approximated with small number of samples. We can also abstract the pointwise loss functions into the following form:
$$\mathcal{L}_{\text{pointwise}} = \mathbb{E}_{u\sim P_d(u)} \Big[ \mathbb{E}_{v\sim P_d(v|u)}\, c_{uv}\, \mathcal{L}^+(u,v\,|\,\theta) + \mathbb{E}_{v'\sim P_n(v')}\, c_{uv'}\, \mathcal{L}^-(u,v'\,|\,\theta) \Big] \tag{1}$$
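A small sketch of evaluating this pointwise objective with the SG-loss instance from Table 1, assuming uniform negative sampling, unit weights, and f/g collapsed to plain embedding lookups; a rough Monte-Carlo illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sg_pointwise_loss(F, G, positives, num_neg=5, lam=1.0, rng=None):
    """Positive term per observed (u, v) link plus lam-weighted negative terms
    with items drawn from a uniform negative distribution P_n."""
    rng = rng or np.random.default_rng(0)
    loss = 0.0
    for u, v in positives:
        loss += -np.log(sigmoid(F[u] @ G[v]))                        # L+(u, v)
        for vn in rng.integers(G.shape[0], size=num_neg):
            loss += -lam / num_neg * np.log(sigmoid(-F[u] @ G[vn]))  # L-(u, v')
    return loss / len(positives)

rng = np.random.default_rng(0)
F, G = rng.normal(size=(3, 8)), rng.normal(size=(4, 8))
print(sg_pointwise_loss(F, G, [(0, 1), (2, 3)], rng=rng))
```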
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 14 | v â²â¼Pn (v â²)câ where Pd is (empirical) data distribution, Pn is user-defined negative data distribution, c is user defined weights for the different user- item pairs, θ denotes the set of all parameters, L+(u, v |θ ) denotes the loss function on a single positive pair (u, v), and Lâ(u, v |θ ) denotes the loss on a single negative pair. Generally speaking, given a user u, pointwise loss function encourages her score with positive items {v}, and discourage her score with negative items {v â²}.
When it comes to ranking problem as commonly seen in implicit feedback setting, some have argued that the pairwise loss would be advantageous [26, 33], as pairwise loss encourages ranking of positive items above negative items for the given user. Different from pointwise counterparts, pairwise loss functions are defined on a triplet of (u, v, v â²), where v is a positive item and v â² is a negative item to the user u. Table 1 also gives two instances of such loss functions used in existing papers [26, 33] (with γ being the pre- defined âmarginâ parameter). We can also abstract pairwise loss functions by the following form: | 1706.07881#14 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 15 | $$\mathcal{L}_{\text{pairwise}} = \mathbb{E}_{u\sim P_d(u)}\, \mathbb{E}_{v\sim P_d(v|u),\, v'\sim P_n(v')}\, c_{uvv'}\, \mathcal{L}(u,v,v'\,|\,\theta) \tag{2}$$ where the notations are similarly defined as in Eq. 1 and $\mathcal{L}(u,v,v'|\theta)$ denotes the loss function on the triplet $(u,v,v')$.
# 2.4 Stochastic Gradient Descent Training and Computational Challenges
To train the model, we use stochastic gradient descent based algorithms [3, 16], which are widely used for training matrix factorization and neural networks. The main flow of the training algorithm is summarized in Algorithm 1. By adopting the functional embed-
# Algorithm 1 Standard model training procedure
while not converged do
  // mini-batch sampling
  draw a mini-batch of user-item tuples (u, v)
  // forward pass
  compute f(x_u), g(x_v) and their interaction f_u^T g_v
  compute the loss function L
  // backward pass
  compute gradients and apply SGD updates
end while
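A runnable toy version of this loop, assuming both f and g are plain embedding lookups and an SG-style pointwise loss with one sampled negative per positive link; it only illustrates the sample / forward / backward pattern above:

```python
import numpy as np

rng = np.random.default_rng(0)
num_users, num_items, d, lr, lam = 50, 80, 16, 0.05, 1.0
F = 0.1 * rng.normal(size=(num_users, d))   # user embeddings, f(x_u)
G = 0.1 * rng.normal(size=(num_items, d))   # item embeddings, g(x_v)
links = [(int(rng.integers(num_users)), int(rng.integers(num_items)))
         for _ in range(500)]                # observed positive user-item tuples

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    u, v = links[rng.integers(len(links))]   # mini-batch sampling (batch size 1)
    vn = int(rng.integers(num_items))        # one sampled negative item
    f_u, g_v, g_vn = F[u].copy(), G[v].copy(), G[vn].copy()
    s_pos, s_neg = f_u @ g_v, f_u @ g_vn     # forward pass
    # backward pass for loss = -log sigma(s_pos) - lam * log sigma(-s_neg)
    d_pos = sigmoid(s_pos) - 1.0
    d_neg = lam * sigmoid(s_neg)
    F[u] -= lr * (d_pos * g_v + d_neg * g_vn)
    G[v] -= lr * d_pos * f_u
    G[vn] -= lr * d_neg * f_u
```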
By adopting the functional embedding with (deep) neural networks, we can increase the power of the model, but it also comes with a cost. Figure 2 shows the training time (for the CiteULike data) with different item functions g(·), namely linear embedding taking the item id as feature (equivalent to conventional MF), CNN-based content embedding, and RNN/LSTM-based content embedding. We see an orders-of-magnitude increase of training time for the latter two embedding functions, which may create barriers to adopting models under this framework.
Breaking down the computation cost of the framework, there are three major parts. The first part is the user-based computation (denoted by tf time units per user), which includes the forward computation of the user function f(x_u) and the backward computation of the function output w.r.t. its parameters. The second part is the item-based computation (denoted by tg time units per item), which similarly includes the forward computation of the item function g(x_v) as well as the backward computation. The third part is the computation for the interaction function (denoted by ti time units per interaction). The total computational cost for a mini-batch is then tf × (# of users) + tg × (# of items) + ti × (# of interactions), plus some other minor operations which we assume to be negligible. In the text recommendation application, user IDs are used as user features (which can be seen as a linear layer on top of the one-hot inputs), (deep) neural networks are used for text sequences, and the vector dot product is used as the interaction function; thus the dominant computational cost is tg (orders of magnitude larger than tf and ti). In other words, we assume tg ≫ tf, ti in this work.
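As a toy illustration of this cost decomposition (the constants below are made up, not measured):

```python
# Per-mini-batch cost model: t_f, t_g, t_i are per-user, per-item, and
# per-interaction time units; with t_g dominant, the number of distinct
# items in the batch drives the total cost.
def minibatch_cost(n_users, n_items, n_interactions, t_f=1.0, t_g=100.0, t_i=0.1):
    return t_f * n_users + t_g * n_items + t_i * n_interactions

# e.g., halving the number of distinct items roughly halves the batch cost
print(minibatch_cost(512, 512, 5632), minibatch_cost(512, 256, 5632))
```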
Figure 2: Model training time per epoch with different types of item functions (in log-scale).
Figure 3: The bipartite interaction graph for pointwise loss functions, where loss functions are defined over links. The pairwise loss functions are defined over pairs of links.
# 3 MINI-BATCH SAMPLING STRATEGIES FOR EFFICIENT MODEL TRAINING
In this section, we propose and discuss different sampling strategies that can improve the efficiency of the model training.
# 3.1 Computational Cost in a Graph View

Before the discussion of different sampling strategies, we motivate our readers by first making a connection between the loss functions and the bipartite graph of user-item interactions. In the loss functions laid out before, we observed that each loss function term in Eq. 1, namely L(u, v), involves a pair of user and item, which corresponds to a link in their interaction graph. Two types of links correspond to the two types of loss terms in the loss functions, i.e., positive links/terms and negative links/terms. A similar analysis holds for the pairwise loss in Eq. 2, though there are slight differences, as each single loss function corresponds to a pair of links with opposite signs on the graph. We can also establish a correspondence between user/item functions and nodes in the graph, i.e., f(u) to user node u and g(v) to item node v. The connection is illustrated in Figure 3. Since the loss functions are defined over the links, we name them "graph-based" loss functions to emphasize the connection.
The key observation for graph-based loss functions is that the loss functions are defined over links, but the major computational burdens are located at nodes (due to the use of the costly g(·) function). Since each node is associated with multiple links, which correspond to multiple loss function terms, the computational costs of the loss functions over links are coupled (as they may share the same nodes) when using mini-batch based SGD. Hence, varied sampling strategies yield different computational costs. For example, when we put links connected to the same node together in a mini-batch, the computational cost can be lowered as there are fewer g(·) to compute (this holds for both forward and backward computation; for the latter, the gradients from different links can be aggregated before back-propagating to g(·)). This is in great contrast to conventional optimization problems, where each loss function term does not couple with others in terms of computation cost.
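To make this node/link coupling concrete in code, the sampler sketches later in this section assume the positive links have been grouped by their item node, e.g. with a small helper like the following (hypothetical, not from the paper's implementation):

```python
import numpy as np
from collections import defaultdict

def group_links_by_item(pos_links):
    """Map each item id to the array of user ids it interacts with."""
    links_by_item = defaultdict(list)
    for u, v in pos_links:
        links_by_item[v].append(u)
    return {v: np.asarray(users) for v, users in links_by_item.items()}
```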
# 3.2 Existing Mini-Batch Sampling Strategies

In the standard SGD sampler, (positive) data samples are drawn uniformly at random for gradient computation. Due to the appearance of negative samples, we draw negative samples from some predefined probability distribution, i.e., (u′, v′) ∼ Pn(u′, v′). We call this approach "IID Sampling", since each positive link is independently and identically distributed, and the same holds for negative links (with a different distribution).
Many existing algorithms with graph-based loss functions [1, 22, 29] adopt the "Negative Sampling" strategy, in which k negative samples are drawn whenever a positive example is drawn. The negative samples are sampled based on the positive ones by replacing the items in the positive samples. This is illustrated in Algorithm 2 and Figure 4(a).
# Algorithm 2 Negative Sampling [1, 21, 29]
Require: number of positive links in a mini-batch: b, number of negative links per positive one: k
draw b positive links uniformly at random
for each of the b positive links do
    draw k negative links by replacing the true item v with v′ ∼ Pn(v′)
end for
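A small NumPy sketch of this procedure (array layout and names are assumptions, not the paper's code): `pos_links` is an (N, 2) array of (user, item) ids and `item_probs` encodes the noise distribution Pn over items.

```python
import numpy as np

def negative_sampling_batch(pos_links, item_probs, b, k, rng):
    # draw b positive links uniformly at random
    idx = rng.choice(len(pos_links), size=b, replace=False)
    users, pos_items = pos_links[idx, 0], pos_links[idx, 1]
    # k negative items per positive link, drawn from P_n
    neg_items = rng.choice(len(item_probs), size=(b, k), p=item_probs)
    return users, pos_items, neg_items

# usage: negative_sampling_batch(links, probs, b=512, k=10, rng=np.random.default_rng(0))
```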
The IID Sampling strategy does not take into account the property of graph-based loss functions, since samples are completely independent of each other. Hence, the computational cost in a single mini-batch cannot be amortized across different samples, leading to very extensive computation with (deep) neural networks. Negative Sampling does not really help, since the item function computation cost tg is the dominant one. To be more specific, consider a mini-batch with b(1 + k) links sampled by IID Sampling or Negative Sampling: we have to conduct the item-based g(·) computation b(1 + k) times, since the items in a mini-batch are likely to be non-overlapping for a sufficiently large item set.
# 3.3 The Proposed Sampling Strategies
3.3.1 Stratified Sampling (by Items). Motivated by the connection between the loss functions and the bipartite interaction graph as shown in Figure 3, we propose to sample links that share nodes, in particular those with high computational cost (i.e., tg for the item function g(·) in our case). By doing so, the computational cost within a mini-batch can be amortized, since fewer costly functions are computed (in both forward and backward propagations).
In order to achieve this, we (conceptually) partition the links, which correspond to loss function terms, into strata. A stratum is a set of links on the bipartite graph sharing the same source or destination node. Instead of drawing links directly for training, we first draw a stratum and then draw both positive and negative links. Since we want each stratum to share the same item, we can directly draw an item and then sample its links. The details are given in Algorithm 3 and illustrated in Figure 4(b).
Compared to Negative Sampling in Algorithm 2, there are several differences: (1) Stratified Sampling can be based on either items or users, while in Negative Sampling only negative items are drawn; and (2) each node in Stratified Sampling can be associated with more than one positive link (i.e., s > 1, which can help improve the speedup as shown below), while in Negative Sampling each node is only associated with one positive link.
[Figure 4 panels: (a) Negative, (b) Stratified (by Items), (c) Negative Sharing, (d) Stratified with N.S.]
Figure 4: Illustration of four different sampling strategies. 4(b)-4(d) are the proposed sampling strategies. Red lines denote positive links/interactions, and black lines denote negative links/interactions.
# Algorithm 3 Stratified Sampling (by Items)

Require: number of positive links in a mini-batch: b, number of positive links per stratum: s, number of negative links per positive one: k
repeat
    draw an item v ∼ Pd(v)
    draw s positive users {u} of v uniformly at random
    draw k × s negative users {u′} ∼ Pd(u′)
until a mini-batch of b positive links is sampled

# Algorithm 4 Negative Sharing

Require: number of positive links in a mini-batch: b
draw b positive user-item pairs {(u, v)} uniformly at random
construct negative pairs by connecting non-linked users and items in the batch
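A NumPy sketch of the item-stratified batch construction in Algorithm 3, reusing the `group_links_by_item` helper above; drawing negative users uniformly (rather than from Pd) is a simplifying assumption of this sketch.

```python
import numpy as np

def stratified_by_items_batch(links_by_item, item_ids, item_probs, n_users, b, s, k, rng):
    users, items, neg_users = [], [], []
    while len(users) < b:
        v = rng.choice(item_ids, p=item_probs)           # draw an item (stratum)
        pos_u = rng.choice(links_by_item[v], size=s)     # s positive users of v
        neg_u = rng.integers(0, n_users, size=s * k)     # k*s negative users (uniform here)
        users.extend(pos_u)
        items.extend([v] * s)
        neg_users.extend(neg_u)
    return np.array(users), np.array(items), np.array(neg_users)
```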
Now we consider its speedup for a mini-batch including b positive links/interactions and bk negative ones, which contains b(1 + k) users and b/s items. Stratified Sampling (by Items) only requires b/s computations of the g(·) function, while Negative Sampling requires b(1 + k) computations. Assuming tg ≫ tf, ti, i.e., the computation cost is dominated by the item function g(·), Stratified Sampling (by Items) can provide an s(1 + k) times speedup in a mini-batch. With s = 4, k = 10 as used in some of our experiments, this yields a ×44 speedup optimally. However, it is worth pointing out that item-based Stratified Sampling cannot be applied to pairwise loss functions, which compare preferences over items based on a given user.
3.3.2 Negative Sharing. The idea of Negative Sharing is inspired by a different aspect of the connection between the loss functions and the bipartite interaction graph. Since ti ≪ tg, i.e., the computational cost of the interaction function (dot product) is negligible compared to that of the item function, when a mini-batch of users and items is sampled, increasing the number of interactions among them may not result in a significant increase of computational cost. This can be achieved by creating a complete bipartite graph for a mini-batch, adding negative links between all non-interacting pairs of users and items. Using this strategy, we can draw NO negative links at all!
More specifically, consider IID Sampling: when b positive links are sampled, there will be b users and b items involved (assuming the sizes of the user set and item set are much larger than b). Note that there are b(b − 1) non-interactions in the mini-batch, which are not considered in IID Sampling or Negative Sampling; instead, they draw additional negative samples. Since the main computational cost of training is on the node computation, and the node set is fixed given the batch of b positive links, we can share the nodes for negative links without increasing the computational burden much. Based on this idea, Algorithm 4 summarizes an extremely simple sampling procedure, illustrated in Figure 4(c).
Since Negative Sharing avoids sampling k negative links, it only involves b items, while Negative Sampling involves b(1 + k) items. It can therefore provide a (1 + k) times speedup compared to Negative Sampling (assuming tg ≫ tf, ti, and that the total interaction cost remains insignificant). Given that the batch size b is usually larger than k (e.g., b = 512, k = 20 in our experiments), many more negative links (e.g., 512 × 511) will also be considered, which is helpful for both faster convergence and better performance, as shown in our experiments. However, as the number of negative samples increases, the performance and the convergence will not improve linearly; a diminishing return is expected.
3.3.3 Stratified Sampling with Negative Sharing. The two strategies above can both reduce the computational cost by smarter sampling of the mini-batch. However, they both have weaknesses: Stratified Sampling cannot deal with pairwise loss and is still dependent on the number of negative examples k, while Negative Sharing introduces a lot of negative samples which may be unnecessary due to diminishing returns.
The good news is that the two sampling strategies are proposed from different perspectives, and combining them preserves their advantages while avoiding their weaknesses. This leads to Stratified Sampling with Negative Sharing, which can be applied to both pointwise and pairwise loss functions and allows a flexible ratio between positive and negative samples (i.e., more positive links given the same negative links compared to Negative Sharing). To do so, we basically sample positive links according to Stratified Sampling, and then sample/create negative links by treating non-interactions as negative links. The details are given in Algorithm 5 and illustrated in Figure 4(d).
# Algorithm 5 Stratified Sampling with Negative Sharing
Require: number of positive links in a mini-batch: b, number of positive links per stratum: s
repeat
    draw an item v ∼ Pd(v)
    draw s positive users of item v uniformly at random
until a mini-batch of b/s items is sampled
construct negative pairs by connecting non-linked users and items in the batch
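A corresponding sketch of Algorithm 5: only positive strata are drawn, and every non-linked (user, item) pair inside the batch later serves as a shared negative (names are illustrative, not from the paper's code).

```python
import numpy as np

def stratified_negative_sharing_batch(links_by_item, item_ids, item_probs, b, s, rng):
    users, items = [], []
    for _ in range(b // s):
        v = rng.choice(item_ids, p=item_probs)        # draw an item (stratum)
        pos_u = rng.choice(links_by_item[v], size=s)  # s positive users of v
        users.extend(pos_u)
        items.append(v)
    # negatives are implicit: all (user, item) pairs in the batch without a true link
    return np.array(users), np.array(items)
```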
Computationally, Stratified Sampling with Negative Sharing only involves b/s item nodes in a mini-batch, so it can provide the same s(1 + k) times speedup over Negative Sampling as Stratified Sampling (by Items) does, but it utilizes many more negative links than Negative Sampling. For example, in our experiments with b = 512, s = 4, we have 127 negative links per positive one, much larger than k = 10 in Negative Sampling, while requiring only 1/4 of the g(·) computations compared to Negative Sharing.
3.3.4 Implementation Details. When the negative/noise distribution Pn is not unigram (i.e., not proportional to item frequency, such as node degree in the user-item interaction graph), we need to adjust the loss function in order to make sure the stochastic gradient is unbiased. For pointwise loss, each negative term is adjusted by multiplying a weight of Pn(v′)/Pd(v′); for pairwise loss, each term based on a triplet (u, v, v′) is adjusted by multiplying a weight of Pn(v′)/Pd(v′), where v′ is the sampled negative item.
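In code, this correction is a simple per-sample weight on the negative terms; the sketch below assumes Pn and Pd are stored as dense probability arrays indexed by item id.

```python
import numpy as np

def negative_term_weights(neg_items, p_n, p_d):
    """Importance weights P_n(v') / P_d(v') that keep the stochastic gradient unbiased."""
    return p_n[neg_items] / p_d[neg_items]
```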
Instead of sampling, we prefer to use shuffling as much as we can, which produces unbiased samples while yielding zero variance. For IID and Negative Sampling, this can be easily done for positive links by simply shuffling them. As for Stratified Sampling (w./wo. Negative Sharing), instead of shuffling the positive links directly, we shuffle the randomly formed strata, where each stratum contains roughly a single item (this can be done by first shuffling the users associated with each item, then concatenating all links according to items in random order, and forming random strata by segmenting the list). All other necessary sampling operations are sampling from discrete distributions, which can be done in O(1) with the Alias method.
In Negative Sharing (w./wo. Stratified Sampling), we can compute the user-item interactions with a more efficient operator, i.e., replacing the vector dot products between each pair (f, g) with a matrix multiplication between (F, G), where F = [f_{u_1}, · · · , f_{u_n}] and G = [g_{v_1}, · · · , g_{v_m}]. Since matrix multiplication is at a higher BLAS level than vector multiplication [14], even though we increase the number of interactions, with medium matrix sizes (e.g., 1000 × 1000) it does not affect the computational cost much in practice.
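For example, with NumPy the all-pairs interactions of a batch reduce to a single matrix product (shapes assumed: n users and m items with d-dimensional embeddings).

```python
import numpy as np

def batch_interactions(F, G):
    """F: (n, d) user embeddings, G: (m, d) item embeddings -> (n, m) scores f_u^T g_v."""
    return F @ G.T

F = np.random.randn(512, 64)
G = np.random.randn(512, 64)
scores = batch_interactions(F, G)  # positives sit at known (u, v) indices; the rest are shared negatives
```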
# 3.4 Computational Cost and Convergence Analysis
Here we provide a summary of the computational cost of the different sampling strategies discussed above, and also analyze their convergence. Two aspects that can lead to speedup are analyzed: (1) the computational cost of a mini-batch, i.e., per iteration, and (2) the number of iterations required to reach some referenced loss.
3.4.1 Computational Cost. To fairly compare different sampling strategies, we fix the same number of positive links in each mini-batch, which correspond to the positive terms in the loss function. Table 2 shows the computational cost of different sampling strategies for a given mini-batch. Since tg ≫ tf, ti in practice, we approximate the theoretical speedup per iteration by comparing the number of tg computations. We can see that the proposed sampling strategies can provide a (1 + k) times (by Negative Sharing) or s(1 + k) times (by Stratified Sampling w./wo. Negative Sharing) speedup per iteration compared to IID Sampling or Negative Sampling. As for the number of iterations needed to reach a reference loss, it is related to the number of negative samples utilized, which is analyzed below.
3.4.2 Convergence Analysis. We want to make sure that SGD training under the proposed sampling strategies converges correctly. The necessary condition for this to hold is that the stochastic gradient estimator is unbiased, which leads us to the following lemma.
Lemma 1 (unbiased stochastic gradient). Under sampling Algorithms 2, 3, 4, and 5, we have $\mathbb{E}_B[\nabla \mathcal{L}_B(\theta^t)] = \nabla \mathcal{L}(\theta^t)$. In other words, the stochastic mini-batch gradient equals the true gradient in expectation.
This holds for both pointwise loss and pairwise loss. It is guaranteed since we draw samples stochastically and re-weight certain samples accordingly. The detailed proof can be found in the supplementary material.
Given this lemma, we can further analyze the convergence behavior of the proposed sampling strategies. Due to the highly non-linear and non-convex functions composed by (deep) neural networks, the convergence rate is usually difficult to analyze. So we show that SGD with the proposed sampling strategies follows a local convergence bound (similar to [10, 24]).
Proposition 1 (local convergence). Suppose $\mathcal{L}$ has $\sigma$-bounded gradients; let $\eta_t = \eta = c/\sqrt{T}$ where $c = \sqrt{2(\mathcal{L}(\theta^1) - \mathcal{L}(\theta^*))/(L\sigma^2)}$, $L$ is the Lipschitz constant of $\nabla\mathcal{L}$, and $\theta^*$ is the minimizer of $\mathcal{L}$. Then the following holds for the proposed sampling strategies given in Algorithms 2, 3, 4, 5:

$$\min_{0 \le t \le T-1} \mathbb{E}\big[\lVert\nabla \mathcal{L}(\theta^t)\rVert^2\big] \le \sqrt{\frac{2(\mathcal{L}(\theta^1) - \mathcal{L}(\theta^*))\,L}{T}}\;\sigma.$$
The detailed proof is also given in the supplementary material. Furthermore, utilizing more negative links in each mini-batch can lower the expected stochastic gradient variance. As shown in [35, 36], the reduction of variance can lead to faster convergence. This suggests that Negative Sharing (w./wo. Stratified Sampling) has better convergence than Stratified Sampling (by Items).
# 4 EXPERIMENTS

# 4.1 Data Sets

Two real-world text recommendation data sets are used for the experiments. The first data set, CiteULike, collected from CiteULike.org, is provided in [31]. The CiteULike data set contains users bookmarking papers, where each paper is associated with a title and an abstract.
Table 2: Computational cost analysis for a batch of b positive links. We use vec to denote vector multiplication and mat to denote matrix multiplication. Since tg ≫ tf, ti in practice, the theoretical speedup per iteration can be approximated by comparing the number of tg computations (the "# tg" column below). The number of iterations needed to reach a referenced loss is related to the number of negative links in each mini-batch.
| Sampling | # pos. links | # neg. links | # t_f | # t_g | # t_i | pointwise | pairwise |
|---|---|---|---|---|---|---|---|
| IID [3] | b | bk | b(1+k) | b(1+k) | b(1+k) vec | ✓ | ✗ |
| Negative [1, 21, 29] | b | bk | b | b(1+k) | b(1+k) vec | ✓ | ✓ |
| Stratified (by Items) | b | bk | b(1+k) | b/s | b(1+k) vec | ✓ | ✗ |
| Negative Sharing | b | b(b−1) | b | b | b × b mat | ✓ | ✓ |
| Stratified with N.S. | b | b(b−1)/s | b | b/s | b × (b/s) mat | ✓ | ✓ |
There are 5,551 users and 16,980 items, with a total of 204,986 positive interactions in the CiteULike data. As for the Yahoo! News data, there are 10,000 users, 58,579 items, and 515,503 interactions.
Table 3: Comparisons of speedup for different sampling strategies against IID Sampling: per iteration, # of iterations, and total speedup.
Following [4], we select a portion (20%) of items to form the pool of test items. All user interactions with those test items are held out during training; only the remaining user-item interactions are used as training data, which simulates the scenario of recommending newly-emerged text articles.
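A minimal sketch of the item-based split described above is given below. The data representation (a list of (user, item) pairs) and the function name are illustrative assumptions, not the paper's code.

```python
import random

def split_by_items(interactions, test_item_ratio=0.2, seed=0):
    """Hold out all interactions with a random 20% subset of items (the test
    pool) and keep the rest for training -- a sketch of the split above.
    `interactions` is a list of (user, item) pairs."""
    random.seed(seed)
    items = sorted({i for _, i in interactions})
    test_items = set(random.sample(items, int(test_item_ratio * len(items))))
    train = [(u, i) for u, i in interactions if i not in test_items]
    test = [(u, i) for u, i in interactions if i in test_items]
    return train, test, test_items
```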
4.2 Experimental Settings
The main purpose of the experiments is to compare the efficiency and effectiveness of our proposed sampling strategies against existing ones. So we mainly compare Stratified Sampling, Negative Sharing, and Stratified Sampling with Negative Sharing against IID Sampling and Negative Sampling. It is worth noting that several existing state-of-the-art models [1, 4, 30] are special cases of our framework (e.g., using MSE-loss/Log-loss with CNN or RNN), so they are compared to other loss functions under our framework.
| Model | Sampling | CiteULike Per it. | CiteULike # of it. | CiteULike Total | News Per it. | News # of it. | News Total |
|---|---|---|---|---|---|---|---|
| CNN | Negative | 1.02 | 1.00 | 1.02 | 1.03 | 1.03 | 1.06 |
| CNN | Stratified | 8.83 | 0.97 | 8.56 | 6.40 | 0.97 | 6.20 |
| CNN | N.S. | 8.42 | 2.31 | 19.50 | 6.54 | 2.21 | 14.45 |
| CNN | Strat. w. N.S. | 15.53 | 1.87 | 29.12 | 11.49 | 2.17 | 24.98 |
| LSTM | Negative | 0.99 | 0.96 | 0.95 | 1.0 | 1.25 | 1.25 |
| LSTM | Stratified | 3.1 | 0.77 | 2.38 | 3.12 | 1.03 | 3.22 |
| LSTM | N.S. | 2.87 | 2.45 | 7.03 | 2.78 | 4.14 | 11.5 |
| LSTM | Strat. w. N.S. | 3.4 | 2.22 | 7.57 | 3.13 | 3.32 | 10.41 |
Evaluation Metrics. For recommendation performance, we follow [1, 32] and use recall@M. As pointed out in [32], precision is not a suitable measure, since a non-interaction may be due to either (1) the user not being interested in the item, or (2) the user not paying attention to its existence. More specifically, for each user we rank the candidate test items by predicted score and compute recall@M on that list; finally, recall@M is averaged over all users.
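The sketch below spells out the recall@M computation just described; the dictionary-based data layout and function name are assumptions for illustration.

```python
import numpy as np

def recall_at_M(scores, held_out, M=50):
    """Average recall@M over users (sketch of the metric above).
    scores:   dict user -> {candidate_item: predicted score}
    held_out: dict user -> set of true test items for that user."""
    recalls = []
    for user, true_items in held_out.items():
        if not true_items:
            continue  # skip users with no held-out items
        ranked = sorted(scores[user], key=scores[user].get, reverse=True)[:M]
        hits = len(set(ranked) & true_items)
        recalls.append(hits / len(true_items))
    return float(np.mean(recalls))
```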
As for the computational cost, we measure it in three dimensions: the training time per iteration (or, equivalently, per epoch, since the batch size is fixed for all methods), the number of iterations needed to reach a referenced loss, and the total amount of computation time needed to reach that loss. In our experiments, we use the smallest loss obtained by IID Sampling within a maximum of 30 epochs as the referenced loss. Note that all times reported here are wall time.
Parameter Settings. The key parameters are tuned on a validation set, while the others are simply set to reasonable values. We adopt Adam [16] as the stochastic optimizer. We use the same batch size b = 512 for all sampling strategies and set the number of positive links per sampled stratum to s = 4. The learning rate is set to 0.001 for MSE-loss and 0.01 for the others. γ is set to 0.1 for Hinge-loss and 10 for the others. λ is
set to 8 for MSE-loss and 128 for the others. We set the number of negative examples to k = 10 for the convolutional neural network and k = 5 for the RNN/LSTM due to the GPU memory limit. All experiments are run on Titan X GPUs. We use a unigram noise/negative distribution. For the CNN, we adopt a structure similar to [15] and use 50 filters with a filter size of 3. Regularization is added using both weight decay on the user embeddings and dropout on the item embeddings. For the RNN, we use an LSTM [12] with 50 hidden units. For both models, the dimensions of the user and word embeddings are set to 50. Early stopping is used, and the experiments are run for a maximum of 30 epochs.
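The unigram noise/negative distribution mentioned above can be realized with a few lines of code; the sketch below builds such a sampler from the training interactions. The function name and data layout are illustrative assumptions.

```python
import numpy as np

def build_unigram_sampler(train_links, seed=0):
    """Sample negative items in proportion to their (unigram) frequency in the
    training interactions -- a sketch of the noise distribution used above.
    `train_links` is a list of (user, item) pairs with hashable item ids."""
    rng = np.random.default_rng(seed)
    items, counts = np.unique([i for _, i in train_links], return_counts=True)
    probs = counts / counts.sum()

    def sample(k):
        # Draw k negative item ids according to the unigram distribution.
        return rng.choice(items, size=k, p=probs)

    return sample

# Usage: k = 10 negatives per positive link for the CNN model, k = 5 for the LSTM.
# sample_negatives = build_unigram_sampler(train)
# negatives = sample_negatives(k=10)
```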
# 4.3 Speedup Under Different Sampling Strategies
Table 3 breaks down the speedup into (1) the speedup for training on a given mini-batch, (2) the speedup in the number of iterations needed to reach the referenced loss, and (3) the total speedup, which is the product of the first two. Different strategies are compared against IID Sampling. Negative Sampling has a computational cost similar to IID Sampling, which fits our projection. All three proposed sampling strategies significantly reduce the computation cost within a mini-batch. Moreover, Negative Sharing and Stratified Sampling with Negative Sharing further improve convergence w.r.t. the number of iterations, which demonstrates the benefit of using a larger number of negative examples.
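To make the per-iteration column concrete, the sketch below counts the number of expensive item-function evaluations (t_g) per mini-batch for each strategy, following Table 2, and derives an idealized per-iteration speedup over IID Sampling. This is an upper-bound style estimate under the assumption that t_g dominates the cost; the measured speedups in Table 3 are lower because t_f and t_i are not negligible.

```python
def item_forward_counts(b=512, k=10, s=4):
    """Number of item-function (t_g) computations per mini-batch of b positive
    links, following Table 2 (sketch; b, k, s are the paper's settings)."""
    return {
        "IID":                   b * (1 + k),
        "Negative":              b * (1 + k),
        "Stratified (by Items)": b // s,
        "Negative Sharing":      b,
        "Stratified with N.S.":  b // s,
    }

counts = item_forward_counts()
baseline = counts["IID"]
# Idealized per-iteration speedup over IID Sampling, assuming t_g dominates.
speedups = {name: baseline / c for name, c in counts.items()}
print(speedups)
```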
6 https://webscope.sandbox.yahoo.com/catalog.php?datatype=r&did=75
Figure 5: Training loss curves (all methods have the same number b of positive samples in a mini-batch). Panels: (a) CiteULike (epoch), (b) CiteULike (wall time), (c) News (epoch), (d) News (wall time).
Figure 6: Test performance/recall curves (all methods have the same number b of positive samples in a mini-batch). Panels: (a) CiteULike (epoch), (b) CiteULike (wall time), (c) News (epoch), (d) News (wall time).
Figures 5 and 6 show the convergence curves of the loss and the test performance for different sampling strategies (with CNN + SG-loss). In both figures, progress is measured every epoch, which corresponds to a fixed number of iterations since all methods use the same batch size b. We observe two main types of convergence behavior. First, in terms of the number of iterations, Negative Sharing (with or without Stratified Sampling) converges fastest, which is attributable to the larger number of negative samples used. Second, in terms of wall time, Negative Sharing (with or without Stratified Sampling) and Stratified Sampling (by Items) are all significantly faster than the baseline strategies, i.e., IID Sampling and Negative Sampling. It is also interesting to see that overfitting occurs earlier as convergence speeds up, which does no harm since early stopping can be used.
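Since overfitting appears earlier when convergence is faster, a simple early-stopping rule on the validation metric suffices. The sketch below is one such rule; the patience value is an illustrative choice, not a setting from the paper.

```python
def should_stop(recall_history, patience=3):
    """Early-stopping check on validation recall (sketch): stop once the best
    value so far has not been attained in the last `patience` evaluations."""
    if len(recall_history) <= patience:
        return False
    best = max(recall_history)
    return best not in recall_history[-patience:]
```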
For Stratified Sampling (with or without Negative Sharing), the number of positive links per stratum, s, can also play a role in improving the speedup, as analyzed before. As shown in Figure 7, both the convergence time and the recommendation performance can be improved with a reasonable s, such as 4 or 8 in our case.
Figure 7: The number of positive links per stratum s vs. loss and performance. Panels: (a) loss (Stratified), (b) loss (Stratified with N.S.), (c) recall (Stratified), (d) recall (Stratified with N.S.).
# 4.4 Recommendation Performance Under Different Sampling Strategies
The above experiments show that the proposed sampling strategies are significantly faster than the baselines. We would also like to further assess the recommendation performance obtained by adopting the proposed strategies.
For Negative Sharing and Stratified Sampling with Negative Sharing, since many more negative samples are utilized, the performances are significantly better. We also observe that current recommendation models based on MSE-loss [1, 30] can be improved by other loss functions, such as SG-loss and the pairwise losses [4].
Table 4 compares the proposed sampling strategies with CNN/RNN
models and four loss functions (both pointwise and pairwise). We can see that IID Sampling, Negative Sampling, and Stratified Sampling (by Items) have similar recommendation performance, which is expected since they all utilize the same number of negative links.
To further investigate the superior performance brought by Negative Sharing, we study the number of negative examples k and the convergence performance. Figure 8 shows the test performance against various k. As shown in the figure, we observe a clear diminishing return in the performance improvement. However,
Table 4: Recall@50 for different sampling strategies under different models and losses.
| Model | Sampling | CiteULike SG-loss | CiteULike MSE-loss | CiteULike Hinge-loss | CiteULike Log-loss | News SG-loss | News MSE-loss | News Hinge-loss | News Log-loss |
|---|---|---|---|---|---|---|---|---|---|
| CNN | IID | 0.4746 | 0.4437 | - | - | 0.1091 | 0.0929 | - | - |
| CNN | Negative | 0.4725 | 0.4408 | 0.4729 | 0.4796 | 0.1083 | 0.0956 | 0.1013 | 0.1009 |
| CNN | Stratified | 0.4761 | 0.4394 | - | - | 0.1090 | 0.0913 | - | - |
| CNN | Negative Sharing | 0.4866 | 0.4423 | 0.4794 | 0.4769 | 0.1131 | 0.0968 | 0.0909 | 0.0932 |
| CNN | Stratified with N.S. | 0.4890 | 0.4535 | 0.4790 | 0.4884 | 0.1196 | 0.1043 | 0.1059 | 0.1100 |
| LSTM | IID | 0.4479 | 0.4718 | - | - | 0.0971 | 0.0998 | - | - |
| LSTM | Negative | 0.4371 | 0.4668 | 0.4321 | 0.4540 | 0.0977 | 0.0977 | 0.0718 | 0.0711 |
| LSTM | Stratified | 0.4344 | 0.4685 | - | - | 0.0966 | 0.0996 | - | - |
| LSTM | Negative Sharing | 0.4629 | 0.4839 | 0.4605 | 0.4674 | 0.1121 | | | |
1706.07881 | 46 | (a) CiteULike (b) News
# F
recommender systems, recent efforts are made in combining col- laborative filtering and neural networks [1, 4, 30, 32]. [32] adopts autoencoder for extracting item-side text information for article recommendation, [1] adopts RNN/GRU to better understand the text content. [4] proposes to use CNN and pairwise loss functions, and also incorporate unsupervised text embedding. The general functional embedding framework in this work subsumes existing models [1, 4, 30].
Figure 8: The number of negatives VS performances.
the performance still seems to increase even when we use 20 negative examples, which explains why our proposed methods with Negative Sharing can result in better performance.
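The sketch below illustrates why Negative Sharing obtains many negatives at almost no extra cost: once the b users and b items of the positive links in a mini-batch are embedded, a single b × b matrix product scores every in-batch user-item pair, so each user sees the other b − 1 items as shared negatives. The dot-product interaction used here is an illustrative choice, not necessarily the interaction function of a specific model in the paper.

```python
import numpy as np

def in_batch_scores(user_vecs, item_vecs):
    """Score all in-batch user-item pairs at once (Negative Sharing, sketch).
    Returns a (b, b) matrix; its diagonal holds the positive-pair scores and
    every off-diagonal entry is a shared negative."""
    return user_vecs @ item_vecs.T

b, d = 512, 50
rng = np.random.default_rng(0)
scores = in_batch_scores(rng.standard_normal((b, d)),
                         rng.standard_normal((b, d)))
positives = np.diag(scores)   # scores of the b sampled positive links
```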
5 RELATED WORK
Collaborative filtering [18] has been one of the most effective methods in recommender systems, and methods like matrix factorization [17, 27] are widely adopted. While many papers focus on the explicit feedback setting, such as rating prediction, implicit feedback is found in many real-world scenarios and has been studied by many papers as well [13, 23, 26]. Although collaborative filtering techniques are powerful, they suffer from the so-called "cold-start" problem, since side/content information is not well leveraged. To address this issue and improve performance, hybrid methods have been proposed to incorporate side information [5, 7, 25, 28, 38] as well as content information [4, 11, 31, 32].
1706.07881 | 48 | Stochastic Gradient Descent [3] and its variants [16] have been widely adopted in training machine learning models, including neural networks. Samples are drawn uniformly at random (IID) so that the stochastic gradient vector equals to the true gradient in expectation. In the setting where negative examples are over- whelming, such as in word embedding (e.g., Word2Vec [22]) and network embedding (e.g., LINE [29]) tasks, negative sampling is utilized. Recent efforts have been made to improve SGD conver- gence by (1) reducing the variance of stochastic gradient estimator, or (2) distributing the training over multiple workers. Several sam- pling techniques, such as stratified sampling [35] and importance sampling [36] are proposed to achieve the variance reduction. Dif- ferent from their work, we improve sampling strategies in SGD by reducing the computational cost of a mini-batch while preserving, or even increasing, the number of data points in the mini-batch. Sampling techniques are also studied in [9, 39] to distribute the computation of matrix factorization, their objectives in sampling strategy design are reducing the parameter overlapping and cache miss. We also find that the idea of sharing negative examples is exploited to speed up word embedding training in [14]. | 1706.07881#48 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
1706.07881 | 49 | Deep Neural Networks (DNNs) have been showing extraordinary abilities to extract high-level features from raw data, such as video, audio, and text [8, 15, 34]. Compared to traditional feature detectors, such as SIFT and n-grams, DNNs and other embedding methods [5, 6, 29] can automatically extract better features that produce higher performance in various tasks. To leverage the extraordinary feature extraction or content understanding abilities of DNNs for
6 DISCUSSIONS
While this work studies sampling under the content-based collaborative filtering problem, the study of sampling strategies for "graph-based" loss functions has further implications. The IID sampling strategy is simple and popular for SGD-based training, since the loss function terms usually do not share common computations. So no matter how a mini-batch is formed, it bears almost the same
amount of computation. This assumption is shattered by models that are defined over a graph structure, with applications in social and knowledge graph mining [2], image caption ranking [20], and so on. For those scenarios, we believe better sampling strategies can result in much faster training than with IID sampling.
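As a concrete picture of a "graph-based" loss, the sketch below evaluates the node functions once per mini-batch and then sums link-level terms that reuse them; the particular functions f, g and the hinge-style link loss are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def graph_based_loss(links, f, g, margin=1.0):
    """Sketch of a graph-based loss: terms live on (user, item, label) links,
    while the expensive computations f(user) and g(item) live on nodes, so
    each node is evaluated once per mini-batch and reused by incident links."""
    users = {u for u, _, _ in links}
    items = {i for _, i, _ in links}
    U = {u: f(u) for u in users}      # node-level work, done once per node
    V = {i: g(i) for i in items}      # the expensive part when g is a deep net
    loss = 0.0
    for u, i, label in links:         # link-level terms reuse the node results
        score = float(U[u] @ V[i])
        loss += max(0.0, margin - label * score)
    return loss / len(links)

# Toy usage with random "embeddings" standing in for the user/item networks.
rng = np.random.default_rng(0)
emb_u = {u: rng.standard_normal(8) for u in range(4)}
emb_i = {i: rng.standard_normal(8) for i in range(3)}
links = [(0, 0, +1), (1, 0, -1), (2, 1, +1), (3, 2, +1)]
print(graph_based_loss(links, f=emb_u.get, g=emb_i.get))
```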
1706.07881 | 50 | We would also like to point out limitations of our work. The first one is the setting of implicit feedback. When the problem is posed under explicit feedback, Negative Sharing can be less effective since the constructed negative samples may not overlap with the explicit negative ones. The second one is the assumption of efficient com- putation for interaction functions. When we use neural networks as interaction functions, we may need to consider constructing negative samples more wisely for Negative Sharing as it will also come with a noticeable cost.
7 CONCLUSIONS AND FUTURE WORK
In this work, we propose a hybrid recommendation framework combining conventional collaborative filtering with (deep) neural networks. The framework generalizes several existing state-of-the-art recommendation models and embodies potentially more powerful ones. To overcome the high computational cost brought by combining "cheap" CF with "expensive" NN, we first establish the connection between the loss functions and the user-item interaction bipartite graph, and then point out that the computational cost can vary with different sampling strategies. Based on this insight, we propose three novel sampling strategies that can significantly improve the training efficiency of the proposed framework, as well as the recommendation performance.
1706.07881 | 51 | In the future, there are some promising directions. Firstly, based on the efficient sampling techniques of this paper, we can more efficiently study different neural networks and auxiliary informa- tion for building hybrid recommendation models. Secondly, we can also study the effects of negative sampling distributions and its affect on the design of more efficient sampling strategies. Lastly but not least, it would also be interesting to apply our sampling strategies in a distributed training environments where multi-GPUs and multi-machines are considered.
ACKNOWLEDGEMENTS
The authors would like to thank the anonymous reviewers for helpful suggestions. The authors would also like to thank NVIDIA for the donation of one Titan X GPU. This work is partially supported by NSF CAREER #1741634.
REFERENCES
[1] Trapit Bansal, David Belanger, and Andrew McCallum. 2016. Ask the GRU: Multi-task Learning for Deep Text Recommendations. In RecSys'16. 107–114.
[2] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPS'13. 2787–2795.
[3] Léon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In COMPSTAT'2010. Springer, 177–186.
[4] Ting Chen, Liangjie Hong, Yue Shi, and Yizhou Sun. 2017. Joint Text Embedding for Personalized Content-based Recommendation. arXiv preprint arXiv:1706.01084.
[5] Ting Chen and Yizhou Sun. 2017. Task-Guided and Path-Augmented Heterogeneous Network Embedding for Author Identification. In WSDM'17. 295–304.
[6] Ting Chen, Lu-An Tang, Yizhou Sun, Zhengzhang Chen, and Kai Zhang. 2016. Entity Embedding-based Anomaly Detection for Heterogeneous Categorical Events. In IJCAI'16. Miami.
[7] Tianqi Chen, Weinan Zhang, Qiuxia Lu, Kailong Chen, Zhao Zheng, and Yong Yu. 2012. SVDFeature: a toolkit for feature-based collaborative filtering. Journal of Machine Learning Research 13, Dec (2012), 3619–3622.
1706.07881 | 53 | [8] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12, Aug (2011), 2493â2537.
[9] Rainer Gemulla, Erik Nijkamp, Peter J. Haas, and Yannis Sismanis. 2011. Large-scale matrix factorization with distributed stochastic gradient descent. In KDD'11. 69–77.
[10] Saeed Ghadimi and Guanghui Lan. 2013. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization 23, 4 (2013), 2341–2368.
[11] Prem K. Gopalan, Laurent Charlin, and David Blei. 2014. Content-based recommendations with Poisson factorization. In NIPS'14. 3176–3184.
[12] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8 (1997), 1735–1780.
[13] Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In ICDM'08. 263–272.
[14] Shihao Ji, Nadathur Satish, Sheng Li, and Pradeep Dubey. 2016. Parallelizing word2vec in shared and distributed memory. arXiv preprint arXiv:1604.04661 (2016).
[15] Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014).
[16] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
[17] Yehuda Koren. 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDD'08. 426–434.
[18] Yehuda Koren, Robert Bell, Chris Volinsky, et al. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30–37.
1706.07881 | 55 | [19] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classifi- cation with deep convolutional neural networks. In NIPSâ12. 1097â1105. [20] Xiao Lin and Devi Parikh. 2016. Leveraging visual question answering for imagecaption ranking. In ECCVâ16. Springer, 261â277.
[21] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
[22] T Mikolov and J Dean. 2013. Distributed representations of words and phrases and their compositionality. NIPS'13 (2013).
[23] Rong Pan, Yunhong Zhou, Bin Cao, Nathan N Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. 2008. One-class collaborative filtering. In ICDM'08. 502–511.
[24] Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. 2016. Stochastic Variance Reduction for Nonconvex Optimization. In ICMLâ16. 314â323. | 1706.07881#55 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 56 | [25] Steffen Rendle. 2010. Factorization machines. In ICDMâ10. 995â1000. [26] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In UAIâ09. AUAI Press, 452â461.
[27] Ruslan Salakhutdinov and Andriy Mnih. 2011. Probabilistic matrix factorization. In NIPS'11, Vol. 20. 1–8.
[28] Ajit P Singh and Geoffrey J Gordon. 2008. Relational learning via collective matrix factorization. In KDD'08. 650–658.
[29] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In WWW'15. 1067–1077.
[30] Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. 2013. Deep content-based music recommendation. In NIPS'13. 2643–2651.
[31] Chong Wang and David M Blei. 2011. Collaborative topic modeling for recom- mending scientific articles. In KDDâ11. 448â456. | 1706.07881#56 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 57 | [31] Chong Wang and David M Blei. 2011. Collaborative topic modeling for recom- mending scientific articles. In KDDâ11. 448â456.
[32] Hao Wang, Naiyan Wang, and Dit-Yan Yeung. 2015. Collaborative deep learning for recommender systems. In KDD'15. 1235–1244.
[33] Markus Weimer, Alexis Karatzoglou, and Alexander J. Smola. 2008. Improving maximum margin matrix factorization. Machine Learning 72, 3 (2008), 263–276.
[34] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS'15. 649–657.
[35] Peilin Zhao and Tong Zhang. 2014. Accelerating minibatch stochastic gradient descent using stratified sampling. arXiv preprint arXiv:1405.3080 (2014).
[36] Peilin Zhao and Tong Zhang. 2015. Stochastic Optimization with Importance
Sampling for Regularized Loss Minimization. In ICMLâ15. 1â9. | 1706.07881#57 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 58 | Sampling for Regularized Loss Minimization. In ICMLâ15. 1â9.
[37] Yin Zheng, Bangsheng Tang, Wenkui Ding, and Hanning Zhou. 2016. A Neural Autoregressive Approach to Collaborative Filtering. In ICML'16. 764–773.
[38] Ke Zhou, Shuang-Hong Yang, and Hongyuan Zha. 2011. Functional matrix factorizations for cold-start recommendation. In SIGIR'11. 315–324.
[39] Yong Zhuang, Wei-Sheng Chin, Yu-Chin Juan, and Chih-Jen Lin. 2013. A fast parallel SGD for matrix factorization in shared memory systems. In RecSys. 249–256.
SUPPLEMENTARY MATERIAL A PROOFS Here we give the proofs for both the lemma and the proposition introduced in the main paper. For brevity, throughout we assume by default the loss function L is the pointwise loss of Eq. (1) in the main paper. Proofs are only given for the pointwise loss, but it can be similarly derived for the pairwise loss. We start by first introducing some definitions. | 1706.07881#58 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 59 | Definition 1. A function f is L-smooth if there is a constant L such that
$$\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|$$
Such an assumption is very common in the analysis of first-order methods. In the following proofs, we assume every loss function L is L-smooth.
Property 1. (Quadratic Upper Bound) An L-smooth function f satisfies
$$f(y) \le f(x) + \nabla f(x)^{\top}(y - x) + \frac{L}{2}\,\|y - x\|^{2}$$
Definition 2. We say a function f has a σ-bounded gradient if $\|\nabla f_i(\theta)\|_2 \le \sigma$ for all $i \in [n]$ and any $\theta \in \mathbb{R}^d$. For each training iteration, we first sample a mini-batch of links (denoted by B) of both positive links (B
+) and negative links (Bâ), according to the sampling algorithm (one of the Algorithm 2, 3, 4, 5), and then the stochastic gradient is computed and applied to the parameters as follows: | 1706.07881#59 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 60 | $$\theta^{t+1} = \theta^{t} - \frac{\eta_t}{m}\sum_{(u,v)\in\mathcal{B}^{+}} c_{uv}\,\nabla\mathcal{L}^{+}(\theta\,|\,u,v) - \frac{\eta_t}{n}\sum_{(u,v)\in\mathcal{B}^{-}} c_{uv}\,\nabla\mathcal{L}^{-}(\theta\,|\,u,v) \qquad (3)$$
Here we use $\nabla\mathcal{L}^{+}(\theta\,|\,u,v)$ to denote the gradient of the loss function $\mathcal{L}^{+}(\theta)$ given a pair $(u, v)$, and $m$, $n$ are the numbers of positive and negative links in the batch B, respectively.
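As a concrete illustration (not the paper's own code), a minimal NumPy sketch of this weighted mini-batch update, assuming hypothetical per-link gradient oracles `grad_pos` and `grad_neg` and a weight function `c`:

```python
# Minimal sketch of the weighted mini-batch SGD update in Eq. (3).
# `grad_pos`/`grad_neg` are hypothetical callables returning the per-link
# gradients of L+ and L- w.r.t. theta; `c` returns the per-link weight c_uv.
import numpy as np

def sgd_step(theta, pos_links, neg_links, eta, grad_pos, grad_neg, c=lambda u, v: 1.0):
    m, n = len(pos_links), len(neg_links)
    step = np.zeros_like(theta)
    for (u, v) in pos_links:
        step += (eta / m) * c(u, v) * grad_pos(theta, u, v)
    for (u, v) in neg_links:
        step += (eta / n) * c(u, v) * grad_neg(theta, u, v)
    return theta - step
```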
Lemma 1. (Unbiased stochastic gradient) Under sampling Algorithms 2, 3, 4, 5, we have $\mathbb{E}_{\mathcal{B}}[\nabla\mathcal{L}_{\mathcal{B}}(\theta^t)] = \nabla\mathcal{L}(\theta^t)$. In other words, the stochastic mini-batch gradient equals the true gradient in expectation.
Proof. Below we prove this lemma for each for the sampling Algorithm. For completeness, we also show the proof for Uniform Sampling as follows. The main idea is show the expectation of stochastic gradient computed in a randomly formed mini-batch equal to the true gradient of objective in Eq. 1. | 1706.07881#60 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 61 | IID Sampling. The positive links in the batch B are i.i.d. samples from Pd (u, v) (i.e. drawn uniformly at random from all positive links), and the negative links in B are i.i.d. samples from Pd (u)Pn (v), thus we have
$$\begin{aligned}
\mathbb{E}_{\mathcal{B}}[\nabla\mathcal{L}_{\mathcal{B}}(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u,v)\sim P_d(u,v)}\big[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{(u,v')\sim P_d(u)P_n(v')}\big[c_{uv'}\nabla\mathcal{L}^{-}(\theta|u,v')\big] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{v'\sim P_n(v')}[c_{uv'}\nabla\mathcal{L}^{-}(\theta|u,v')]\Big] \\
&= \nabla\mathcal{L}(\theta^t) \qquad (4)
\end{aligned}$$
The first equality is due to the definition of sampling procedure, the second equality is due to the definition of expectation, and the final equality is due to the definition of pointwise loss function in Eq. 1. | 1706.07881#61 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 62 | Negative Sampling. In Negative Sampling, we have batch B consists of i.i.d. samples of m positive links, and conditioning on each positive link, k negative links are sampled by replacing items in the same i.i.d. manner. Positive links are sampled from Pd (u, v), and negative items are sampled from Pn (v â²), thus we have
$$\begin{aligned}
\mathbb{E}_{\mathcal{B}}[\nabla\mathcal{L}_{\mathcal{B}}(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u,v)\sim P_d(u,v)}\Big[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v) + \frac{1}{k}\sum_{j=1}^{k}\mathbb{E}_{v'\sim P_n(v')}\big[c_{uv'}\nabla\mathcal{L}^{-}(\theta|u,v')\big]\Big] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{v'\sim P_n(v')}[c_{uv'}\nabla\mathcal{L}^{-}(\theta|u,v')]\Big] \\
&= \nabla\mathcal{L}(\theta^t) \qquad (5)
\end{aligned}$$
The first equality is due to the definition of sampling procedure, and the second equality is due to the properties of joint probability distribution and expectation.
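For illustration, a small sketch (with assumed data structures, not the paper's implementation) of how a Negative Sampling mini-batch can be drawn: m positive links uniformly at random, each paired with k items from the noise distribution P_n:

```python
# Sketch of Negative Sampling batch construction: m positive links are drawn
# uniformly; for each, k negative items are drawn from the noise distribution
# P_n over items (p_n=None falls back to a uniform distribution).
import numpy as np

def sample_batch_negative_sampling(pos_links, num_items, m, k, p_n=None, rng=None):
    rng = rng or np.random.default_rng()
    chosen = rng.integers(0, len(pos_links), size=m)
    batch_pos = [pos_links[i] for i in chosen]
    batch_neg = []
    for (u, _) in batch_pos:
        neg_items = rng.choice(num_items, size=k, p=p_n)
        batch_neg.extend((u, int(v)) for v in neg_items)
    return batch_pos, batch_neg
```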
Ting Chen, Yizhou Sun, Yue Shi, and Liangjie Hong | 1706.07881#62 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 63 | KDD â17, August 13-17, 2017, Halifax, NS, Canada
Stratified Sampling (by Items). In Stratified Sampling (by Items), a batch B consists of link samples drawn in two steps: (1) draw an item $v \sim P_d(v)$, and (2) draw positive users $u \sim P_d(u|v)$ and negative users $u' \sim P_d(u)$, respectively. Additionally, negative terms are also re-weighted, thus we have
$$\begin{aligned}
\mathbb{E}_{\mathcal{B}}[\nabla\mathcal{L}_{\mathcal{B}}(\theta^t)] &= \mathbb{E}_{v\sim P_d(v)}\Big[\mathbb{E}_{u\sim P_d(u|v)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{u\sim P_d(u)}\big[c_{uv}\tfrac{P_n(v)}{P_d(v)}\nabla\mathcal{L}^{-}(\theta|u,v)\big]\Big] \\
&= \mathbb{E}_{(u,v)\sim P_d(u,v)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{(u,v)\sim P_d(u)P_d(v)}\big[c_{uv}\tfrac{P_n(v)}{P_d(v)}\nabla\mathcal{L}^{-}(\theta|u,v)\big] \\
&= \mathbb{E}_{(u,v)\sim P_d(u,v)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{(u,v)\sim P_d(u)P_n(v)}[c_{uv}\nabla\mathcal{L}^{-}(\theta|u,v)] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{v'\sim P_n(v')}[c_{uv'}\nabla\mathcal{L}^{-}(\theta|u,v')]\Big]
\end{aligned}$$ | 1706.07881#63 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 64 | =âL(θ t )
The first equality is due to the definition of the sampling procedure, and the second, third, and fourth equalities are due to the properties of joint probability distributions and expectations.
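A small illustrative sketch of this two-step stratified sampler (the data structures such as `item_users` are assumptions for the example, and P_d(u) is approximated as uniform over users):

```python
# Sketch of Stratified Sampling (by items): draw an item v ~ P_d(v), then
# positive users from v's observed interactions and negative users from the
# user marginal (approximated here as uniform over all users).
import numpy as np

def sample_batch_stratified_by_items(item_users, item_probs, num_users,
                                     n_pos, n_neg, rng=None):
    rng = rng or np.random.default_rng()
    v = int(rng.choice(len(item_probs), p=item_probs))          # v ~ P_d(v)
    users = np.asarray(item_users[v])
    pos_u = rng.choice(users, size=min(n_pos, len(users)), replace=False)
    neg_u = rng.integers(0, num_users, size=n_neg)               # u' ~ P_d(u)
    return [(int(u), v) for u in pos_u], [(int(u), v) for u in neg_u]
```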
Negative Sharing. In Negative Sharing, we only draw positive links uniformly at random (i.e. (u, v) â¼ Pd (u, v)), while constructing negative links from sharing the items in the batch. So the batch B we use for computing gradient consists of both m positive links and m(m â 1) negative links.
Although we do not draw negative links directly, we can still calculate their probability according to the probability distribution from which we draw the positive links. So a pair of constructed negative link in the batch is drawn from (u, v) â¼ Pd (u, v) = Pd (v)Pd (u|v). Additionally, negative terms are also re-weighted, we have EB [âLB (θ t )] | 1706.07881#64 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 65 | $$\begin{aligned}\mathbb{E}_{\mathcal{B}}[\nabla\mathcal{L}_{\mathcal{B}}(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u,v)\sim P_d(u,v)}\big[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \frac{1}{m(m-1)}\sum_{i=1}^{m(m-1)}\mathbb{E}_{(u,v)\sim P_d(u,v)}\Big[c_{uv}\tfrac{P_n(v)}{P_d(v)}\nabla\mathcal{L}^{-}(\theta|u,v)\Big] \\ &= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{v'\sim P_n(v')}[c_{uv'}\nabla\mathcal{L}^{-}(\theta|u,v')]\Big]\end{aligned}$$
=âL(θ t )
The first equality is due to the definition of sampling procedure, and the second equality is due to the properties of joint probability distribution and expectation.
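The practical appeal of Negative Sharing is that, with the batch's user and item embeddings stacked into matrices, every user-item score in the batch comes from a single matrix multiplication; a minimal sketch (illustrative, not the paper's code):

```python
# Sketch of Negative Sharing scoring: for m positive (user, item) pairs, the
# other m-1 in-batch items act as shared negatives, so all m x m scores are
# produced by one matrix multiplication.
import numpy as np

def in_batch_scores(user_emb, item_emb):
    # user_emb, item_emb: (m, d) arrays, row i holding the i-th positive pair.
    scores = user_emb @ item_emb.T                       # (m, m)
    pos = np.diag(scores)                                # m positive scores
    off_diag = ~np.eye(scores.shape[0], dtype=bool)
    neg = scores[off_diag].reshape(scores.shape[0], -1)  # m x (m-1) negatives
    return pos, neg
```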
Stratified Sampling with Negative Sharing. Under this setting, we follow a two-step sampling procedure: (1) draw an item v â¼ Pd (v), and (2) draw positive users u â¼ Pd (u|v). Negative links are constructed from independently drawn items in the same batch. So the batch B consists of m positive links and n negative links.
We can use the same method as in Negative Sharing to calculate the probability of sampled negative links, which is also (u, v) â¼ Pd (u, v). Again, negative terms are re-weighted, thus we have
EB [âLB (θ t )] 1 | 1706.07881#65 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 66 | EB [âLB (θ t )] 1
$$\begin{aligned}\mathbb{E}_{\mathcal{B}}[\nabla\mathcal{L}_{\mathcal{B}}(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{v\sim P_d(v),\,u\sim P_d(u|v)}\big[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{(u,v)\sim P_d(u,v)}\Big[c_{uv}\tfrac{P_n(v)}{P_d(v)}\nabla\mathcal{L}^{-}(\theta|u,v)\Big] \\ &= \mathbb{E}_{(u,v)\sim P_d(u,v)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{(u,v)\sim P_d(u)P_n(v)}[c_{uv}\nabla\mathcal{L}^{-}(\theta|u,v)] \\ &= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}[c_{uv}\nabla\mathcal{L}^{+}(\theta|u,v)] + \mathbb{E}_{v'\sim P_n(v')}[c_{uv'}\nabla\mathcal{L}^{-}(\theta|u,v')]\Big]\end{aligned}$$
=âL(θ t )
The first equality is due to the definition of the sampling procedure, and the second, third, and fourth equalities are due to the properties of joint probability distributions and expectations. □
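A sketch combining the two ideas (items drawn first, users drawn per item, negatives shared from the other in-batch items); the helper names are illustrative assumptions:

```python
# Sketch of Stratified Sampling with Negative Sharing: each sampled user is
# scored against every distinct in-batch item in one matrix multiplication;
# the user's own item provides the positive score, the rest act as negatives.
import numpy as np

def stratified_shared_scores(user_emb, item_emb, pos_item_idx):
    # user_emb: (m, d); item_emb: (n, d) distinct in-batch items;
    # pos_item_idx[i]: index into item_emb of user i's positive item.
    scores = user_emb @ item_emb.T                          # (m, n)
    rows = np.arange(user_emb.shape[0])
    pos = scores[rows, pos_item_idx]
    mask = np.ones_like(scores, dtype=bool)
    mask[rows, pos_item_idx] = False
    neg = scores[mask].reshape(user_emb.shape[0], -1)       # (m, n-1)
    return pos, neg
```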
â
Proposition 1. Suppose L has o-bounded gradient; let np =n = c/VT where c = | AO 60"), and 6* is the minimizer to L. Then, the following holds for the sampling strategies given in Algorithm 2, 3, 4, 5 | 1706.07881#66 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 67 | $$\min_{0\le t\le T-1} \mathbb{E}\big[\|\nabla\mathcal{L}(\theta^t)\|^2\big] \;\le\; \sqrt{\frac{2\big(\mathcal{L}(\theta^0)-\mathcal{L}(\theta^*)\big)L}{T}}\;\sigma$$
Proof. With the property of L-smooth function L, we have
$$\mathbb{E}[\mathcal{L}(\theta^{t+1})] \le \mathbb{E}\Big[\mathcal{L}(\theta^{t}) + \big\langle \nabla\mathcal{L}(\theta^{t}),\, \theta^{t+1}-\theta^{t}\big\rangle + \frac{L}{2}\,\|\theta^{t+1}-\theta^{t}\|^{2}\Big] \qquad (9)$$
By applying the stochastic update equation, lemma 1, i.e. EB [âLB (θ t )] = âL(θ t ), we have
$$\mathbb{E}\Big[\big\langle \nabla\mathcal{L}(\theta^{t}),\, \theta^{t+1}-\theta^{t}\big\rangle + \frac{L}{2}\,\|\theta^{t+1}-\theta^{t}\|^{2}\Big] \le -\eta_t\,\mathbb{E}\big[\|\nabla\mathcal{L}(\theta^{t})\|^{2}\big] + \frac{L\eta_t^{2}}{2}\,\mathbb{E}\big[\|\nabla\mathcal{L}_{\mathcal{B}}(\theta^{t})\|^{2}\big] \qquad (10)$$
Combining the results in Eq. 9 and Eq. 10 with the assumption that L has a σ-bounded gradient, we have
2 Lη t 2b | 1706.07881#67 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 68 | Combining results in Eq. 9 and 10, with assumption that the function L is Ï -bounded, we have
2 Lη t 2b
E[L(θ t +1)] ⤠E[L(θ t )] + ηt E[â¥âL(θ t )â¥2] + Ï
2
Rearranging the above equation we obtain
1 E[L(θ t â L(θ t +1)] + Lηt 2b E[â¥âL(θ t )â¥2] â¤ Ï 2 (11)
t we have 1 T-1
â
By summing Eq. 11 from t = 0 to T â 1 and setting η = c/
c/-VT,
1 E[â¥âL(θ t )â¥2] ⤠E[â¥L(θ t )â¥2] min t T 0 ⤠c 1 â T (L(θ 0) â L(θ â)) + Lc â 2 T Ï 2 (12)
By setting
c = 2(L(θ 0) â L(θ â)) LÏ 2
We obtain the desired result. | 1706.07881#68 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 69 | By setting
$$c = \sqrt{\frac{2\big(\mathcal{L}(\theta^{0}) - \mathcal{L}(\theta^{*})\big)}{L\sigma^{2}}}$$
We obtain the desired result.
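As a quick check of the algebra, substituting this choice of c back into Eq. 12 makes the two terms on the right-hand side equal and yields the rate stated in Proposition 1:

```latex
\frac{1}{c\sqrt{T}}\big(\mathcal{L}(\theta^{0})-\mathcal{L}(\theta^{*})\big)
  + \frac{Lc}{2\sqrt{T}}\,\sigma^{2}
  \;=\; 2\,\sqrt{\frac{\big(\mathcal{L}(\theta^{0})-\mathcal{L}(\theta^{*})\big)L\sigma^{2}}{2T}}
  \;=\; \sqrt{\frac{2\big(\mathcal{L}(\theta^{0})-\mathcal{L}(\theta^{*})\big)L}{T}}\;\sigma .
```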
B VECTOR DOT PRODUCT VERSUS MATRIX MULTIPLICATION Here we provide some empirical evidence for the computation time difference of replacing vector dot products with matrix multiplication. Vector dot products can be batched as an element-wise matrix multiplication followed by summing over each row. We compare two operations between two square matrices of size n: (1) element-wise matrix multiplication, and (2) matrix multiplication. A straightforward implementation of the former has algorithmic complexity of O(n^2), and of the latter O(n^3). However, modern computation devices such as GPUs are better optimized for the latter, so when the matrix size is relatively small, their computation time can be quite similar. This is demonstrated in Figure 9. In our choice of batch size and embedding dimension, n ≪ 1000, so the computation time is comparable. Furthermore, t_i ≪ t_g, so even a several-times increase would be negligible.
Figure 9: The computation time ratio between matrix multiplication and element-wise matrix multiplication for different square matrix sizes.
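A rough way to reproduce this kind of measurement (a CPU/NumPy sketch; the exact ratio depends heavily on hardware and BLAS, and the paper's figure was presumably measured on a GPU):

```python
# Timing sketch: batched dot products (element-wise multiply + row sum)
# versus a full matrix multiplication, reporting their runtime ratio.
import time
import numpy as np

def best_time(fn, repeats=5):
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

for n in (256, 1024, 4096):
    a, b = np.random.rand(n, n), np.random.rand(n, n)
    t_elem = best_time(lambda: (a * b).sum(axis=1))   # batched dot products
    t_matmul = best_time(lambda: a @ b.T)             # full matrix multiplication
    print(f"n={n}: matmul / element-wise time ratio = {t_matmul / t_elem:.2f}")
```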
â¡
ao | 1706.07881#69 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07881 | 70 | Figure 9: The computation time ratio between matrix multiplication and element-wise matrix multiplication for different square matrix sizes.
â¡
C FUNCTIONAL EMBEDDING VERSUS FUNCTIONAL REGULARIZATION In this work we propose a functional embedding framework, in which the embedding of a user/item is obtained by some function such as a neural network. We note that another approach is to penalize the distance between the user/item embedding and the function output (instead of equating them directly as in functional embedding), which we refer to as functional regularization; it is used in [32]. More specifically, functional regularization yields the following form of loss function:
L(hu , hv ) + λâ¥hu â f(xu )â¥2 Here we point out its main issue, which does not appear in Functional Embedding. In order to equate the two embedding vectors, we need to increase λ. However, setting large λ will slow down the training progress under coordinate descent. The gradient w.r.t. hu is u â ft (xu ), which means hu cannot be effectively updated by interaction information. âhu | 1706.07881#70 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 | [
{
"id": "1706.01084"
},
{
"id": "1604.04661"
}
] |
1706.07269 | 1 | # Tim Miller
School of Computing and Information Systems University of Melbourne, Melbourne, Australia tmiller@ unimelb. edu. au
# Abstract
There has been a recent resurgence in the area of explainable artiï¬cial intelligence as re- searchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artiï¬cial intelligence. However, it is fair to say that most work in explainable artiï¬cial intelligence uses only the researchersâ intuition of what constitutes a âgoodâ explanation. There exists vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people deï¬ne, gener- ate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the ï¬eld of explainable artiï¬cial intelligence should build on this existing re- search, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important ï¬ndings, and discusses ways that these can be infused with work on explainable artiï¬cial intelligence. | 1706.07269#1 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 2 | Keywords: Explanation, Explainability, Interpretability, Explainable AI, Transparency
# Contents
3 4 6 7 7 2.1 Deï¬nitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.1 Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.2 Explanation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1.3 Explanation as a Product . . . . . . . . . . . . . . . . . . . . . . . 2.1.4 Explanation as Abductive Reasoning . . . . . . . . . . . . . . . . . Interpretability and Justiï¬cation . . . . . . . . . . . . . . . . . . . 2.1.5 8 8 8 11 12 13 14 | 1706.07269#2 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 3 | 1 Introduction 1.1 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Major Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Outline 1.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
# 2 Philosophical Foundations â What Is Explanation?
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
2.2 Why People Ask for Explanations
2.3 Contrastive Explanation
2.4 Types and Levels of Explanation
2.5 Structure of Explanation
2.6 Explanation and XAI
2.6.1 Causal Attribution is Not Causal Explanation
2.6.2 Contrastive Explanation
2.6.3 Explanatory Tasks and Levels of Explanation
2.6.4 Explanatory Model of Self
3.1 Definitions
3.2 Intentionality and Explanation
3.3 Beliefs, Desires, Intentions, and Traits
3.3.1 Malle's Conceptual Model for Social Attribution
3.4 Individual vs. Group Behaviour
3.5 Norms and Morals
3.6 Social Attribution and XAI
3.6.1 Folk Psychology
3.6.2 Malle's Models
3.6.3 Collective Intelligence
3.6.4 Norms and Morals
4.1 Causal Connection, Explanation Selection, and Evaluation
4.2 Causal Connection: Abductive Reasoning
4.2.1 Abductive Reasoning and Causal Types
4.2.2 Background and Discounting
4.2.3 Explanatory Modes
4.2.4 Inherent and Extrinsic Features
4.3 Causal Connection: Counterfactuals and Mutability
4.3.1 Abnormality
4.3.2 Temporality
4.3.3 Controllability and Intent
4.3.4 Social Norms
4.4 Explanation Selection
4.4.1 Facts and Foils
4.4.2 Abnormality
4.4.3 Intentionality and Functionality
4.4.4 Necessity, Sufficiency and Robustness
4.4.5 Responsibility
4.4.6 Preconditions, Failure, and Intentions
4.5 Explanation Evaluation
4.5.1 Coherence, Simplicity, and Generality
4.5.3 Goals and Explanatory Mode
4.6 Cognitive Processes and XAI
4.6.1 Abductive Reasoning
4.6.2 Mutability and Computation
4.6.3 Abnormality
4.6.4 Intentionality and Functionality
4.6.5 Perspectives and Controllability
4.6.6 Evaluation of Explanations
5 Social Explanation – How Do People Communicate Explanations?
5.1 Explanation as Conversation
5.1.1 Logic and Conversation
5.1.2 Relation & Relevance in Explanation Selection
5.1.3 Argumentation and Explanation
5.1.4 Linguistic structure
5.2 Explanatory Dialogue
5.3 Social Explanation and XAI
5.3.1 Conversational Model
5.3.2 Dialogue
5.3.3 Theory of Mind
5.3.4 Implicature
5.3.5 Dilution
5.3.6 Social and Interactive Explanation
6 Conclusions
# 1. Introduction
Recently, the notion of explainable artificial intelligence has seen a resurgence, after having slowed since the burst of work on explanation in expert systems over three decades ago; for example, see Chandrasekaran et al. [23], [168], and Buchanan and Shortliffe [14]. Sometimes abbreviated XAI (eXplainable artificial intelligence), the idea can be found in grant solicitations [32] and in the popular press [136]. This resurgence is driven by evidence that many AI applications have limited take up, or are not appropriated at all, due to ethical concerns [2] and a lack of trust on behalf of their users [166, 101]. The running hypothesis is that by building more transparent, interpretable, or explainable systems, users will be better equipped to understand and therefore trust the intelligent agents [129, 25, 65].
While there are many ways to increase trust and transparency of intelligent agents, two complementary approaches will form part of many trusted autonomous systems: (1) generating decisions1 in which one of the criteria taken into account during the computation is how well a human could understand the decisions in the given context, which is often called interpretability or explainability; and (2) explicitly explaining decisions
to people, which we will call explanation. Applications of explanation are considered in many sub-fields of artificial intelligence, such as justifying autonomous agent behaviour [129, 65], debugging of machine learning models [89], explaining medical decision-making [45], and explaining predictions of classifiers [157].

1 We will use decision as the general term to encompass outputs from AI systems, such as categorisations, action selection, etc.
If we want to design and implement intelligent agents that are truly capable of providing explanations to people, then it is fair to say that models of how humans explain decisions and behaviour to each other are a good way to start analysing the problem. Researchers argue that people employ certain biases [82] and social expectations [72] when they generate and evaluate explanations, and I argue that such biases and expectations can improve human interactions with explanatory AI. For example, de Graaf and Malle [34] argue that because people assign human-like traits to artificial agents, people will expect explanations using the same conceptual framework used to explain human behaviours.
Despite the recent resurgence of explainable AI, most of the research and practice in this area seems to use the researchers' intuitions of what constitutes a "good" explanation. Miller et al. [132] show in a small sample that research in explainable AI typically does not cite or build on frameworks of explanation from social science. They argue that this could lead to failure. The very experts who understand decision-making models the best are not in the right position to judge the usefulness of explanations to lay users – a phenomenon that Miller et al. refer to (paraphrasing Cooper [31]) as "the inmates running the asylum". Therefore, a strong understanding of how people define, generate, select, evaluate, and present explanations seems almost essential.
In the fields of philosophy, psychology, and cognitive science, there is a vast and mature body of work that studies these exact topics. For millennia, philosophers have asked questions about what constitutes an explanation, what is the function of explanations, and what is their structure. For over 50 years, cognitive and social psychologists have analysed how people attribute and evaluate the social behaviour of others. For over two decades, cognitive psychologists and scientists have investigated how people generate explanations and how they evaluate their quality.
I argue here that there is considerable scope to infuse this valuable body of research into explainable AI. Building intelligent agents capable of explanation is a challenging task, and approaching this challenge in a vacuum, considering only the computational problems, will not solve the greater problems of trust in AI. Further, while some recent work builds on the early findings on explanation in expert systems, that early research was undertaken prior to much of the work on explanation in social science. I contend that newer theories can form the basis of explainable AI – although there is still a lot to learn from early work in explainable AI around design and implementation.
This paper aims to promote the inclusion of this existing research into the field of explanation in AI. As part of this work, over 250 publications on explanation were surveyed from social science venues. A smaller subset of these were chosen to be presented in this paper, based on their currency and relevance to the topic. The paper presents relevant theories on explanation, describes, in many cases, the experimental evidence supporting these theories, and presents ideas on how this work can be infused into explainable AI.
# 1.1. Scope
In this article, the term "Explainable AI" loosely refers to an explanatory agent revealing underlying causes to its or another agent's decision making. However, it is important
to note that the solution to explainable AI is not just "more AI". Ultimately, it is a human-agent interaction problem. Human-agent interaction can be defined as the intersection of artificial intelligence, social science, and human-computer interaction (HCI); see Figure 1. Explainable AI is just one problem inside human-agent interaction.

Figure 1: Scope of Explainable Artificial Intelligence
This article highlights the top circle in Figure 1: the philosophy, social and cognitive psychology, and cognitive science views of explanation, and their relation to the other two circles: their impact on the design of both artificial intelligence and our interactions with them. With this scope of explainable AI in mind, the scope of this article is threefold:

• Survey: To survey and review relevant articles on the philosophical, cognitive, and social foundations of explanation, with an emphasis on "everyday" explanation.

• Everyday explanation: To focus on "everyday" (or local) explanations as a tool and process for an agent, who we call the explainer, to explain decisions made by itself or another agent to a person, who we call the explainee. "Everyday" explanations are the explanations of why particular facts (events, properties, decisions, etc.) occurred, rather than explanations of more general relationships, such as those seen in scientific explanation. We justify this focus based on the observation from the AI literature that trust is lost when users cannot understand traces of observed behaviour or decisions [166, 129], rather than trying to understand and construct generalised theories. Despite this, everyday explanations also sometimes refer to generalised theories, as we will see later in Section 2, so scientific explanation is relevant, and some work from this area is surveyed in the paper.

• Relationship to Explainable AI: To draw important points from relevant articles to some of the different sub-fields of explainable AI.

The following topics are considered out of scope of this article:
• Causality: While causality is important in explanation, this paper is not a survey of the vast work on causality. I review the major positions in this field insofar as they relate to models of explanation.

• Explainable AI: This paper is not a survey of existing approaches to explanation or interpretability in AI, except those that directly contribute to the topics in scope or build on social science. For an excellent short survey on explanation in machine learning, see Biran and Cotton [9].
# 1.2. Major Findings
As part of this review, I highlight four major findings from the surveyed literature that I believe are important for explainable AI, but of which I believe most researchers and practitioners in artificial intelligence are currently unaware:
1. Explanations are contrastive – they are sought in response to particular counterfactual cases, which are termed foils in this paper. That is, people do not ask why event P happened, but rather why event P happened instead of some event Q. This has important social and computational consequences for explainable AI. In Sections 2–4, models of how people provide contrastive explanations are reviewed.
2. Explanations are selected (in a biased manner) – people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation. However, this selection is influenced by certain cognitive biases. In Section 4, models of how people select explanations, including how this relates to contrast cases, are reviewed.
3. Probabilities probably don't matter – while truth and likelihood are important in explanation and probabilities really do matter, referring to probabilities or statistical relationships in explanation is not as effective as referring to causes. The most likely explanation is not always the best explanation for a person, and importantly, using statistical generalisations to explain why events occur is unsatisfying, unless accompanied by an underlying causal explanation for the generalisation itself.
4. Explanations are social – they are a transfer of knowledge, presented as part of a conversation2 or interaction, and are thus presented relative to the explainer's beliefs about the explainee's beliefs. In Section 5, models of how people interact regarding explanations are reviewed.
2 Note that this does not imply that explanations must be given in natural language, but implies that explanation is a social interaction between the explainer and the explainee.
These four points all converge around a single point: explanations are not just the presentation of associations and causes (causal attribution), they are contextual. While an event may have many causes, often the explainee cares only about a small subset (relevant to the context), the explainer selects a subset of this subset (based on several different criteria), and explainer and explainee may interact and argue about this explanation.
I assert that, if we are to build truly explainable AI, especially intelligent systems that are able to offer explanations, then these four points are imperative in many applications.
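To make this contextual selection concrete, the sketch below shows one way such a pipeline could be operationalised; it is a hypothetical illustration, not a method proposed in the paper, and every name, field, and ranking criterion in it is an assumption of mine. It filters the causal history down to causes relevant to the explainee's contrast case, and then presents only a small, biased subset.

```python
# A hypothetical sketch of contextual explanation selection: keep only the
# causes relevant to the explainee's foil, then select one or two according
# to a simple preference. The data structure and weights are illustrative
# assumptions, not taken from the surveyed literature.

def select_explanation(causes: list[dict], foil: str, k: int = 2) -> list[dict]:
    """causes: dicts with a 'label', a set of foils the cause 'discriminates'
    against, and a boolean 'abnormal' flag."""
    # 1. Keep only causes that actually discriminate the fact from the foil.
    relevant = [c for c in causes if foil in c["discriminates"]]
    # 2. Prefer abnormal causes (a selection bias discussed in Section 4),
    #    breaking ties by shorter, simpler labels.
    ranked = sorted(relevant, key=lambda c: (not c["abnormal"], len(c["label"])))
    # 3. Present only a small subset, not the full causal history.
    return ranked[:k]

candidate_causes = [
    {"label": "eight legs", "discriminates": {"Beetle", "Bee", "Fly"}, "abnormal": True},
    {"label": "no wings", "discriminates": {"Bee", "Fly"}, "abnormal": False},
    {"label": "trained on labelled images", "discriminates": set(), "abnormal": False},
]
print(select_explanation(candidate_causes, foil="Beetle", k=1))  # -> the "eight legs" cause
```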
# 1.3. Outline
The outline of this paper is as follows. Section 1.4 presents a motivating example of an explanatory agent that is used throughout the paper. Section 2 presents the philosophical foundations of explanation, defining what explanations are, what they are not, how they relate to causes, their meaning and their structure. Section 3 focuses on one specific type of explanation – those relating to human or social behaviour, while Section 4 surveys work on how people generate and evaluate explanations more generally; that is, not just social behaviour. Section 5 describes research on the dynamics of interaction in explanation between explainer and explainee. Section 6 concludes and highlights several major challenges to explanation in AI.
# 1.4. Example
This section presents a simple example, which is used to illustrate many important concepts throughout this paper. It is of a hypothetical system that categorises images of arthropods into several different types, based on certain physical features of the arthropods, such as number of legs, number of eyes, number of wings, etc. The algorithm is assumed to have been trained on a large set of valid data and is highly accurate. It is used by entomologists to do automatic classification of their research data. Table 1 outlines a simple model of the features of arthropods for illustrative purposes. An explanation function is available for the arthropod system.
| Type   | No. Legs | Stinger | No. Eyes | Compound Eyes | Wings |
|--------|----------|---------|----------|---------------|-------|
| Spider | 8        | ✗       | 8        | ✗             | 0     |
| Beetle | 6        | ✗       | 2        | ✓             | 2     |
| Bee    | 6        | ✓       | 5        | ✓             | 4     |
| Fly    | 6        | ✗       | 5        | ✓             | 2     |

Table 1: A simple lay model for distinguishing common arthropods.
Now, consider the idealised and simple dialogue between a human user and "ExplAgent", the interactive explanation agent, outlined in Figure 2. This dialogue is not intended to be realistic, but is merely illustrative of how a particular explanatory agent may interact: responding to posed questions, using mixed modalities – in this case, language and visual images – and being able to answer a range of questions about its decision making. This example shows different types of questions being posed, and demonstrates that the explanatory agent will need to keep track of the state of the explanation; for example, by noting what it has already told the explainee, and may have to infer what the explainee has inferred themselves.
Person: "Why is image J labelled as a Spider instead of a Beetle?"

ExplAgent: "Because the arthropod in image J has eight legs, consistent with those in the category Spider, while those in Beetle have six legs."

Person: "Why did you infer that the arthropod in image J had eight legs instead of six?"

ExplAgent: "I counted the eight legs that I found, as I have just highlighted on the image now." (ExplAgent shows the image with the eight legs counted.)

Person: "How do you know that spiders have eight legs?"

ExplAgent: "Because in the training set I was trained on, almost all animals with eight legs were labelled as Spider."

Person: "But an octopus can have eight legs too. Why did you not classify image J as an octopus?"

ExplAgent: "Because my function is only to classify arthropods."

Figure 2: Example Explanation Dialogue between a Person and an Explanation Agent
We will refer back to this example throughout the paper and link different parts of the work to the different parts of the dialogue above.
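To ground the example, here is a minimal sketch of how such a system and its contrastive answers might be mocked up; it is my own illustration under stated assumptions (the feature names, the dictionary encoding of Table 1, and the simple feature-matching heuristic are all mine), not the paper's implementation.

```python
# A minimal sketch (not the paper's system): the Table 1 feature model,
# a naive feature-matching classifier over it, and a contrastive
# "why Spider instead of Beetle?" answer that cites only the observed
# features separating the fact from the foil.

CATEGORIES = {
    "Spider": {"legs": 8, "stinger": False, "eyes": 8, "compound_eyes": False, "wings": 0},
    "Beetle": {"legs": 6, "stinger": False, "eyes": 2, "compound_eyes": True,  "wings": 2},
    "Bee":    {"legs": 6, "stinger": True,  "eyes": 5, "compound_eyes": True,  "wings": 4},
    "Fly":    {"legs": 6, "stinger": False, "eyes": 5, "compound_eyes": True,  "wings": 2},
}

def classify(observed: dict) -> str:
    """Pick the category whose features agree with the most observed features."""
    def score(features: dict) -> int:
        return sum(observed.get(name) == value for name, value in features.items())
    return max(CATEGORIES, key=lambda label: score(CATEGORIES[label]))

def contrastive_explanation(observed: dict, fact: str, foil: str) -> str:
    """Explain why `fact` was chosen rather than `foil`, citing only the
    observed features that match the fact but not the foil."""
    differences = [
        f"it has {observed[name]!r} for '{name}', consistent with {fact} "
        f"but not {foil} (which has {CATEGORIES[foil][name]!r})"
        for name in observed
        if observed[name] == CATEGORIES[fact][name] and observed[name] != CATEGORIES[foil][name]
    ]
    return f"Labelled {fact} instead of {foil} because " + "; and ".join(differences) + "."

if __name__ == "__main__":
    image_j = {"legs": 8, "stinger": False, "eyes": 8, "compound_eyes": False, "wings": 0}
    label = classify(image_j)                                  # -> "Spider"
    print(contrastive_explanation(image_j, label, "Beetle"))
```

Reporting only the features that separate the fact from the foil mirrors the first exchange in Figure 2, where the agent cites the number of legs rather than reciting its full decision procedure.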
# 2. Philosophical Foundations – What Is Explanation?
To explain an event is to provide some information about its causal history. In an act of explaining, someone who is in possession of some information about the causal history of some event – explanatory information, I shall call it – tries to convey it to someone else. – Lewis [99, p. 217]
In this section, we outline foundational work in explanation, which helps to define causal explanation and how it differs from other concepts such as causal attribution and interpretability.
# 2.1. Definitions
There are several related concepts in explanation, which seem to be used interchangeably between authors and also within articles, often demonstrating some conflation of the terms. In particular, this section describes the difference between causal attribution and causal explanation. We will also briefly touch on the difference between explanation and interpretability.
# 2.1.1. Causality
The idea of causality has attracted much work, and there are several different accounts of what constitutes a cause of an event or property. The various definitions of causation can be broken into two major categories: dependence theories and transference theories.
Causality and Counterfactuals. Hume [79, Section VII] is credited with deriving what is known as the regularity theory of causation. This theory states that there is a cause between two types of events if events of the first type are always followed by events of the second. However, as argued by Lewis [98], the definition due to Hume is in fact about counterfactuals, rather than dependence alone. Hume argues that the co-occurrence of events C and E, observed from experience, does not give causal information that is useful. Instead, the cause should be understood relative to an imagined, counterfactual case: event C is said to have caused event E if, under some hypothetical counterfactual case in which event C did not occur, E would not have occurred. This definition has been argued and refined, and many definitions of causality are based around this idea in one way or another; c.f. Lewis [98], Hilton [71].
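The counterfactual reading can be made concrete with a toy structural model. The sketch below is my own illustration (the "spark and flammable material" scenario and its structural equation are assumptions for the example), showing the test "E occurred, and would not have occurred had C not occurred".

```python
# A toy illustration of the counterfactual reading of causation:
# C caused E if, holding the background fixed, removing C means E no
# longer occurs. The scenario and structural equation are hypothetical.

def would_e_occur(c_occurs: bool, background: dict) -> bool:
    # Structural equation for E: a fire (E) needs both a spark (C)
    # and flammable material present (background condition).
    return c_occurs and background["flammable_material"]

def counterfactually_caused(background: dict) -> bool:
    actual = would_e_occur(c_occurs=True, background=background)
    counterfactual = would_e_occur(c_occurs=False, background=background)
    return actual and not counterfactual  # E occurred, and would not have without C

print(counterfactually_caused({"flammable_material": True}))   # True: C caused E
print(counterfactually_caused({"flammable_material": False}))  # False: E does not occur at all
```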
This classical counterfactual model of causality is well understood, but competing definitions exist. Interventionist theories of causality [191, 58] state that event C can be deemed a cause of event E if and only if any change to event E can be brought about solely by intervening on event C. Probabilistic theories, which are extensions of interventionist theories, state that event C is a cause of event E if and only if the occurrence of C increases the probability of E occurring [128].
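As a deliberately simplified illustration of the probabilistic criterion, the sketch below estimates from repeated trials whether E is more frequent when C occurs than when it does not; note that a genuinely interventionist test would require actively setting C rather than merely observing it, so this is only a stand-in for "increases the probability".

```python
# A rough sketch of the probabilistic criterion: compare the estimated
# frequency of E in trials where C occurred against trials where it did not.
# (Conditioning is used here purely for illustration; interventionist
# accounts would compare interventions on C instead.)

def raises_probability(samples: list[tuple[bool, bool]]) -> bool:
    """samples: (c_occurred, e_occurred) pairs from repeated trials."""
    with_c = [e for c, e in samples if c]
    without_c = [e for c, e in samples if not c]
    if not with_c or not without_c:
        return False  # cannot compare without both kinds of trials
    p_e_given_c = sum(with_c) / len(with_c)
    p_e_given_not_c = sum(without_c) / len(without_c)
    return p_e_given_c > p_e_given_not_c

trials = [(True, True), (True, True), (True, False),
          (False, False), (False, True), (False, False)]
print(raises_probability(trials))  # True: E is more frequent when C occurs
```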
Transference theories [5, 43, 39], on the other hand, are not defined on dependence, but instead describe physical causation as the transference of energy between objects. In short, if E is an event representing the change of energy of an object O, then C causes E if object O is in contact with the object that causes C, and there is some quantity of energy transferred.