Minimax Theory for High-dimensional Gaussian
Mixtures with Sparse Mean Separation
Martin Azizyan
Machine Learning Department
Carnegie Mellon University
[email protected]
Aarti Singh
Machine Learning Department
Carnegie Mellon University
[email protected]
Larry Wasserman
Department of Statistics
Carnegie Mellon University
[email protected]
Abstract
While several papers have investigated computationally and statistically efficient
methods for learning Gaussian mixtures, precise minimax bounds for their statistical performance as well as fundamental limits in high-dimensional settings are not
well-understood. In this paper, we provide precise information theoretic bounds
on the clustering accuracy and sample complexity of learning a mixture of two
isotropic Gaussians in high dimensions under small mean separation. If there is
a sparse subset of relevant dimensions that determine the mean separation, then
the sample complexity only depends on the number of relevant dimensions and
mean separation, and can be achieved by a simple computationally efficient procedure. Our results provide the first step of a theoretical basis for recent methods
that combine feature selection and clustering.
1 Introduction
Gaussian mixture models provide a simple framework for several machine learning problems including clustering, density estimation and classification. Mixtures are especially appealing in high
dimensional problems. Perhaps the most common use of Gaussian mixtures is for clustering. Of
course, the statistical (and computational) behavior of these methods can degrade in high dimensions. Inspired by the success of variable selection methods in regression, several authors have
considered variable selection for clustering. However, there appear to be no theoretical results
justifying the advantage of variable selection in the high-dimensional setting.
To see why some sort of variable selection might be useful, consider clustering n subjects using a
vector of d genes for each subject. Typically d is much larger than n which suggests that statistical
clustering methods will perform poorly. However, it may be the case that there are only a small
number of relevant genes in which case we might expect better behavior by focusing on this small
set of relevant genes.
The purpose of this paper is to provide precise bounds on clustering error with mixtures of Gaussians. We consider both the general case where all features are relevant, and the special case where
only a subset of features are relevant. Mathematically, we model an irrelevant feature by requiring
the mean of that feature to be the same across clusters, so that the feature does not serve to differentiate the groups. Throughout this paper, we use the probability of misclustering an observation,
relative to the optimal clustering if we had known the true distribution, as our loss function. This is
akin to using excess risk in classification.
This paper makes the following contributions:
• We provide information theoretic bounds on the sample complexity of learning a mixture of two isotropic Gaussians with equal weight in the small mean separation setting that precisely capture the dimension dependence, and match known sample complexity requirements for some existing algorithms. This also debunks the myth that there is a gap between the statistical and computational complexity of learning a mixture of two isotropic Gaussians with small mean separation. Our bounds require non-standard arguments since our loss function does not satisfy the triangle inequality.
• We consider the high-dimensional setting where only a subset of relevant dimensions determines the mean separation between mixture components and show that learning is substantially easier, as the sample complexity only depends on the sparse set of relevant dimensions. This provides some theoretical basis for feature selection approaches to clustering.
• We show that a simple computationally feasible procedure nearly achieves the information theoretic sample complexity even in high-dimensional sparse mean separation settings.
Related Work. There is a long and continuing history of research on mixtures of Gaussians. A
complete review is not feasible but we mention some highlights of the work most related to ours.
Perhaps the most popular method for estimating a mixture distribution is maximum likelihood. Unfortunately, maximizing the likelihood is NP-Hard. This has led to a stream of work on alternative
methods for estimating mixtures. These new algorithms use pairwise distances, spectral methods or
the method of moments.
Pairwise methods are developed in Dasgupta (1999), Schulman and Dasgupta (2000) and Arora and
Kannan (2001). These methods require the mean separation to increase with dimension. The first one requires the separation to be √d, while the latter two improve it to d^{1/4}. To avoid this problem,
Vempala and Wang (2004) introduced the idea of using spectral methods for estimating mixtures
of spherical Gaussians which makes mean separation independent of dimension. The assumption
that the components are spherical was removed in Brubaker and Vempala (2008). Their method
only requires the components to be separated by a hyperplane and runs in polynomial time, but
requires n = Ω(d^4 log d) samples. Other spectral methods include Kannan et al. (2005), Achlioptas
and McSherry (2005) and Hsu and Kakade (2013). The latter uses clever spectral decompositions
together with the method of moments to derive an effective algorithm.
Kalai et al. (2012) use the method of moments to get estimates without requiring separation between
components of the mixture components. A similar approach is given in Belkin and Sinha (2010).
Chaudhuri et al. (2009) give a modified k-means algorithm for estimating a mixture of two Gaussians. For the large mean separation setting λ > 1, Chaudhuri et al. (2009) show that n = Ω̃(d/λ²) samples are needed. They also provide an information theoretic bound on the necessary sample complexity of any algorithm which matches the sample complexity of their method (up to log factors) in d and λ. If the mean separation is small, λ < 1, they show that n = Ω̃(d/λ⁴) samples are sufficient.
Our results for the small mean separation setting give a matching necessary condition. Assuming
the separation between the component means is not too sparse, Chaudhuri and Rao (2008) provide
an algorithm for learning the mixture that has polynomial computational and sample complexity.
Most of these papers are concerned with computational efficiency and do not give precise, statistical
minimax upper and lower bounds. None of them deal with the case we are interested in, namely, a
high dimensional mixture with sparse mean separation.
We should also point out that the results in different papers are not necessarily comparable since
different authors use different loss functions. In this paper we use the probability of misclassifying
a future observation, relative to how the correct distribution clusters the observation, as our loss
function. This should not be confused with the probability of attributing a new observation to the
wrong component of the mixture. The latter loss does not typically tend to zero as the sample
size increases. Our loss is similar to the excess risk used in classification where we compare the
misclassification rate of a classifier to the misclassification rate of the Bayes optimal classifier.
Finally, we remind the reader that our motivation for studying sparsely separated mixtures is that
this provides a model for variable selection in clustering problems. There are some relevant recent
papers on this problem in the high-dimensional setting. Pan and Shen (2007) use penalized mixture
models to do variable selection and clustering simultaneously. Witten and Tibshirani (2010) develop
a penalized version of k-means clustering. Related methods include Raftery and Dean (2006); Sun
et al. (2012) and Guo et al. (2010). The applied bioinformatics literature also contains a huge number
of heuristic methods for this problem. None of these papers provide minimax bounds for the clustering error or provide theoretical evidence of the benefit of using variable selection in unsupervised
problems such as clustering.
2 Problem Setup
In this paper, we consider the simple setting of learning a mixture of two isotropic Gaussians with equal mixing weights,¹ given n data points X_1, ..., X_n ∈ R^d drawn i.i.d. from a d-dimensional mixture density function

p_θ(x) = (1/2) f(x; μ_1, σ²I) + (1/2) f(x; μ_2, σ²I),

where f(·; μ, Σ) is the density of N(μ, Σ), σ > 0 is a fixed constant, and θ := (μ_1, μ_2) ∈ Θ. We consider two classes Θ of parameters:

Θ_λ = {(μ_1, μ_2) : ‖μ_1 − μ_2‖ ≥ λ}
Θ_{λ,s} = {(μ_1, μ_2) : ‖μ_1 − μ_2‖ ≥ λ, ‖μ_1 − μ_2‖_0 ≤ s} ⊆ Θ_λ.

The first class defines mixtures where the components have a mean separation of at least λ > 0. The second class defines mixtures with mean separation λ > 0 along a sparse set of s ∈ {1, ..., d} dimensions. Also, let P_θ denote the probability measure corresponding to p_θ.
For a mixture with parameter θ, the Bayes optimal classification, that is, assignment of a point x ∈ R^d to the correct mixture component, is given by the function

F_θ(x) = argmax_{i∈{1,2}} f(x; μ_i, σ²I).

Given any other candidate assignment function F : R^d → {1, 2}, we define the loss incurred by F as

L_θ(F) = min_π P_θ({x : F_θ(x) ≠ π(F(x))}),

where the minimum is over all permutations π : {1, 2} → {1, 2}. This is the probability of misclustering relative to an oracle that uses the true distribution to do optimal clustering.
We denote by F̂_n any assignment function learned from the data X_1, ..., X_n, also referred to as an estimator. The goal of this paper is to quantify how the minimax expected loss (worst case expected loss for the best estimator)

R_n ≡ inf_{F̂_n} sup_{θ∈Θ} E_θ L_θ(F̂_n)

scales with the number of samples n, the dimension of the feature space d, the number of relevant dimensions s, and the signal-to-noise ratio, defined as the ratio of mean separation to standard deviation, λ/σ. We will also demonstrate a specific estimator that achieves the minimax scaling.
For the purposes of this paper, we say that feature j is irrelevant if μ_1(j) = μ_2(j). Otherwise we say that feature j is relevant.
3 Minimax Bounds

3.1 Small mean separation setting without sparsity
We begin without assuming any sparsity, that is, all features are relevant. In this case, comparing
the projections of the data to the projection of the sample mean onto the first principal component
suffices to achieve both minimax optimal sample complexity and clustering loss.
Theorem 1 (Upper bound). Define

F̂_n(x) = 1 if xᵀ v₁(Σ̂_n) ≥ μ̂_nᵀ v₁(Σ̂_n), and F̂_n(x) = 2 otherwise,

where μ̂_n = n⁻¹ Σ_{i=1}^n X_i is the sample mean, Σ̂_n = n⁻¹ Σ_{i=1}^n (X_i − μ̂_n)(X_i − μ̂_n)ᵀ is the sample covariance, and v₁(Σ̂_n) denotes the eigenvector corresponding to the largest eigenvalue of Σ̂_n. If n ≥ max(68, 4d), then

sup_{θ∈Θ_λ} E_θ L_θ(F̂) ≤ 600 max( 4σ²/λ², 1 ) √( d log(nd) / n ).

Furthermore, if λ/σ ≥ 2 max(80, 14√(5d)), then

sup_{θ∈Θ_λ} E_θ L_θ(F̂) ≤ 17 exp(−n/32) + 9 exp(−λ²/(80σ²)).

¹We believe our results should also hold in the unequal mixture weight setting without major modifications.
We note that the estimator in Theorem 1 (and that in Theorem 3) does not use knowledge of σ².
Theorem 2 (Lower bound). Assume that d ≥ 9 and λ/σ ≤ 0.2. Then

inf_{F̂_n} sup_{θ∈Θ_λ} E_θ L_θ(F̂_n) ≥ (1/500) min{ (σ²/(3λ²)) √( (d−1) log 2 / n ), 1/4 }.
We believe that some of the constants (including the lower bound on d and the exact upper bound on λ/σ) can be tightened, but the results demonstrate matching scaling behavior of the clustering error with d, n and λ/σ. Thus, we see (ignoring constants and log terms) that

R_n ≍ (σ²/λ²) √(d/n), or equivalently n ≍ d/(λ⁴/σ⁴) for a constant target value of R_n.

The result is quite intuitive: the dependence on dimension d is as expected. Also we see that the rate depends in a precise way on the signal-to-noise ratio λ/σ. In particular, the results imply that we need d ≲ n.
In modern high-dimensional datasets, we often have d > n, i.e., a large number of features and not enough samples. However, inference is usually tractable since not all features are relevant to the learning task at hand. This sparsity of the relevant feature set has been successfully exploited in supervised learning problems such as regression and classification. We show next that the same is true for clustering under the Gaussian mixture model.
3.2 Sparse and small mean separation setting
Now we consider the case where there are s < d relevant features. Let S denote the set of relevant features. We begin by constructing an estimator Ŝ_n of S as follows. Define

τ̂_n = ((1 + γ)/(1 − γ)) min_{i∈{1,...,d}} Σ̂_n(i, i), where γ = √( 6 log(nd) / n ) + 2 log(nd) / n.

Now let

Ŝ_n = {i ∈ {1, ..., d} : Σ̂_n(i, i) > τ̂_n}.

Now we use the same method as before, but using only the features in Ŝ_n identified as relevant.
Theorem 3 (Upper bound). Define

F̂_n(x) = 1 if x_{Ŝ_n}ᵀ v₁(Σ̂_{Ŝ_n}) ≥ μ̂_{Ŝ_n}ᵀ v₁(Σ̂_{Ŝ_n}), and F̂_n(x) = 2 otherwise,

where x_{Ŝ_n} are the coordinates of x restricted to Ŝ_n, and μ̂_{Ŝ_n} and Σ̂_{Ŝ_n} are the sample mean and covariance of the data restricted to Ŝ_n. If n ≥ max(68, 4s), d ≥ 2 and γ ≤ 1/4, then

sup_{θ∈Θ_{λ,s}} E_θ L_θ(F̂) ≤ 603 max( 16σ²/λ², 1 ) √( s log(ns) / n ) + 220 (σ/λ) ( s² log(nd) / n )^{1/4}.
Next we find the lower bound.
Theorem 4 (Lower bound). Assume that λ/σ ≤ 0.2, d ≥ 17, and that 5 ≤ s ≤ (d + 3)/4. Then

inf_{F̂_n} sup_{θ∈Θ_{λ,s}} E_θ L_θ(F̂_n) ≥ (1/600) min{ (8σ²/(45λ²)) √( ((s−1)/n) log((d−1)/(s−1)) ), 1/2 }.
We remark again that the constants in our bounds can be tightened, but the results suggest that

R_n ≲ (σ²/λ²) √( s log d / n ) + (σ/λ) ( s² log d / n )^{1/4},

or n = Θ( s² log d / (λ⁴/σ⁴) ) for a constant target value of R_n.
In this case, we have a gap between the upper and lower bounds for the clustering loss. Also, the sample complexity can possibly be improved to scale as s (instead of s²) using a different method.
However, notice that the dimension only enters logarithmically. If the number of relevant dimensions
is small then we can expect good rates. This provides some justification for feature selection. We
conjecture that the lower bound is tight and that the gap could be closed by using a sparse principal
component method as in Vu and Lei (2012) to find the relevant features. However, that method is
combinatorial and so far there is no known computationally efficient method for implementing it
with similar guarantees.
We note that the upper bound is achieved by a two-stage method that first finds the relevant dimensions and then estimates the clusters. This is in contrast to the methods described in the introduction
which do clustering and variable selection simultaneously. This raises an interesting question: is it
always possible to achieve the minimax rate with a two-stage procedure or are there cases where a
simultaneous method outperforms a two-stage procedure? Indeed, it is possible that in the case of
general covariance matrices (non-spherical) two-stage methods might fail. We hope to address this
question in future work.
4 Proofs of the Lower Bounds
The lower bounds for estimation problems rely on a standard reduction from expected error to hypothesis testing that assumes the loss function is a semi-distance, which the clustering loss isn't. However, a local triangle inequality-type bound can be shown (Proposition 2). This weaker condition can then be used to lower-bound the expected loss, as stated in Proposition 1 (which follows easily from Fano's inequality).
The proof techniques of the sparse and non-sparse lower bounds are almost identical. The main difference is that in the non-sparse case we use the Varshamov–Gilbert bound (Lemma 1) to construct a set of sufficiently dissimilar hypotheses, whereas in the sparse case we use an analogous result for sparse hypercubes (Lemma 2). See the supplementary material for complete proofs of all results.
In this section and the next, φ and Φ denote the univariate standard normal PDF and CDF.
Lemma 1 (Varshamov–Gilbert bound). Let Ω = {0, 1}^m for m ≥ 8. There exists a subset {ω_0, ..., ω_M} ⊆ Ω such that ω_0 = (0, ..., 0), ρ(ω_i, ω_j) ≥ m/8 for all 0 ≤ i < j ≤ M, and M ≥ 2^{m/8}, where ρ denotes the Hamming distance between two vectors (Tsybakov (2009)).
Lemma 2. Let Ω = {ω ∈ {0, 1}^m : ‖ω‖_0 = s} for integers m > s ≥ 1 such that s ≤ m/4. There exist ω_0, ..., ω_M ∈ Ω such that ρ(ω_i, ω_j) > s/2 for all 0 ≤ i < j ≤ M, and M ≥ (m/s)^{s/5} − 1 (Massart (2007), Lemma 4.10).
Proposition 1. Let θ_0, ..., θ_M ∈ Θ_λ (or Θ_{λ,s}), M ≥ 2, 0 < α < 1/8, and γ > 0. If KL(P_{θ_i}, P_{θ_0}) ≤ (α log M)/n for all 1 ≤ i ≤ M, and if L_{θ_i}(F̂) < γ implies L_{θ_j}(F̂) ≥ γ for all 0 ≤ i ≠ j ≤ M and all clusterings F̂, then inf_{F̂_n} max_{i∈[0..M]} E_{θ_i} L_{θ_i}(F̂_n) ≥ 0.07 γ.
Proposition 2. For any θ, θ′ ∈ Θ_λ and any clustering F̂, let γ = L_θ(F̂) + √( KL(P_θ, P_{θ′}) / 2 ). If L_θ(F_{θ′}) + γ ≤ 1/2, then L_θ(F_{θ′}) − γ ≤ L_{θ′}(F̂) ≤ L_θ(F_{θ′}) + γ.
We will also need the following two results. Let θ = (μ_0 − Δ/2, μ_0 + Δ/2) and θ′ = (μ_0 − Δ′/2, μ_0 + Δ′/2) for μ_0, Δ, Δ′ ∈ R^d such that ‖Δ‖ = ‖Δ′‖, and let cos α = |Δᵀ Δ′| / ‖Δ‖².

Proposition 3. Let g(x) = φ(x)(Φ(x) − xΦ(−x)). Then

2 g(‖Δ‖/(2σ)) sin α cos α ≤ L_θ(F_{θ′}) ≤ (tan α)/π.

Proposition 4. Let τ = ‖Δ‖/(2σ). Then KL(P_θ, P_{θ′}) ≤ τ⁴ (1 − cos α).
Proof of Theorem 2. Let τ = λ/(2σ), and define

ε = min{ (σ²/(3λ)) √( log 2 / n ), λ/(4√(d−1)) }.

Define λ_0² = λ² − (d−1)ε². Let Ω = {0, 1}^{d−1}. For ω = (ω(1), ..., ω(d−1)) ∈ Ω, let

Δ_ω = λ_0 e_d + ε Σ_{i=1}^{d−1} (2ω(i) − 1) e_i

(where {e_i}_{i=1}^d is the standard basis for R^d). Let θ_ω = (−Δ_ω/2, Δ_ω/2) ∈ Θ_λ.
By Proposition 4, KL(P_{θ_ω}, P_{θ_ν}) ≤ τ⁴ (1 − cos α_{ω,ν}), where cos α_{ω,ν} = 1 − 2ε²ρ(ω,ν)/λ² for ω, ν ∈ Ω and ρ is the Hamming distance, so KL(P_{θ_ω}, P_{θ_ν}) ≤ 2τ⁴ (d−1) ε²/λ². By Proposition 3, since cos α_{ω,ν} ≥ 1/2,

L_{θ_ω}(F_{θ_ν}) ≤ (1/π) tan α_{ω,ν} = (1/π) (√(1 + cos α_{ω,ν}) / cos α_{ω,ν}) √(1 − cos α_{ω,ν}) ≤ (4/π) ε√(d−1)/λ, and

L_{θ_ω}(F_{θ_ν}) ≥ 2 g(τ) sin α_{ω,ν} cos α_{ω,ν} ≥ g(τ) √(1 + cos α_{ω,ν}) √(1 − cos α_{ω,ν}) ≥ √2 g(τ) (ε/λ) √(ρ(ω,ν)),

where g(x) = φ(x)(Φ(x) − xΦ(−x)). By Lemma 1, there exist ω_0, ..., ω_M ∈ Ω such that M ≥ 2^{(d−1)/8} and ρ(ω_i, ω_j) ≥ (d−1)/8 for all 0 ≤ i < j ≤ M. For simplicity of notation, let θ_i = θ_{ω_i} for all i ∈ [0..M]. Then, for i ≠ j ∈ [0..M],

KL(P_{θ_i}, P_{θ_j}) ≤ 2τ⁴ (d−1) ε²/λ², L_{θ_i}(F_{θ_j}) ≤ (4/π) ε√(d−1)/λ, and L_{θ_i}(F_{θ_j}) ≥ (g(τ)/2) ε√(d−1)/λ.

Define γ = (1/4)(g(τ) − 2τ²) ε√(d−1)/λ.
Then for any i ≠ j ∈ [0..M], and any F̂ such that L_{θ_i}(F̂) < γ,

L_{θ_i}(F_{θ_j}) + L_{θ_i}(F̂) + √( KL(P_{θ_i}, P_{θ_j}) / 2 ) < ( 4/π + (1/4)(g(τ) − 2τ²) + τ² ) ε√(d−1)/λ ≤ 1/2,

because, for τ ≤ 0.1, by definition of ε,

( 4/π + (1/4)(g(τ) − 2τ²) + τ² ) ε√(d−1)/λ ≤ 2 ε√(d−1)/λ ≤ 1/2.

So, by Proposition 2, L_{θ_j}(F̂) ≥ γ. Also, KL(P_{θ_i}, P_{θ_0}) ≤ 2τ⁴ (d−1) ε²/λ² ≤ log M/(9n) for all 1 ≤ i ≤ M, because, by definition of ε, 2τ⁴ ε²/λ² ≤ log 2/(72n). So by Proposition 1 and the fact that τ ≤ 0.1,
inf_{F̂_n} max_{i∈[0..M]} E_{θ_i} L_{θ_i}(F̂_n) ≥ 0.07 γ ≥ (1/500) min{ (σ²/(3λ²)) √( (d−1) log 2 / n ), 1/4 },

and to complete the proof we use sup_{θ∈Θ_λ} E_θ L_θ(F̂_n) ≥ max_{i∈[0..M]} E_{θ_i} L_{θ_i}(F̂_n) for any F̂_n.
Proof of Theorem 4. For simplicity, we state this construction for Θ_{λ,s+1}, assuming 4 ≤ s ≤ (d−1)/4. Let τ = λ/(2σ), and define

ε = min{ (8σ²/(45λ)) √( (1/n) log((d−1)/s) ), λ/(2√s) }.

Define λ_0² = λ² − sε². Let Ω = {ω ∈ {0, 1}^{d−1} : ‖ω‖_0 = s}. For ω = (ω(1), ..., ω(d−1)) ∈ Ω, let

Δ_ω = λ_0 e_d + ε Σ_{i=1}^{d−1} ω(i) e_i

(where {e_i}_{i=1}^d is the standard basis for R^d). Let θ_ω = (−Δ_ω/2, Δ_ω/2) ∈ Θ_{λ,s+1}. By Lemma 2, there exist ω_0, ..., ω_M ∈ Ω such that M ≥ ((d−1)/s)^{s/5} − 1 and ρ(ω_i, ω_j) ≥ s/2 for all 0 ≤ i < j ≤ M. The remainder of the proof is analogous to that of Theorem 2, with γ = (1/4)(g(τ) − 2τ²) ε√s/λ.
5 Proofs of the Upper Bounds

Propositions 5 and 6 below bound the error in estimating the mean and principal direction, and can be obtained using standard concentration bounds and a variant of the Davis–Kahan theorem. Proposition 7 relates these errors to the clustering loss. For the sparse case, Propositions 8 and 9 bound the added error induced by the support estimation procedure. See the supplementary material for proof details.
Proposition 5. Let θ = (μ_0 − Δ, μ_0 + Δ) for some μ_0, Δ ∈ R^d and X_1, ..., X_n drawn i.i.d. from P_θ. For any δ > 0, with probability at least 1 − 3δ,

‖μ_0 − μ̂_n‖ ≤ σ √( 2 max(d, 8 log(1/δ)) / n ) + ‖Δ‖ √( 2 log(1/δ) / n ).
Proposition 6. Let θ = (μ_0 − Δ, μ_0 + Δ) for some μ_0, Δ ∈ R^d and X_1, ..., X_n drawn i.i.d. from P_θ, with d > 1 and n ≥ 4d. Define cos θ̃ = |v₁(σ²I + ΔΔᵀ)ᵀ v₁(Σ̂_n)|. For any 0 < δ < 1/e, if

max( σ²/‖Δ‖², σ/‖Δ‖ ) √( max(d, 8 log(1/δ)) / n ) ≤ 1/160,

then with probability at least 1 − 12δ − 2 exp(−n/20),

sin θ̃ ≤ 14 max{ (σ²/‖Δ‖²) √( (d/n) log max(1, 10/δ) ), (σ/‖Δ‖) √( (d/n) log(10d/δ) ) }.
Proposition 7. Let θ = (μ_0 − Δ, μ_0 + Δ), and for some x_0, v ∈ R^d with ‖v‖ = 1, let F̂(x) = 1 if xᵀv ≥ x_0ᵀv, and 2 otherwise. Define cos θ̃ = |vᵀΔ|/‖Δ‖. If |(x_0 − μ_0)ᵀv| ≤ σε_1 + ‖Δ‖ε_2 for some ε_1 ≥ 0 and 0 ≤ ε_2 ≤ 1/4, and if sin θ̃ ≤ 1/√5, then

L_θ(F̂) ≤ exp( −(1/2) max(0, ‖Δ‖/(2σ) − 2ε_1)² ) ( 2ε_1 + ε_2 ‖Δ‖/σ + 2 sin θ̃ ( (‖Δ‖/σ) sin θ̃ + 1 ) ).
Proof. Let r = (x_0 − μ_0)ᵀv / cos θ̃. Since the clustering loss is invariant to rotation and translation,

L_θ(F̂) ≤ (1/2) ∫_{−∞}^{∞} (1/σ) φ(x/σ) [ Φ( (‖Δ‖ + |x| tan θ̃ + r)/σ ) − Φ( (‖Δ‖ − |x| tan θ̃ − r)/σ ) ] dx
≤ ∫_{−∞}^{∞} φ(x) [ Φ( ‖Δ‖/σ ) − Φ( (‖Δ‖ − r)/σ − |x| tan θ̃ ) ] dx.

Since tan θ̃ ≤ 1/2 and ε_2 ≤ 1/4, we have r ≤ 2σε_1 + 2‖Δ‖ε_2, and (‖Δ‖ − r)/σ ≥ ‖Δ‖/σ − 2ε_1 − 2ε_2 ‖Δ‖/σ ≥ max(0, ‖Δ‖/(2σ) − 2ε_1). Defining A = (‖Δ‖ − r)/σ,

∫_{−∞}^{∞} φ(x) [ Φ( (‖Δ‖ − r)/σ ) − Φ( (‖Δ‖ − r)/σ − |x| tan θ̃ ) ] dx ≤ 2 ∫_0^∞ ∫_{A − x tan θ̃}^{A} φ(x) φ(y) dy dx
= 2 ∫_{−A sin θ̃}^{∞} ∫_{A cos θ̃}^{A cos θ̃ + (u + A sin θ̃) tan θ̃} φ(u) φ(v) dv du ≤ 2 φ(A) tan θ̃ (A sin θ̃ + 1)
≤ 2 φ( max(0, ‖Δ‖/(2σ) − 2ε_1) ) ( (‖Δ‖/σ) sin θ̃ + 2ε_1 sin θ̃ + 1 ) tan θ̃,

where we used u = x cos θ̃ − y sin θ̃ and v = x sin θ̃ + y cos θ̃ in the second step. The bound now follows easily.
Proof of Theorem 1. Using Propositions 5 and 6 with δ = 1/n, Proposition 7, and the fact that (C + x) exp(−max(0, x − 4)²/8) ≤ (C + 6) exp(−max(0, x − 4)²/10) for all C, x > 0,

E_θ L_θ(F̂) ≤ 600 max( 4σ²/λ², 1 ) √( d log(nd) / n )

(it is easy to verify that the bounds are decreasing with ‖Δ‖, so we use ‖Δ‖ = λ/2 to bound the supremum). In the d = 1 case Proposition 6 need not be applied, since the principal directions agree trivially. The bound for λ/σ ≥ 2 max(80, 14√(5d)) can be shown similarly, using δ = exp(−n/32).
Proposition 8. Let θ = (μ_0 − Δ, μ_0 + Δ) for some μ_0, Δ ∈ R^d and X_1, ..., X_n drawn i.i.d. from P_θ. For any 0 < δ < 1/e such that √(6 log(1/δ)/n) ≤ 1/2, with probability at least 1 − 6dδ, for all i ∈ [d],

|Σ̂_n(i, i) − (σ² + Δ(i)²)| ≤ σ² √( 6 log(1/δ) / n ) + 2σ|Δ(i)| √( 2 log(1/δ) / n ) + (σ + |Δ(i)|)² ( 2 log(1/δ) / n ).
Proposition 9. Let θ = (μ_0 − Δ, μ_0 + Δ) for some μ_0, Δ ∈ R^d and X_1, ..., X_n drawn i.i.d. from P_θ. Define

S(Δ) = {i ∈ [d] : Δ(i) ≠ 0} and S̃(Δ) = {i ∈ [d] : |Δ(i)| ≥ 4σ√γ}.

Assume that n ≥ 1, d ≥ 2, and γ ≤ 1/4. Then S̃(Δ) ⊆ Ŝ_n ⊆ S(Δ) with probability at least 1 − 6/n.
Proof. By Proposition 8, with probability at least 1 − 6/n,

|Σ̂_n(i, i) − (σ² + Δ(i)²)| ≤ σ² √( 6 log(nd) / n ) + 2σ|Δ(i)| √( 2 log(nd) / n ) + (σ + |Δ(i)|)² ( 2 log(nd) / n )

for all i ∈ [d]. Assume the above event holds. If S(Δ) = [d], then of course Ŝ_n ⊆ S(Δ). Otherwise, for i ∉ S(Δ), we have (1 − γ)σ² ≤ Σ̂_n(i, i) ≤ (1 + γ)σ², so it is clear that Ŝ_n ⊆ S(Δ). The remainder of the proof is trivial if S̃(Δ) = ∅ or S(Δ) = ∅. Assume otherwise. For any i ∈ S̃(Δ),

Σ̂_n(i, i) ≥ (1 − γ)σ² + (1 − 2 log(nd)/n) Δ(i)² − 2γσ|Δ(i)|.

By definition, |Δ(i)| ≥ 4σ√γ for all i ∈ S̃(Δ), so ((1 + γ)/(1 − γ)) σ² ≤ Σ̂_n(i, i) and i ∈ Ŝ_n (we ignore strict equality above as a measure 0 event), i.e. S̃(Δ) ⊆ Ŝ_n, which concludes the proof.
Proof of Theorem 3. Define S(Δ) = {i ∈ [d] : Δ(i) ≠ 0} and S̃(Δ) = {i ∈ [d] : |Δ(i)| ≥ 4σ√γ}. Assume S̃(Δ) ⊆ Ŝ_n ⊆ S(Δ) (by Proposition 9, this holds with probability at least 1 − 6/n). If S̃(Δ) = ∅, then we simply have E_θ L_θ(F̂_n) ≤ 1/2.

Assume S̃(Δ) ≠ ∅. Let cos θ̂ = |v₁(Σ̂_{Ŝ_n})ᵀ v₁(Σ)|, cos θ̃ = |v₁(Σ_{Ŝ_n})ᵀ v₁(Σ)|, and cos θ = |v₁(Σ̂_{Ŝ_n})ᵀ v₁(Σ_{Ŝ_n})|, where Σ = σ²I + ΔΔᵀ, and for simplicity we define Σ̂_{Ŝ_n} and Σ_{Ŝ_n} to be the same as Σ̂_n and Σ on Ŝ_n, respectively, and 0 elsewhere. Then sin θ̂ ≤ sin θ̃ + sin θ, and

sin θ̃ = ‖Δ − Δ_{Ŝ_n}‖/‖Δ‖ ≤ ‖Δ − Δ_{S̃(Δ)}‖/‖Δ‖ ≤ 4σ√γ √( |S(Δ)| − |S̃(Δ)| ) / ‖Δ‖ ≤ 4σ√(sγ)/‖Δ‖ ≤ 8σ√(sγ)/λ.

Using the same argument as the proof of Theorem 1, as long as the above bound is smaller than 1/√5,

E_θ L_θ(F̂) ≤ 600 max( σ²/(λ/2 − 4σ√(sγ))², 1 ) √( s log(ns) / n ) + 104 σ√(sγ)/λ + 3/n.

Using the fact that L_θ(F̂) ≤ 1/2 always, and that γ ≤ 1 implies log(nd) ≥ 1, the bound follows.
6 Conclusion
We have provided minimax lower and upper bounds for estimating high dimensional mixtures. The
bounds show explicitly how the statistical difficulty of the problem depends on dimension d, sample
size n, separation ? and sparsity level s.
For clarity, we focused on the special case where there are two spherical components with equal
mixture weights. In future work, we plan to extend the results to general mixtures of k Gaussians.
One of our motivations for this work is the recent interest in variable selection methods to facilitate
clustering in high dimensional problems. Existing methods such as Pan and Shen (2007); Witten
and Tibshirani (2010); Raftery and Dean (2006); Sun et al. (2012) and Guo et al. (2010) provide
promising numerical evidence that variable selection does improve high dimensional clustering.
Our results provide some theoretical basis for this idea.
However, there is a gap between the results in this paper and the above methodology papers. Indeed, as of now, there is no rigorous proof that the methods in those papers outperform a two-stage approach where the first stage screens for relevant features and the second stage applies standard clustering methods on the features found in the first stage. We conjecture that there are conditions under which simultaneous feature selection and clustering outperforms a two-stage method. Settling this question will require the aforementioned extension of our results to the general mixture case.
Acknowledgements
This research is supported in part by NSF grants IIS-1116458 and CAREER award IIS-1252412, as
well as NSF Grant DMS-0806009 and Air Force Grant FA95500910373.
References
Dimitris Achlioptas and Frank McSherry. On spectral learning of mixtures of distributions. In Learning Theory, pages 458–469. Springer, 2005.
Sanjeev Arora and Ravi Kannan. Learning mixtures of arbitrary gaussians. In Proceedings of the thirty-third annual ACM symposium on Theory of computing, pages 247–257. ACM, 2001.
Mikhail Belkin and Kaushik Sinha. Polynomial learning of distribution families. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 103–112. IEEE, 2010.
S Charles Brubaker and Santosh S Vempala. Isotropic pca and affine-invariant clustering. In Building Bridges, pages 241–281. Springer, 2008.
Kamalika Chaudhuri and Satish Rao. Learning mixtures of product distributions using correlations and independence. In COLT, pages 9–20, 2008.
Kamalika Chaudhuri, Sanjoy Dasgupta, and Andrea Vattani. Learning mixtures of gaussians using the k-means algorithm. arXiv preprint arXiv:0912.0086, 2009.
Sanjoy Dasgupta. Learning mixtures of gaussians. In Foundations of Computer Science, 1999. 40th Annual Symposium on, pages 634–644. IEEE, 1999.
Jian Guo, Elizaveta Levina, George Michailidis, and Ji Zhu. Pairwise variable selection for high-dimensional model-based clustering. Biometrics, 66(3):793–804, 2010.
Daniel Hsu and Sham M Kakade. Learning mixtures of spherical gaussians: moment methods and spectral decompositions. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 11–20. ACM, 2013.
Adam Tauman Kalai, Ankur Moitra, and Gregory Valiant. Disentangling gaussians. Communications of the ACM, 55(2):113–120, 2012.
Ravindran Kannan, Hadi Salmasian, and Santosh Vempala. The spectral method for general mixture models. In Learning Theory, pages 444–457. Springer, 2005.
Pascal Massart. Concentration inequalities and model selection. 2007.
Wei Pan and Xiaotong Shen. Penalized model-based clustering with application to variable selection. The Journal of Machine Learning Research, 8:1145–1164, 2007.
Adrian E Raftery and Nema Dean. Variable selection for model-based clustering. Journal of the American Statistical Association, 101(473):168–178, 2006.
Leonard J. Schulman and Sanjoy Dasgupta. A two-round variant of em for gaussian mixtures. In Proc. 16th UAI (Conference on Uncertainty in Artificial Intelligence), pages 152–159, 2000.
Wei Sun, Junhui Wang, and Yixin Fang. Regularized k-means clustering of high-dimensional data and its asymptotic consistency. Electronic Journal of Statistics, 6:148–167, 2012.
Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Series in Statistics. Springer, 2009.
Santosh Vempala and Grant Wang. A spectral algorithm for learning mixture models. Journal of Computer and System Sciences, 68(4):841–860, 2004.
Vincent Q Vu and Jing Lei. Minimax sparse principal subspace estimation in high dimensions. arXiv preprint arXiv:1211.0373, 2012.
Daniela M Witten and Robert Tibshirani. A framework for feature selection in clustering. Journal of the American Statistical Association, 105(490), 2010.
9
| 4983 |@word version:1 polynomial:3 nd:10 adrian:1 bn:10 decomposition:2 covariance:3 mention:1 reduction:1 moment:4 contains:1 series:1 daniel:1 ours:1 outperforms:2 existing:2 comparing:1 dx:3 fn:2 numerical:1 dydx:1 intelligence:1 isotropic:5 provides:3 along:1 symposium:3 focs:1 combine:1 x0:2 pairwise:3 ravindran:1 indeed:2 expected:5 andrea:1 behavior:3 inspired:1 spherical:5 decreasing:1 confused:1 estimating:6 begin:2 notation:1 provided:1 substantially:1 eigenvector:1 developed:1 guarantee:1 wrong:1 classifier:2 k2:2 grant:4 before:1 understood:1 local:1 limit:1 might:3 ankur:1 suggests:1 co:21 statistically:1 thirty:1 testing:1 vu:2 procedure:5 matching:2 projection:2 suggest:1 get:1 onto:1 clever:1 selection:19 risk:2 gilbert:2 dean:3 maximizing:1 focused:1 shen:3 simplicity:3 wasserman:1 estimator:5 fang:1 coordinate:1 justification:1 analogous:2 target:2 tan:11 construction:1 exact:1 us:2 hypothesis:2 logarithmically:1 sparsely:1 preprint:2 wang:3 capture:1 worst:1 enters:1 sbn:10 sun:3 removed:1 pd:2 complexity:12 singh:1 tight:1 raise:1 serve:1 efficiency:1 basis:5 triangle:2 easily:2 k0:3 separated:2 effective:1 artificial:1 quite:1 heuristic:1 larger:1 supplementary:2 say:2 otherwise:6 statistic:3 kahan:1 differentiate:1 advantage:1 eigenvalue:1 product:1 remainder:2 relevant:21 mixing:1 poorly:1 chaudhuri:5 achieve:2 intuitive:1 cluster:3 requirement:1 jing:1 adam:1 derive:1 develop:1 stat:1 misclustering:2 c:2 implies:2 quantify:1 direction:2 correct:2 larry:2 material:2 implementing:1 require:3 suffices:1 proposition:23 mathematically:1 extension:1 hold:3 sufficiently:1 considered:1 normal:1 exp:7 major:1 achieves:2 yixin:1 aarti:2 purpose:2 estimation:5 proc:1 combinatorial:1 bridge:1 largest:1 successfully:1 hope:1 gaussian:6 always:2 modified:1 kalai:2 avoid:1 pn:2 likelihood:2 contrast:1 rigorous:1 inference:1 typically:2 interested:1 classification:5 aforementioned:1 colt:1 pascal:1 plan:1 special:2 equal:3 construct:1 santosh:3 
Cluster Trees on Manifolds
Sivaraman Balakrishnan∗
[email protected]
Srivatsan Narayanan∗
[email protected]
Aarti Singh∗
[email protected]
Alessandro Rinaldo†
[email protected]
Larry Wasserman†
[email protected]
School of Computer Science∗ and Department of Statistics†
Carnegie Mellon University
In this paper we investigate the problem of estimating the cluster tree for a density f supported on or near a smooth d-dimensional manifold M isometrically embedded in R^D. We analyze a modified version of a k-nearest neighbor based algorithm recently proposed by Chaudhuri and Dasgupta
(2010). The main results of this paper show that under mild assumptions on f and M , we obtain
rates of convergence that depend on d only but not on the ambient dimension D. Finally, we sketch
a construction of a sample complexity lower bound instance for a natural class of manifold oblivious
clustering algorithms.
1 Introduction
In this paper, we study the problem of estimating the cluster tree of a density when the density is supported on or near a manifold. Let X := {X_1, ..., X_n} be a sample drawn i.i.d. from a distribution P with density f. The connected components C_f(λ) of the upper level set {x : f(x) ≥ λ} are called density clusters. The collection C = {C_f(λ) : λ ≥ 0} of all such clusters is called the cluster tree and estimating this cluster tree is referred to as density clustering.
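As a toy illustration of these definitions (entirely our own; the grid discretization and the bimodal density below are not from the paper), the connected components of an upper level set of a one-dimensional density can be computed by scanning for maximal runs of grid points above the level:

```python
import numpy as np

def upper_level_components(f_vals, lam):
    """Connected components (maximal runs) of {i : f_vals[i] >= lam} on a 1-d grid."""
    comps, cur = [], []
    for i, v in enumerate(f_vals):
        if v >= lam:
            cur.append(i)
        elif cur:
            comps.append(cur)
            cur = []
    if cur:
        comps.append(cur)
    return comps

# A bimodal density on a grid: the number of clusters changes with the level.
x = np.linspace(-3, 3, 601)
f = np.exp(-(x - 1.5) ** 2) + np.exp(-(x + 1.5) ** 2)

print(len(upper_level_components(f, 0.2)))  # 1: the valley is still above this level
print(len(upper_level_components(f, 0.8)))  # 2: one component per mode
```

Sweeping over all levels λ and recording how components split yields the dendrogram-like cluster tree described above.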
The density clustering paradigm is attractive for various reasons. One of the main difficulties of
clustering is that often the true goals of clustering are not clear and this makes clusters, and clustering
as a task seem poorly defined. Density clustering however is estimating a well defined population
quantity, making its goal, consistent recovery of the population density clusters, clear. Typically
only mild assumptions are made on the density f and this allows extremely general shapes and
numbers of clusters at each level. Finally, the cluster tree is an inherently hierarchical object and
thus density clustering algorithms typically do not require specification of the ?right? level, rather
they capture a summary of the density across all levels.
The search for a simple, statistically consistent estimator of the cluster tree has a long history.
Hartigan (1981) showed that the popular single-linkage algorithm is not consistent for a sample
from R^D, with D > 1. Recently, Chaudhuri and Dasgupta (2010) analyzed an algorithm which is
both simple and consistent. The algorithm finds the connected components of a sequence of carefully constructed neighborhood graphs. They showed that, as long as the parameters of the algorithm
are chosen appropriately, the resulting collection of connected components correctly estimates the
cluster tree with high probability.
In this paper, we are concerned with the problem of estimating the cluster tree when the density
f is supported on or near a low dimensional manifold. The motivation for this work stems from
the problem of devising and analyzing clustering algorithms with provable performance that can be
used in high dimensional applications. When data live in high dimensions, clustering (as well as
other statistical tasks) generally become prohibitively difficult due to the curse of dimensionality,
which demands a very large sample size. In many high dimensional applications however data is
not spread uniformly but rather concentrates around a low dimensional set. This so-called manifold
hypothesis motivates the study of data generated on or near low dimensional manifolds and the study
of procedures that can adapt effectively to the intrinsic dimensionality of this data.
Here is a brief summary of the main contributions of our paper: (1) We show that the simple algorithm studied in the paper Chaudhuri and Dasgupta (2010) is consistent and has fast rates of
convergence for data on or near a low dimensional manifold M . The algorithm does not require
the user to first estimate M (which is a difficult problem). In other words, the algorithm adapts to
the (unknown) manifold. (2) We show that the sample complexity for identifying salient clusters is
independent of the ambient dimension. (3) We sketch a construction of a sample complexity lower
bound instance for a natural class of clustering algorithms that we study in this paper. (4) We introduce a framework for studying consistency of clustering when the distribution is not supported on
a manifold but rather, is concentrated near a manifold. The generative model in this case is that the
data are first sampled from a distribution on a manifold and then noise is added. The original data
are latent (unobserved). We show that for certain noise models we can still efficiently recover the
cluster tree on the latent samples.
1.1 Related Work
The idea of using probability density functions for clustering dates back to Wishart (1969).
Hartigan (1981) expanded on this idea and formalized the notions of high-density clustering, of
the cluster tree and of consistency and fractional consistency of clustering algorithms. In particular, Hartigan (1981) showed that single linkage clustering is consistent when D = 1 but is only
fractionally consistent when D > 1. Stuetzle and R. (2010) and Stuetzle (2003) have also proposed
procedures for recovering the cluster tree. None of these procedures however, come with the theoretical guarantees given by Chaudhuri and Dasgupta (2010), which demonstrated that a generalization
of Wishart's algorithm allows one to estimate parts of the cluster tree for distributions with full-dimensional support near-optimally under rather mild assumptions. This paper forms the starting
point for our work and is reviewed in more detail in the next section.
In the last two decades, much of the research effort involving the use of nonparametric density
estimators for clustering has focused on the more specialized problems of optimal estimation of the
support of the distribution or of a fixed level set. However, consistency of estimators of a fixed level
set does not imply cluster tree consistency, and extending the techniques and analyses mentioned
above to hold simultaneously over a variety of density levels is non-trivial. See for example the
papers Polonik (1995); Tsybakov (1997); Walther (1997); Cuevas and Fraiman (1997); Cuevas et al.
(2006); Rigollet and Vert (2009); Maier et al. (2009); Singh et al. (2009); Rinaldo and Wasserman
(2010); Rinaldo et al. (2012), and references therein. Estimating the cluster tree has more recently
been considered by Kpotufe and von Luxburg (2011) who also give a simple pruning procedure
for removing spurious clusters. Steinwart (2011) and Sriperumbudur and Steinwart (2012) propose
procedures for determining recursively the lowest split in the cluster tree and give conditions for
asymptotic consistency with minimal assumptions on the density.
2 Background and Assumptions
Let P be a distribution supported on an unknown d-dimensional manifold M. We assume that the manifold M is a d-dimensional Riemannian manifold without boundary embedded in a compact set X ⊆ R^D with d < D. We further assume that the volume of the manifold is bounded from above by a constant, i.e., vol_d(M) ≤ C. The main regularity condition we impose on M is that its condition number be not too large. The condition number of M is 1/τ, where τ is the largest number such that the open normal bundle about M of radius r is imbedded in R^D for every r < τ. The condition number is discussed in more detail in the paper Niyogi et al. (2008).
The Euclidean norm is denoted by ||·|| and v_d denotes the volume of the d-dimensional unit ball in R^d. B(x, r) denotes the full-dimensional ball of radius r centered at x and B_M(x, r) := B(x, r) ∩ M. For Z ⊆ R^D and σ > 0, define Z_σ = Z + B(0, σ) and Z_{M,σ} = (Z + B(0, σ)) ∩ M. Note that Z_σ is full dimensional, while if Z ⊆ M then Z_{M,σ} is d-dimensional.
Let f be the density of P with respect to the uniform measure on M. For λ ≥ 0, let C_f(λ) be the collection of connected components of the level set {x ∈ X : f(x) ≥ λ} and define the cluster tree of f to be the hierarchy C = {C_f(λ) : λ ≥ 0}. For a fixed λ, any member of C_f(λ) is a cluster. For a cluster C its restriction to the sample X is defined to be C[X] = C ∩ X. The restriction of the cluster tree C to X is defined to be C[X] = {C ∩ X : C ∈ C}. Informally, this restriction is a dendrogram-like hierarchical partition of X.
To give finite sample results, following Chaudhuri and Dasgupta (2010), we define the notion of salient clusters. Our definitions are slight modifications of those in Chaudhuri and Dasgupta (2010) to take into account the manifold assumption.
Definition 1 Clusters A and A′ are (σ, ε) separated if there exists a nonempty S ⊆ M such that:
1. Any path along M from A to A′ intersects S.
2. sup_{x ∈ S_{M,σ}} f(x) < (1 − ε) inf_{x ∈ A_{M,σ} ∪ A′_{M,σ}} f(x).
Chaudhuri and Dasgupta (2010) analyze a robust single linkage (RSL) algorithm (in Figure 1). An RSL algorithm estimates the connected components at a level λ in two stages. In the first stage, the sample is cleaned by thresholding the k-nearest neighbor distance of the sample points at a radius r and then, in the second stage, the cleaned sample is connected at a connection radius R. The connected components of the resulting graph give an estimate of the restriction C_f(λ)[X]. In Section 4 we prove a sample complexity lower bound for the class of RSL algorithms which we now define.
Definition 2 The class of RSL algorithms refers to any algorithm that is of the form described in
the algorithm in Figure 1 and relying on Euclidean balls, with any choice of k, r and R.
We define two notions of consistency for an estimator Ĉ of the cluster tree:
Definition 3 (Hartigan consistency) For any sets A, A′ ⊆ X, let A_n (resp., A′_n) denote the smallest cluster of Ĉ containing A ∩ X (resp., A′ ∩ X). We say Ĉ is consistent if, whenever A and A′ are different connected components of {x : f(x) ≥ λ} (for some λ > 0), the probability that A_n is disconnected from A′_n approaches 1 as n → ∞.
Definition 4 ((σ, ε) consistency) For any sets A, A′ ⊆ X such that A and A′ are (σ, ε) separated, let A_n (resp., A′_n) denote the smallest cluster of Ĉ containing A ∩ X (resp., A′ ∩ X). We say Ĉ is consistent if, whenever A and A′ are different connected components of {x : f(x) ≥ λ} (for some λ > 0), the probability that A_n is disconnected from A′_n approaches 1 as n → ∞.
The notion of (σ, ε) consistency is similar to that of Hartigan consistency except restricted to (σ, ε) separated clusters A and A′.
Chaudhuri and Dasgupta (2010) prove a theorem, establishing finite sample bounds for a particular RSL algorithm. In their result there is no manifold and f is a density with respect to the Lebesgue measure on R^D. Their result in essence says that if
n ≥ O( (D / (λ ε² v_D (σ/2)^D)) log ( D / (λ ε² v_D (σ/2)^D) ) )
then an RSL algorithm with appropriately chosen parameters can resolve any pair of (σ, ε) clusters at level at least λ. It is important to note that this theorem does not apply to the setting when distributions are supported on a lower dimensional set for at least two reasons: (1) the density f is singular with respect to the Lebesgue measure on X and so the cluster tree is trivial, and (2) the definitions of saliency with respect to X are typically not satisfied when f has a lower dimensional support.
1. For each X_i, set r_k(X_i) := inf{r : B(X_i, r) contains k data points}.
2. As r grows from 0 to ∞:
(a) Construct a graph G_{r,R} with nodes {X_i : r_k(X_i) ≤ r} and edges (X_i, X_j) if ||X_i − X_j|| ≤ R.
(b) Let C(r) be the connected components of G_{r,R}.
3. Denote Ĉ = {C(r) : r ∈ [0, ∞)} and return Ĉ.
Figure 1: Robust Single Linkage (RSL) Algorithm
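A minimal Python sketch of one level of this scheme, i.e. the cleaning and connection steps for a fixed pair (r, R) rather than the full sweep over r (the function name, the brute-force distance computation, and the toy data below are ours, not the paper's):

```python
import numpy as np
from itertools import combinations

def rsl_components(X, k, r, R):
    """One level of RSL: keep points whose k-NN radius is at most r, connect
    kept points within Euclidean distance R, return connected components."""
    X = np.asarray(X)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rk = np.sort(D, axis=1)[:, k - 1]  # distance to the k-th nearest point (self included)
    kept = [i for i in range(n) if rk[i] <= r]
    parent = {i: i for i in kept}      # union-find over the kept points
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(kept, 2):
        if D[i, j] <= R:
            parent[find(i)] = find(j)
    comps = {}
    for i in kept:
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())

# Two tight 2-d blobs plus one far-away outlier: the outlier's k-NN radius is
# huge, so the cleaning step removes it, leaving the two blobs as components.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (30, 2)),
               rng.normal(5, 0.1, (30, 2)),
               [[20.0, 20.0]]])
print(len(rsl_components(X, k=5, r=0.6, R=1.0)))
```

Sweeping r from 0 to ∞ and recording how the components merge recovers the estimated tree Ĉ.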
3 Clustering on Manifolds
In this section we show that the RSL algorithm can be adapted to recover the cluster tree of a distribution supported on a manifold of dimension d < D with the rates depending only on d. In place of the cluster salience parameter σ, our rates involve a new parameter
ρ := min( 3σ/16, ετ/72d, τ/16 ).
The precise reason for this definition of ρ will be clear from the proofs (particularly of Lemma 7) but for now notice that in addition to σ it is dependent on the condition number 1/τ and deteriorates as the condition number increases. Finally, to succinctly present our results we use ν := log n + d log(1/ρ).
Theorem 5 There are universal constants C_1 and C_2 such that the following holds. For any δ > 0, 0 < ε < 1/2, run the algorithm in Figure 1 on a sample X drawn from f, where the parameters are set according to the equations
R = 4ρ and k = C_1 log²(1/δ) (ν/ε²).
Then with probability at least 1 − δ, Ĉ is (σ, ε) consistent. In particular, the clusters containing A[X] and A′[X], where A and A′ are (σ, ε) separated, are internally connected and mutually disconnected in C(r) for r defined by
v_d r^d λ = (1/(1 − ε/6)) ( k/n + (C_2 log(1/δ)/n) √(kν) )
provided λ ≥ (2/(v_d ρ^d)) (k/n).
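For concreteness, the defining equation for r can be solved in closed form once its right-hand side is computed; the sketch below does this numerically with arbitrary placeholder values for λ, k, n, ν, ε, δ and the constant C_2 (our own illustration, not the paper's choice of constants):

```python
import math

def connection_radius(lam, k, n, nu, eps, delta, d, C2=1.0):
    """Solve v_d r^d lam = (1/(1 - eps/6)) (k/n + (C2 log(1/delta)/n) sqrt(k nu)) for r."""
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # volume of the unit d-ball
    rhs = (k / n + (C2 * math.log(1 / delta) / n) * math.sqrt(k * nu)) / (1 - eps / 6)
    return (rhs / (v_d * lam)) ** (1 / d)

# Placeholder values, chosen only to show the computation runs.
r = connection_radius(lam=1.0, k=50, n=10000, nu=20.0, eps=0.3, delta=0.05, d=2)
print(0 < r < 1)  # True: a small radius for these (arbitrary) values
```

As expected from the formula, increasing λ or n shrinks the radius r at which the level-λ clusters are resolved.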
Before we prove this theorem a few remarks are in order:
1. To obtain an explicit sample complexity we plug in the value of k and solve for n from the inequality restricting λ. The sample complexity of the RSL algorithm for recovering (σ, ε) clusters at level at least λ on a manifold M with condition number at most 1/τ is
n = O( (d / (λ ε² v_d ρ^d)) log ( d / (λ ε² v_d ρ^d) ) )
where ρ = C min(σ, ετ/d, τ). Ignoring constants that depend on d the main difference between this result and the result of Chaudhuri and Dasgupta (2010) is that our results only depend on the manifold dimension d and not the ambient dimension D (typically D ≫ d). There is also a dependence of our result on 1/(ετ)^d, for ετ ≤ σ. In Section 4 we sketch the construction of an instance that suggests that this dependence is not an artifact of our analysis and that the sample complexity of the class of RSL algorithms is at least n ≥ 1/(ετ)^Ω(d).
2. Another aspect is that our choice of the connection radius R depends on the (typically) unknown τ, while for comparison, the connection radius in Chaudhuri and Dasgupta (2010) is chosen to be √2 r. Under the mild assumption that λ ≤ n^O(1) (which is satisfied for instance, if the density on M is bounded from above), we show in Appendix A.8 that an identical theorem holds for R = 4r. k is the only real tuning parameter of this algorithm whose choice depends on ε and an unknown leading constant.
3. It is easy to see that this theorem also establishes consistency for recovering the entire cluster tree by selecting an appropriate schedule on σ_n, ε_n and k_n that ensures that all clusters are distinguished for n large enough (see Chaudhuri and Dasgupta (2010) for a formal proof).
Our proofs structurally mirror those in Chaudhuri and Dasgupta (2010). We begin with a few technical results in Section 3.1. In Section 3.2 we establish (σ, ε) consistency by showing that the clusters are mutually disjoint and internally connected. The main technical challenge is that the curvature of the manifold, modulated by its condition number 1/τ, limits our ability to resolve the density level sets
from a finite sample, by limiting the maximum cleaning and connection radii the algorithm can use.
In what follows, we carefully analyze this effect and show that somewhat surprisingly, despite this
curvature, essentially the same algorithm is able to adapt to the unknown manifold and produce a
consistent estimate of the entire cluster tree. Similar manifold adaptivity results have been shown in
classification Dasgupta and Freund (2008) and in non-parametric regression Kpotufe and Dasgupta
(2012); Bickel and Li (2006).
3.1 Technical results
In our proof, we use the uniform convergence of the empirical mass of Euclidean balls to their true mass. In the full dimensional setting of Chaudhuri and Dasgupta (2010), this follows from standard VC inequalities. To the best of our knowledge however sharp (ambient dimension independent) inequalities for manifolds are unknown. We get around this obstacle by using the insight that, in order to analyze the RSL algorithms, uniform convergence for Euclidean balls around the sample points and around a fixed minimum s-net N of M (for an appropriately chosen s) suffices.
Recall, an s-net N ⊆ M is such that every point of M is at a distance at most s from some point in N. Let B_{n,N} := {B(z, s) : z ∈ N ∪ X, s ≥ 0} be the collection of balls whose centers are sample or net points. We now state our uniform convergence lemma. The proof is in Appendix A.3.
Lemma 6 (Uniform Convergence) Assume k ≥ ν. Then there exists a constant C_0 such that the following holds. For every δ > 0, with probability > 1 − δ, for all B ∈ B_{n,N}, we have:
P(B) ≥ C_δ ν / n  ⟹  P_n(B) > 0,
P(B) ≥ k/n + (C_δ/n) √(kν)  ⟹  P_n(B) ≥ k/n,
P(B) ≤ k/n − (C_δ/n) √(kν)  ⟹  P_n(B) < k/n,
where C_δ := 2C_0 log(2/δ), and ν := 1 + log n + log|N| = Cd + log n + d log(1/s). Here P_n(B) = |X ∩ B|/n denotes the empirical probability measure of B, and C is a universal constant.
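The s-net N appearing in the lemma can be taken to be any minimal net; the standard greedy construction below (our own sketch, not from the paper), run over a finite point cloud standing in for M, illustrates the covering property:

```python
import numpy as np

def greedy_s_net(points, s):
    """Greedy s-net: add a point to the net whenever it is farther than s from
    every current net point; every input point ends up within s of the net."""
    net = []
    for p in points:
        if all(np.linalg.norm(p - q) > s for q in net):
            net.append(p)
    return np.array(net)

# Points on the unit circle (a 1-dimensional manifold embedded in R^2).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]
net = greedy_s_net(pts, s=0.3)
dists = np.min(np.linalg.norm(pts[:, None] - net[None, :], axis=2), axis=1)
print(dists.max() <= 0.3)  # True: the covering property holds
```

By construction the net points are also pairwise more than s apart, which keeps |N| (and hence ν) small.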
Next we provide a tight estimate of the volume of a small ball intersected with M . This bounds
the distortion of the apparent density due to the curvature of the manifold and is central to many of
our arguments. Intuitively, the claim states that the volume is approximately that of a d-dimensional
Euclidean ball, provided that its radius is small enough compared to τ. The lower bound is based
on Lemma 5.3 of Niyogi et al. (2008) while the upper bound is based on a modification of the main
result of Chazal (2013).
Lemma 7 (Ball volumes) Assume r < τ/2. Define S := B(x, r) ∩ M for a point x ∈ M. Then
(1 − r²/(4τ²))^{d/2} v_d r^d ≤ vol_d(S) ≤ v_d (τ/(τ − 2r_1))^d r_1^d,
where r_1 = τ − τ√(1 − 2r/τ). In particular, if r ≤ ετ/72d for 0 ≤ ε < 1, then
v_d r^d (1 − ε/6) ≤ vol_d(S) ≤ v_d r^d (1 + ε/6).
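As a numeric sanity check of these bounds (our own computation, not part of the paper): for the unit circle (d = 1, τ = 1, v_1 = 2) the intersection B(x, r) ∩ M is an arc of exact length 4 arcsin(r/2), which indeed sits between the two bounds:

```python
import math

def arc_length(r):
    # Exact length of B(x, r) ∩ S^1: a point at angle phi from x is at
    # Euclidean distance 2 sin(phi/2), so the arc is |phi| <= 2 asin(r/2).
    return 4 * math.asin(r / 2)

def lemma7_bounds(r, d=1, tau=1.0, v_d=2.0):
    """Lower and upper bounds of the ball-volume lemma, specialized to d = 1."""
    lower = (1 - r**2 / (4 * tau**2)) ** (d / 2) * v_d * r**d
    r1 = tau - tau * math.sqrt(1 - 2 * r / tau)
    upper = v_d * (tau / (tau - 2 * r1)) ** d * r1**d
    return lower, upper

for r in [0.05, 0.1, 0.2]:
    lo, up = lemma7_bounds(r)
    assert lo <= arc_length(r) <= up
print("ball-volume bounds hold on the circle")
```

Note that the upper bound is only finite while r_1 < τ/2, consistent with the lemma's requirement that r be small relative to τ.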
3.2 Separation and Connectedness
Lemma 8 (Separation) Assume that we pick k, r and R to satisfy the conditions:
r ≤ ρ,  R = 4ρ,
v_d r^d (1 − ε/6) λ ≥ k/n + (C_δ/n) √(kν),
v_d r^d (1 + ε/6) λ (1 − ε) ≤ k/n − (C_δ/n) √(kν).
Then with probability 1 − δ, we have: (1) All points in A_{σ−r} and A′_{σ−r} are kept, and all points in S_{σ−r} are removed. (2) The two point sets A ∩ X and A′ ∩ X are disconnected in G_{r,R}.
Proof. The proof is analogous to the separation proof of Chaudhuri and Dasgupta (2010) with several modifications. Most importantly, we need to ensure that despite the curvature of the manifold we can still resolve the density well enough to guarantee that we can identify and eliminate points in the region of separation.
Throughout the proof, we will assume that the good event in Lemma 6 (uniform convergence for B_{n,N}) occurs. Since r ≤ ετ/72d, by Lemma 7 vol(B_M(x, r)) is between v_d r^d (1 − ε/6) and v_d r^d (1 + ε/6), for any x ∈ M. So if X_i ∈ A ∪ A′, then B_M(X_i, r) has mass at least v_d r^d (1 − ε/6) λ. Since this is ≥ k/n + (C_δ/n)√(kν) by assumption, this ball contains at least k sample points, and hence X_i is kept. On the other hand, if X_i ∈ S_{σ−r}, then the set B_M(X_i, r) contains mass at most v_d r^d (1 + ε/6) λ (1 − ε). This is ≤ k/n − (C_δ/n)√(kν). Thus by Lemma 6 B_M(X_i, r) contains fewer than k sample points, and hence X_i is removed.
To prove the graph is disconnected, we first need a bound on the geodesic distance between two points that are at most R apart in Euclidean distance. Such an estimate follows from Proposition 6.3 in Niyogi et al. (2008) who show that if ||p − q|| = R ≤ τ/2, then the geodesic distance d_M(p, q) ≤ τ − τ√(1 − 2R/τ). In particular, if R ≤ τ/4, then d_M(p, q) < R(1 + 4R/τ) ≤ 2R. Now, notice that if the graph is connected there must be an edge that connects two points that are at a geodesic distance of at least 2(σ − r). Any path between a point in A and a point in A′ along M must pass through S_{σ−r} and must have a geodesic length of at least 2(σ − r). This is impossible if the connection radius satisfies 2R < 2(σ − r), which follows by the assumptions on r and R.
All the conditions in Lemma 8 can be simultaneously satisfied by setting k := 16 C_δ² (ν/ε²), and
v_d r^d (1 − ε/6) λ = k/n + (C_δ/n) √(kν).    (1)
The condition on r is satisfied since λ ≥ (2/(v_d ρ^d))(k/n) and the condition on R is satisfied by its definition.
Lemma 9 (Connectedness) Assume that the parameters k, r and R satisfy the separation conditions (in Lemma 8). Then, with probability at least 1 − δ, A[X] is connected in G_{r,R}.
Proof. Let us show that any two points in A ∩ X are connected in G_{r,R}. Consider y, y′ ∈ A ∩ X. Since A is connected, there is a path P between y, y′ lying entirely inside A, i.e., a continuous map P : [0, 1] → A such that P(0) = y and P(1) = y′. We can find a sequence of points y_0, ..., y_t ∈ P such that y_0 = y, y_t = y′, and the geodesic distance on M (and hence the Euclidean distance) between y_{i−1} and y_i is at most η, for an arbitrarily small constant η.
Let N be a minimal R/4-net of M. There exist z_i ∈ N such that ||y_i − z_i|| ≤ R/4. Since y_i ∈ A, we have z_i ∈ A_{M,R/4}, and hence the ball B_M(z_i, R/4) lies completely inside A_{M,R/2} ⊆ A_{M,σ−r}. In particular, the density inside the ball is at least λ everywhere, and hence the mass inside it is at least
v_d (R/4)^d (1 − ε/6) λ ≥ C_δ ν / n.
Observe that R ≥ 4r and so this condition is satisfied as a consequence of satisfying Equation 1. Thus Lemma 6 guarantees that the ball B_M(z_i, R/4) contains at least one sample point, say x_i. (Without loss of generality, we may assume x_0 = y and x_t = y′.) Since the ball lies completely in A_{M,σ−r}, the sample point x_i is not removed in the cleaning step (Lemma 8).
Finally, we bound d(x_{i−1}, x_i) by considering the sequence of points (x_{i−1}, z_{i−1}, y_{i−1}, y_i, z_i, x_i). The pair (y_{i−1}, y_i) are at most η apart and the other successive pairs at most R/4 apart, hence d(x_{i−1}, x_i) ≤ 4(R/4) + η = R + η. The claim follows by letting η → 0.
4 A lower bound instance for the class of RSL algorithms
Recall that the sample complexity in Theorem 5 scales as n = O( (d/(λε² v_d ρ^d)) log (d/(λε² v_d ρ^d)) ) where ρ = C min(σ, ετ/d, τ). For full dimensional densities, Chaudhuri and Dasgupta (2010) showed the information theoretic lower bound n = Ω( (1/(λε² v_D σ^D)) log (1/(λε² v_D σ^D)) ). Their construction can be straightforwardly modified to a d-dimensional instance on a smooth manifold. Ignoring constants that depend on d, these upper and lower bounds can still differ by a factor of 1/(ετ)^d, for ετ ≤ σ. In this section we provide an informal sketch of a hard instance for the class of RSL algorithms (see Definition 2) that suggests a sample complexity lower bound of n ≥ 1/(ετ)^Ω(d).
We first describe our lower bound instance. The manifold M consists of two disjoint components, C and C′ (whose sole function is to ensure f integrates to 1). The component C in turn contains three parts, which we call "top", "middle", and "bottom" respectively. The middle part, denoted M_2, is the portion of the standard d-dimensional unit sphere S^d(0, 1) between the planes x_1 = +√(1 − 4τ²) and x_1 = −√(1 − 4τ²). The top part, denoted M_1, is the upper hemisphere of radius 2τ centered at (+√(1 − 4τ²), 0, 0, ..., 0). The bottom part, denoted M_3, is a symmetric hemisphere centered at (−√(1 − 4τ²), 0, 0, ..., 0). Thus C is obtained by gluing a portion of the unit sphere with two (small) hemispherical caps. C as described does not have a condition number at most 1/τ because of the "corners" at the intersection of M_2 and M_1 ∪ M_3. This can be fixed without affecting the essence of the construction by smoothing this intersection by rolling a ball of radius τ around it (a similar construction is made rigorous in Theorem 6 of Genovese et al. (2012)). Let P be the distribution on M whose density over C is λ if |x_1| > 1/2, and λ(1 − ε) if |x_1| ≤ 1/2, where λ is chosen small enough such that λ vol_d(C) ≤ 1. The density over C′ is chosen such that the total mass of the manifold is 1. Now M_1 and M_3 are (σ, ε) separated at level λ for σ = Ω(1). The separator set S is the equator of M_2 in the plane x_1 = 0.
We now provide some intuition for why RSL algorithms will require n ≥ 1/(ετ)^Ω(d) to succeed on this instance. We focus our discussion on RSL algorithms with k > 2, i.e. on algorithms that do in fact use a cleaning step, ignoring the single linkage algorithm which is known to be inconsistent for full dimensional densities. Intuitively, because of the curvature of the described instance, the mass of a sufficiently large Euclidean ball in the separator set is larger than the mass of a corresponding ball in the true clusters. This means that any algorithm that uses large balls cannot reliably clean the sample and this restricts the size of the balls that can be used. Now if points in the regions of high density are to survive then there must be k sample points in the small ball around any point in the true clusters and this gives us a lower bound on the necessary sample size.
The RSL algorithms work by counting the number of sample points inside the balls B(x, r) centered at the sample points x, for some radius r. In order for the algorithm to reliably resolve (σ, ε) clusters, it should distinguish points in the separator set S ⊆ M_2 from those in the level λ clusters M_1 ∪ M_3. A necessary condition for this is that the mass of a ball B(x, r) for x ∈ S_{σ−r} should be strictly smaller than the mass inside B(y, r) for y ∈ M_1 ∪ M_3. In Appendix A.4, we show that this condition restricts the radius r to be at most O(τ√(ε/d)). Now, consider any sample point x_0 in M_1 ∪ M_3 (such an x_0 exists with high probability). Since x_0 should not be removed during the cleaning step, the ball B(x_0, r) must contain some other sample point (indeed, it must contain at least k − 1 more sample points). By a union bound, this happens with probability at most (n − 1) v_d r^d λ ≤ O(d^{−d/2} n τ^d ε^{d/2} λ). If we want the algorithm to succeed with probability at least 1/2 (say) then
n ≥ d^{d/2} / (λ τ^d ε^{d/2}).
5 Cluster tree recovery in the presence of noise
So far we have considered the problem of recovering the cluster tree given samples from a density
supported on a lower dimensional manifold. In this section we extend these results to the more
general situation when we have noisy samples concentrated near a lower dimensional manifold.
Indeed it can be argued that the manifold + noise model is a natural and general model for highdimensional data. In the noisy setting, it is clear that we can infer the cluster tree of the noisy
density in a straightforward way. A stronger requirement would be consistency with respect to
the underlying latent sample. Following the literature on manifold estimation (Balakrishnan et al.
(2012); Genovese et al. (2012)) we consider two main noise models. For both of them, we specify a
distribution Q for the noisy sample.
1. Clutter Noise: We observe data Y_1, ..., Y_n from the mixture Q := (1 − π)U + πP where 0 < π ≤ 1 and U is a uniform distribution on X. Denote the samples drawn from P in this mixture X = {X_1, ..., X_m}. The points drawn from U are called background clutter. In this case, we can show:
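Sampling from the clutter model Q = (1 − π)U + πP is straightforward; the sketch below (our own toy setup, with P uniform on the unit circle playing the role of the manifold and U uniform background clutter on [−2, 2]²) illustrates the generative process:

```python
import numpy as np

def sample_clutter(n, pi, rng):
    """Draw n points from Q = (1 - pi) U + pi P: with probability pi a point
    lies exactly on the unit circle (P), otherwise it is uniform clutter (U)."""
    from_P = rng.random(n) < pi
    theta = rng.uniform(0, 2 * np.pi, n)
    on_circle = np.c_[np.cos(theta), np.sin(theta)]
    clutter = rng.uniform(-2, 2, (n, 2))
    return np.where(from_P[:, None], on_circle, clutter), from_P

rng = np.random.default_rng(1)
Y, from_P = sample_clutter(1000, pi=0.8, rng=rng)
print(Y.shape)  # (1000, 2)
```

On such a sample, the points drawn from P have much smaller k-NN distances than the dispersed clutter, which is the intuition behind the clutter result below.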
Theorem 10 There are universal constants C_1 and C_2 such that the following holds. For any δ > 0, 0 < ε < 1/2, run the algorithm in Figure 1 on a sample {Y_1, ..., Y_n}, with parameters
R := 4ρ and k := C_1 log²(1/δ) (ν/ε²).
Then with probability at least 1 − δ, Ĉ is (σ, ε) consistent. In particular, the clusters containing A[X] and A′[X] are internally connected and mutually disconnected in C(r) for r defined by
π v_d r^d λ = (1/(1 − ε/6)) ( k/n + (C_2 log(1/δ)/n) √(kν) )
provided λ ≥ max( (2/(v_d ρ^d))(k/n), (2 v_D (1 − π)/(v_d π))^{d/D} (k/n)^{1 − d/D} ), where ρ is now slightly modified (in constants), i.e., ρ := min( σ/7, ετ/72d, τ/24 ).
2. Additive Noise: The data are of the form Y_i = X_i + η_i where X_1, ..., X_n ∼ P, and η_1, ..., η_n are a sample from any bounded noise distribution Φ, with η_i ∈ B(0, θ). Note that Q is the convolution of P and Φ, Q = P ⋆ Φ.
Theorem 11 There are universal constants C_1 and C_2 such that the following holds. For any δ > 0, 0 < ε < 1/2, run the algorithm in Figure 1 on the sample {Y_1, ..., Y_n} with parameters
R := 5ρ and k := C_1 log²(1/δ) (ν/ε²).
Then with probability at least 1 − δ, Ĉ is (σ, ε) consistent for θ ≤ ερ/24d. In particular, the clusters containing {Y_i : X_i ∈ A} and {Y_i : X_i ∈ A′} are internally connected and mutually disconnected in C(r) for r defined by
v_d r^d (1 − ε/12)(1 − ε/6) λ = k/n + (C_δ/n) √(kν)
if λ ≥ (2/(v_d ρ^d))(k/n) and θ ≤ ερ/24d, where ρ := min( σ/7, τ/24, ετ/144d ).
The proofs for both Theorems 10 and 11 appear in Appendix A.5. Notice that in each case we receive samples from a full D-dimensional distribution but are still able to achieve rates independent of D because these distributions are concentrated around the lower dimensional M. For the clutter noise case we produce a tree that is consistent for samples drawn from P (which are exactly on M), while in the additive noise case we produce a tree on the observed Y_i's which is (σ, ε) consistent for the latent X_i's (for θ small enough). It is worth noting that in the case of clutter noise we can still consistently recover the entire cluster tree. Intuitively, this is because the k-NN distances for points on M are much smaller than for clutter points that are far away from M. As a result the clutter noise only affects a vanishingly low level set of the cluster tree.
References
S. Balakrishnan, A. Rinaldo, D. Sheehy, A. Singh, and L. Wasserman. Minimax rates for homology inference. AISTATS, 2012.
P. Bickel and B. Li. Local polynomial regression on unknown manifolds. In Technical report, Department of Statistics, UC Berkeley. 2006.
K. Chaudhuri and S. Dasgupta. Rates of convergence for the cluster tree. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 343–351. 2010.
F. Chazal. An upper bound for the volume of geodesic balls in submanifolds of euclidean spaces. Personal Communication, available at http://geometrica.saclay.inria.fr/team/Fred.Chazal/BallVolumeJan2013.pdf, 2013.
A. Cuevas and R. Fraiman. A plug-in approach to support estimation. Annals of Statistics, 25(6):2300–2312, 1997.
A. Cuevas, W. González-Manteiga, and A. Rodríguez-Casal. Plug-in estimation of general level sets. Aust. N. Z. J. Stat., 48(1):7–19, 2006.
S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds. In STOC, pages 537–546. 2008.
C. R. Genovese, M. Perone-Pacifico, I. Verdinelli, and L. Wasserman. Minimax manifold estimation. Journal of Machine Learning Research, 13:1263–1291, 2012.
J. A. Hartigan. Consistency of single linkage for high-density clusters. Journal of the American Statistical Association, 76(374):388–394, 1981.
S. Kpotufe and S. Dasgupta. A tree-based regressor that adapts to intrinsic dimension. J. Comput. Syst. Sci., 78(5):1496–1515, 2012.
S. Kpotufe and U. von Luxburg. Pruning nearest neighbor cluster trees. In L. Getoor and T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 225–232. ACM, New York, NY, USA, 2011.
M. Maier, M. Hein, and U. von Luxburg. Optimal construction of k-nearest-neighbor graphs for identifying noisy clusters. Theor. Comput. Sci., 410(19):1749–1764, 2009.
P. Niyogi, S. Smale, and S. Weinberger. Finding the homology of submanifolds with high confidence from random samples. Discrete & Computational Geometry, 39(1-3):419–441, 2008.
W. Polonik. Measuring mass concentrations and estimating density contour clusters: an excess mass approach. Annals of Statistics, 23(3):855–882, 1995.
P. Rigollet and R. Vert. Fast rates for plug-in estimators of density level sets. Bernoulli, 15(4):1154–1178, 2009.
A. Rinaldo, A. Singh, R. Nugent, and L. Wasserman. Stability of density-based clustering. Journal of Machine Learning Research, 13:905–948, 2012.
A. Rinaldo and L. Wasserman. Generalized density clustering. The Annals of Statistics, 38(5):2678–2722, 2010. arXiv:0907.3454.
A. Singh, C. Scott, and R. Nowak. Adaptive Hausdorff estimation of density level sets. Ann. Statist., 37(5B):2760–2782, 2009.
B. K. Sriperumbudur and I. Steinwart. Consistency and rates for clustering with dbscan. Journal of Machine Learning Research - Proceedings Track, 22:1090–1098, 2012.
I. Steinwart. Adaptive density level set clustering. Journal of Machine Learning Research - Proceedings Track, 19:703–738, 2011.
W. Stuetzle. Estimating the cluster tree of a density by analyzing the minimal spanning tree of a sample. J.
Classification, 20(1):025?047, 2003.
W. Stuetzle and N. R. A generalized single linkage method for estimating the cluster tree of a density. Journal
of Computational and Graphical Statistics, 19(2):397?418, 2010.
A. B. Tsybakov. On nonparametric estimation of density level sets. Ann. Statist., 25(3):948?969, 1997.
G. Walther. Granulometric smoothing. Annals of Statistics, 25(6):2273?2299, 1997.
D. Wishart. Mode analysis: a generalization of nearest neighbor which reduces chaining. In Proceedings of the
Colloquium on Numerical Taxonomy held in the University of St. Andrews, pages 282?308. 1969.
9
Schatten Norm Regularization
Ryota Tomioka
Toyota Technological Institute at Chicago
Chicago, IL 60637
[email protected]
Taiji Suzuki
Department of Mathematical
and Computing Sciences
Tokyo Institute of Technology
Tokyo 152-8552, Japan
[email protected]
Abstract
We study a new class of structured Schatten norms for tensors that includes two
recently proposed norms (?overlapped? and ?latent?) for convex-optimizationbased tensor decomposition. We analyze the performance of ?latent? approach
for tensor decomposition, which was empirically found to perform better than the
?overlapped? approach in some settings. We show theoretically that this is indeed
the case. In particular, when the unknown true tensor is low-rank in a specific
unknown mode, this approach performs as well as knowing the mode with the
smallest rank. Along the way, we show a novel duality result for structured Schatten norms, which is also interesting in the general context of structured sparsity.
We confirm through numerical simulations that our theory can precisely predict
the scaling behaviour of the mean squared error.
1
Introduction
Decomposition of tensors [10, 14] (or multi-way arrays) into low-rank components arises naturally
in many real world data analysis problems. For example, in neuroimaging, spatio-temporal patterns
of neural activities that are related to certain experimental conditions or subjects can be found by
computing the tensor decomposition of the data tensor, which can be of size channels ? timepoints ? subjects ? conditions [18]. More generally, any multivariate spatio-temporal data (e.g.,
environmental monitoring) can be regarded as a tensor. If some of the observations are missing, lowrank modeling enables the imputation of missing values. Tensor modelling may also be valuable for
collaborative filtering with temporal or contextual dimension.
Conventionally, tensor decomposition has been tackled through non-convex optimization problems,
using alternate least squares or higher-order orthogonal iteration [6]. Compared to its empirical
success, little has been theoretically understood about the performance of tensor decomposition
algorithms. De Lathauwer et al. [5] showed an approximation bound for a truncated higher-order
SVD (also known as the Tucker decomposition). Nevertheless the generalization performance of
these approaches has been widely open. Moreover, the model selection problem can be highly
challenging, especially for the Tucker model [5, 27], because we need to specify the rank rk for each
mode (here a mode refers to one dimensionality of a tensor); that is, we have K hyper-parameters
to choose for a K-way tensor, which is challenging even for K = 3.
Recently a convex-optimization-based approach for tensor decomposition has been proposed by
several authors [9, 15, 23, 25], and its performance has been analyzed in [26].
1
size=[50 50 20]
30
rank=[40 40 3]
Overlapped Schatten 1?norm
Latent Schatten 1?norm
25
20
||W?W*||
Estimation error ||W?W*||F
F
60
40
20
0
0
1
10
10
Regularization constant ?
2
10
15
10
0
10
20
30
40
Rank of the first two modes
50
60
Figure 1: Estimation of a low-rank 50?50?20 tensor of rank r ? r ? 3 from noisy measurements.
The noise standard deviation is ? = 0.1. The estimation errors of two convex optimization based
methods are plotted against the rank r of the first two modes. The solid lines show the error at the
fixed regularization constant ?, which is 0.89 for the overlapped approach and 3.79 for the latent
approach (see also Figure 2). The dashed lines show the minimum error over candidates of the
regularization constant ? from 0.1 to 100. In the inset, the errors of the two approaches are plotted
against the regularization constant ? for rank r = 40 (marked with gray dashed vertical line in
the outset). The two values (0.89 and 3.79) are marked with vertical dashed lines. Note that both
approaches need no knowledge of the true rank; the rank is automatically learned.
The basic idea behind their convex approach, which we call overlapped approach, is to unfold1 a
tensor into matrices along different modes and penalize the unfolded matrices to be simultaneously
low-rank based on the Schatten 1-norm, which is also known as the trace norm and nuclear norm [7,
22, 24]. This approach does not require the rank of the decomposition to be specified beforehand,
and due to the low-rank inducing property of the Schatten 1-norm, the rank of the decomposition is
automatically determined.
However, it has been noticed that the above overlapped approach has a limitation that it performs
poorly for a tensor that is only low-rank in a certain mode. The authors of [25] proposed an alternative approach, which we call latent approach, that decomposes a given tensor into a mixture of
tensors that each are low-rank in a specific mode. Figure 1 demonstrates that the latent approach
is preferable to the overlapped approach when the underlying tensor is almost full rank in all but
one mode. However, so far no theoretical analysis has been presented to support such an empirical
success.
In this paper, we rigorously study the performance of the latent approach and show that the mean
squared error of the latent approach scales no greater than the minimum mode-k rank of the underlying true tensor, which clearly explains why the latent approach performs better than the overlapped
approach in Figure 1.
Along the way, we show a novel duality between the two types of norms employed in the above
two approaches, namely the overlapped Schatten norm and the latent Schatten norm. This result
is closely related and generalize the results in structured sparsity literature [2, 13, 17, 21]. In fact,
the (plain) overlapped group lasso constrains the weights to be simultaneously group sparse over
overlapping groups. The latent group lasso predicts with a mixture of group sparse weights [see
also 1, 3, 12]. These approaches clearly correspond to the two variations of tensor decomposition
algorithms we discussed above.
Finally we empirically compare the overlapped approach and latent approach and show that even
when the unknown tensor is simultaneously low-rank, which is a favorable situation for the overlapped approach, the latent approach performs better in many cases. Thus we provide both theoretical and empirical evidence that for noisy tensor decomposition, the latent approach is preferable to
the overlapped approach. Our result is complementary to the previous study [25, 26], which mainly
focused on the noise-less tensor completion setting.
1
For a K-way tensor, there are K ways to unfold a tensor into a matrix. See Section 2.
2
This paper is structured as follows. In Section 2, we provide basic definitions of the two variations of
structured Schatten norms, namely the overlapped/latent Schatten norms, and discuss their properties, especially the duality between them. Section 3 presents our main theoretical contributions; we
establish the consistency of the latent approach, and we analyze the denoising performance of the
latent approach. In Section 4, we empirically confirm the scaling predicted by our theory. Finally,
Section 5 concludes the paper. Most of the proofs are presented in the supplementary material.
2
Structured Schatten norms for tensors
In this section, we define the overlapped Schatten norm and the latent Schatten norm and discuss
their basic properties.
First we need some basic definitions.
Let W ? Rn1 ????nK be a K-way tensor. We denote the total number of entries in W by N =
?K
?
k=1 nk . The dot product between two tensors W and X is defined as ?W, X ? = vec(W) vec(X
);
N
i.e., the dot product as vectors in R . The Frobenius norm of a tensor is defined as W F =
?
?W, W?. Each dimensionality of a tensor is called a mode. The mode k unfolding W (k) ?
Rnk ?N/nk is a matrix that is obtained by concatenating the mode-k fibers along columns; here a
mode-k fiber is an nk dimensional vector obtained by fixing all the indices but the kth index of W.
The mode-k rank rk of W is the rank of the mode-k unfolding W (k) . We say that a tensor W has
multilinear rank (r1 , . . . , rK ) if the mode-k rank is rk for k = 1, . . . , K [14]. The mode k folding
is the inverse of the unfolding operation.
2.1
Overlapped Schatten norms
The low-rank inducing norm studied in [9, 15, 23, 25], which we call overlapped Schatten 1-norm,
can be written as follows:
?K
W
=
?W (k) ?S1 .
(1)
S1 /1
k=1
In this paper, we consider the following more general overlapped Sp /q-norm, which includes the
Schatten 1-norm as the special case (p, q) = (1, 1). The overlapped Sp /q-norm is written as follows:
(?K
)1/q
q
W
=
?W
?
,
(2)
(k)
Sp
Sp /q
k=1
where 1 ? p, q ? ?; here
?W ?Sp =
(?r
j=1
)1/p
?jp (W )
is the Schatten p-norm for matrices, where ?j (W ) is the jth largest singular value of W .
When used as a regularizer, the overlapped Schatten 1-norm penalizes all modes of W to be jointly
low-rank. It is related to the overlapped group regularization [see 13, 16] in a sense that the same
object W appears repeatedly in the norm.
The following inequality relates the overlapped Schatten 1-norm with the Frobenius norm, which
was a key step in the analysis of [26]:
K
?
?
W
?
rk W F ,
S1 /1
(3)
k=1
where rk is the mode-k rank of W.
Now we are interested in the dual norm of the overlapped Sp /q-norm, because deriving the dual
norm is a key step in solving the minimization problem that involves the norm (2) [see 16], as
well as computing various complexity measures, such as, Rademacher complexity [8] and Gaussian
width [4]. It turns out that the dual norm of the overlapped Sp /q-norm is the latent Sp? /q ? -norm as
shown in the following lemma (proof is presented in Appendix A).
3
Lemma 1. The dual norm of the overlapped Sp /q-norm is the latent Sp? /q ? -norm, where 1/p +
1/p? = 1 and 1/q + 1/q ? = 1, which is defined as follows:
(?
)1/q?
K
(k) q ?
X
=
inf
?X (k) ?Sp?
.
(4)
Sp? /q ?
k=1
(X (1) +???+X (K) )=X
Here the infimum is taken over the K-tuple of tensors X (1) , . . . , X (K) that sums to X .
In the supplementary material, we show a slightly more general version of the above lemma that
naturally generalizes the duality between overlapped/latent group sparsity norms [1, 12, 17, 21]; see
Section A. Note that when the groups have no overlap, the overlapped/latent group sparsity norms
become identical, and the duality is the ordinary duality between the group Sp /q-norms and the
group Sp? /q ? -norms.
2.2
Latent Schatten norms
The latent approach for tensor decomposition [25] solves the following minimization problem
K
?
(k)
minimize
L(W (1) + ? ? ? + W (K) ) + ?
?W (k) ?S1 ,
(5)
W (1) ,...,W (K)
k=1
(k)
where L is a loss function, ? is a regularization constant, and W (k) is the mode-k unfolding of
W (k) . Intuitively speaking, the latent approach for tensor decomposition predicts with a mixture of
K tensors that each are regularized to be low-rank in a specific mode.
Now, since the loss term in the minimization problem (5) only depends on the sum of the tensors
W (1) , . . . , W (K) , minimization problem (5) is equivalent
to the
following minimization problem
minimize L(W) + ?W S /1 .
W
1
In other words, we have identified the structured Schatten norm employed in the latent approach as
the latent S1 /1-norm (or latent Schatten 1-norm for short), which can be written as follows:
K
?
(k)
W
=
inf
?W (k) ?S1 .
(6)
S1 /1
(1)
(K)
(W +???+W )=W k=1
According to Lemma 1, the dual norm of the latent S1 /1-norm is the overlapped S? /?-norm
X
= max ?X (k) ?S? ,
(7)
S? /?
k
where ? ? ?S? is the spectral norm.
The following lemma is similar to inequality (3) and is a key in our analysis (proof is presented in
Appendix B).
Lemma 2.
(
)
?
W
? min rk W F ,
S1 /1
k
where rk is the mode-k rank of W.
Compared to inequality (3), the latent Schatten 1-norm is bounded by the minimal square root of the
ranks instead of the sum. This is the fundamental reason why the latent approach performs betters
than the overlapped approach as in Figure 1.
3
Main theoretical results
In this section, combining the duality we presented in the previous section with the techniques
from Agarwal et al. [1], we study the generalization performance of the latent approach for tensor
decomposition in the context of recovering an unknown tensor W ? from noisy measurements. This
is the setting of the experiment in Figure 1. We first prove a generic consistency statement that does
not take the low-rank-ness of the truth into account. Next we show that a tighter bound that takes the
low-rank-ness into account can be obtained with some incoherence assumption. Finally, we discuss
the difference between overlapped approach and latent approach and provide an explanation for the
empirically observed superior performance of the latent approach in Figure 1.
4
3.1
Consistency
Let W ? be the underlying true tensor and the noisy version Y is obtained as follows:
Y = W ? + E,
where E ? Rn1 ?????nK is the noise tensor.
A consistency statement can be obtained as follows (proof is presented in Appendix C):
Theorem 1. Assume that the regularization constant ? satisfies ? ? E S? /? (overlapped S? /?
(
)
? = argminW 1 Y ? W 2 + ?W
norm of the noise), then the estimator defined by W
,
2
S1 /1
F
satisfies the inequality
?
? ? W ? ? 2? min nk .
W
(8)
F
k
In particular when the noise goes to zero E ? 0, the right hand side of inequality (8) shrinks to zero.
3.2
Deterministic bound
? = ?K W
? (k) and
The consistency statement in the previous section only deals with the sum W
k=1
the statement does not take into account the low-rank-ness of the truth. In this section, we establish
? (k) .
a tighter statement that bounds the errors of individual terms W
To this end, we need some additional assumptions. First, we assume that the unknown tensor W ? is
a mixture of K tensors that each are low-rank in a certain mode and we have a noisy observation Y
as follows:
?K
Y = W? + E =
(9)
W ?(k) + E,
k=1
where r?k =
is the mode-k rank of the kth component W ?(k) ; note that this does not
equal the mode-k rank rk of W ? in general.
(k)
rank(W (k) )
Second, we assume that the spectral norm of the mode-k unfolding of the lth component is bounded
by a constant ? for all k ?= l as follows:
?(l)
?W (k) ?S? ? ?
(?l ?= k, k, l = 1, . . . , K).
(10)
Note that such an additional incoherence assumption has also been used in [1, 3, 11].
We employ the following optimization problem to recover the unknown tensor W ? :
(
K
?
(l)
? = argmin 1 Y ? W 2 + ?W
W
s.t.
W
=
W (k) , ?W (k) ?S? ? ?,
F
S1 /1
2
W
)
?l ?= k ,
k=1
(11)
where ? > 0 is a regularization constant. Notice that we have introduced additional spectral norm
constraints to control the correlation between the components; see also [1].
Our deterministic performance bound can be stated as follows (proof is presented in Appendix D):
? (k) be an optimal decomposition of W
? induced by the latent Schatten 1-norm (6).
Theorem 2. Let W
Assume that the regularization constant ? satisfies ? ? 2E S? /? + ?(K ? 1). Then there is
? of the minimization problem (11) satisfies the
a universal constant c such that, any solution W
following deterministic bound:
?K
?K
? (k) ? W ?(k) 2 ? c?2
W
rk .
(12)
F
k=1
k=1
Moreover, the overall error can be bounded in terms of the multilinear rank of W ? as follows:
? ? W ? 2 ? c?2 min r .
W
(13)
k
F
k
5
Note that in order to get inequality (13), we exploit the arbitrariness of the decomposition W ? =
?K
?(k)
to replace the sum over the ranks with the minimal mode-k rank. This is possible
k=1 W
?
because a singleton decomposition, i.e., W ?(k) = W ? and W ?(k ) = 0 for k ? ?= k, is allowed for
any k.
Comparing two inequalities (8) and (13), we see that there are two regimes. When the noise is small,
(8) is tighter. On the other hand, when the noise is larger and/or mink rk ? mink nk , (13) is tighter.
3.3
Gaussian noise
When the elements of the noise tensor E are Gaussian, we obtain the following theorem.
Theorem 3. Assume that the elements of the noise tensor E are independent zero-mean Gaussian
random variables with variance ? 2 . In addition, assume without loss of generality that the dimensionalities of W ? are sorted in the descending order, i.e., n1 ? ? ? ? ? nK . Then there is a universal
constant c such that, with probability?
at least 1 ? ?, any ?
solution of the minimization problem (11)
?
with regularization constant ? = 2?( N/nK + n1 + 2 log(K/?)) + ?(K ? 1) satisfies
?K
K
2
r?k
1 ? ? (k)
W ? W ?(k) F ? cF ? 2 k=1 ,
(14)
N
nK
k=1
((
) ? )2
) (?
?
nK
2 log(K/?) + ?(K?1)
is a factor that mildly depends
where F = 1 + n1NnK +
2?
N
on the dimensionalities and the constant ? in (10).
Note that the theoretically optimal choice of regularization constant ? is independent of the ranks of
the truth W ? or its factors in (9), which are unknown in practice.
Again we can obtain a bound corresponding to the minimum rank singleton decomposition as in
inequality (13) as follows:
2
mink rk
1 ?
W ? W ? F ? cF ? 2
,
(15)
N
nK
where F is the same factor as in Theorem 3.
3.4
Comparison with the overlapped approach
Inequality (15) explains the superior performance of the latent approach for tensor decomposition in
Figure 1. The inequality obtained in [26] for the overlapped approach that uses overlapped Schatten
1-norm (1) can be stated as follows:
)2(
(
)2
K ?
K
?
?
1 ?
1
?
? 2
? 2 1
1
W ?W F ?c?
rk .
(16)
nk
N
K
K
k=1
k=1
Comparing inequalities (15) and (16), we notice that the complexity of the overlapped approach
depends on the average (square root) of the mode-k ranks r1 , . . . , rK , whereas that of the latent
approach only grows linearly against the minimum mode-k rank. Interestingly, the latent approach
performs as if it knows the mode with the minimum rank, although such information is not given.
Recently, Mu et al. [19] proved a lower bound of the number of measurements for solving linear
inverse problem via the overlapped approach. Although the setting is different, the lower bound
depends on the minimum mode-k rank, which agrees with the complexity of the latent approach.
4
Numerical results
In this section, we numerically confirm the theoretically obtained scaling behavior.
The goal of this experiment is to recover the true low rank tensor W ? from a noisy observation Y.
We randomly generated the true low rank tensors W ? of size 50 ? 50 ? 20 or 80 ? 80 ? 40 with
various mode-k ranks (r1 , r2 , r3 ). A low-rank tensor is generated by first randomly drawing the
6
Overlapped approach
Latent approach
0.015
Comparison
0.015
5.5
5
0.005
size=[50 50 20] ?=0.43
0.01
0.005
size=[50 50 20] ?=0.89
size=[50 50 20] ?=0.89
size=[50 50 20] ?=3.79
size=[50 50 20] ?=3.79
size=[50 50 20] ?=11.29
size=[80 80 40] ?=0.62
size=[80 80 40] ?=1.27
size=[80 80 40] ?=1.27
size=[80 80 40] ?=5.46
size=[80 80 40] ?=5.46
0
0
0.2
0.4
0.6
0.8
Tucker rank complexity
1
MSE (overlap) / MSE (latent)
0.01
Mean squared error (latent)
Mean squared error (overlap)
4.5
4
3.5
3
2.5
2
1.5
1
0.5
size=[80 80 40] ?=16.24
0
0
0.2
0.4
0.6
Latent rank complexity
0.8
1
0
0
1
2
3
TR complexity/LR complexity
4
Figure 2: Performance of the overlapped approach and latent approach for tensor decomposition are
shown against their theoretically predicted complexity measures (see Eqs. (17) and (18)). The right
panel shows the improvement of the latent approach from the overlapped approach against the ratio
of their complexity measures.
r1 ? r2 ? r3 core tensor from the standard normal distribution and multiplying an orthogonal factor
matrix drawn uniformly to its each mode. The observation tensor Y is obtained by adding Gaussian
noise with standard deviation ? = 0.1. There is no missing entries in this experiment.
For each observation Y, we computed tensor decompositions using the overlapped approach and the
latent approach (11). For the optimization, we used the algorithms2 based on alternating direction
method of multipliers described in Tomioka et al. [25]. We computed the solutions for 20 candidate
regularization constants ranging from 0.1 to 100 and report the results for three representative values
for each method.
We measured the quality of the solutions obtained by the two approaches by the mean squared error
? ? W ? 2 /N . In order to make our theoretical predictions more concrete, we define
(MSE) W
F
the quantities in the right hand side of the bounds (16) and (14) as Tucker rank (TR) complexity and
Latent rank (LR) complexity, respectively, as follows:
( ?
)2
? )2 ( ?
K
K ?
1
1
1
TR complexity = K
r
,
(17)
k
k=1
k=1
nk
K
?K
r?k
LR complexity = k=1 ,
(18)
nK
?
where without loss of generality we assume n1 ? ? ? ? ? nK . We have ignored terms like nk /N
because they are negligible for nk ? 50 and N ? 50, 000. The TR complexity is equivalent to the
normalized rank in [26]. Note that the TR complexity (17) is defined in terms of the multilinear rank
(r1 , . . . , rK ) of the truth W ? , whereas the LR complexity (18) is defined in terms of the ranks of the
latent factors (r1 , . . . , rK ) in (9). In order to find a decomposition that minimizes the right hand side
of (18), we ran the latent approach to the true tensor W ? without noise, and took the minimum of
the sum of ranks found by the run and mink rk , i.e., the minimal mode-k rank (because a singleton
solution is also allowed). The whole procedure is repeated 10 times and averaged.
Figure 2 shows the results of the experiment. The left panel shows the MSE of the overlapped
approach against the TR complexity (17). The middle panel shows the MSE of the latent approach
against the LR complexity (18). The right panel shows the improvement (i.e., MSE of the overlap
approach over that of the latent approach) against the ratio of the respective complexity measures.
First, from the left panel, we can confirm that as predicted by [26], the MSE of the overlapped
approach scales linearly against the TR complexity (17) for each value of the regularization constant.
From the central panel, we can clearly see that the MSE of the latent approach scales linearly against
the LR complexity (18) as predicted by Theorem 3. The series with ? (? = 3.79 for 50 ? 50 ? 20,
2
The solver is available online: https://github.com/ryotat/tensor.
7
? = 5.46 for 80 ? 80 ? 40) is mostly below other series, which means that the optimal choice of the
regularization constant is independent of the rank of the true tensor and only depends on the size;
this agrees with the condition on ? in Theorem 3. Since the blue series and red series with the same
markers lie on top of each other (especially the series with ? for which the optimal regularization
constant is chosen), we can see that our theory predicts not only the scaling against the latent ranks
but also that against the size of the tensor correctly. Note that the regularization constants are scaled
by roughly 1.6 to account for the difference in the dimensionality.
The right panel reveals that in many cases the latent approach performs better than the overlapped
approach, i.e., MSE(overlap)/MSE(latent) is greater than one. Moreover, we can see that the success
of the latent approach relative to the overlapped approach is correlated with high TR complexity
to LR complexity ratio. Indeed, we found that an optimal decomposition of the true tensor W*
was typically a singleton decomposition corresponding to the smallest Tucker rank (see Section 3.2).
Note that the two approaches perform almost identically when they are under-regularized (crosses).
The improvements here are milder than that in Figure 1. This is because most of the randomly
generated low-rank tensors were simultaneously low-rank to some degree. It is encouraging that the
latent approach performs at least as well as the overlapped approach in such situations.
5 Conclusion
In this paper, we have presented a framework for structured Schatten norms. The current framework
includes both the overlapped Schatten 1-norm and latent Schatten 1-norm recently proposed in the
context of convex-optimization-based tensor decomposition [9, 15, 23, 25], and connects these studies to the broader literature on structured sparsity [2, 13, 17, 21]. Moreover, we have shown a duality
that holds between the two types of norms.
Furthermore, we have rigorously studied the performance of the latent approach for tensor decomposition. We have shown the consistency of the latent Schatten 1-norm minimization. Next, we have
analyzed the denoising performance of the latent approach and shown that its error is upper bounded by the minimal mode-k rank, which contrasts sharply with the average
(square root) dependency of the overlapped approach analyzed in [26]. This explains the empirically
observed superior performance of the latent approach compared to the overlapped approach. The
most difficult case for the overlapped approach is when the unknown tensor is only low-rank in one
mode as in Figure 1.
We have also confirmed through numerical simulations that our analysis precisely predicts the scaling of the mean squared error as a function of the dimensionalities and the sum of ranks of the factors
of the unknown tensor, which is dominated by the minimal mode-k rank. Unlike mode-k ranks, the
ranks of the factors are not easy to compute. However, note that the theoretically optimal choice of
the regularization constant does not depend on these quantities.
Thus, we have theoretically and empirically shown that for noisy tensor decomposition, the latent
approach is more likely to perform better than the overlapped approach. Analyzing the performance
of the latent approach for tensor completion would be an important future work.
The structured Schatten norms proposed in this paper include norms for tensors that are not employed in practice yet. Therefore, it would be interesting to explore various extensions, such as,
using the overlapped S1/∞-norm instead of the S1/1-norm, or a non-sparse tensor decomposition.
Acknowledgment: This work was carried out while both authors were at The University of Tokyo.
This work was partially supported by JSPS KAKENHI 25870192 and 25730013, and the Aihara
Project, the FIRST program from JSPS, initiated by CSTP.
References
[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. The Annals of Statistics, 40(2):1171–1197, 2012.
[2] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In Optimization for Machine Learning. MIT Press, 2011.
[3] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Technical report, arXiv:0912.3599, 2009.
[4] V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky. The convex geometry of linear inverse problems. Technical report, arXiv:1012.0621v2, 2010.
[5] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl., 21(4):1253–1278, 2000.
[6] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-(R1, R2, . . . , RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl., 21(4):1324–1342, 2000.
[7] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proc. of the American Control Conference, 2001.
[8] R. Foygel and N. Srebro. Concentration-based guarantees for low-rank matrix reconstruction. Technical report, arXiv:1102.3923, 2011.
[9] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27:025010, 2011.
[10] F. L. Hitchcock. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys., 6(1):164–189, 1927.
[11] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 57(11):7221–7234, 2011.
[12] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In Advances in NIPS 23, pages 964–972. 2010.
[13] R. Jenatton, J. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. J. Mach. Learn. Res., 12:2777–2824, 2011.
[14] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[15] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. In Proc. ICCV, 2009.
[16] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Convex and network flow optimization for structured sparsity. J. Mach. Learn. Res., 12:2681–2720, 2011.
[17] A. Maurer and M. Pontil. Structured sparsity and generalization. Technical report, arXiv:1108.3476, 2011.
[18] M. Mørup. Applications of tensor (multiway array) factorizations and decompositions in data mining. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):24–40, 2011.
[19] C. Mu, B. Huang, J. Wright, and D. Goldfarb. Square deal: Lower bounds and improved relaxations for tensor recovery. arXiv preprint arXiv:1307.5870, 2013.
[20] S. Negahban, P. Ravikumar, M. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in NIPS 22, pages 1348–1356. 2009.
[21] G. Obozinski, L. Jacob, and J.-P. Vert. Group lasso with overlaps: the latent group lasso approach. Technical report, arXiv:1110.0413, 2011.
[22] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[23] M. Signoretto, L. De Lathauwer, and J. Suykens. Nuclear norms for tensors and their use for convex multilinear estimation. Technical Report 10-186, ESAT-SISTA, K.U. Leuven, 2010.
[24] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proc. of the 18th Annual Conference on Learning Theory (COLT), pages 545–560. Springer, 2005.
[25] R. Tomioka, K. Hayashi, and H. Kashima. Estimation of low-rank tensors via convex optimization. Technical report, arXiv:1010.0789, 2011.
[26] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima. Statistical performance of convex tensor decomposition. In Advances in NIPS 24, pages 972–980. 2011.
[27] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
[28] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. Technical report, arXiv:1011.3027, 2010.
Convex Relaxations for Permutation Problems
Fajwel Fogel
C.M.A.P., École Polytechnique,
Palaiseau, France
[email protected]

Rodolphe Jenatton
CRITEO, Paris & C.M.A.P., École Polytechnique,
Palaiseau, France
[email protected]

Francis Bach
INRIA, SIERRA Project-Team & D.I.,
École Normale Supérieure, Paris, France
[email protected]

Alexandre d'Aspremont
CNRS & D.I., UMR 8548,
École Normale Supérieure, Paris, France
[email protected]
Abstract
Seriation seeks to reconstruct a linear order between variables using unsorted similarity information. It has direct applications in archeology and shotgun gene sequencing for example. We prove the equivalence between the seriation and the
combinatorial 2-SUM problem (a quadratic minimization problem over permutations) over a class of similarity matrices. The seriation problem can be solved
exactly by a spectral algorithm in the noiseless case and we produce a convex relaxation for the 2-SUM problem to improve the robustness of solutions in a noisy
setting. This relaxation also allows us to impose additional structural constraints
on the solution, to solve semi-supervised seriation problems. We present numerical experiments on archeological data, Markov chains and gene sequences.
1 Introduction
We focus on optimization problems written over the set of permutations. While the relaxation techniques discussed in what follows are applicable to a much more general setting, most of the paper
is centered on the seriation problem: we are given a similarity matrix between a set of n variables
and assume that the variables can be ordered along a chain, where the similarity between variables
decreases with their distance within this chain. The seriation problem seeks to reconstruct this linear
ordering based on unsorted, possibly noisy, similarity information.
This problem has its roots in archeology [1]. It also has direct applications in e.g. envelope reduction algorithms for sparse linear algebra [2], in identifying interval graphs for scheduling [3], or
in shotgun DNA sequencing where a single strand of genetic material is reconstructed from many
cloned shorter reads, i.e. small, fully sequenced sections of DNA [4, 5]. With shotgun gene sequencing applications in mind, many references focused on the Consecutive Ones Problem (C1P) which
seeks to permute the rows of a binary matrix so that all the ones in each column are contiguous. In
particular, [3] studied further connections to interval graphs and [6] crucially showed that a solution
to C1P can be obtained by solving the seriation problem on the squared data matrix. We refer the
reader to [7, 8, 9] for a much more complete survey of applications.
On the algorithmic front, the seriation problem was shown to be NP-Complete by [10]. Archeological examples are usually small scale and earlier references such as [1] used greedy techniques
to reorder matrices. Similar techniques were, and are still used to reorder genetic data sets. More
general ordering problems were studied extensively in operations research, mostly in connection
with the Quadratic Assignment Problem (QAP), for which several convex relaxations were studied
in e.g. [11, 12]. Since a matrix is a permutation matrix if and only if it is both orthogonal and
doubly stochastic, much work also focused on producing semidefinite relaxations to orthogonality constraints [13, 14]. These programs are convex hence tractable but the relaxations are usually
very large and scale poorly. More recently however, [15] produced a spectral algorithm that exactly
solves the seriation problem in a noiseless setting, in results that are very similar to those obtained
on the interlacing of eigenvectors for Sturm Liouville operators. They show that for similarity matrices computed from serial variables (for which a total order exists), the ordering of the second
eigenvector of the Laplacian (a.k.a. the Fiedler vector) matches that of the variables.
Here, we show that the solution of the seriation problem explicitly minimizes a quadratic function.
While this quadratic problem was mentioned explicitly in [15], no connection was made between
the combinatorial and spectral solutions. Our result shows in particular that the 2-SUM minimization problem mentioned in [10], and defined below, is polynomially solvable for matrices coming
from serial data. This result allows us to write seriation as a quadratic minimization problem over
permutation matrices and we then produce convex relaxations for this last problem. This relaxation
appears to be more robust to noise than the spectral or combinatorial techniques in a number of
examples. Perhaps more importantly, it allows us to impose additional structural constraints to solve
semi-supervised seriation problems. We also develop a fast algorithm for projecting on the set of
doubly stochastic matrices, which is of independent interest.
The paper is organized as follows. In Section 2, we show a decomposition result for similarity
matrices formed from the C1P problem. This decomposition allows us to make the connection between the seriation and 2-SUM minimization problems on these matrices. In Section 3 we use these
results to write convex relaxations of the seriation problem by relaxing permutation matrices as doubly stochastic matrices in the 2-SUM minimization problem. We also briefly discuss algorithmic
and computational complexity issues. Finally Section 4 discusses some applications and numerical
experiments.
Notation. We write P the set of permutations of {1, . . . , n}. The notation π will refer to a permuted
vector of {1, . . . , n}, while the notation Π (in capital letter) will refer to the corresponding permutation
matrix, which is a {0, 1} matrix such that Π_ij = 1 iff π(j) = i. For a vector y ∈ R^n, we write var(y)
its variance, with var(y) = Σ_{i=1}^n y_i²/n − (Σ_{i=1}^n y_i/n)², and we also write y_[u,v] ∈ R^{v−u+1}
for the vector (y_u, . . . , y_v)^T. Here, e_i ∈ R^n is the i-th Euclidean basis vector and 1 is
the vector of ones. We write S_n the set of symmetric matrices of dimension n, ‖·‖_F denotes the
Frobenius norm, and λ_i(X) the i-th eigenvalue (in increasing order) of X.
2 Seriation & consecutive ones
Given a symmetric, binary matrix A, we will focus on variations of the following 2-SUM combinatorial minimization problem, studied in e.g. [10], and written

    minimize    Σ_{i,j=1}^n A_ij (π(i) − π(j))²
    subject to  π ∈ P.                                                          (1)
This problem is used for example to reduce the envelope of sparse matrices and is shown in [10,
Th. 2.2] to be NP-Complete. When A has a specific structure, [15] show that a related matrix reordering problem used for seriation can be solved explicitly by a spectral algorithm. However, the
results in [15] do not explicitly link spectral ordering and the optimum of (1). For some instances
of A related to seriation and consecutive one problems, we show below that the spectral ordering
directly minimizes the objective of problem (1). We first focus on binary matrices, then extend our
results to more general unimodal matrices.
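For intuition, the 2-SUM objective of (1) can be minimized by brute force when n is tiny. A short Python sketch (the path-graph similarity matrix below is an illustrative toy example, not taken from the paper):

```python
import itertools
import numpy as np

def two_sum(A, pi):
    """2-SUM objective: sum_{i,j} A_ij (pi(i) - pi(j))^2."""
    p = np.asarray(pi)
    return np.sum(A * (p[:, None] - p[None, :]) ** 2)

def brute_force_2sum(A):
    """Exhaustive minimization of (1) over all permutations (tiny n only)."""
    n = A.shape[0]
    return min(itertools.permutations(range(n)), key=lambda pi: two_sum(A, pi))

# A path-graph similarity matrix: the identity ordering (or its reverse) is optimal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
best = brute_force_2sum(A)
print(best, two_sum(A, best))  # (0, 1, 2, 3) with objective 6.0
```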
2.1 Binary matrices

Let A ∈ S_n and y ∈ R^n; we focus on a generalization of the 2-SUM minimization problem

    minimize    f(y_π) := Σ_{i,j=1}^n A_ij (y_π(i) − y_π(j))²
    subject to  π ∈ P.                                                          (2)
The main point of this section is to show that if A is the permutation of a similarity matrix formed
from serial data, then minimizing (2) recovers the correct variable ordering. We first introduce a few
definitions following the terminology in [15].
Definition 2.1 We say that the matrix A ∈ S_n is an R-matrix (or Robinson matrix) iff it is symmetric
and satisfies A_{i,j} ≤ A_{i,j+1} and A_{i+1,j} ≤ A_{i,j} in the lower triangle, where 1 ≤ j < i ≤ n.

Another way to write the R-matrix conditions is to impose A_ij ≥ A_kl if |i − j| ≤ |k − l| off-diagonal,
i.e. the coefficients of A decrease as we move away from the diagonal (cf. Figure 1).
Figure 1: A Q-matrix A (see Def. 2.7), which has unimodal columns (left), its "circular square"
A ∘ A^T (see Def. 2.8), which is an R-matrix (center), and a matrix a a^T where a is a unimodal
vector (right).
Definition 2.2 We say that the {0, 1}-matrix A ∈ R^{n×m} is a P-matrix (or Petrie matrix) iff for each
column of A, the ones form a consecutive sequence.

As in [15], we will say that A is pre-R (resp. pre-P) iff there is a permutation Π such that ΠAΠ^T is
an R-matrix (resp. ΠA is a P-matrix). We now define CUT matrices as follows.
Definition 2.3 For u, v ∈ [1, n], we call CUT(u, v) the matrix such that

    CUT(u, v)_ij = 1 if u ≤ i, j ≤ v, and 0 otherwise,

i.e. CUT(u, v) is symmetric, block diagonal and has one square block equal to one.
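A CUT matrix is easy to construct explicitly. A small numpy sketch (0-indexed here, unlike the 1-indexed convention in the text):

```python
import numpy as np

def cut(n, u, v):
    """CUT(u, v): symmetric matrix with a single all-ones square block on [u, v].

    Indices are 0-based here, unlike the 1-based convention in the text.
    """
    A = np.zeros((n, n))
    A[u:v + 1, u:v + 1] = 1.0
    return A

print(cut(5, 1, 3).astype(int))
```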
The motivation for this definition is that if A is a {0, 1} P-matrix, then AA^T is a sum of CUT
matrices (with blocks generated by the columns of A). This means that we can start by studying
problem (2) on CUT matrices. We first show that the objective of (2) has a natural interpretation in
this case, as the variance of a subset of y under a uniform probability measure.
Lemma 2.4 Let A = CUT(u, v), then f(y) = Σ_{i,j=1}^n A_ij (y_i − y_j)² = (v − u + 1)² var(y_[u,v]).

Proof. We can write Σ_ij A_ij (y_i − y_j)² = y^T L_A y, where L_A = diag(A1) − A is the Laplacian of
matrix A, which is a block matrix equal to (v − u + 1) 1_{i=j} − 1 for u ≤ i, j ≤ v.
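A quick numerical check of the variance identity in Lemma 2.4, using the convention y^T L_A y = ½ Σ_{i,j} A_ij (y_i − y_j)² so that each unordered pair is counted once (indices are 0-based in this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n, u, v = 8, 2, 5                                   # 0-indexed interval [u, v]
A = np.zeros((n, n)); A[u:v + 1, u:v + 1] = 1.0     # CUT(u, v)
y = rng.standard_normal(n)

L = np.diag(A @ np.ones(n)) - A                     # Laplacian L_A = diag(A 1) - A
m = v - u + 1
# y^T L_A y equals (v - u + 1)^2 var(y_[u,v]), as in Lemma 2.4.
assert np.isclose(y @ L @ y, m ** 2 * np.var(y[u:v + 1]))
print("identity holds")
```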
This last lemma shows that solving (2) for CUT matrices amounts to finding a subset of y of size
(v − u + 1) with minimum variance. The next lemma characterizes optimal solutions of problem (2)
for CUT matrices and shows that its solution splits the coefficients of y in two disjoint intervals.

Lemma 2.5 Suppose A = CUT(u, v), and write z = y_π the optimal solution to (2). If we call
I = [u, v] and I^c its complement in [1, n], then z_j ∉ [min(z_I), max(z_I)] for all j ∈ I^c; in other
words, the coefficients in z_I and z_{I^c} belong to disjoint intervals.
We can use these last results to show that, at least for some vectors y, when A is an R-matrix, the
solution y_π to (2) is monotonic.

Proposition 2.6 Suppose C ∈ S_n is a {0, 1} pre-R matrix, A = C², and y_i = ai + b for i =
1, . . . , n and a, b ∈ R with a ≠ 0. If Π is such that ΠCΠ^T (hence ΠAΠ^T) is an R-matrix, then the
corresponding permutation π solves the combinatorial minimization problem (2) for A = C².
Proof. Suppose C is {0, 1} pre-R; then C² is pre-R, and Lemma 5.2 shows that there exists Π
such that ΠCΠ^T and ΠAΠ^T are R-matrices, so we can write ΠAΠ^T as a sum of CUT matrices.
Furthermore, Lemmas 2.4 and 2.5 show that each CUT term is minimized by a monotonic sequence,
but y_i = ai + b means here that all monotonic subsets of y of a given length have the same (minimal)
variance, attained by Πy. So the corresponding π also solves problem (2).
2.2 Unimodal matrices

Here, based on [6], we first define a generalization of P-matrices called (appropriately enough) Q-matrices, i.e. matrices with unimodal columns. We now show that minimizing (2) also recovers the
correct ordering for these more general matrix classes.
Definition 2.7 We say that a matrix A ∈ R^{n×m} is a Q-matrix if and only if each column of A is
unimodal, i.e. its coefficients increase to a maximum, then decrease.
Note that R-matrices are symmetric Q-matrices. We call a matrix A pre-Q iff there is a permutation
Π such that ΠA is a Q-matrix. Next, again based on [6], we define the circular product of two
matrices.
Definition 2.8 Given A, B^T ∈ R^{n×m} and a strictly positive weight vector w ∈ R^m, their circular
product A ∘ B is defined as (A ∘ B)_ij = Σ_{k=1}^m w_k min{A_ik, B_kj}, i, j = 1, . . . , n; note that when
A is a symmetric matrix, A ∘ A is also symmetric.
Remark that when A, B are {0, 1} matrices and w = 1, min{A_ik, B_kj} = A_ik B_kj, so the circular
product matches the regular matrix product AB^T. In the appendix we first prove that when A is a
Q-matrix, then A ∘ A^T is a sum of CUT matrices. This is illustrated in Figure 1.
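The circular product, and its reduction to the ordinary matrix product for {0, 1} matrices, can be checked directly. A short numpy sketch:

```python
import numpy as np

def circ_prod(A, B, w=None):
    """Circular product: (A o B)_ij = sum_k w_k min(A_ik, B_kj)."""
    n, m = A.shape
    if w is None:
        w = np.ones(m)
    return np.array([[np.sum(w * np.minimum(A[i, :], B[:, j]))
                      for j in range(B.shape[1])] for i in range(n)])

# For {0,1} matrices with w = 1, min(A_ik, B_kj) = A_ik * B_kj,
# so the circular product reduces to the ordinary matrix product.
A = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
print(np.allclose(circ_prod(A, A.T), A @ A.T))  # True
```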
Lemma 2.9 Let A ∈ R^{n×m} be a Q-matrix; then A ∘ A^T is a conic combination of CUT matrices.

This last result also shows that A ∘ A^T is an R-matrix when A is a Q-matrix, as a sum of CUT matrices.
These definitions are illustrated in Figure 1. We now recall the central result in [6, Th. 1].
Theorem 2.10 [6, Th. 1] Suppose A ∈ R^{n×m} is pre-Q; then ΠA is a Q-matrix iff Π(A ∘ A^T)Π^T is
an R-matrix.
We are now ready to show the main result of this section, linking permutations which order R-matrices and solutions to problem (2).

Proposition 2.11 Suppose C ∈ R^{n×m} is a pre-Q matrix and y_i = ai + b for i = 1, . . . , n and
a, b ∈ R with a ≠ 0. Let A = C ∘ C^T; if Π is such that ΠAΠ^T is an R-matrix, then the corresponding
permutation π solves the combinatorial minimization problem (2).

Proof. If C ∈ R^{n×m} is pre-Q, then Lemma 2.9 and Theorem 2.10 show that there is a permutation
Π such that Π(C ∘ C^T)Π^T is a sum of CUT matrices (hence an R-matrix). Now, as in Proposition 2.6,
all monotonic subsets of y of a given length have the same variance, hence Lemmas 2.4 and 2.5
show that π solves problem (2).
This result shows that if A is pre-R and can be written A = C ∘ C^T with C pre-Q, then the
permutation that makes A an R-matrix also solves (2). Since [15] show that sorting the Fiedler
vector also orders A as an R-matrix, Prop. 2.11 gives a polynomial time solution to problem (2)
when A = C ∘ C^T is pre-R with C pre-Q.
3 Convex relaxations for permutation problems
In the sections that follow, we will use the combinatorial results derived above to produce convex
relaxations of optimization problems written over the set of permutation matrices. Recall that the
Fiedler value of a symmetric nonnegative matrix is the smallest non-zero eigenvalue of its Laplacian.
The Fiedler vector is the corresponding eigenvector. We first recall the main result from [15] which
shows how to reorder pre-R matrices in a noise free setting.
Proposition 3.1 [15, Th. 3.3] Suppose A ∈ S_n is a pre-R-matrix, with a simple Fiedler value whose
Fiedler vector v has no repeated values. Suppose that Π ∈ P is such that the permuted Fiedler
vector Πv is monotonic; then ΠAΠ^T is an R-matrix.
The results in [15] provide a polynomial time solution to the R-matrix ordering problem in a noiseless setting. While [15] also show how to handle cases where the Fiedler vector is degenerate, these
scenarios are highly unlikely to arise in settings where observations on A are noisy and we do not
discuss these cases here.
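The spectral procedure of Proposition 3.1 is easy to reproduce on synthetic data. A numpy sketch (the exponential-decay R-matrix below is an illustrative choice, not one of the paper's datasets):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
# A synthetic R-matrix: similarities decay with distance from the diagonal.
R = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 2.0)

perm = rng.permutation(n)
A = R[np.ix_(perm, perm)]                 # scrambled observations

L = np.diag(A.sum(axis=1)) - A            # Laplacian of A
w, V = np.linalg.eigh(L)                  # eigenvalues in ascending order
fiedler = V[:, 1]                         # eigenvector of the second-smallest eigenvalue
order = np.argsort(fiedler)               # sorting the Fiedler vector reorders A

B = A[np.ix_(order, order)]               # recovers R, up to a global reversal
assert np.allclose(B, R) or np.allclose(B[::-1, ::-1], R)
```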
The results in the previous section made the connection between the spectral ordering in [15] and
problem (2). In what follows, we will use (2) to produce convex relaxations to matrix ordering
problems in a noisy setting. We also show in Section 3 how to incorporate a priori knowledge in
the optimization problem. Numerical experiments in Section 4 show that semi-supervised seriation
solutions are sometimes significantly more robust to noise than the spectral solutions ordered from
the Fiedler vector.
Permutations and doubly stochastic matrices. We write D_n the set of doubly stochastic matrices
in R^{n×n}, i.e. D_n = {X ∈ R^{n×n} : X ≥ 0, X1 = 1, X^T 1 = 1}. Note that D_n is convex and
polyhedral. Classical results show that the set of doubly stochastic matrices is the convex hull of the
set of permutation matrices. We also have P = D ∩ O, i.e. a matrix is a permutation matrix if and
only if it is both doubly stochastic and orthogonal. This means that we can directly write a convex
relaxation to the combinatorial problem (2) by replacing P with its convex hull D_n, to get

    minimize    g^T Π^T L_A Π g
    subject to  Π ∈ D_n,                                                        (3)
where g = (1, . . . , n). By symmetry, if a vector Πy minimizes (3), then the reverse vector also
minimizes (3). This often has a significant negative impact on the quality of the relaxation, and
we add the linear constraint e_1^T Π g + 1 ≤ e_n^T Π g to break symmetries, which means that we always
pick monotonically increasing solutions. Because the Laplacian L_A is always positive semidefinite,
problem (3) is a convex quadratic program in the variable Π and can be solved efficiently. To
provide a solution to the combinatorial problem (2), we then generate permutations from the doubly
stochastic optimal solution to (3) (we will describe an efficient procedure to do so in §3).
The results of Section 2 show that the optimal solution to (2) also solves the seriation problem in
the noiseless setting when the matrix A is of the form C ∘ C^T with C a Q-matrix and y is an affine
transform of the vector (1, . . . , n). These results also hold empirically for small perturbations of the
vector y and, to improve robustness to noisy observations of A, we can average several values of the
objective of (3) over these perturbations, solving

    minimize    Tr(Y^T Π^T L_A Π Y)/p
    subject to  e_1^T Π g + 1 ≤ e_n^T Π g,  Π1 = 1,  Π^T 1 = 1,  Π ≥ 0,         (4)

in the variable Π ∈ R^{n×n}, where Y ∈ R^{n×p} is a matrix whose columns are small perturbations
of the vector g = (1, . . . , n)^T. Note that the objective of (4) can be rewritten in vector format as
vec(Π)^T (Y Y^T ⊗ L_A) vec(Π)/p. Solving (4) is roughly p times faster than individually solving p
versions of (3).
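The vectorized form of the objective of (4) is the standard trace/Kronecker identity Tr(Y^T Π^T L_A Π Y) = vec(Π)^T (Y Y^T ⊗ L_A) vec(Π), which can be checked numerically (the random matrices below are arbitrary; the identity holds for any Π):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 5, 3
Y = rng.standard_normal((n, p))
Pi = rng.random((n, n))                       # any matrix works for the identity
A = rng.random((n, n)); A = A + A.T           # symmetric similarity matrix
L = np.diag(A.sum(axis=1)) - A                # Laplacian L_A

lhs = np.trace(Y.T @ Pi.T @ L @ Pi @ Y)
vecPi = Pi.reshape(-1, order='F')             # column-major vec(Pi)
rhs = vecPi @ np.kron(Y @ Y.T, L) @ vecPi
assert np.isclose(lhs, rhs)
print("identity holds")
```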
Regularized convex relaxation. As the set of permutation matrices P is the intersection of the set
of doubly stochastic matrices D and the set of orthogonal matrices O, i.e. P = D ∩ O, we can add
a penalty to the objective of the convex relaxed problem (4) to force the solution to get closer to the
set of orthogonal matrices.

As a doubly stochastic matrix of Frobenius norm √n is necessarily orthogonal, we would ideally
like to solve

    minimize    (1/p) Tr(Y^T Π^T L_A Π Y) − (μ/p) ‖Π‖_F²
    subject to  e_1^T Π g + 1 ≤ e_n^T Π g,  Π1 = 1,  Π^T 1 = 1,  Π ≥ 0,         (5)

with μ large enough to guarantee that the global solution is indeed a permutation. However, this
problem is not convex for any μ > 0, since its Hessian is not positive semidefinite (the Hessian
Y Y^T ⊗ L_A − μ I ⊗ I is never positive semidefinite when μ > 0, since the first eigenvalue of L_A
is 0). Instead, we propose a slightly modified version of (5), which has the same objective function
up to a constant, and is convex for some values of μ. Remember that the Laplacian matrix L_A is
always positive semidefinite, with at least one eigenvalue equal to zero (exactly one if the graph is
connected). Let P = I − (1/n) 11^T.
Proposition 3.2  The optimization problem

    minimize    (1/p) Tr(Y^T Π^T L_A Π Y) − (μ/p) ‖P Π‖_F^2
    subject to  e_1^T Π g + 1 ≤ e_n^T Π g,  Π 1 = 1,  Π^T 1 = 1,  Π ≥ 0,        (6)

is equivalent to problem (5) and their objectives differ by a constant. When μ ≤ λ_2(L_A) λ_1(Y Y^T), this problem is convex.
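The "objectives differ by a constant" claim is easy to verify numerically: on doubly stochastic matrices, ‖Π‖_F^2 − ‖PΠ‖_F^2 is identically 1, so the two penalty terms differ by μ/p. A small sketch (sampling doubly stochastic matrices as Birkhoff convex combinations; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
P_c = np.eye(n) - np.ones((n, n)) / n  # the centering projector P = I - (1/n) 1 1^T

def random_doubly_stochastic(n, k=10):
    # Birkhoff-von Neumann: any convex combination of permutation matrices
    # is doubly stochastic.
    w = rng.random(k)
    w /= w.sum()
    D = np.zeros((n, n))
    for wi in w:
        s = rng.permutation(n)
        M = np.zeros((n, n))
        M[s, np.arange(n)] = 1.0
        D += wi * M
    return D

# On doubly stochastic matrices, ||Pi||_F^2 - ||P Pi||_F^2 is identically 1,
# so the penalties of (5) and (6) differ by the constant mu/p.
diffs = [np.linalg.norm(Pi, "fro") ** 2 - np.linalg.norm(P_c @ Pi, "fro") ** 2
         for Pi in (random_doubly_stochastic(n) for _ in range(5))]
```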
Incorporating structural constraints. The QP relaxation allows us to add convex structural constraints to the problem. For instance, in archeological applications, one may specify that observation i must appear before observation j, i.e. π(i) < π(j). In gene sequencing applications, one may want to constrain the distance between two elements (e.g. mate reads), which would read a ≤ π(i) − π(j) ≤ b and introduce an affine inequality on the variable Π in the QP relaxation of the form a ≤ e_i^T Π g − e_j^T Π g ≤ b. Linear constraints could also be extracted from a reference gene sequence. More generally, we can rewrite problem (6) with n_c additional linear constraints as follows:

    minimize    (1/p) Tr(Y^T Π^T L_A Π Y) − (μ/p) ‖P Π‖_F^2
    subject to  D^T Π g + δ ≥ 0,  Π 1 = 1,  Π^T 1 = 1,  Π ≥ 0,        (7)

where D is a matrix of size n × n_c and δ is a vector of size n_c. The first column of D is equal to e_1 − e_n and δ_1 = 1 (to break symmetry).
Sampling permutations from doubly stochastic matrices. This procedure is based on the fact that a permutation can be defined from a doubly stochastic matrix D by the order induced on a monotonic vector. Suppose we generate a monotonic random vector v and compute Dv. To each v, we can associate a permutation Π such that ΠDv is monotonically increasing. If D is a permutation matrix, then the permutation Π generated by this procedure will be constant; if D is a doubly stochastic matrix but not a permutation, it might fluctuate. Starting from a solution D to problem (6), we can use this procedure to generate many permutation matrices Π, and we pick the one with lowest cost y^T Π^T L_A Π y in the combinatorial problem (2). We could also project Π on permutations using the Hungarian algorithm, but this proved more costly and less effective.
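This sampling procedure takes a few lines; in the extreme case where D is itself a permutation matrix, every draw must return the same permutation, which makes for a simple sanity check (names below are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

# Extreme case: D is itself a permutation matrix, so the sampled order must be
# constant across draws.  For a general doubly stochastic D, one would evaluate
# the combinatorial cost y^T Pi^T L_A Pi y of every sampled permutation and
# keep the cheapest one.
s_true = rng.permutation(n)
D = np.zeros((n, n))
D[s_true, np.arange(n)] = 1.0

def sample_order(D, rng):
    # Draw a random monotonic vector v and return the permutation that sorts
    # D v increasingly, i.e. the one making Pi D v monotonically increasing.
    v = np.sort(rng.random(D.shape[0]))
    return np.argsort(D @ v)

orders = [sample_order(D, rng) for _ in range(20)]
```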
Orthogonal relaxation. Recall that P = D ∩ O, i.e. a matrix is a permutation matrix if and only if it is both doubly stochastic and orthogonal. So far, we have relaxed the orthogonality constraint and replaced it by a penalty on the Frobenius norm. Semidefinite relaxations of orthogonality constraints have been developed in e.g. [12, 13, 14], with excellent approximation bounds, and these could provide alternative relaxation schemes. However, these relaxations form semidefinite programs of dimension O(n^2) (hence have O(n^4) variables), which are out of reach numerically for most of the problems considered here.
Algorithms. The convex relaxation in (7) is a quadratic program in the variable Π ∈ R^{n×n}, which has dimension n^2. For reasonable values of n (around a few hundred), interior point solvers such as MOSEK [17] solve this problem very efficiently. Furthermore, most pre-R matrices formed by squaring pre-Q matrices are very sparse, which considerably speeds up the linear algebra. However, first-order methods remain the only alternative beyond a certain scale. We quickly discuss the implementation of two classes of methods: the Frank-Wolfe (a.k.a. conditional gradient) algorithm, and accelerated gradient methods.
Solving (7) using the conditional gradient algorithm in [18] requires minimizing an affine function
over the set of doubly stochastic matrices at each iteration. This amounts to solving a classical
transportation (or matching) problem for which very efficient solvers exist [19].
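A toy illustration of this step: the linear minimization oracle over doubly stochastic matrices always returns a vertex of the Birkhoff polytope, i.e. a permutation matrix. Brute force stands in here for the Hungarian or transportation solvers mentioned above, and the smooth objective is an arbitrary stand-in, not the seriation objective:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 4

def lmo_birkhoff(G):
    # Linear minimization oracle over doubly stochastic matrices: a linear
    # function attains its minimum at a vertex of the Birkhoff polytope, i.e.
    # at a permutation matrix.  Real implementations solve this matching
    # problem with the Hungarian algorithm or a transportation solver; brute
    # force is fine at this tiny size.
    best, best_val = None, np.inf
    for s in itertools.permutations(range(n)):
        val = sum(G[s[p], p] for p in range(n))
        if val < best_val:
            M = np.zeros((n, n))
            M[list(s), np.arange(n)] = 1.0
            best, best_val = M, val
    return best

# Frank-Wolfe on a generic smooth objective f(X) = 0.5 ||X - C||_F^2.
C = rng.random((n, n))
X = np.eye(n)  # start at a vertex
for k in range(50):
    V = lmo_birkhoff(X - C)          # gradient of f at X is X - C
    X = X + 2.0 / (k + 2) * (V - X)  # standard step size 2/(k+2)
```

Every iterate is a convex combination of permutation matrices, so feasibility is maintained for free.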
On the other hand, solving (7) using accelerated gradient algorithms requires solving a projection
step on doubly stochastic matrices at each iteration [20]. Here too, exploiting structure significantly
improves the complexity of these steps. Given some matrix ?0 , the projection problem is written
    minimize    (1/2) ‖Π − Π_0‖_F^2
    subject to  D^T Π g + δ ≥ 0,  Π 1 = 1,  Π^T 1 = 1,  Π ≥ 0        (8)
in the variable Π ∈ R^{n×n}, with parameter g ∈ R^n. The dual is written

    maximize    −(1/2) ‖x 1^T + 1 y^T + D z g^T − Z‖_F^2 − Tr(Z^T Π_0)
                + x^T (Π_0 1 − 1) + y^T (Π_0^T 1 − 1) + z^T (D^T Π_0 g + δ)
    subject to  z ≥ 0,  Z ≥ 0        (9)

in the variables Z ∈ R^{n×n}, x, y ∈ R^n and z ∈ R^{n_c}. The dual is written over decoupled linear constraints in (z, Z) (x and y are unconstrained). Each subproblem is equivalent to computing a conjugate norm and can be solved in closed form. In particular, the matrix Z is updated at each iteration by Z = max{0, x 1^T + 1 y^T + D z g^T − Π_0}. Warm-starting provides a significant speedup. This means that problem (9) can be solved very efficiently by block-coordinate ascent, whose convergence is guaranteed in this setting [21], and a solution to (8) can be reconstructed from the optimum in (9).
4  Applications & numerical experiments
Archeology. We reorder the rows of Hodson's Münsingen dataset (as provided by [22] and manually ordered by [6]), to date 59 graves from 70 recovered artifact types (graves from similar periods containing similar artifacts). The results are reported in Table 1 (and in the appendix). We use a fraction of the pairwise orders in [6] to solve the semi-supervised version.
                 Sol. in [6]    Spectral      QP Reg         QP Reg + 0.1%   QP Reg + 47.5%
    Kendall τ    1.00±0.00      0.75±0.00     0.73±0.22      0.76±0.16       0.97±0.01
    Spearman ρ   1.00±0.00      0.90±0.00     0.88±0.19      0.91±0.16       1.00±0.00
    Comb. Obj.   38520±0        38903±0       41810±13960    43457±23004     37602±775
    # R-constr.  1556±0         1802±0        2021±484       2050±747        1545±43

Table 1: Performance metrics (median and stdev over 100 runs of the QP relaxation) for Kendall's τ and Spearman's ρ ranking correlations (large values are good), the objective value in (2), and the number of R-matrix monotonicity constraint violations (small values are good), comparing Kendall's original solution with that of the Fiedler vector, the seriation QP in (6) and the semi-supervised seriation QP in (7) with 0.1% and 47.5% pairwise ordering constraints specified. Note that the semi-supervised solution actually improves on both Kendall's manual solution and on the spectral ordering.
Markov chains. Here, we observe many disordered samples from a Markov chain. The mutual information matrix of these variables must be decreasing with |i − j| when ordered according to the true generating Markov chain [23, Th. 2.8.1], hence the mutual information matrix of these variables is a pre-R-matrix. We can thus recover the order of the Markov chain by solving the seriation problem on this matrix. In the following example, we try to recover the order of a Gaussian Markov chain written X_{i+1} = b_i X_i + ε_i with ε_i ∼ N(0, σ_i^2). The results are presented in Table 2 on 30 variables. We test performance in a noise-free setting where we observe the randomly ordered model covariance, in a noisy setting with enough samples (6000) to ensure that the spectral solution stays in a perturbative regime, and finally using much fewer samples (60) so the spectral perturbation condition fails.
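The monotone-decay property can be seen in closed form for a stationary AR(1) chain (constant b and σ², a simplification of the varying-coefficient chain above), using the Gaussian identity I(X_i; X_j) = −(1/2) log(1 − ρ_ij²):

```python
import numpy as np

n, b, sigma2 = 6, 0.7, 1.0  # chain length and AR(1) parameters (arbitrary)

# Stationary AR(1) chain X_{i+1} = b X_i + eps_i has
# Cov(X_i, X_j) = b^|i-j| sigma2 / (1 - b^2), hence correlation rho_ij = b^|i-j|.
idx = np.arange(n)
rho = b ** np.abs(idx[:, None] - idx[None, :])

# Mutual information of two jointly Gaussian variables: I = -0.5 log(1 - rho^2).
off = ~np.eye(n, dtype=bool)
MI = np.zeros((n, n))
MI[off] = -0.5 * np.log(1.0 - rho[off] ** 2)
MI[~off] = MI[off].max() + 1.0  # put a large value on the diagonal

# Entries of MI decrease monotonically moving away from the diagonal, so MI is
# a pre-R matrix and seriation on it recovers the order of the chain.
```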
Gene sequencing. In next generation shotgun gene sequencing experiments, genes are cloned about ten to a hundred times before being decomposed into very small subsequences called "reads", each fifty to a few hundred base pairs long. Current machines can only accurately sequence these small reads, which must then be reordered by "assembly" algorithms, using the overlaps between reads. We generate artificial sequencing data by (uniformly) sampling reads from chromosome 22 of the human genome from NCBI, then store k-mer hit versus read in a binary matrix (a k-mer is a fixed sequence of k base pairs). If the reads are ordered correctly, this matrix should be C1P, hence we solve the C1P problem on the {0, 1}-matrix whose rows correspond to k-mer hits for each read, i.e. the element (i, j) of the matrix is equal to one if k-mer j is included in read i. This matrix is extremely sparse, as it is approximately band-diagonal with roughly constant degree when reordered appropriately, and computing the Fiedler vector can be done with complexity O(n log n), as it amounts to computing the second largest eigenvector of λ_n(L) I − L, where L is the Laplacian
                                True         Spectral     QP Reg       QP + 0.2%    QP + 4.6%    QP + 54.3%
    No noise                    1.00±0.00    1.00±0.00    0.50±0.34    0.65±0.29    0.71±0.08    0.98±0.01
    Noise within spectral gap   1.00±0.00    0.86±0.14    0.58±0.31    0.40±0.26    0.70±0.07    0.97±0.01
    Large noise                 1.00±0.00    0.41±0.25    0.45±0.27    0.60±0.27    0.68±0.08    0.97±0.02

Table 2: Kendall's τ between the true Markov chain ordering, the Fiedler vector, the seriation QP in (6) and the semi-supervised seriation QP in (7) with varying numbers of pairwise orders specified. We observe the (randomly ordered) model covariance matrix (no noise), the sample covariance matrix with enough samples so the error is smaller than half of the spectral gap, then a sample covariance computed using much fewer samples so the spectral perturbation condition fails.
of the matrix. In our experiments, computing the Fiedler vector of a million base pairs sequence takes less than a minute using MATLAB's eigs on a standard desktop machine.
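A minimal version of this spectral step, on a tiny dense path-like similarity rather than a sparse read matrix (all data below are synthetic choices of mine): sorting by the Fiedler vector of the Laplacian recovers the hidden band order up to reversal.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30

# Band-diagonal similarity of a path (an R-matrix when correctly ordered).
idx = np.arange(n)
A = np.maximum(0, 3 - np.abs(idx[:, None] - idx[None, :])).astype(float)

# Hide the order behind a random permutation.
perm = rng.permutation(n)
A_shuffled = A[np.ix_(perm, perm)]

# Fiedler vector: eigenvector for the second-smallest eigenvalue of the Laplacian.
L = np.diag(A_shuffled.sum(axis=1)) - A_shuffled
w, V = np.linalg.eigh(L)        # eigenvalues in ascending order
order = np.argsort(V[:, 1])     # sort by the Fiedler vector

# Sorting by the Fiedler vector recovers the hidden order (up to reversal).
recovered = perm[order]
```

At scale, one would use a sparse eigensolver on the read matrix instead of a dense eigendecomposition.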
In practice, besides sequencing errors (handled relatively well by the high coverage of the reads),
there are often repeats in long genomes. If the repeats are longer than the k-mers, the C1P assumption is violated and the order given by the Fiedler vector is not reliable anymore. On the other hand,
handling the repeats is possible using the information given by mate reads, i.e. reads that are known
to be separated by a given number of base pairs in the original genome. This structural knowledge
can be incorporated into the relaxation (7). While our algorithm for solving (7) only scales up to
a few thousands base pairs on a regular desktop, it can be used to solve the sequencing problem
hierarchically, i.e. to refine the spectral solution. Graph connectivity issues can be solved directly
using spectral information.
Figure 2: We plot the reads × reads matrix measuring the number of common k-mers between read pairs, reordered according to the spectral ordering on two regions (two plots on the left), then the Fiedler and Fiedler+QP read orderings versus true ordering (two plots on the right). The semi-supervised solution contains far fewer misplaced reads.
In Figure 2, the first two plots show the result of spectral ordering on simulated reads from human chromosome 22. The full R matrix formed by squaring the reads × k-mers matrix is too large to be plotted in MATLAB, and we zoom in on two diagonal block submatrices. In the first one, the reordering is good and the matrix has very low bandwidth; the corresponding gene segment (or contig.) is well reconstructed. In the second, the reordering is less reliable and the bandwidth larger; the reconstructed gene segment contains errors. The last two plots show recovered read position versus true read position for the Fiedler vector and the Fiedler vector followed by semi-supervised seriation, where the QP relaxation is applied to the reads assembled by the spectral solution, on 250 000 reads generated in our experiments. We see that the number of misplaced reads significantly decreases in the semi-supervised seriation solution.
Acknowledgements. AA, FF and RJ would like to acknowledge support from a European Research Council starting grant (project SIPA) and a gift from Google. FB would like to acknowledge support from a European Research Council starting grant (project SIERRA). A much more complete version of this paper is available as [16] at arXiv:1306.4805.
References
[1] William S. Robinson. A method for chronologically ordering archaeological deposits. American Antiquity, 16(4):293-301, 1951.
[2] Stephen T. Barnard, Alex Pothen, and Horst Simon. A spectral algorithm for envelope reduction of sparse matrices. Numerical Linear Algebra with Applications, 2(4):317-334, 1995.
[3] D. R. Fulkerson and O. A. Gross. Incidence matrices and interval graphs. Pacific Journal of Mathematics, 15(3):835, 1965.
[4] Gemma C. Garriga, Esa Junttila, and Heikki Mannila. Banded structure in binary matrices. Knowledge and Information Systems, 28(1):197-226, 2011.
[5] João Meidanis, Oscar Porto, and Guilherme P. Telles. On the consecutive ones property. Discrete Applied Mathematics, 88(1):325-354, 1998.
[6] David G. Kendall. Abundance matrices and seriation in archaeology. Probability Theory and Related Fields, 17(2):104-112, 1971.
[7] Chris Ding and Xiaofeng He. Linearized cluster assignment via spectral ordering. In Proceedings of the Twenty-First International Conference on Machine Learning, page 30. ACM, 2004.
[8] Niko Vuokko. Consecutive ones property and spectral ordering. In Proceedings of the 10th SIAM International Conference on Data Mining (SDM'10), pages 350-360, 2010.
[9] Innar Liiv. Seriation and matrix reordering methods: An historical overview. Statistical Analysis and Data Mining, 3(2):70-91, 2010.
[10] Alan George and Alex Pothen. An analysis of spectral envelope reduction via quadratic assignment problems. SIAM Journal on Matrix Analysis and Applications, 18(3):706-732, 1997.
[11] Eugene L. Lawler. The quadratic assignment problem. Management Science, 9(4):586-599, 1963.
[12] Qing Zhao, Stefan E. Karisch, Franz Rendl, and Henry Wolkowicz. Semidefinite programming relaxations for the quadratic assignment problem. Journal of Combinatorial Optimization, 2(1):71-109, 1998.
[13] A. Nemirovski. Sums of random symmetric matrices and quadratic optimization under orthogonality constraints. Mathematical Programming, 109(2):283-317, 2007.
[14] Anthony Man-Cho So. Moment inequalities for sums of random matrices and their applications in optimization. Mathematical Programming, 130(1):125-151, 2011.
[15] J. E. Atkins, E. G. Boman, B. Hendrickson, et al. A spectral algorithm for seriation and the consecutive ones problem. SIAM J. Comput., 28(1):297-310, 1998.
[16] F. Fogel, R. Jenatton, F. Bach, and A. d'Aspremont. Convex relaxations for permutation problems. arXiv:1306.4805, 2013.
[17] Erling D. Andersen and Knud D. Andersen. The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. High Performance Optimization, 33:197-232, 2000.
[18] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.
[19] L. Portugal, F. Bastos, J. Júdice, J. Paixao, and T. Terlaky. An investigation of interior-point algorithms for the linear transportation problem. SIAM Journal on Scientific Computing, 17(5):1202-1223, 1996.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization. Springer, 2003.
[21] D. Bertsekas. Nonlinear Programming. Athena Scientific, 1998.
[22] Frank Roy Hodson. The La Tène cemetery at Münsingen-Rain: catalogue and relative chronology, volume 5. Stämpfli, 1968.
[23] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, 2012.
Solving the multi-way matching problem by
permutation synchronization
Deepti Pachauri∗, Risi Kondor‡ and Vikas Singh†∗
∗ Dept. of Computer Sciences, University of Wisconsin-Madison
† Dept. of Biostatistics & Medical Informatics, University of Wisconsin-Madison
‡ Dept. of Computer Science and Dept. of Statistics, The University of Chicago
[email protected]  [email protected]  [email protected]
Abstract
The problem of matching not just two, but m different sets of objects to each other
arises in many contexts, including finding the correspondence between feature
points across multiple images in computer vision. At present it is usually solved
by matching the sets pairwise, in series. In contrast, we propose a new method,
Permutation Synchronization, which finds all the matchings jointly, in one shot,
via a relaxation to eigenvector decomposition. The resulting algorithm is both
computationally efficient, and, as we demonstrate with theoretical arguments as
well as experimental results, much more stable to noise than previous methods.
1  Introduction
Finding the correct bijection between two sets of objects X = {x_1, x_2, . . . , x_n} and X′ = {x′_1, x′_2, . . . , x′_n} is a fundamental problem in computer science, arising in a wide range of contexts [1]. In this paper, we consider its generalization to matching not just two, but m different sets X_1, X_2, . . . , X_m. Our primary motivation and running example is the classic problem of matching landmarks (feature points) across many images of the same object in computer vision, which is a key ingredient of image registration [2], recognition [3, 4], stereo [5], shape matching [6, 7], and structure from motion (SFM) [8, 9]. However, our approach is fully general and equally applicable to problems such as matching multiple graphs [10, 11].
Presently, multi-matching is usually solved sequentially, by first finding a putative permutation τ_12 matching X_1 to X_2, then a permutation τ_23 matching X_2 to X_3, and so on, up to τ_{m−1,m}. While one can conceive of various strategies for optimizing this process, the fact remains that when the data are noisy, a single error in the sequence will typically create a large number of erroneous pairwise matches [12, 13, 14]. In contrast, in this paper we describe a new method, Permutation Synchronization, that estimates the entire matrix (τ_ji)_{i,j=1}^m of assignments jointly, in a single shot, and is therefore much more robust to noise.

For consistency, the recovered matchings must satisfy τ_kj τ_ji = τ_ki. While finding an optimal matrix of permutations satisfying these relations is, in general, combinatorially hard, we show that for the most natural choice of loss function the problem has a natural relaxation to just finding the n leading eigenvectors of the cost matrix. In addition to vastly reducing the computational cost, using recent results from random matrix theory, we show that the eigenvectors are very effective at aggregating information from all m(m−1)/2 pairwise matches, and therefore make the algorithm surprisingly robust to noise. Our experiments show that in landmark matching problems Permutation Synchronization can recover the correct correspondence between landmarks across a large number of images with small error, even when a significant fraction of the pairwise matches are incorrect.

The term "synchronization" is inspired by the recent celebrated work of Singer et al. on a similar problem involving finding the right rotations (rather than matchings) between electron microscopic
images [15][16][17]. Historically, multi-matching has received relatively little attention. However,
independently of, and concurrently with the present work, Huang and Guibas [18] have recently
proposed a semidefinite programming based solution, which parallels our approach, and in problems
involving occlusion might perform even better.
2  Synchronizing permutations
Consider a collection of m sets X_1, X_2, . . . , X_m of n objects each, X_i = {x^i_1, x^i_2, . . . , x^i_n}, such that for each pair (X_i, X_j), each x^i_p in X_i has a natural counterpart x^j_q in X_j. For example, in computer vision, given m images of the same scene taken from different viewpoints, x^i_1, x^i_2, . . . , x^i_n might be n visual landmarks detected in image i, while x^j_1, x^j_2, . . . , x^j_n are n landmarks detected in image j, in which case x^i_p ∼ x^j_q signifies that x^i_p and x^j_q correspond to the same physical feature.

Since the correspondence between X_i and X_j is a bijection, one can write it as x^i_p ∼ x^j_{τ_ji(p)} for some permutation τ_ji : {1, 2, . . . , n} → {1, 2, . . . , n}. Key to our approach to solving multi-matching is that with respect to the natural definition of multiplication, (σ_2 σ_1)(i) := σ_2(σ_1(i)), the n! possible permutations of {1, 2, . . . , n} form a group, called the symmetric group of degree n, denoted S_n.

We say that the system of correspondences between X_1, X_2, . . . , X_m is consistent if x^i_p ∼ x^j_q and x^j_q ∼ x^k_r together imply that x^i_p ∼ x^k_r. In terms of permutations this is equivalent to requiring that the array (τ_ij)_{i,j=1}^m satisfy

    τ_kj τ_ji = τ_ki        for all i, j, k.        (1)
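The consistency relation (1) is straightforward to check in code when the matchings are generated from hidden per-set permutations; this sketch stores a permutation as an index array s with s[p] = σ(p), a convention of mine:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 4

def inv(s):
    # inverse permutation: inv(s)[s[p]] = p
    out = np.empty_like(s)
    out[s] = np.arange(len(s))
    return out

def compose(s2, s1):
    # (s2 s1)(p) = s2(s1(p))
    return s2[s1]

# Hidden per-set permutations sigma_i; induced matchings tau_ji = sigma_j sigma_i^{-1}.
sigmas = [rng.permutation(n) for _ in range(m)]
tau = [[compose(sigmas[j], inv(sigmas[i])) for i in range(m)] for j in range(m)]

# Consistency (1): tau_kj tau_ji = tau_ki for all i, j, k.
consistent = all(
    (compose(tau[k][j], tau[j][i]) == tau[k][i]).all()
    for i in range(m) for j in range(m) for k in range(m)
)
```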
Alternatively, given some reference ordering of x_1, x_2, . . . , x_n, we can think of each X_i as realizing its own permutation σ_i (in the sense of x_ℓ ∼ x^i_{σ_i(ℓ)}), and then τ_ji becomes

    τ_ji = σ_j σ_i^{−1}.        (2)

The existence of permutations σ_1, σ_2, . . . , σ_m satisfying (2) is equivalent to requiring that (τ_ji)_{i,j=1}^m satisfy (1). Thus, assuming consistency, solving the multi-matching problem reduces to finding just m different permutations, rather than O(m^2). However, the σ_i's are of course not directly observable. Rather, in a typical application we have some tentative (noisy) τ̃_ji matchings which we must synchronize into the form (2) by finding the underlying σ_1, . . . , σ_m.
Given (τ̃_ji)_{i,j=1}^m and some appropriate distance metric d between permutations, we formalize Permutation Synchronization as the combinatorial optimization problem

    minimize_{σ_1, σ_2, . . . , σ_m ∈ S_n}  Σ_{i,j=1}^N d(σ_j σ_i^{−1}, τ̃_ji).        (3)
The computational cost of solving (3) depends critically on the form of the distance metric d. In this paper we limit ourselves to the simplest choice

    d(σ, τ) = n − ⟨P(σ), P(τ)⟩,        (4)

where P(σ) ∈ R^{n×n} are the usual permutation matrices

    [P(σ)]_{q,p} := 1 if σ(p) = q, and 0 otherwise,

and ⟨A, B⟩ is the matrix inner product ⟨A, B⟩ := tr(A^T B) = Σ_{p,q=1}^n A_{p,q} B_{p,q}.
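With permutations stored as index arrays, (4) indeed counts the number of mismatched assignments; the example values below are arbitrary:

```python
import numpy as np

def perm_matrix(s):
    # [P(sigma)]_{q,p} = 1 iff sigma(p) = q
    n = len(s)
    P = np.zeros((n, n))
    P[np.asarray(s), np.arange(n)] = 1.0
    return P

def d(s, t):
    # d(sigma, tau) = n - <P(sigma), P(tau)>
    return len(s) - np.sum(perm_matrix(s) * perm_matrix(t))

s = np.array([1, 0, 2, 3])   # sigma
t = np.array([1, 2, 0, 3])   # tau
mismatches = d(s, t)         # the two permutations disagree on objects 1 and 2
```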
The distance (4) simply counts the number of objects assigned differently by σ and τ. Furthermore, it allows us to rewrite (3) as maximize_{σ_1, σ_2, . . . , σ_m} Σ_{i,j=1}^m ⟨P(σ_j σ_i^{−1}), P(τ̃_ji)⟩, suggesting the generalization

    maximize_{σ_1, σ_2, . . . , σ_m}  Σ_{i,j=1}^m ⟨P(σ_j σ_i^{−1}), T_ji⟩,        (5)

where the T_ji's can now be any matrices, subject to T_ji^T = T_ij. Intuitively, each T_ji is an objective matrix, the (q, p) element of which captures the utility of matching x^i_p in X_i to x^j_q in X_j. This generalization is very useful when the assignments of the different x^i_p's have different confidences. For example, in the landmark matching case, if, due to occlusion or for some other reason, the counterpart of x^i_p is not present in X_j, then we can simply set [T_ji]_{q,p} = 0 for all q.
2.1  Representations and eigenvectors

The generalized Permutation Synchronization problem (5) can also be written as

    maximize_{σ_1, σ_2, . . . , σ_m}  ⟨P, T⟩,        (6)

where

    P = [ P(σ_1 σ_1^{−1})   · · ·   P(σ_1 σ_m^{−1}) ]
        [       ...         ...          ...        ]
        [ P(σ_m σ_1^{−1})   · · ·   P(σ_m σ_m^{−1}) ]

and

    T = [ T_11   · · ·   T_1m ]
        [  ...   ...      ... ]
        [ T_m1   · · ·   T_mm ].        (7)
A matrix valued function ρ : S_n → C^{d×d} is said to be a representation of the symmetric group if ρ(σ_2) ρ(σ_1) = ρ(σ_2 σ_1) for any pair of permutations σ_1, σ_2 ∈ S_n. Clearly, P is a representation of S_n (actually, the so-called defining representation), since P(σ_2 σ_1) = P(σ_2) P(σ_1). Moreover, P is a so-called orthogonal representation, because each P(σ) is real and P(σ^{−1}) = P(σ)^T. Our fundamental observation is that this implies that P has a very special form.

Proposition 1. The synchronization matrix P is of rank n and is of the form P = U U^T, where

    U = [ P(σ_1) ]
        [  ...   ]
        [ P(σ_m) ].
Proof. From P being a representation of S_n,

    P = [ P(σ_1) P(σ_1)^T   · · ·   P(σ_1) P(σ_m)^T ]
        [        ...        ...            ...      ]
        [ P(σ_m) P(σ_1)^T   · · ·   P(σ_m) P(σ_m)^T ],        (8)

implying P = U U^T. Since U has n columns, rank(P) is at most n. This rank is achieved because P(σ_1) is an orthogonal matrix, therefore it has linearly independent columns, and consequently the columns of U cannot be linearly dependent. ∎
Corollary 1. Letting [P(σ_i)]_ℓ denote the ℓ-th column of P(σ_i), the normalized columns of U,

    u_ℓ = (1/√m) [ [P(σ_1)]_ℓ ]
                 [    ...     ]        ℓ = 1, . . . , n,        (9)
                 [ [P(σ_m)]_ℓ ],

are mutually orthogonal unit eigenvectors of P with the same eigenvalue m, and together span the row/column space of P.

Proof. The columns of U are orthogonal because the columns of each constituent P(σ_i) are orthogonal. The normalization follows from each column of P(σ_i) having norm 1. The rest follows by Proposition 1. ∎
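Proposition 1 and Corollary 1 can be verified numerically: stacking m random n×n permutation matrices into U and forming P = U U^T gives eigenvalue m with multiplicity exactly n and zeros elsewhere, since U^T U = m I (sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 4, 6

def perm_matrix(s):
    P = np.zeros((len(s), len(s)))
    P[np.asarray(s), np.arange(len(s))] = 1.0
    return P

# Stack the m permutation matrices into U and form P = U U^T.
U = np.vstack([perm_matrix(rng.permutation(n)) for _ in range(m)])
P_sync = U @ U.T

# U^T U = m I, so the nonzero spectrum of P is the eigenvalue m, repeated n times.
eigvals = np.sort(np.linalg.eigvalsh(P_sync))[::-1]
```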
2.2  An easy relaxation

Solving (6) is computationally difficult, because it involves searching the combinatorial space of a combination of m permutations. However, Proposition 1 and its corollary suggest relaxing it to

    maximize_{P ∈ M^m_n}  ⟨P, T⟩,        (10)

where M^m_n is the set of mn-dimensional rank n symmetric matrices whose non-zero eigenvalues are m. This is now just a generalized Rayleigh problem, the solution of which is simply

    P = m Σ_{ℓ=1}^n v_ℓ v_ℓ^T,        (11)

where v_1, v_2, . . . , v_n are the n leading normalized eigenvectors of T. Equivalently, P = U U^T, where

    U = √m [ v_1  v_2  · · ·  v_n ].        (12)

Thus, in contrast to the original combinatorial problem, (10) can be solved by just finding the n leading eigenvectors of T.
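On a small noiseless instance, the whole pipeline (build T, take the n leading eigenvectors, round each (i, 1) block to a permutation as in Algorithm 1) recovers the matchings exactly; brute-force rounding stands in for Kuhn-Munkres at this tiny size, and all variable names are mine:

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, m = 5, 4  # small enough to round each block by brute force

def perm_matrix(s):
    P = np.zeros((n, n))
    P[np.asarray(s), np.arange(n)] = 1.0
    return P

def inv(s):
    out = np.empty_like(s)
    out[s] = np.arange(len(s))
    return out

def round_to_permutation(B):
    # arg max over sigma of <P(sigma), B>; Algorithm 1 uses Kuhn-Munkres here.
    best, best_val = None, -np.inf
    for s in itertools.permutations(range(n)):
        val = sum(B[s[p], p] for p in range(n))
        if val > best_val:
            best, best_val = np.array(s), val
    return best

# Noiseless objective matrix T with blocks T_ji = P(sigma_j) P(sigma_i)^T.
sigmas = [rng.permutation(n) for _ in range(m)]
T = np.block([[perm_matrix(sj) @ perm_matrix(si).T for si in sigmas] for sj in sigmas])

# Relaxation: U = sqrt(m) [n leading eigenvectors of T]; then round the (i, 1) blocks.
w, V = np.linalg.eigh(T)             # eigenvalues in ascending order
U = np.sqrt(m) * V[:, -n:]
est = [round_to_permutation(U[i * n:(i + 1) * n] @ U[:n].T) for i in range(m)]

# Recovered matchings tau_ji = est_j est_i^{-1} match sigma_j sigma_i^{-1} exactly.
exact = all(
    (est[j][inv(est[i])] == sigmas[j][inv(sigmas[i])]).all()
    for i in range(m) for j in range(m)
)
```

Note that any orthonormal basis of the leading eigenspace works here, since U U^T is invariant to the rotation ambiguity discussed in Section 3.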
Of course, from P we must still recover the individual permutations σ_1, σ_2, …, σ_m. However, as long as P is relatively close in form to (7), this is quite a simple and stable process. One way to do it is to let each σ_i be the permutation that best matches the (i, 1) block of P in the linear assignment sense,

    σ_i = arg max_{σ ∈ S_n} ⟨P(σ), [P]_{i,1}⟩,

which is solved in O(n³) time by the Kuhn–Munkres algorithm [19]¹, and then set σ_{ji} = σ_j σ_i^{-1}, which will then satisfy the consistency relations. The pseudocode of the full algorithm is given in Algorithm 1.

Algorithm 1  Permutation Synchronization
    Input: the objective matrix T
    Compute the n leading eigenvectors (v_1, v_2, …, v_n) of T and set U = √m [v_1, v_2, …, v_n]
    for i = 1 to m do
        P_{i1} = U_{(i-1)n+1:in, 1:n} U_{1:n, 1:n}^⊤
        σ_i = arg max_{σ ∈ S_n} ⟨P_{i1}, σ⟩    [Kuhn–Munkres]
    end for
    for each (i, j) do
        σ_{ji} = σ_j σ_i^{-1}
    end for
    Output: the matrix (σ_{ji})^m_{i,j=1} of globally consistent matchings
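The rounding step can be sketched as follows. For clarity this illustration (ours, not the paper's code) replaces the O(n³) Kuhn–Munkres solver with brute force over S_n, which is feasible only for tiny n. The recovered σ_i are defined relative to the reference block (i, 1), so in the noise-free case each equals σ_i σ_1^{-1}; the pairwise products σ_j σ_i^{-1} are unaffected by this common factor.

```python
import itertools
import numpy as np

def perm_matrix(sigma):
    P = np.zeros((len(sigma), len(sigma)))
    P[sigma, np.arange(len(sigma))] = 1.0
    return P

def round_to_permutations(P, n, m):
    """Recover sigma_1..sigma_m from the (i,1) blocks of P, as in Algorithm 1.
    Brute force over S_n stands in for the O(n^3) Kuhn-Munkres step (tiny n only)."""
    taus = []
    for i in range(m):
        block = P[i*n:(i+1)*n, :n]                       # the block P_{i1}
        best = max(itertools.permutations(range(n)),
                   key=lambda s: sum(block[s[j], j] for j in range(n)))
        taus.append(np.array(best))
    return taus

rng = np.random.default_rng(2)
m, n = 4, 4
perms = [rng.permutation(n) for _ in range(m)]
U0 = np.vstack([perm_matrix(s) for s in perms])
taus = round_to_permutations(U0 @ U0.T, n, m)
# each tau_i equals sigma_i composed with sigma_1^{-1} (the reference permutation)
```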
3  Analysis of the relaxed algorithm
Let us now investigate under what conditions we can expect the relaxation (10) to work well, in particular, in what cases we can expect the recovered matchings to be exact.

In the absence of noise, i.e., when T_{ji} = P(τ_{ji}) for some array (τ_{ji})_{j,i} of permutations that already satisfy the consistency relations (1), T will have precisely the same structure as described by Proposition 1 for P. In particular, it will have n mutually orthogonal eigenvectors

    v_ℓ = (1/√m) [ [P(τ_1)]_ℓ ; … ; [P(τ_m)]_ℓ ],    ℓ = 1, …, n,    (13)

with the same eigenvalue m. Due to the n-fold degeneracy, however, the matrix of eigenvectors (12) is only defined up to multiplication by an arbitrary rotation matrix O on the right, which means that instead of the "correct" U (whose columns are (13)), the eigenvector decomposition of T may return any U′ = U O. Fortunately, when forming the product

    P = U′ U′^⊤ = U O O^⊤ U^⊤ = U U^⊤,

this rotation cancels, confirming that our algorithm recovers P = T, and hence the matchings σ_{ji} = τ_{ji}, with no error.
Of course, rather than the case when the solution is handed to us from the start, we are more interested in how the algorithm performs in situations when either the T_{ji} blocks are not permutation matrices, or they are not synchronized. To this end, we set

    T = T_0 + N,    (14)

where T_0 is the correct "ground truth" synchronization matrix, while N is a symmetric perturbation matrix with entries drawn independently from a zero-mean normal distribution with variance σ².

In general, to find the permutation best aligned with a given n × n matrix T, the Kuhn–Munkres algorithm solves for τ̂ = arg max_{τ ∈ S_n} ⟨P(τ), T⟩ = arg max_{τ ∈ S_n} (vec(P(τ)) · vec(T)). Therefore, writing T = P(τ_0) + ε, where P(τ_0) is the "ground truth", while ε is an error term, it is guaranteed to return the correct permutation as long as

    ‖vec(ε)‖ < min_{τ′ ∈ S_n∖{τ_0}} ‖vec(P(τ_0)) − vec(P(τ′))‖ / 2.

By the symmetry of S_n, the right hand side is the same for any τ_0, so w.l.o.g. we can set τ_0 = e (the identity), and find that the minimum is achieved when τ′ is just a transposition, e.g., the permutation that swaps 1 with 2 and leaves 3, 4, …, n in place. The corresponding permutation matrix differs from the identity in exactly 4 entries, therefore a sufficient condition for correct reconstruction is that ‖ε‖_Frob = ⟨ε, ε⟩^{1/2} = ‖vec(ε)‖ < (1/2)√4 = 1. As n grows, ‖ε‖_Frob becomes tightly concentrated around σn, so the condition for recovering the correct permutation is σ < 1/n.

¹Note that we could equally well have matched the σ_i's to any other column of blocks, since they are only defined relative to an arbitrary reference permutation: if, for any fixed σ_0, each σ_i is redefined as σ_i σ_0, the predicted relative permutations σ_{ji} = σ_j σ_0 (σ_i σ_0)^{-1} = σ_j σ_i^{-1} stay the same.

Figure 1: Singular value histogram of T under the noise model where each τ_{ji} with probability p ∈ {0.10, 0.25, 0.85} is replaced by a random permutation (m = 100, n = 30). Note that apart from the extra peak at zero, the distribution of the stochastic eigenvalues is very similar to the semicircular distribution for Gaussian noise. As long as the small cluster of deterministic eigenvalues is clearly separated from the noise, Permutation Synchronization is feasible.
Permutation Synchronization can achieve a lower error, especially in the large m regime, because the eigenvectors aggregate information from all the T_{ji} matrices, and tend to be very stable to perturbations. In general, perturbations of the form (14) exhibit a characteristic phase transition. As long as the largest eigenvalue of the random matrix N falls below a given multiple of the smallest non-zero eigenvalue of T_0, adding N will have very little effect on the eigenvectors of T. On the other hand, when the noise exceeds this limit, the spectra get fully mixed, and it becomes impossible to recover T_0 from T to any precision at all.

If N is a symmetric matrix with independent N(0, σ²) entries, as nm → ∞, its spectrum will tend to Wigner's famous semicircle distribution supported on the interval (−2σ(nm)^{1/2}, 2σ(nm)^{1/2}), and with probability one the largest eigenvalue will approach 2σ(nm)^{1/2} [20, 21]. In contrast, the non-zero eigenvalues of T_0 scale with m, which guarantees that for large enough m the two spectra will be nicely separated and Permutation Synchronization will have very low error. While much harder to analyze analytically, empirical evidence suggests that this type of phase transition behavior is characteristic of any reasonable noise model, for example the one in which we take each block of T and with some probability p replace it with a random permutation matrix (Figure 1).
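The spectral separation that Figure 1 illustrates can be reproduced in a few lines. The sketch below is our own illustration (parameters m = 60, n = 10, p = 0.25 chosen arbitrarily): it corrupts a fraction p of the block pairs with random permutations and checks that the n top eigenvalues remain separated from the noise bulk.

```python
import numpy as np

def perm_matrix(sigma):
    P = np.zeros((len(sigma), len(sigma)))
    P[sigma, np.arange(len(sigma))] = 1.0
    return P

def corrupted_T(m, n, p, rng):
    """Synchronization matrix T0 with each off-diagonal block pair replaced,
    with probability p, by an (inconsistent) random permutation matrix."""
    U0 = np.vstack([perm_matrix(rng.permutation(n)) for _ in range(m)])
    T = U0 @ U0.T
    for i in range(m):
        for j in range(i):
            if rng.random() < p:
                B = perm_matrix(rng.permutation(n))
                T[i*n:(i+1)*n, j*n:(j+1)*n] = B
                T[j*n:(j+1)*n, i*n:(i+1)*n] = B.T   # keep T symmetric
    return T

rng = np.random.default_rng(3)
m, n, p = 60, 10, 0.25
w = np.linalg.eigvalsh(corrupted_T(m, n, p, rng))   # ascending
gap = w[-n] - w[-(n + 1)]   # n "deterministic" eigenvalues vs. the noise bulk
```

At this corruption level the deterministic cluster sits near (1 − p)m while the bulk edge stays far below it, so the gap is large and positive.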
To derive more quantitative results, we consider the case where N is a so-called (symmetric) Gaussian Wigner matrix, which has independent N(0, σ²) entries on its diagonal, and N(0, σ²/2) entries everywhere else. It has recently been proved that for this type of matrix the phase transition occurs at λ_min^{det}/λ_max^{stochastic} = 1/2, so to recover T_0 to any accuracy at all we must have σ < (m/n)^{1/2} [22].

Below this limit, to quantify the actual expected error, we write each leading normalized eigenvector v_1, v_2, …, v_n of T as v_i = v_i^∥ + v_i^⊥, where v_i^∥ is the projection of v_i to the space U_0 spanned by the non-zero eigenvectors v_1^0, v_2^0, …, v_n^0 of T_0. By Theorem 2.2 of [22], as nm → ∞,

    ‖v_i^∥‖² → 1 − σ² n/m    and    ‖v_i^⊥‖² → σ² n/m    almost surely.    (15)
It is easy to see that ⟨v_i^∥, v_j^⊥⟩ → 0 a.s., which implies ⟨v_i^∥, v_j^∥⟩ = ⟨v_i, v_j⟩ − ⟨v_i^⊥, v_j^⊥⟩ → 0 a.s., so, setting ρ = (1 − σ² n/m)^{−1/2}, the normalized vectors ρv_1^∥, …, ρv_n^∥ almost surely tend to an orthonormal basis for U_0. Thus, U = √m [v_1, …, v_n] is related to the "true" U_0 = √m [v_1^0, …, v_n^0] by

    U → U_0 O + ρE′ = (U_0 + ρE)O    almost surely,

where O is some rotation and each column of the noise matrices E and E′ has norm σ(n/m)^{1/2}. Since multiplying U on the right by an orthogonal matrix does not affect P, and the Kuhn–Munkres algorithm is invariant to scaling by a constant, this equation tells us that (almost surely) the effect of (14) is equivalent to setting U = U_0 + ρE. In terms of the individual P_{ji} blocks of P = U U^⊤, neglecting second order terms,

    P_{ji} = (U_j^0 + ρE_j)(U_i^0 + ρE_i)^⊤ ≈ P(σ_{ji}) + ρ U_j^0 E_i^⊤ + ρ E_j U_i^{0⊤},

where σ_{ji} is the ground truth matching and U_i^0 and E_i denote the appropriate n × n submatrices of U^0 and E. Conjecturing that in the limit E_i and E_j follow rotationally invariant distributions, almost surely

    lim ‖U_j^0 E_i^⊤ + E_j U_i^{0⊤}‖_Frob = lim ‖E_i + E_j‖_Frob ≤ 2σn/m.

Thus, plugging in to our earlier result for the error tolerance of the Kuhn–Munkres algorithm, Permutation Synchronization will correctly recover σ_{ji} with probability one provided 2ρσn/m < 1, or, equivalently,

    σ² < (m/n) / (1 + 4(m/n)^{−1}).

This is much better than our σ < 1/n result for the naive algorithm, and remarkably only slightly stricter than the condition σ < (m/n)^{1/2} for recovering the eigenvectors with any accuracy at all.

Of course, these results are asymptotic (in the sense of nm → ∞), and strictly speaking only apply to additive Gaussian Wigner noise. However, as Figure 2 shows, in practice, even when the noise is in the form of corrupting entire permutations and nm is relatively small, qualitatively our algorithm exhibits the correct behavior, and for large enough m Permutation Synchronization does indeed recover all (σ_{ji})^m_{j,i=1} with no error even when the vast majority of the entries in T are incorrect.

Figure 2: The fraction of (σ_i)^m_{i=1} permutations that are incorrect when reconstructed by Permutation Synchronization from an array (τ_{ji})^m_{j,i=1}, in which each entry, with probability p, is replaced by a random permutation. The plots show the mean and standard deviation of errors over 20 runs as a function of p for m = 10 (red), m = 50 (blue) and m = 100 (green). (Left) n = 10. (Center) n = 25. (Right) n = 30.
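An end-to-end simulation of the Gaussian model (14) ties the pieces together. This sketch is our own illustration (brute-force rounding restricts it to small n): it builds a noisy T, runs the spectral relaxation and the rounding step, and checks that all pairwise matchings are recovered exactly for a noise level well inside the σ² < (m/n)/(1 + 4(m/n)^{−1}) regime.

```python
import itertools
import numpy as np

def perm_matrix(sigma):
    P = np.zeros((len(sigma), len(sigma)))
    P[sigma, np.arange(len(sigma))] = 1.0
    return P

def synchronize(T, n, m):
    """Spectral relaxation (n leading eigenvectors of T) + per-block rounding.
    Brute force over S_n stands in for Kuhn-Munkres, so keep n small."""
    _, V = np.linalg.eigh(T)
    U = np.sqrt(m) * V[:, -n:]
    P = U @ U.T
    taus = []
    for i in range(m):
        block = P[i*n:(i+1)*n, :n]
        best = max(itertools.permutations(range(n)),
                   key=lambda s: sum(block[s[j], j] for j in range(n)))
        taus.append(np.array(best))
    return taus   # sigma_ji = tau_j o tau_i^{-1} is consistent by construction

rng = np.random.default_rng(4)
m, n, noise = 40, 5, 0.3
perms = [rng.permutation(n) for _ in range(m)]
U0 = np.vstack([perm_matrix(s) for s in perms])
N = rng.normal(0.0, noise, (m * n, m * n))
T = U0 @ U0.T + (N + N.T) / 2       # symmetric Gaussian perturbation as in (14)
taus = synchronize(T, n, m)
# compare recovered pairwise matchings tau_j o tau_i^{-1} with the ground truth
errors = sum(not np.array_equal(taus[j][np.argsort(taus[i])],
                                perms[j][np.argsort(perms[i])])
             for i in range(m) for j in range(m))
```

With these parameters the per-block perturbation is roughly 2ρσn/m ≈ 0.08, far below the tolerance of 1, so every pairwise matching comes out exact.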
4  Experiments

Since computer vision is one of the areas where improving the accuracy of multi-matching problems is the most pressing, our experiments focused on this domain. For more details of our results, please see the extended version of the paper available on the project website.
Stereo Matching. As a proof of principle, we considered the task of aligning landmarks in 2D images of the same object taken from different viewpoints in the CMU house (m = 111 frames of a video sequence of a toy house with n = 30 hand labeled landmark points in each frame) and CMU hotel (m = 101 frames of a video sequence of a toy hotel, n = 30 hand labeled landmark points in each frame) datasets. The baseline method is to compute (τ_{ji})^m_{i,j=1} by solving \binom{m}{2} independent linear assignment problems based on matching landmarks by their shape context features [23]. Our method takes the same pairwise matches and synchronizes them with the eigenvector based procedure. Figure 3 shows that this clearly outperforms the baseline, which tends to degrade progressively as the number of images increases. This is due to the fact that the appearance (or descriptors) of keypoints differ considerably for large offset pairs (which is likely when the image set is large), leading to many false matches. In contrast, our method improves as the size of the image set increases. While simple, this experiment demonstrates the utility of Permutation Synchronization for multi-view stereo matching, showing that instead of heuristically propagating local pairwise matches, it can find a much more accurate globally consistent matching at little additional cost.
Figure 3: (a) Normalized error as m increases on the House dataset. Permutation Synchronization (blue) vs. the pairwise Kuhn-Munkres baseline (red). (b-c) Matches found for a representative image pair. (Green circles) landmarks, (green lines) ground truth, (red lines) found matches. (b) Pairwise linear assignment, (c) Permutation Synchronization. Note that less visible green is good.
Figure 4: Matches for a representative image pairs from the Building (top) and Books (bottom) datasets.
(Green circles) landmark points, (green lines) ground truth matchings, (red lines) found matches. (Left) Pairwise linear assignment, (right) Permutation Synchronization. Note that less visible green is better (right).
Repetitive Structures. Next, we considered a dataset with severe geometric ambiguities due to
repetitive structures. There is some consensus in the community that even sophisticated features
(like SIFT) yield unsatisfactory results in this scenario, and deriving a good initial matching for
structure from motion is problematic (see [24] and references therein). Our evaluations included 16
images from the Building dataset [24]. We identified 25 ?similar looking? landmark points in the
scene, and hand annotated them across all images. Many landmarks were occluded due to the camera
angle. Qualitative results for pairwise matching and Permutation Synchronization are shown in Fig 4
(top). We highlight two important observations. First, our method resolved geometrical ambiguities
by enforcing mutual consistency efficiently. Second, Permutation Synchronization robustly handles
occlusion: landmark points that are occluded in one image are seamlessly assigned to null nodes in
the other (see the set of unassigned points in the rightmost image in Fig 4 (top)) thanks to evidence
derived from the large number of additional images in the dataset. In contrast, pairwise matching
struggles with occlusion in the presence of similar looking landmarks (and feature descriptors). For
n = 25 and m = 16, the error from the baseline method (Pairwise Linear Assignment) was 0.74.
Permutation Synchronization decreased this by 10% to 0.64. The Books dataset (Fig 4, bottom)
contains m = 20 images of multiple books on an "L"-shaped study table [24], and suffers geometrical
ambiguities similar to the above with severe occlusion. Here we identified n = 34 landmark points,
many of which were occluded in most images. The error from the baseline method was 0.92, and
Permutation Synchronization decreased this by 22% to 0.70 (see extended version of the paper).
Keypoint matching with nominal user supervision. Our final experiment deals with matching
problems where keypoints in each image preserve a common structure. In the literature, this is
usually tackled as a graph matching problem, with the keypoints defining the vertices, and their
structural relationships being encoded by the edges of the graph. Ideally, one wants to solve the
problem for all images at once but most practical solutions operate on image (or graph) pairs. Note
that in terms of difficulty, this problem is quite distinct from those discussed above. In stereo,
the same object is imaged and what varies from one view to the other is the field of view, scale,
or pose. In contrast, in keypoint matching, the background is not controlled and even sophisticated descriptors may go wrong. Recent solutions often leverage supervision to make the problem tractable [25, 26]. Instead of learning parameters [25, 27], we utilize supervision directly to
provide the correct matches on a small subset of randomly picked image pairs (e.g., via a crowdsourced platform like Mechanical Turk). We hope to exploit this "ground truth" to significantly
boost accuracy via Permutation Synchronization. For our experiments, we used the baseline method
output to set up our objective matrix T but, with a fixed "supervision probability", we replaced the T_{ji} block by the correct permutation matrix, and ran Permutation Synchronization. We considered the "Bikes" sub-class from the Caltech 256 dataset, which contains multiple images of common objects with varying backdrops, and chose to match images in the "touring bike" class. Our analysis included 28 out of 110 images in this dataset that were taken "side-on". The SUSAN corner detector was used to
identify landmarks in each image. Further, we identified 6 interest points in each image that correspond to the frame of the
bicycle. We modeled the matching cost for an image pair as
the shape distance between interest points in the pair. As before, the baseline was pairwise linear assignment. For a fixed
degree of supervision, we randomly selected image pairs for
supervision and estimated matchings for the rest of the image
pairs. We performed 50 runs for each degree of supervision.
Mean error and standard deviation is shown in Fig 5 as supervision increases. Fig 6 demonstrates qualitative results by our method (right) and pairwise linear assignment (left).

Figure 5: Normalized error as the degree of supervision varies. Baseline method PLA (red) and Permutation Synchronization (blue).

5  Conclusions
Estimating the correct matching between two sets from noisy similarity data, such as the visual
feature based similarity matrices that arise in computer vision, is an error-prone process. However, when we have not just two, but m different sets, the consistency conditions between the \binom{m}{2} pairwise matchings severely constrain the solution. Our eigenvector decomposition based algorithm,
Permutation Synchronization, exploits this fact and pools information from all pairwise similarity
matrices to jointly estimate a globally consistent array of matchings in a single shot. Theoretical
results suggest that this approach is so robust that no matter how high the noise level is, for large
enough m the error is almost surely going to be zero. Experimental results confirm that in a range
of computer vision tasks from stereo to keypoint matching in dissimilar images, the method does
indeed significantly improve performance (especially when m is large, as expected in video), and
can get around problems such as occlusion that a pairwise strategy cannot handle. In future work we
plan to compare our method to [18] (which was published after the present paper was submitted), as
well as investigate using the graph connection Laplacian [28].
Acknowledgments
We thank Amit Singer for invaluable comments and for drawing our attention to [18]. This work
was supported in part by NSF-1320344 and by funding from the University of Wisconsin Graduate
School.
Figure 6: A representative triplet from the ?Touring bike? dataset. (Yellow circle) Interest points in each
image. (Green lines) Ground truth matching for image pairs (left-center) and (center-right). (Red lines) Matches
for the image pairs: (left) supervision=0.1, (right) supervision=0.5.
References
[1] R. E. Burkard, M. Dell'Amico, and S. Martello. Assignment problems. SIAM, 2009.
[2] D. Shen and C. Davatzikos. Hammer: hierarchical attribute matching mechanism for elastic registration. TMI, IEEE, 21, 2002.
[3] K. Duan, D. Parikh, D. Crandall, and K. Grauman. Discovering localized attributes for fine-grained recognition. In CVPR, 2012.
[4] M.F. Demirci, A. Shokoufandeh, Y. Keselman, L. Bretzner, and S. Dickinson. Object recognition as many-to-many feature matching. IJCV, 69, 2006.
[5] M. Goesele, N. Snavely, B. Curless, H. Hoppe, and S.M. Seitz. Multi-view stereo for community photo collections. In ICCV, 2007.
[6] A.C. Berg, T.L. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondences. In CVPR, 2005.
[7] J. Petterson, T. Caetano, J. McAuley, and J. Yu. Exponential family graph matching and ranking. NIPS, 2009.
[8] S. Agarwal, Y. Furukawa, N. Snavely, I. Simon, B. Curless, S.M. Seitz, and R. Szeliski. Building Rome in a day. Communications of the ACM, 54, 2011.
[9] I. Simon, N. Snavely, and S.M. Seitz. Scene summarization for online image collections. In ICCV, 2007.
[10] P.A. Pevzner. Multiple alignment, communication cost, and graph matching. SIAM JAM, 52, 1992.
[11] S. Lacoste-Julien, B. Taskar, D. Klein, and M.I. Jordan. Word alignment via quadratic assignment. In Proc. HLT-NAACL, 2006.
[12] A.J. Smola, S.V.N. Vishwanathan, and Q. Le. Bundle methods for machine learning. NIPS, 20, 2008.
[13] I. Tsochantaridis, T. Joachims, T. Hofmann, Y. Altun, and Y. Singer. Large margin methods for structured and interdependent output variables. JMLR, 6, 2006.
[14] M. Volkovs and R. Zemel. Efficient sampling for bipartite matching problems. In NIPS, 2012.
[15] A. Singer and Y. Shkolnisky. Three-dimensional structure determination from common lines in cryo-EM by eigenvectors and semidefinite programming. SIAM Journal on Imaging Sciences, 4(2):543-572, 2011.
[16] R. Hadani and A. Singer. Representation theoretic patterns in three dimensional cryo-electron microscopy I: the intrinsic reconstitution algorithm. Annals of Mathematics, 174(2):1219-1241, 2011.
[17] R. Hadani and A. Singer. Representation theoretic patterns in three-dimensional cryo-electron microscopy II: the class averaging problem. Foundations of Computational Mathematics, 11(5):589-616, 2011.
[18] Qi-Xing Huang and Leonidas Guibas. Consistent shape maps via semidefinite programming. Computer Graphics Forum, 32(5):177-186, 2013.
[19] H.W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2, 1955.
[20] E.P. Wigner. On the distribution of the roots of certain symmetric matrices. Ann. Math, 67, 1958.
[21] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1, 1981.
[22] F. Benaych-Georges and R.R. Nadakuditi. The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Advances in Mathematics, 227(1):494-521, 2011.
[23] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. PAMI, 24(4):509-522, 2002.
[24] R. Roberts, S. Sinha, R. Szeliski, and D. Steedly. Structure from motion for scenes with large duplicate structures. In CVPR, 2011.
[25] T.S. Caetano, J.J. McAuley, L. Cheng, Q.V. Le, and A.J. Smola. Learning graph matching. PAMI, 31(6):1048-1058, 2009.
[26] M. Leordeanu, M. Hebert, and R. Sukthankar. An integer projected fixed point method for graph matching and map inference. In NIPS, 2009.
[27] T. Jebara, J. Wang, and S.F. Chang. Graph construction and b-matching for semi-supervised learning. In ICML, 2009.
[28] A. S. Bandeira, A. Singer, and D. A. Spielman. A Cheeger inequality for the graph connection Laplacian. SIAM Journal on Matrix Analysis and Applications, 34(4):1611-1630, 2013.
Reflection methods for user-friendly
submodular optimization
Stefanie Jegelka
UC Berkeley
Berkeley, CA, USA
Francis Bach
INRIA - ENS
Paris, France
Suvrit Sra
MPI for Intelligent Systems
T?ubingen, Germany
Abstract
Recently, it has become evident that submodularity naturally captures widely
occurring concepts in machine learning, signal processing and computer vision.
Consequently, there is need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular
minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our
method is neither approximate, nor impractical, nor does it need any cumbersome
parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization
problem as a continuous best approximation problem that is solved through a
sequence of reflections, and its solution can be easily thresholded to obtain an
optimal discrete solution. This method solves both the continuous and discrete
formulations of the problem, and therefore has applications in learning, inference,
and reconstruction. In our experiments, we illustrate the benefits of our method on
two image segmentation tasks.
1
Introduction
Submodularity is a rich combinatorial concept that expresses widely occurring phenomena such as
diminishing marginal costs and preferences for grouping. A set function F : 2^V → R on a set V is submodular if for all subsets S, T ⊆ V, we have F(S ∪ T) + F(S ∩ T) ≤ F(S) + F(T).
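The defining inequality can be verified exhaustively for small ground sets. The sketch below is our own illustration (not from the paper): it checks the inequality for a coverage function, a standard example of a submodular function.

```python
import itertools

def is_submodular(F, V):
    """Brute-force check of F(S | T) + F(S & T) <= F(S) + F(T) for all S, T in 2^V."""
    subsets = [frozenset(c) for r in range(len(V) + 1)
               for c in itertools.combinations(V, r)]
    return all(F(S | T) + F(S & T) <= F(S) + F(T)
               for S in subsets for T in subsets)

# Coverage function: F(S) = size of the union of the items covered by S.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
F = lambda S: len(set().union(*(cover[i] for i in S)))
ok = is_submodular(F, {1, 2, 3})
```

Intuitively, the check passes because adding an element to a larger set can only cover fewer new items, which is exactly the diminishing-returns property.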
Submodular functions underlie the goals of numerous problems in machine learning, computer vision
and signal processing [1]. Several problems in these areas can be phrased as submodular optimization
tasks: notable examples include graph cut-based image segmentation [7], sensor placement [30], or
document summarization [31]. A longer list of examples may be found in [1].
The theoretical complexity of submodular optimization is well-understood: unconstrained minimization of submodular set functions is polynomial-time [19] while submodular maximization is
NP-hard. Algorithmically, however, the picture is different. Generic submodular maximization admits
efficient algorithms that can attain approximate optima with global guarantees; these algorithms are
typically based on local search techniques [16, 35]. In contrast, although polynomial-time solvable,
submodular function minimization (SFM) which seeks to solve
min_{S ⊆ V} F(S),    (1)
poses substantial algorithmic difficulties. This is partly due to the fact that one is commonly interested
in an exact solution (or an arbitrarily close approximation thereof), and "polynomial-time" is not
necessarily equivalent to "practically fast".
Submodular minimization algorithms may be obtained from two main perspectives: combinatorial
and continuous. Combinatorial algorithms for SFM typically use close connections to matroid and
maximum flow methods; the currently theoretically fastest combinatorial algorithm for SFM scales
as O(n⁶ + n⁵τ), where τ is the time to evaluate the function oracle [37] (for an overview of other
algorithms, see e.g., [33]). These combinatorial algorithms are typically nontrivial to implement.
Continuous methods offer an alternative by instead minimizing a convex extension. This idea exploits
the fundamental connection between a submodular function F and its Lovász extension f [32], which
is continuous and convex. The SFM problem (1) is then equivalent to
min_{x ∈ [0,1]ⁿ} f(x).    (2)
The Lovász extension f is nonsmooth, so we might have to resort to subgradient methods. While
a fundamental result of Edmonds [15] demonstrates that a subgradient of f can be computed in
O(n log n) time, subgradient methods can be sensitive to the choice of step size, and can be slow.
They theoretically converge at a rate of O(1/√t) (after t iterations). The "smoothing technique" of
[36] does not in general apply here because computing a smoothed gradient is equivalent to solving
the submodular minimization problem. We discuss this issue further in Section 2.
An alternative to minimizing the Lovász extension directly on [0, 1]ⁿ is to consider a slightly modified
convex problem. Specifically, the exact solution of the discrete problem min_{S⊆V} F(S) and of its
nonsmooth convex relaxation min_{x∈[0,1]ⁿ} f(x) may be found as a level set S₀ = {k | x*_k > 0} of
the unique point x* that minimizes the strongly convex function [1, 10]:

f(x) + ½‖x‖².    (3)
We will refer to the minimization of (3) as the proximal problem due to its close similarity to proximity
operators used in convex optimization [12]. When F is a cut function, (3) becomes a total variation
problem (see, e.g., [9] and references therein) that also occurs in other regularization problems [1].
Two noteworthy points about (3) are: (i) the addition of the strongly convex component ½‖x‖²; (ii) the
ensuing removal of the box constraints x ∈ [0, 1]ⁿ. These changes allow us to consider a convex
dual which is amenable to smooth optimization techniques.
Typical approaches to generic SFM include Frank-Wolfe methods [17] that have cheap iterations
and O(1/t) convergence, but can be quite slow in practice (Section 5); or the minimum-norm-point/Fujishige-Wolfe algorithm [20] that has expensive iterations but finite convergence. Other
recent methods are approximate [24]. In contrast to several iterative methods based on convex
relaxations, we seek to obtain exact discrete solutions.
To the best of our knowledge, all generic algorithms that use only submodularity are several orders
of magnitude slower than specialized algorithms when they exist (e.g., for graph cuts). However,
the submodular function is not always generic and given via a black box, but has known structure.
Following [28, 29, 38, 41], we make the assumption that F(S) = Σ_{i=1}^r F_i(S) is a sum of sufficiently
?simple? functions (see Sec. 3). This structure allows the use of (parallelizable) dual decomposition
techniques for the problem in Eq. (2), with [11, 38] or without [29] Nesterov's smoothing technique,
or with direct smoothing [41] techniques. But existing approaches typically have two drawbacks: (1)
they use smoothing or step-size parameters whose selection may be critical and quite tedious; and (2)
they still exhibit slow convergence (see Section 5).
These drawbacks arise from working with formulation (2). Our main insight is that, although it may
seem counterintuitive, the proximal problem (3) offers a much more user-friendly tool for solving (1)
than its natural convex counterpart (2), in both implementation and running time. We approach
problem (3) via its dual. This allows decomposition techniques which combine well with orthogonal
projection and reflection methods that (a) exhibit faster convergence, (b) are easily parallelizable, (c)
require no extra hyperparameters, and (d) are extremely easy to implement.
The main three algorithms that we consider are: (i) dual block-coordinate descent (equivalently,
primal-dual proximal-Dykstra), which was already shown to be extremely efficient for total variation
problems [2] that are special cases of Problem (3); (ii) Douglas-Rachford splitting using the careful
variant of [4], which for our formulation (Section 4.2) requires no hyper-parameters; and (iii)
accelerated projected gradient [5]. We will see that these alternative algorithms can offer speedups
beyond known efficiencies. Our observations have two implications: first, from the viewpoint of solving
Problem (3), they offer speedups for frequently occurring denoising and reconstruction problems that
employ total variation. Second, our experiments suggest that projection and reflection methods can
work very well for solving the combinatorial problem (1).
In summary, we make the following contributions: (1) In Section 3, we cast the problem of minimizing
decomposable submodular functions as an orthogonal projection problem and show how existing
optimization techniques may be brought to bear on this problem, to obtain fast, easy-to-code and
easily parallelizable algorithms. In addition, we show examples of classes of functions amenable
to our approach. In particular, for simple functions, i.e., those for which minimizing F(S) − a(S)
is easy for all vectors¹ a ∈ ℝⁿ, the problem in Eq. (3) may be solved in O(log(1/ε)) calls to such
minimization routines, to reach a precision ε (Sections 2 and 3). (2) In Section 5, we demonstrate the
empirical gains of using accelerated proximal methods, Douglas-Rachford and block coordinate
descent methods over existing approaches: fewer hyperparameters and faster convergence.
2 Review of relevant results from submodular analysis
The relevant concepts we review here are the Lovász extension, base polytopes of submodular
functions, and relationships between proximal and discrete problems. For more details, see [1, 19].
Lovász extension and convexity. The power set 2^V may be naturally identified with the vertices
of the hypercube, i.e., {0, 1}ⁿ. The Lovász extension f of any set function is defined by linear
interpolation, so that for any S ⊆ V, F(S) = f(1_S). It may be computed in closed form once the
components of x are sorted: if x_{σ(1)} ≥ ⋯ ≥ x_{σ(n)}, then f(x) =
Σ_{k=1}^n x_{σ(k)} [F({σ(1), . . . , σ(k)}) − F({σ(1), . . . , σ(k−1)})] [32]. For the graph cut function, f
is the total variation.
In this paper, we use two important results: (a) if the set function F is submodular, then
its Lovász extension f is convex, and (b) minimizing the set function F is equivalent to minimizing
f(x) with respect to x ∈ [0, 1]ⁿ. Given x ∈ [0, 1]ⁿ, all of its level sets may be considered and the
function may be evaluated (at most n times) to obtain a set S. Moreover, for a submodular function,
the Lovász extension happens to be the support function of the base polytope B(F) defined as

B(F) = {y ∈ ℝⁿ | ∀S ⊆ V, y(S) ≤ F(S) and y(V) = F(V)},

that is, f(x) = max_{y∈B(F)} yᵀx [15]. A maximizer of yᵀx (and hence the value of f(x)) may be
computed by the "greedy algorithm", which first sorts the components of x in decreasing order
x_{σ(1)} ≥ ⋯ ≥ x_{σ(n)}, and then computes y_{σ(k)} = F({σ(1), . . . , σ(k)}) − F({σ(1), . . . , σ(k−1)}).
In other words, a linear function can be maximized over B(F) in time O(n log n + nτ) (note that
the term nτ may be improved in many special cases). This is crucial for exploiting convex duality.
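The greedy evaluation described above takes only a few lines to implement. The sketch below is our own illustration (not code from the paper), with F given as a Python callable on sets satisfying F(∅) = 0:

```python
import numpy as np

def greedy(F, x):
    """Edmonds' greedy algorithm: returns f(x) and a maximizer y of yᵀx over B(F)."""
    order = np.argsort(-np.asarray(x, dtype=float))  # sort components in decreasing order
    y = np.zeros(len(x))
    S, prev = set(), 0.0
    for k in order:
        S.add(int(k))
        val = F(S)
        y[int(k)] = val - prev  # marginal gain F(S_k) - F(S_{k-1}): a vertex of B(F)
        prev = val
    return float(np.dot(y, x)), y

# Sanity check: on an indicator vector 1_S the extension recovers F(S).
F = lambda S: min(len(S), 1)  # concave of cardinality, hence submodular
f_val, y = greedy(F, np.array([1.0, 1.0, 0.0]))
print(f_val)  # 1.0 == F({0, 1})
```

Note that the returned y also satisfies y(V) = F(V), as required for membership in the base polytope.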
Dual of discrete problem. We may derive a dual problem to the discrete problem in Eq. (1) and
the convex nonsmooth problem in Eq. (2), as follows:
min_{S⊆V} F(S) = min_{x∈[0,1]ⁿ} f(x) = min_{x∈[0,1]ⁿ} max_{y∈B(F)} yᵀx = max_{y∈B(F)} min_{x∈[0,1]ⁿ} yᵀx = max_{y∈B(F)} (y)₋(V),    (4)
where (y)₋ = min{y, 0} is applied elementwise. This allows us to obtain dual certificates of optimality
from any y ∈ B(F) and x ∈ [0, 1]ⁿ.
Proximal problem. The optimization problem (3), i.e., min_{x∈ℝⁿ} f(x) + ½‖x‖², has intricate
relations to the SFM problem [10]. Given the unique optimal solution x* of (3), the maximal (resp.
minimal) optimizer of the SFM problem is the set S* of nonnegative (resp. positive) elements of x*.
More precisely, solving (3) is equivalent to minimizing F(S) + λ|S| for all λ ∈ ℝ. A solution
S_λ ⊆ V is obtained from a solution x* as S_λ = {i | x*_i ≥ λ}. Conversely, x* may be obtained
from all S_λ as x*_k = sup{λ ∈ ℝ | k ∈ S_λ} for all k ∈ V. Moreover, if x is an ε-optimal solution
of Eq. (3), then we may construct √(εn)-optimal solutions for all S_λ [1; Prop. 10.5]. In practice, the
duality gap of the discrete problem is usually much lower than that of the proximal version of the
same problem, as we will see in Section 5. Note that the problem in Eq. (3) provides much more
information than Eq. (2), as all λ-parameterized discrete problems are solved.
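To make the correspondence between the proximal minimizer and the discrete solutions concrete, here is a self-contained toy check (our own illustration, with a hand-picked two-element submodular function): we minimize f(x) + ½‖x‖² by grid search and confirm that thresholding the minimizer at zero recovers the brute-force SFM solution.

```python
# A small submodular function on V = {0, 1}: F({0})=2, F({1})=-1, F({0,1})=0, F(∅)=0.
F = lambda S: {(): 0, (0,): 2, (1,): -1, (0, 1): 0}[tuple(sorted(S))]

def lovasz(x):
    """Lovász extension f(x) via the greedy algorithm."""
    order = sorted(range(len(x)), key=lambda i: -x[i])
    S, prev, val = set(), 0.0, 0.0
    for k in order:
        S.add(k)
        val += (F(S) - prev) * x[k]
        prev = F(S)
    return val

# Grid search for the proximal minimizer of f(x) + 0.5 * ||x||^2.
grid = [i * 0.025 - 2.0 for i in range(161)]
x_star = min(((a, b) for a in grid for b in grid),
             key=lambda x: lovasz(x) + 0.5 * (x[0] ** 2 + x[1] ** 2))
S_star = {i for i in (0, 1) if x_star[i] > 0}   # threshold x* at 0

# Brute-force SFM for comparison.
S_min = min([(), (0,), (1,), (0, 1)], key=lambda S: F(set(S)))
print(x_star, S_star, set(S_min))  # x* is approximately (-1, 1); both sets are {1}
```

For this instance the dual view also checks out: the minimum-norm point of B(F) is y* = (1, −1), and x* = −y*, matching the primal-dual link stated below.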
The dual problem of Problem (3) reads as follows:
min_{x∈ℝⁿ} f(x) + ½‖x‖₂² = min_{x∈ℝⁿ} max_{y∈B(F)} yᵀx + ½‖x‖₂² = max_{y∈B(F)} min_{x∈ℝⁿ} yᵀx + ½‖x‖₂² = max_{y∈B(F)} −½‖y‖₂²,
where primal and dual variables are linked as x = −y. Observe that this dual problem is equivalent
to finding the orthogonal projection of 0 onto B(F ).
¹Every vector a ∈ ℝⁿ may be viewed as a modular (linear) set function: a(S) ≜ Σ_{i∈S} a(i).
Divide-and-conquer strategies for the proximal problems. Given a solution x* of the proximal
problem, we have seen how to get S_λ for any λ by simply thresholding x* at λ. Conversely, one can
recover x* exactly from at most n well-chosen values of λ. A known divide-and-conquer strategy
[19, 21] hinges upon the fact that for any λ, one can easily see which components of x* are greater
or smaller than λ by computing S_λ. The resulting algorithm makes O(n) calls to the submodular
function oracle. In [25], we extend an alternative approach by Tarjan et al. [42] for cuts to general
submodular functions and obtain a solution to (3) up to precision ε in O(min{n, log(1/ε)}) iterations.
This result is particularly useful if our function F is a sum of functions for each of which the SFM
problem is easy by itself. Beyond squared ℓ₂-norms, our algorithm equally applies to computing all
minimizers of f(x) + Σ_j h_j(x_j) for arbitrary smooth strictly convex functions h_j, j = 1, . . . , n.
3 Decomposition of submodular functions
Following [28, 29, 38, 41], we assume that our function F may be decomposed as the sum F(S) =
Σ_{j=1}^r F_j(S) of r "simple" functions. In this paper, by "simple" we mean functions G for which
G(S) − a(S) can be minimized efficiently for all vectors a ∈ ℝⁿ (more precisely, we require that
S ↦ G(S ∪ T) − a(S) can be minimized efficiently over all subsets of V \ T, for any T ⊆ V and a).
Efficiency may arise from the functional form of G, or from the fact that G has small support. For
such functions, Problems (1) and (3) become
min_{S⊆V} Σ_{j=1}^r F_j(S) = min_{x∈[0,1]ⁿ} Σ_{j=1}^r f_j(x),    and    min_{x∈ℝⁿ} Σ_{j=1}^r f_j(x) + ½‖x‖₂².    (5)

The key to the algorithms presented here is to be able to minimize ½‖x − z‖₂² + f_j(x), or equivalently,
to orthogonally project z onto B(F_j): min ½‖y − z‖₂² subject to y ∈ B(F_j).
We next sketch some examples of functions F and their decompositions into simple functions Fj . As
shown at the end of Section 2, projecting onto B(Fj ) is easy as soon as the corresponding submodular
minimization problems are easy. Here we outline some cases for which specialized fast algorithms
are known.
Graph cuts. A widely used class of submodular functions is that of graph cuts. Graphs may be decomposed into substructures such as trees, simple paths or single edges. Message passing algorithms
apply to trees, while the proximal problem for paths is very efficiently solved by [2]. For single edges,
it is solvable in closed form. Tree decompositions are common in graphical models, whereas path
decompositions are frequently used for TV problems [2].
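As a worked example of such a closed form (our own sketch, not code from the paper): for a single-edge cut function F(S) = w·1[|S ∩ {i, j}| = 1] with weight w ≥ 0, the base polytope restricted to the two relevant coordinates is the segment {(t, −t) : |t| ≤ w}, so the projection is a clipped midpoint.

```python
def project_edge_cut(z, w):
    """Project z = (z_i, z_j) onto B(F) for a single-edge cut F with weight w >= 0.
    B(F) = {(t, -t) : -w <= t <= w}: project onto the line y_i + y_j = 0, then clip."""
    t = max(-w, min(w, (z[0] - z[1]) / 2.0))
    return (t, -t)

print(project_edge_cut((3.0, 1.0), 10.0))  # (1.0, -1.0): interior of the segment
print(project_edge_cut((5.0, -5.0), 2.0))  # (2.0, -2.0): clipped at the weight
```

The constraints y_i ≤ w, y_j ≤ w and y_i + y_j = 0 indeed reduce to y_i ∈ [−w, w], which is what the clipping enforces.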
Concave functions. Another important class of submodular functions is that of concave functions of
cardinality, i.e., Fj (S) = h(|S|) for a concave function h. Problem (3) for such functions may be
solved in O(n log n) time (see [18] and our appendix in [25]). Functions of this class have been used
in [24, 27, 41]. Such functions also include covering functions [41].
Hierarchical functions. Here, the ground set corresponds to the leaves of a rooted, undirected tree.
Each node has a weight, and the cost of a set of nodes S ? V is the sum of the weights of all nodes
in the smallest subtree (including the root) that spans S. For this class of functions, too, the proximal
problem can be solved in O(n log n) time [22, 23, 26].
Small support. Any general, potentially slower algorithm such as the minimum-norm-point algorithm can be applied if the support of each Fj is only a small subset of the ground set.
3.1 Dual decomposition of the nonsmooth problem
We first review existing dual decomposition techniques for the nonsmooth problem (1). We always
assume that F = Σ_{j=1}^r F_j, and define H_r := Π_{j=1}^r ℝⁿ ≅ ℝ^{n×r}. We follow [29] to derive a dual
formulation (see appendix in [25]):
Lemma 1. The dual of Problem (1) may be written in terms of variables λ₁, . . . , λ_r ∈ ℝⁿ as

max Σ_{j=1}^r g_j(λ_j)    s.t.    λ = (λ₁, . . . , λ_r) ∈ H_r,    Σ_{j=1}^r λ_j = 0,    (6)

where g_j(λ_j) = min_{S⊆V} F_j(S) − λ_j(S) is a nonsmooth concave function.
The dual is the maximization of a nonsmooth concave function over a convex set onto which it is
easy to project: the projection of a vector y has j-th block equal to y_j − (1/r) Σ_{k=1}^r y_k. Moreover, in
our setup, the functions g_j and their subgradients may be computed efficiently through SFM.
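This projection is just blockwise mean subtraction; a minimal numpy sketch (ours, with the r blocks stored as rows of a matrix):

```python
import numpy as np

def project_zero_sum(L):
    """Project r stacked n-vectors (rows of L) onto {(λ1, ..., λr) : Σ_j λ_j = 0}."""
    return L - L.mean(axis=0, keepdims=True)

L = np.array([[1.0, 2.0], [3.0, -2.0], [2.0, 3.0]])
P = project_zero_sum(L)
print(P.sum(axis=0))  # the blocks now sum to the zero vector
```

Since the constraint set is a linear subspace, this projection is idempotent and costs only O(nr).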
We consider several existing alternatives for the minimization of f(x) over x ∈ [0, 1]ⁿ, most of which
use Lemma 1. Computing subgradients for any fj means calling the greedy algorithm, which runs in
time O(n log n). All of the following algorithms require the tuning of an appropriate step size.
Primal subgradient descent (primal-sgd): Agnostic to any decomposition properties, we may
apply a standard simple subgradient method to f. A subgradient of f may be obtained from the
subgradients of the components f_j. This algorithm converges at rate O(1/√t).
Dual subgradient descent (dual-sgd) [29]: Applying a subgradient method to the nonsmooth dual
in Lemma 1 leads to a convergence rate of O(1/√t). Computing a subgradient requires minimizing
the submodular functions F_j individually. In simulations, following [29], we consider a step-size
rule similar to Polyak's rule (dual-sgd-P) [6], as well as a decaying step size (dual-sgd-F), and use
discrete optimization for all F_j.
Primal smoothing (primal-smooth) [41]: The nonsmooth primal may be smoothed in several ways
by smoothing the f_j individually; one example is f̃_j^ε(x_j) = max_{y_j∈B(F_j)} y_jᵀx_j − (ε/2)‖y_j‖². This
leads to a function that is (1/ε)-smooth. Computing f̃_j^ε means solving the proximal problem for F_j.
The convergence rate is O(1/t), but, apart from the step size, which may be set relatively easily, the
smoothing constant ε needs to be defined.
Dual smoothing (dual-smooth): Instead of the primal, the dual (6) may be smoothed, e.g., by
entropy [8, 38] applied to each g_j as g̃_j^ε(λ_j) = min_{x∈[0,1]ⁿ} f_j(x) − λ_jᵀx + εh(x), where h(x) is a negative
entropy. Again, the convergence rate is O(1/t), but there are two free parameters (in particular the
smoothing constant ε, which is hard to tune). This method too requires solving proximal problems for
all F_j in each iteration.
Dual smoothing with entropy also admits coordinate descent methods [34] that exploit the decomposition, but we do not compare to those here.
3.2 Dual decomposition methods for proximal problems
We may also consider Eq. (3) and first derive a dual problem using the same technique as in
Section 3.1. Lemma 2 (proved in the appendix in [25]) formally presents our dual formulation as a
best approximation problem. The primal variable can be recovered as x = −Σ_j y_j.
Lemma 2. The dual of Eq. (3) may be written as the best approximation problem

min_{λ,y} ‖y − λ‖₂²    s.t.    λ ∈ {(λ₁, . . . , λ_r) ∈ H_r | Σ_{j=1}^r λ_j = 0},    y ∈ Π_{j=1}^r B(F_j).    (7)
We can actually eliminate the λ_j and obtain the simpler-looking dual problem

max_y −½ ‖Σ_{j=1}^r y_j‖₂²    s.t.    y_j ∈ B(F_j),  j ∈ {1, . . . , r}.    (8)
Such a dual was also used in [40]. In Section 5, we will see the effect of solving one of these duals or
the other. For the simpler dual (8), the case r = 2 is of special interest; it reads

max_{y₁∈B(F₁), y₂∈B(F₂)} −½ ‖y₁ + y₂‖₂²    ⇔    min_{y₁∈B(F₁), −y₂∈−B(F₂)} ‖y₁ − (−y₂)‖₂.    (9)
We write problem (9) in this suggestive form to highlight its key geometric structure: it is, like (7),
a best approximation problem, i.e., the problem of finding the closest points in the polytopes
B(F₁) and −B(F₂). Notice, however, that (7) is very different from (9): the former operates in a
product space while the latter does not, a difference that can have an impact in practice (see Section 5).
We are now ready to present algorithms that exploit our dual formulations.
4 Algorithms
We now describe a few competing methods for solving our smooth dual formulations, giving the
details for the special 2-block case (9); the same arguments apply to the block dual from Lemma 2.
4.1 Block coordinate descent or proximal-Dykstra
Perhaps the simplest approach to solving (9) (viewed as a minimization problem) is to use a block
coordinate descent (BCD) procedure, which in this case performs the alternating projections:
y₁^{k+1} ← argmin_{y₁∈B(F₁)} ‖y₁ − (−y₂^k)‖₂²;    y₂^{k+1} ← argmin_{y₂∈B(F₂)} ‖y₂ − (−y₁^{k+1})‖₂².    (10)
5
The iterations for solving (8) are analogous. This BCD method (applied to (9)) is equivalent to
applying the so-called proximal-Dykstra method [12] to the primal problem. This may be seen by
comparing the iterates. Notice that the BCD iteration (10) is nothing but alternating projections onto
the convex polyhedra B(F₁) and B(F₂). There exists a large body of literature studying the method of
alternating projections; we refer the interested reader to the monograph [13] for further details.
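In generic form, the alternating projection scheme behind (10) amounts to the following sketch (our own illustration, demonstrated on two lines in the plane rather than on base polytopes):

```python
import numpy as np

def alternating_projections(proj_A, proj_B, z0, iters=100):
    """BCD as alternating projections onto two convex sets A and B."""
    a = proj_A(np.asarray(z0, dtype=float))
    b = proj_B(a)
    for _ in range(iters):
        a = proj_A(b)
        b = proj_B(a)
    return a, b

# Two lines through the origin: the x-axis and the diagonal {y = x}.
proj_axis = lambda v: np.array([v[0], 0.0])
proj_diag = lambda v: np.full(2, (v[0] + v[1]) / 2.0)

a, b = alternating_projections(proj_axis, proj_diag, [1.0, 0.0])
print(a, b)  # both iterates converge to the intersection point (0, 0)
```

On this example the iterates contract by a fixed factor per cycle; the slowdown mentioned next occurs when the two sets meet at a very shallow angle, which makes that factor arbitrarily close to one.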
However, despite its attractive simplicity, it is known that BCD (in its alternating projections form)
can converge arbitrarily slowly [4], depending on the relative orientation of the convex sets onto which
one projects. Thus, we turn to a potentially more effective method.
4.2 Douglas-Rachford splitting
The Douglas-Rachford (DR) splitting method [14] includes algorithms like ADMM as a special
case [12]. It avoids the slowdowns alluded to above by replacing alternating projections with
alternating "reflections". Formally, DR applies to convex problems of the form [3, 12]

min_x φ₁(x) + φ₂(x),    (11)

subject to the qualification ri(dom φ₁) ∩ ri(dom φ₂) ≠ ∅. To solve (11), DR starts with some z₀,
and performs the three-step iteration (for k ≥ 0):

1. x_k = prox_{φ₂}(z_k);    2. v_k = prox_{φ₁}(2x_k − z_k);    3. z_{k+1} = z_k + γ_k(v_k − z_k),    (12)

where γ_k ∈ [0, 2] is a sequence of scalars that satisfies Σ_k γ_k(2 − γ_k) = ∞. The sequence {x_k}
produced by iteration (12) can be shown to converge to a solution of (11) [3; Thm. 25.6].
Introducing the reflection operator

R_φ := 2 prox_φ − I,

and setting γ_k = 1, the DR iteration (12) may be written in a more symmetric form as

x_k = prox_{φ₂}(z_k),    z_{k+1} = ½[R_{φ₁} R_{φ₂} + I] z_k,    k ≥ 0.    (13)
Applying DR to the duals (7) or (9) requires first putting them in the form (11), either by introducing
extra variables or by going back to the primal, which is unnecessary. This is where the special
structure of our dual problem proves crucial, a recognition that is subtle yet remarkably important.
Instead of applying DR to (9), consider the closely related problem

min_y δ₁(y) + δ₂(y),    (14)

where δ₁, δ₂ are indicator functions for B(F₁) and −B(F₂), respectively. Applying DR directly
to (14) does not work because usually ri(dom δ₁) ∩ ri(dom δ₂) = ∅. Indeed, applying DR to (14)
generates iterates that diverge to infinity [4; Thm. 3.13(ii)]. Fortunately, even though the DR iterates
for (14) may diverge, Bauschke et al. [4] show how to extract convergent sequences from these
iterates, which actually solve the corresponding best approximation problem; for us this is nothing
but the dual (9) that we wanted to solve in the first place. Theorem 3, which is a simplified version
of [4; Thm. 3.13], formalizes the above discussion.
Theorem 3. [4] Let A and B be nonempty polyhedral convex sets. Let Π_A (Π_B) denote orthogonal
projection onto A (B), and let R_A := 2Π_A − I (similarly R_B) be the corresponding reflection
operator. Let {z_k} be the sequence generated by the DR method (13) applied to (14). If A ∩ B ≠ ∅,
then {z_k}_{k≥0} converges weakly to a fixed point of the operator T := ½[R_A R_B + I]; otherwise
‖z_k‖₂ → ∞. The sequences {x_k} and {Π_A Π_B z_k} are bounded; the weak cluster points of either of
the two sequences

{(Π_A R_B z_k, x_k)}_{k≥0}    and    {(Π_A x_k, x_k)}_{k≥0},    (15)

are solutions of the best approximation problem min_{a,b} ‖a − b‖ such that a ∈ A and b ∈ B.
The key consequence of Theorem 3 is that we can apply DR with impunity to (14), and extract from
its iterates the optimal solution to problem (9) (from which recovering the primal is trivial). The most
important feature of solving the dual (9) in this way is that absolutely no step-size tuning is required,
making the method very practical and user friendly.
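As a concrete sketch of iteration (13) (our own toy demonstration on two lines in the plane, where each prox reduces to a projection):

```python
import numpy as np

def douglas_rachford(proj_A, proj_B, z0, iters=200):
    """DR iteration (13): x_k = proj_B(z_k); z_{k+1} = 0.5 * (R_A(R_B(z_k)) + z_k),
    with reflections R(u) = 2 * proj(u) - u. Returns the last x_k."""
    reflect = lambda proj, u: 2.0 * proj(u) - u
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        x = proj_B(z)
        z = 0.5 * (reflect(proj_A, reflect(proj_B, z)) + z)
    return x

# Two lines through the origin intersect only at 0, so x_k tends to (0, 0).
proj_axis = lambda v: np.array([v[0], 0.0])
proj_diag = lambda v: np.full(2, (v[0] + v[1]) / 2.0)

x = douglas_rachford(proj_axis, proj_diag, [1.0, 0.3])
print(np.linalg.norm(x) < 1e-8)  # True: the iterates reach the intersection
```

Note there are no step sizes or smoothing constants anywhere in this loop; the only inputs are the two projection oracles, which is exactly the user-friendliness claimed above.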
[Figure 1 shows four segmentations: pBCD at iterations 1 and 7 and DR at iterations 1 and 4, with smooth gaps ε_s = 3.4×10⁶, 4.4×10⁵, 4.17×10⁵ and 8.05×10⁴, respectively, and discrete gaps ε_d ranging from the order of 10³ down to 5.9×10⁻¹.]
Figure 1: Segmentation results for the slowest and fastest projection method, with smooth (ε_s) and discrete
(ε_d) duality gaps. Note how the background noise disappears only for small duality gaps.
5 Experiments
We empirically compare the proposed projection methods2 to the (smoothed) subgradient methods
discussed in Section 3.1. For solving the proximal problem, we apply block coordinate descent (BCD)
and Douglas-Rachford (DR) to Problem (8) if applicable, and also to (7) (BCD-para, DR-para). In
addition, we use acceleration to solve (8) or (9) [5]. The main iteration cost of all methods except
for the primal subgradient method is the orthogonal projection onto polytopes B(Fj ). The primal
subgradient method uses the greedy algorithm in each iteration, which runs in O(n log n). However,
as we will see, its convergence is so slow as to counteract any benefit that may arise from not using
projections. We do not include Frank-Wolfe methods here, since FW is equivalent to a subgradient
descent on the primal and converges correspondingly slowly.
As benchmark problems, we use (i) graph cut problems for segmentation, or MAP inference in a
4-neighborhood grid-structured MRF, and (ii) concave functions similar to [41], but together with
graph cut functions. The functions in (i) decompose as sums over vertical and horizontal paths. All
horizontal paths are independent and can be solved together in parallel, and similarly all vertical
paths. The functions in (ii) are constructed by extracting regions Rj via superpixels and, for each
R_j, defining the function F_j(S) = |S| · |R_j \ S|. We use 200 and 500 regions. The problems
have size 640 × 427. Hence, for (i) we have r = 640 + 427 (but solve it as r = 2) and for (ii)
r = 640 + 427 + 500 (solved as r = 3). More details and experimental results may be found in [25].
Two functions (r = 2). Figure 2 shows the duality gaps for the discrete and smooth (where
applicable) problems for two instances of segmentation problems. The algorithms working with
the proximal problems are much faster than the ones directly solving the nonsmooth problem. In
particular, DR converges extremely fast, faster even than BCD, which is known to be a state-of-the-art
algorithm for this problem [2]. This, in itself, is a new insight for solving TV. If we aim for parallel
methods, then again DR outperforms BCD. Figure 3 (right) shows the speedup gained from parallel
processing. Using 8 cores, we obtain a 5-fold speed-up. We also see that the discrete gap shrinks
faster than the smooth gap, i.e., the optimal discrete solution does not require solving the smooth
problem to extremely high accuracy. Figure 1 illustrates example results for different gaps.
More functions (r > 2). Figure 3 shows example results for four problems of sums of concave and
cut functions. Here, we can only run DR-para. Overall, BCD, DR-para and the accelerated gradient
method perform very well.
In summary, our experiments suggest that projection methods can be extremely useful for solving
the combinatorial submodular minimization problem. Of the tested methods, DR, cyclic BCD and
accelerated gradient perform very well. For parallelism, applying DR on (9) converges much faster
than BCD on the same problem. Moreover, in terms of running times, running the DR method with a
mixed Matlab/C implementation until convergence on a single core is only 3-8 times slower than the
optimized efficient C code of [7], and only 2-4 times on 2 cores. These numbers should be read while
considering that, unlike [7], the projection methods naturally lead to parallel implementations, and
are able to integrate a large variety of functions.
6 Conclusion
We have presented a novel approach to submodular function minimization based on the equivalence
with a best approximation problem. The use of reflection methods avoids any hyperparameters
and reduces the number of iterations significantly, suggesting the suitability of reflection methods
²Code and data corresponding to this paper are available at https://sites.google.com/site/mloptstat/drsubmod
[Figure 2 plots log₁₀(duality gap) against iteration count. The nonsmooth-problem panels compare dual-sgd-P, dual-sgd-F, dual-smooth, primal-smooth and primal-sgd; the smooth-problem panels compare grad-accel, BCD, DR, BCD-para and DR-para.]
Figure 2: Comparison of convergence behaviors. Left: discrete duality gaps for various optimization
schemes for the nonsmooth problem, from 1 to 1000 iterations. Middle: discrete duality gaps for
various optimization schemes for the smooth problem, from 1 to 100 iterations. Right: corresponding
continuous duality gaps. From top to bottom: two different images.
[Figure 3 plots log₁₀(duality gap) against iteration count for dual-sgd-P, DR-para, BCD, BCD-para and grad-accel, and the speedup factor against the number of cores for 40 iterations of DR.]
Figure 3: Left two plots: convergence behavior for graph cut plus concave functions. Right: Speedup
due to parallel processing.
for combinatorial problems. Given the natural parallelization abilities of our approach, it would
be interesting to perform detailed empirical comparisons with existing parallel implementations of
graph cuts (e.g., [39]). Moreover, a generalization beyond submodular functions of the relationships
between combinatorial optimization problems and convex problems would enable the application of
our framework to other common situations such as multiple labels (see, e.g., [29]).
Acknowledgments. This research was in part funded by the Office of Naval Research under contract/grant
number N00014-11-1-0688, by NSF CISE Expeditions award CCF-1139158, by DARPA XData Award FA8750-12-2-0331, and the European Research Council (SIERRA project), as well as gifts from Amazon Web Services,
Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, General Electric, Hortonworks,
Intel, Microsoft, NetApp, Oracle, Samsung, Splunk, VMware and Yahoo!.
References
[1] F. Bach. Learning with submodular functions: A convex optimization perspective. arXiv preprint arXiv:1111.6453v2, 2013.
[2] A. Barbero and S. Sra. Fast Newton-type methods for total variation regularization. In ICML, 2011.
[3] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.
[4] H. H. Bauschke, P. L. Combettes, and D. R. Luke. Finding best approximation pairs relative to two closed convex sets in Hilbert spaces. J. Approx. Theory, 127(2):178–192, 2004.
[5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[6] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[7] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE TPAMI, 23(11):1222–1239, 2001.
[8] B. Savchynskyy, S. Schmidt, J. H. Kappes, and C. Schnörr. Efficient MRF energy minimization via adaptive diminishing smoothing. In UAI, 2012.
[9] A. Chambolle. An algorithm for total variation minimization and applications. J. Math. Imaging and Vision, 20(1):89–97, 2004.
[10] A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. Int. Journal of Comp. Vision, 84(3):288–307, 2009.
[11] F. Chudak and K. Nagano. Efficient solutions to relaxations of combinatorial problems with submodular penalties via the Lovász extension and non-smooth convex optimization. In SODA, 2007.
[12] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185–212. Springer, 2011.
[13] F. R. Deutsch. Best Approximation in Inner Product Spaces. Springer Verlag, first edition, 2001.
[14] J. Douglas and H. H. Rachford. On the numerical solution of the heat conduction problem in 2 and 3 space variables. Trans. Amer. Math. Soc., 82:421–439, 1956.
[15] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In Combinatorial Optimization – Eureka, You Shrink!, pages 11–26. Springer, 2003.
[16] U. Feige, V. S. Mirrokni, and J. Vondrák. Maximizing non-monotone submodular functions. SIAM J. Comp., 40(4):1133–1153, 2011.
[17] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:95–110, 1956.
[18] S. Fujishige. Lexicographically optimal base of a polymatroid with respect to a weight vector. Mathematics of Operations Research, pages 186–196, 1980.
[19] S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[20] S. Fujishige and S. Isotani. A submodular function minimization algorithm based on the minimum-norm base. Pacific Journal of Optimization, 7:3–17, 2011.
[21] H. Groenevelt. Two algorithms for maximizing a separable concave function over a polymatroid feasible region. European Journal of Operational Research, 54(2):227–236, 1991.
[22] D. S. Hochbaum and S.-P. Hong. About strongly polynomial time algorithms for quadratic optimization over submodular constraints. Math. Prog., pages 269–309, 1995.
[23] S. Iwata and N. Zuiki. A network flow approach to cost allocation for rooted trees. Networks, 44:297–301, 2004.
[24] S. Jegelka, H. Lin, and J. Bilmes. On fast approximate submodular minimization. In NIPS, 2011.
[25] S. Jegelka, F. Bach, and S. Sra. Reflection methods for user-friendly submodular optimization (extended version). arXiv, 2013.
[26] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. Journal of Machine Learning Research, pages 2297–2334, 2011.
[27] P. Kohli, L. Ladický, and P. Torr. Robust higher order potentials for enforcing label consistency. Int. Journal of Comp. Vision, 82, 2009.
[28] V. Kolmogorov. Minimizing a sum of submodular functions. Disc. Appl. Math., 160(15), 2012.
[29] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. IEEE TPAMI, 33(3):531–552, 2011.
[30] A. Krause and C. Guestrin. Submodularity and its applications in optimized information gathering. ACM Transactions on Intelligent Systems and Technology, 2(4), 2011.
[31] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In NAACL/HLT, 2011.
[32] L. Lovász. Submodular functions and convexity. Mathematical Programming: The State of the Art, Bonn, pages 235–257, 1982.
[33] S. T. McCormick. Submodular function minimization. Discrete Optimization, 12:321–391, 2005.
[34] O. Meshi, T. Jaakkola, and A. Globerson. Convergence rate analysis of MAP coordinate minimization algorithms. In NIPS, 2012.
[35] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions – I. Math. Prog., 14(1):265–294, 1978.
[36] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Prog., 103(1):127–152, 2005.
[37] J. B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Math. Prog., 118(2):237–251, 2009.
[38] B. Savchynskyy, S. Schmidt, J. Kappes, and C. Schnörr. A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling. In CVPR, 2011.
[39] A. Shekhovtsov and V. Hlaváč. A distributed mincut/maxflow algorithm combining path augmentation and
push-relabel. In Energy Minimization Methods in Computer Vision and Pattern Recognition, 2011.
[40] P. Stobbe. Convex Analysis for Minimizing and Learning Submodular Set functions. PhD thesis, California
Institute of Technology, 2013.
[41] P. Stobbe and A. Krause. Efficient minimization of decomposable submodular functions. In NIPS, 2010.
[42] R. Tarjan, J. Ward, B. Zhang, Y. Zhou, and J. Mao. Balancing applied to maximum network flow problems.
In European Symp. on Algorithms (ESA), pages 612?623, 2006.
Curvature and Optimal Algorithms for Learning and
Minimizing Submodular Functions
Rishabh Iyer†, Stefanie Jegelka‡, Jeff Bilmes†
†University of Washington, Dept. of EE, Seattle, U.S.A.
‡University of California, Dept. of EECS, Berkeley, U.S.A.
[email protected], [email protected], [email protected]
Abstract
We investigate three related and important problems connected to machine learning:
approximating a submodular function everywhere, learning a submodular function
(in a PAC-like setting [28]), and constrained minimization of submodular functions.
We show that the complexity of all three problems depends on the "curvature" of the
submodular function, and provide lower and upper bounds that refine and improve
previous results [2, 6, 8, 27]. Our proof techniques are fairly generic. We either
use a black-box transformation of the function (for approximation and learning),
or a transformation of algorithms to use an appropriate surrogate function (for
minimization). Curiously, curvature has been known to influence approximations
for submodular maximization [3, 29], but its effect on minimization, approximation
and learning has hitherto been open. We complete this picture, and also support
our theoretical claims by empirical results.
1 Introduction
Submodularity is a pervasive and important property in the areas of combinatorial optimization,
economics, operations research, and game theory. In recent years, submodularity's use in machine
learning has begun to proliferate as well. A set function $f : 2^V \to \mathbb{R}$ over a finite set $V = \{1, 2, \ldots, n\}$ is submodular if for all subsets $S, T \subseteq V$, it holds that $f(S) + f(T) \geq f(S \cup T) + f(S \cap T)$. Given a set $S \subseteq V$, we define the gain of an element $j \notin S$ in the context $S$ as $f(j \mid S) \triangleq f(S \cup j) - f(S)$. A function $f$ is submodular if it satisfies diminishing marginal returns, namely $f(j \mid S) \geq f(j \mid T)$ for all $S \subseteq T$, $j \notin T$, and is monotone if $f(j \mid S) \geq 0$ for all $j \notin S$, $S \subseteq V$.
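These definitions are easy to check numerically against a value oracle. The sketch below (the coverage function and element names are purely illustrative, not from the paper) computes marginal gains and brute-force verifies diminishing returns on a tiny ground set:

```python
import itertools

def f_cover(S):
    # Illustrative monotone submodular function: size of the covered universe.
    universe = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
    covered = set()
    for j in S:
        covered |= universe[j]
    return len(covered)

def gain(f, j, S):
    # Marginal gain f(j | S) = f(S U {j}) - f(S).
    return f(set(S) | {j}) - f(set(S))

def is_submodular(f, V):
    # Brute-force diminishing returns: f(j|S) >= f(j|T) for all S subset T, j not in T.
    subsets = [set(c) for r in range(len(V) + 1)
               for c in itertools.combinations(sorted(V), r)]
    return all(gain(f, j, S) >= gain(f, j, T)
               for S in subsets for T in subsets if S <= T
               for j in V - T)
```

For example, `gain(f_cover, 2, set())` is 2 but shrinks to 1 once element 1 is already in the context, exactly the diminishing-returns behavior.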
While submodularity, like convexity, occurs naturally in a wide variety of problems, recent studies
have shown that in the general case, many submodular problems of interest are very hard: the
problems of learning a submodular function or of submodular minimization under constraints do
not even admit constant or logarithmic approximation factors in polynomial time [2, 7, 8, 10, 27].
These rather pessimistic results however stand in sharp contrast to empirical observations, which
suggest that these lower bounds are specific to rather contrived classes of functions, whereas much
better results can be achieved in many practically relevant cases. Given the increasing importance
of submodular functions in machine learning, these observations beg the question of qualifying and
quantifying properties that make sub-classes of submodular functions more amenable to learning and
optimization. Indeed, limited prior work has shown improved results for constrained minimization
and learning of sub-classes of submodular functions, including symmetric functions [2, 25], concave
functions [7, 18, 24], label cost or covering functions [9, 31].
In this paper, we take additional steps towards addressing the above problems and show how the
generic notion of the curvature (the deviation from modularity) of a submodular function determines
both upper and lower bounds on approximation factors for many learning and constrained optimization
problems. In particular, our quantification tightens the generic, function-independent bounds in [8, 2,
27, 7, 10] for many practically relevant functions. Previously, the concept of curvature has been used to
tighten bounds for submodular maximization problems [3, 29]. Hence, our results complete a unifying
picture of the effect of curvature on submodular problems. Moreover, curvature is still a fairly generic
concept, as it only depends on the marginal gains of the submodular function. It allows a smooth
transition between the "easy" functions and the "really hard" subclasses of submodular functions.
2 Problem statements, definitions and background
Before stating our main results, we provide some necessary definitions and introduce a new concept,
the curve normalized version of a submodular function. Throughout this paper, we assume that
the submodular function f is defined on a ground set V of n elements, that it is nonnegative and
$f(\emptyset) = 0$. We also use normalized modular (or additive) functions $w : 2^V \to \mathbb{R}$, which are those that can be written as a sum of weights, $w(S) = \sum_{i \in S} w(i)$. We are concerned with the following three
problems:
Problem 1. (Approximation [8]) Given a submodular function $f$ in form of a value oracle, find an approximation $\hat{f}$ (within polynomial time and representable within polynomial space), such that for all $X \subseteq V$, it holds that $\hat{f}(X) \leq f(X) \leq \alpha_1(n)\hat{f}(X)$ for a polynomial $\alpha_1(n)$.
Problem 2. (PMAC-Learning [2]) Given i.i.d. training samples $\{(X_i, f(X_i))\}_{i=1}^m$ from a distribution $\mathcal{D}$, learn an approximation $\hat{f}(X)$ that is, with probability $1 - \delta$, within a multiplicative factor of $\alpha_2(n)$ from $f$.
Problem 3. (Constrained optimization [27, 7, 10, 16]) Minimize a submodular function $f$ over a family $\mathcal{C}$ of feasible sets, i.e., $\min_{X \in \mathcal{C}} f(X)$.
In its general form, the approximation problem was first studied by Goemans et al. [8], who approximate any monotone submodular function to within a factor of $O(\sqrt{n}\log n)$, with a lower bound of $\alpha_1(n) = \Omega(\sqrt{n}/\log n)$. Building on this result, Balcan and Harvey [2] show how to PMAC-learn a monotone submodular function within a factor of $\alpha_2(n) = O(\sqrt{n})$, and prove a lower bound of $\Omega(n^{1/3})$ for the learning problem. Subsequent work extends these results to sub-additive and fractionally sub-additive functions [1]. Better learning results are possible for the subclass of submodular
shells [23] and Fourier sparse set functions [26]. Both Problems 1 and 2 have numerous applications
in algorithmic game theory and economics [2, 8] as well as machine learning [2, 22, 23, 26, 15].
Constrained submodular minimization arises in applications such as power assignment or transportation problems [19, 30, 13]. In machine learning, it occurs, for instance, in the form of MAP inference
in high-order graphical models [17] or in size-constrained corpus extraction [21]. Recent results
show that almost all constraints make it hard to solve the minimization even within a constant factor
[27, 6, 16]. Here, we will focus on the constraint of imposing a lower bound on the cardinality, and
on combinatorial constraints where C is the set of all s-t paths, s-t cuts, spanning trees, or perfect
matchings in a graph.
A central concept in this work is the total curvature $\kappa_f$ of a submodular function $f$ and the curvature $\kappa_f(S)$ with respect to a set $S \subseteq V$, defined as [3, 29]

$$\kappa_f = 1 - \min_{j \in V} \frac{f(j \mid V \setminus j)}{f(j)}, \qquad \kappa_f(S) = 1 - \min_{j \in S} \frac{f(j \mid S \setminus j)}{f(j)}. \quad (1)$$
Without loss of generality, assume that $f(j) > 0$ for all $j \in V$. It is easy to see that $\kappa_f(S) \leq \kappa_f(V) = \kappa_f$, and hence $\kappa_f(S)$ is a tighter notion of curvature. A modular function has curvature $\kappa_f = 0$, and a matroid rank function has maximal curvature $\kappa_f = 1$. Intuitively, $\kappa_f$ measures how far away $f$ is from being modular. Conceptually, curvature is distinct from the recently proposed submodularity ratio [5] that measures how far a function is from being submodular. Curvature has served to tighten bounds for submodular maximization problems, e.g., from $(1 - 1/e)$ to $\frac{1}{\kappa_f}(1 - e^{-\kappa_f})$ for monotone submodular maximization subject to a cardinality constraint [3] or matroid constraints [29], and these results are tight. For submodular minimization, learning, and approximation, however, the role of curvature has not yet been addressed (an exception is given by the upper bounds in [13] for
minimization). In the following sections, we complete the picture of how curvature affects the
complexity of submodular maximization and minimization, approximation, and learning.
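As a concrete illustration (not from the paper), the total curvature in (1) can be computed from a value oracle with $2n + 1$ evaluations:

```python
def total_curvature(f, V):
    # kappa_f = 1 - min_j f(j | V \ j) / f(j), assuming f(j) > 0 for all j.
    V = set(V)
    fV = f(V)
    return 1 - min((fV - f(V - {j})) / f({j}) for j in V)
```

As the text states, a modular function (e.g. `len` on sets) has curvature 0, a matroid rank function such as `lambda S: min(len(S), 1)` has curvature 1, and concave-over-modular functions such as `lambda S: len(S) ** 0.5` fall strictly in between.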
The above-cited lower bounds for Problems 1-3 were established with functions of maximal curvature
($\kappa_f = 1$) which, as we will see, is the worst case. By contrast, many practically interesting functions
have smaller curvature, and our analysis will provide an explanation for the good empirical results
observed with such functions [13, 22, 14]. An example for functions with $\kappa_f < 1$ is the class of concave over modular functions that have been used in speech processing [22] and computer vision [17]. This class comprises, for instance, functions of the form $f(X) = \sum_{i=1}^{k} (w_i(X))^a$, for some $a \in [0, 1]$ and nonnegative weight vectors $w_i$. Such functions may be defined over clusters $C_i \subseteq V$, in which case the weights $w_i(j)$ are nonzero only if $j \in C_i$ [22, 17, 11].
Curvature-dependent analysis. To analyze Problems 1-3, we introduce the concept of a curve-normalized polymatroid¹. Specifically, we define the $\kappa_f$-curve-normalized version of $f$ as

$$f^{\kappa}(X) = \frac{f(X) - (1 - \kappa_f)\sum_{j \in X} f(j)}{\kappa_f} \quad (2)$$

If $\kappa_f = 0$, then we set $f^{\kappa} \equiv 0$. We call $f^{\kappa}$ the curve-normalized version of $f$ because its curvature is $\kappa_{f^{\kappa}} = 1$. The function $f^{\kappa}$ allows us to decompose a submodular function $f$ into a "difficult" polymatroid function and an "easy" modular part as $f(X) = f_{\mathrm{difficult}}(X) + m_{\mathrm{easy}}(X)$, where $f_{\mathrm{difficult}}(X) = \kappa_f f^{\kappa}(X)$ and $m_{\mathrm{easy}}(X) = (1 - \kappa_f)\sum_{j \in X} f(j)$. Moreover, we may modulate the curvature of any given function $g$ with $\kappa_g = 1$ by constructing a function $f(X) \triangleq c\,g(X) + (1 - c)|X|$ with curvature $\kappa_f = c$ but otherwise the same polymatroidal structure as $g$.
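To make the decomposition concrete, the following sketch (the test oracle is chosen arbitrarily) builds $f^{\kappa}$ from a value oracle per Eq. (2) and checks that $f(X) = \kappa_f f^{\kappa}(X) + (1 - \kappa_f)\sum_{j \in X} f(j)$:

```python
def curve_normalized(f, V):
    # Returns (f_kappa, kappa_f) per Eq. (2); f_kappa is identically 0 when kappa_f = 0.
    V = set(V)
    kappa = 1 - min((f(V) - f(V - {j})) / f({j}) for j in V)
    def f_kappa(X):
        if kappa == 0:
            return 0.0
        modular = sum(f({j}) for j in X)
        return (f(X) - (1 - kappa) * modular) / kappa
    return f_kappa, kappa

f = lambda S: len(S) ** 0.5          # arbitrary monotone submodular test oracle
V = {1, 2, 3, 4}
f_k, kappa = curve_normalized(f, V)
X = {1, 3, 4}
recon = kappa * f_k(X) + (1 - kappa) * sum(f({j}) for j in X)
assert abs(recon - f(X)) < 1e-12     # f = f_difficult + m_easy
```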
Our curvature-based decomposition is different from decompositions such as that into a totally
normalized function and a modular function [4]. Indeed, the curve-normalized function has some
specific properties that will be useful later on (proved in [12]):
Lemma 2.1. If $f$ is monotone submodular with $\kappa_f > 0$, then $f(X) \leq \sum_{j \in X} f(j)$ and $f(X) \geq (1 - \kappa_f)\sum_{j \in X} f(j)$.
Lemma 2.2. If $f$ is monotone submodular, then $f^{\kappa}(X)$ in Eqn. (2) is a monotone non-negative submodular function. Furthermore, $f^{\kappa}(X) \leq \sum_{j \in X} f(j)$.
The function $f^{\kappa}$ will be our tool for analyzing the hardness of submodular problems. Previous information-theoretic lower bounds for Problems 1-3 [6, 8, 10, 27] are independent of curvature and use functions with $\kappa_f = 1$. These curvature-independent bounds are proven by constructing two essentially indistinguishable matroid rank functions $h$ and $f^R$, one of which depends on a random set $R \subseteq V$. One then argues that any algorithm would need to make a super-polynomial number of queries to the functions for being able to distinguish $h$ and $f^R$ with high enough probability. The lower bound will be the ratio $\max_{X \in \mathcal{C}} h(X)/f^R(X)$. We extend this proof technique to functions with a fixed given curvature. To this end, we define the functions

$$f^R_{\kappa}(X) = \kappa_f f^R(X) + (1 - \kappa_f)|X| \quad \text{and} \quad h_{\kappa}(X) = \kappa_f h(X) + (1 - \kappa_f)|X|. \quad (3)$$

Both of these functions have curvature $\kappa_f$. This construction enables us to explicitly introduce the effect of curvature into information-theoretic bounds for all monotone submodular functions.
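The curvature modulation in (3) is easy to simulate; the sketch below (the rank function is chosen for illustration) checks that $c\,g(X) + (1 - c)|X|$ indeed has curvature $c$ when $\kappa_g = 1$:

```python
def modulate(g, c):
    # f(X) = c*g(X) + (1 - c)|X| has curvature c when g has curvature 1.
    return lambda X: c * g(X) + (1 - c) * len(X)

def total_curvature(f, V):
    V = set(V)
    return 1 - min((f(V) - f(V - {j})) / f({j}) for j in V)

rank2 = lambda X: min(len(X), 2)      # uniform matroid rank: curvature 1 when |V| >= 3
V = {1, 2, 3, 4}
assert total_curvature(rank2, V) == 1
for c in (0.0, 0.25, 0.5, 1.0):
    assert abs(total_curvature(modulate(rank2, c), V) - c) < 1e-12
```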
Main results. The curve normalization (2) leads to refined upper bounds for Problems 1-3, while the curvature modulation (3) provides matching lower bounds. The following are some of our main results: for approximating submodular functions (Problem 1), we replace the known bound of $\alpha_1(n) = O(\sqrt{n}\log n)$ [8] by an improved curvature-dependent bound of $O\big(\frac{\sqrt{n}\log n}{1 + (\sqrt{n}\log n - 1)(1 - \kappa_f)}\big)$. We complement this with a lower bound of $\Omega\big(\frac{\sqrt{n}}{1 + (\sqrt{n} - 1)(1 - \kappa_f)}\big)$. For learning submodular functions (Problem 2), we refine the known bound of $\alpha_2(n) = O(\sqrt{n})$ [2] in the PMAC setting to a curvature-dependent bound of $O\big(\frac{\sqrt{n}}{1 + (\sqrt{n} - 1)(1 - \kappa_f)}\big)$, with a lower bound of $\Omega\big(\frac{n^{1/3}}{1 + (n^{1/3} - 1)(1 - \kappa_f)}\big)$. Finally, Table 1 summarizes our curvature-dependent approximation bounds for constrained minimization (Problem 3). These bounds refine many of the results in [6, 27, 10, 16]. In general, our new curvature-dependent upper and lower bounds refine known theoretical results whenever $\kappa_f < 1$, in many cases replacing known polynomial bounds by a curvature-dependent constant factor $1/(1 - \kappa_f)$. Besides making these bounds precise, the decomposition and the curve-normalized version (2) are the basis for constructing tight algorithms that (up to logarithmic factors) achieve the lower bounds.
¹A polymatroid function is a monotone increasing, nonnegative, submodular function satisfying $f(\emptyset) = 0$.
Constraint    | Modular approx. (MUB)           | Ellipsoid approx. (EA)                                                  | Lower bound
------------- | ------------------------------- | ----------------------------------------------------------------------- | -----------
Card. LB      | $\frac{k}{1+(k-1)(1-\kappa_f)}$ | $O\big(\frac{\sqrt{n}\log n}{1+(\sqrt{n}\log n-1)(1-\kappa_f)}\big)$     | $\Omega\big(\frac{n^{1/2}}{1+(n^{1/2}-1)(1-\kappa_f)}\big)$
Spanning Tree | $\frac{n}{1+(n-1)(1-\kappa_f)}$ | $O\big(\frac{\sqrt{m}\log m}{1+(\sqrt{m}\log m-1)(1-\kappa_f)}\big)$     | $\Omega\big(\frac{n}{1+(n-1)(1-\kappa_f)}\big)$
Matchings     | $\frac{n}{2+(n-2)(1-\kappa_f)}$ | $O\big(\frac{\sqrt{m}\log m}{1+(\sqrt{m}\log m-1)(1-\kappa_f)}\big)$     | $\Omega\big(\frac{n}{1+(n-1)(1-\kappa_f)}\big)$
s-t path      | $\frac{n}{1+(n-1)(1-\kappa_f)}$ | $O\big(\frac{\sqrt{m}\log m}{1+(\sqrt{m}\log m-1)(1-\kappa_f)}\big)$     | $\Omega\big(\frac{n^{2/3}}{1+(n^{2/3}-1)(1-\kappa_f)}\big)$
s-t cut       | $\frac{m}{1+(m-1)(1-\kappa_f)}$ | $O\big(\frac{m\sqrt{\log m}}{1+(m\sqrt{\log m}-1)(1-\kappa_f)}\big)$     | $\Omega\big(\frac{\sqrt{n}}{1+(\sqrt{n}-1)(1-\kappa_f)}\big)$

Table 1: Summary of our results for constrained minimization (Problem 3).
3 Approximating submodular functions everywhere
We first address improved bounds for the problem of approximating a monotone submodular function
everywhere. Previous work established $\alpha$-approximations $g$ to a submodular function $f$ satisfying
$g(S) \leq f(S) \leq \alpha g(S)$ for all $S \subseteq V$ [8]. We begin with a theorem showing how any algorithm
computing such an approximation may be used to obtain a curvature-specific, improved approximation.
Note that the curvature of a monotone submodular function can be obtained within 2n + 1 queries to
f . The key idea of Theorem 3.1 is to only approximate the curved part of f , and to retain the modular
part exactly. The full proof is in [12].
Theorem 3.1. Given a polymatroid function $f$ with $\kappa_f < 1$, let $f^{\kappa}$ be its curve-normalized version defined in Equation (2), and let $\hat{f}^{\kappa}$ be a submodular function satisfying $\hat{f}^{\kappa}(X) \leq f^{\kappa}(X) \leq \alpha(n)\hat{f}^{\kappa}(X)$, for some $X \subseteq V$. Then the function $\hat{f}(X) \triangleq \kappa_f \hat{f}^{\kappa}(X) + (1 - \kappa_f)\sum_{j \in X} f(j)$ satisfies

$$\hat{f}(X) \leq f(X) \leq \frac{\alpha(n)}{1 + (\alpha(n) - 1)(1 - \kappa_f)}\,\hat{f}(X) \leq \frac{\hat{f}(X)}{1 - \kappa_f}. \quad (4)$$
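A numerical sanity check of the bound in (4), using a synthetic oracle and a worst-case, artificially deflated approximation of $f^{\kappa}$ (all names and the choice of oracle are illustrative):

```python
import itertools

f = lambda S: len(S) ** 0.5                   # arbitrary monotone submodular oracle
V = {1, 2, 3, 4}
kappa = 1 - min((f(V) - f(V - {j})) / f({j}) for j in V)
alpha = 2.0                                   # pretend f_kappa is only known up to factor 2

def f_kappa(X):
    m = sum(f({j}) for j in X)
    return (f(X) - (1 - kappa) * m) / kappa

f_kappa_hat = lambda X: f_kappa(X) / alpha    # worst-case alpha-approximation of f_kappa
f_hat = lambda X: kappa * f_kappa_hat(X) + (1 - kappa) * sum(f({j}) for j in X)

factor = alpha / (1 + (alpha - 1) * (1 - kappa))
for r in range(1, 5):
    for X in map(set, itertools.combinations(sorted(V), r)):
        assert f_hat(X) <= f(X) + 1e-12 <= factor * f_hat(X) + 1e-9
```

Note how `factor` is strictly smaller than `alpha` whenever `kappa < 1`: keeping the modular part exact buys a tighter guarantee than approximating all of $f$.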
Theorem 3.1 may be directly applied to tighten recent results on approximating submodular functions everywhere. An algorithm by Goemans et al. [8] computes an approximation to a polymatroid function $f$ in polynomial time by approximating the submodular polyhedron via an ellipsoid. This approximation (which we call the ellipsoidal approximation) satisfies $\alpha(n) = O(\sqrt{n}\log n)$, and has the form $\sqrt{w^f(X)}$ for a certain weight vector $w^f$. Corollary 3.2 states that a tighter approximation is possible for functions with $\kappa_f < 1$.

Corollary 3.2. Let $f$ be a polymatroid function with $\kappa_f < 1$, and let $\sqrt{w^{f^{\kappa}}(X)}$ be the ellipsoidal approximation to the $\kappa_f$-curve-normalized version $f^{\kappa}(X)$ of $f$. Then the function $f^{ea}(X) = \kappa_f\sqrt{w^{f^{\kappa}}(X)} + (1 - \kappa_f)\sum_{j \in X} f(j)$ satisfies

$$f^{ea}(X) \leq f(X) \leq O\Big(\frac{\sqrt{n}\log n}{1 + (\sqrt{n}\log n - 1)(1 - \kappa_f)}\Big) f^{ea}(X). \quad (5)$$
If $\kappa_f = 0$, then the approximation is exact. This is not surprising, since a modular function can be inferred exactly within $O(n)$ oracle calls. The following lower bound (proved in [12]) shows that Corollary 3.2 is tight up to logarithmic factors. It refines the lower bound in [8] to include $\kappa_f$.

Theorem 3.3. Given a submodular function $f$ with curvature $\kappa_f$, there does not exist a (possibly randomized) polynomial-time algorithm that computes an approximation to $f$ within a factor of $\frac{n^{1/2-\epsilon}}{1 + (n^{1/2-\epsilon} - 1)(1 - \kappa_f)}$, for any $\epsilon > 0$.
The simplest alternative approximation to $f$ one might conceive is the modular function $\hat{f}^m(X) \triangleq \sum_{j \in X} f(j)$, which can easily be computed by querying the $n$ values $f(j)$.
Lemma 3.1. Given a monotone submodular function $f$, it holds that²

$$f(X) \leq \hat{f}^m(X) = \sum_{j \in X} f(j) \leq \frac{|X|}{1 + (|X| - 1)(1 - \kappa_f(X))}\,f(X) \quad (6)$$

²In [12], we show this result with a stronger notion of curvature: $\hat{\kappa}_f(X) = 1 - \frac{\sum_{j \in X} f(j \mid X \setminus j)}{\sum_{j \in X} f(j)}$.
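A quick numerical check of the bound in (6), using an arbitrary concave-over-modular test oracle (not from the paper):

```python
import itertools

f = lambda S: len(S) ** 0.5                  # arbitrary monotone submodular oracle
V = {1, 2, 3, 4}

def curvature_wrt(f, X):
    # kappa_f(X) = 1 - min_{j in X} f(j | X \ j) / f(j)
    return 1 - min((f(X) - f(X - {j})) / f({j}) for j in X)

for r in range(1, len(V) + 1):
    for X in map(set, itertools.combinations(sorted(V), r)):
        modular = sum(f({j}) for j in X)     # the modular upper bound
        k = curvature_wrt(f, X)
        rhs = len(X) / (1 + (len(X) - 1) * (1 - k)) * f(X)
        assert f(X) <= modular <= rhs + 1e-9
```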
The form of Lemma 3.1 is slightly different from Corollary 3.2. However, there is a straightforward correspondence: given $\hat{f}$ such that $\hat{f}(X) \leq f(X) \leq \alpha'(n)\hat{f}(X)$, by defining $\hat{f}'(X) = \alpha'(n)\hat{f}(X)$ we get that $f(X) \leq \hat{f}'(X) \leq \alpha'(n)f(X)$. Lemma 3.1 for the modular approximation is complementary to Corollary 3.2: First, the modular approximation is better whenever $|X| \leq \sqrt{n}$. Second, the bound in Lemma 3.1 depends on the curvature $\kappa_f(X)$ with respect to the set $X$, which is stronger than $\kappa_f$. Third, $\hat{f}^m$ is extremely simple to compute. For sets of larger cardinality, however, the ellipsoidal approximation of Corollary 3.2 provides a better approximation, in fact, the best possible one (Theorem 3.3). In a similar manner, Lemma 3.1 is tight for any modular approximation to a submodular function:

Lemma 3.2. For any $\kappa > 0$, there exists a monotone submodular function $f$ with curvature $\kappa$ such that no modular upper bound on $f$ can approximate $f(X)$ to a factor better than $\frac{|X|}{1 + (|X| - 1)(1 - \kappa_f)}$.
The improved curvature-dependent bounds immediately imply better bounds for the class of concave over modular functions used in [22, 17, 11].

Corollary 3.4. Given weight vectors $w_1, \cdots, w_k \geq 0$ and a submodular function $f(X) = \sum_{i=1}^{k} \lambda_i [w_i(X)]^a$, $\lambda_i \geq 0$, for $a \in (0, 1)$, it holds that $f(X) \leq \sum_{j \in X} f(j) \leq |X|^{1-a} f(X)$.

In particular, when $a = 1/2$, the modular upper bound approximates the sum of square-roots over modular functions by a factor of $\sqrt{|X|}$.
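For a single concave-over-modular term with $a = 1/2$ (the weights below are arbitrary), the $|X|^{1-a}$ factor of Corollary 3.4 is easy to confirm:

```python
w = {1: 4.0, 2: 1.0, 3: 9.0}                     # arbitrary nonnegative weights
f = lambda X: sum(w[j] for j in X) ** 0.5        # f(X) = sqrt(w(X)), i.e. a = 1/2
X = {1, 2, 3}
modular = sum(f({j}) for j in X)                 # = sum_j sqrt(w_j) = 6.0
assert f(X) <= modular <= len(X) ** 0.5 * f(X)   # |X|^{1-a} = sqrt(|X|)
```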
4 Learning Submodular functions
We next address the problem of learning submodular functions in a PMAC setting [2]. The PMAC
(Probably Mostly Approximately Correct) framework is an extension of the PAC framework [28]
to allow multiplicative errors in the function values from a fixed but unknown distribution $\mathcal{D}$ over $2^V$. We are given training samples $\{(X_i, f(X_i))\}_{i=1}^m$ drawn i.i.d. from $\mathcal{D}$. The algorithm may take time polynomial in $n, 1/\epsilon, 1/\delta$ to compute a (polynomially-representable) function $\hat{f}$ that is a good approximation to $f$ with respect to $\mathcal{D}$. Formally, $\hat{f}$ must satisfy

$$\Pr_{X_1, X_2, \cdots, X_m \sim \mathcal{D}}\Big[\Pr_{X \sim \mathcal{D}}\big[\hat{f}(X) \leq f(X) \leq \alpha(n)\hat{f}(X)\big] \geq 1 - \epsilon\Big] \geq 1 - \delta \quad (7)$$
for some approximation factor $\alpha(n)$. Balcan and Harvey [2] propose an algorithm that PMAC-learns any monotone, nonnegative submodular function within a factor $\alpha(n) = \sqrt{n+1}$ by reducing the problem to that of learning a binary classifier. If we assume that we have an upper bound on the curvature $\kappa_f$, or that we can estimate it³, and have access to the value of the singletons $f(j)$, $j \in V$, then we can obtain better learning results with non-maximal curvature:

Lemma 4.1. Let $f$ be a monotone submodular function for which we know an upper bound on its curvature and the singleton weights $f(j)$ for all $j \in V$. For every $\epsilon, \delta > 0$ there is an algorithm that uses a polynomial number of training examples, runs in time polynomial in $(n, 1/\epsilon, 1/\delta)$ and PMAC-learns $f$ within a factor of $\frac{\sqrt{n+1}}{1 + (\sqrt{n+1} - 1)(1 - \kappa_f)}$. If $\mathcal{D}$ is a product distribution, then there exists an algorithm that PMAC-learns $f$ within a factor of $O\big(\frac{\log\frac{1}{\epsilon}}{1 + (\log\frac{1}{\epsilon} - 1)(1 - \kappa_f)}\big)$.
The algorithm of Lemma 4.1 uses the reduction of Balcan and Harvey [2] to learn the $\kappa_f$-curve-normalized version $f^{\kappa}$ of $f$. From the learned function $\hat{f}^{\kappa}(X)$, we construct the final estimate $\hat{f}(X) \triangleq \kappa_f \hat{f}^{\kappa}(X) + (1 - \kappa_f)\sum_{j \in X} f(j)$. Theorem 3.1 implies Lemma 4.1 for this $\hat{f}(X)$.
Moreover, no polynomial-time algorithm can be guaranteed to PMAC-learn $f$ within a factor of $\frac{n^{1/3-\epsilon'}}{1 + (n^{1/3-\epsilon'} - 1)(1 - \kappa_f)}$, for any $\epsilon' > 0$ [12]. We end this section by showing how we can learn with a construction analogous to that in Lemma 3.1.
Lemma 4.2. If $f$ is a monotone submodular function with known curvature (or a known upper bound) $\hat{\kappa}_f(X)$, $\forall X \subseteq V$, then for every $\epsilon, \delta > 0$ there is an algorithm that uses a polynomial number of training examples, runs in time polynomial in $(n, 1/\epsilon, 1/\delta)$ and PMAC-learns $f(X)$ within a factor of $1 + \frac{|X|}{1 + (|X| - 1)(1 - \hat{\kappa}_f(X))}$.
³Note that $\kappa_f$ can be estimated from a set of $2n + 1$ samples $\{(j, f(j))\}_{j \in V}$, $\{(V, f(V))\}$, and $\{(V \setminus j, f(V \setminus j))\}_{j \in V}$ included in the training samples.
Compare this result to Lemma 4.1. Lemma 4.2 leads to better bounds for small sets, whereas
Lemma 4.1 provides a better general bound. Moreover, in contrast to Lemma 4.1, here we only need an
upper bound on the curvature and do not need to know the singleton weights $\{f(j), j \in V\}$. Note also that, while $\kappa_f$ itself is an upper bound of $\hat{\kappa}_f(X)$, often one does have an upper bound on $\hat{\kappa}_f(X)$ if one knows the function class of $f$ (for example, say concave over modular). In particular, an immediate corollary is that the class of concave over modular functions $f(X) = \sum_{i=1}^{k} \lambda_i [w_i(X)]^a$, $\lambda_i \geq 0$, for $a \in (0, 1)$ can be learnt within a factor of $\min\{\sqrt{n+1},\, 1 + |X|^{1-a}\}$.
5 Constrained submodular minimization
Next, we apply our results to the minimization of submodular functions under constraints. Most
algorithms for constrained minimization use one of two strategies: they apply a convex relaxation [10,
16], or they optimize a surrogate function f̂ that should approximate f well [6, 8, 16]. We follow
the second strategy and propose a new, widely applicable curvature-dependent choice for surrogate
functions. A suitable selection of f̂ will ensure theoretically optimal results. Throughout this section,
we refer to the optimal solution as X* ∈ argmin_{X∈C} f(X).
Lemma 5.1. Given a submodular function f, let f̂_1 be an approximation of f such that f̂_1(X) ≤
f(X) ≤ α(n) f̂_1(X), for all X ⊆ V. Then any minimizer X̂_1 ∈ argmin_{X∈C} f̂_1(X) satisfies
f(X̂_1) ≤ α(n) f(X*). Likewise, if an approximation of f is such that f(X) ≤ f̂_2(X) ≤ α(X) f(X)
for a set-specific factor α(X), then its minimizer X̂_2 ∈ argmin_{X∈C} f̂_2(X) satisfies f(X̂_2) ≤
α(X*) f(X*). If only β-approximations^4 are possible for minimizing f̂_1 or f̂_2 over C, then the final
bounds are βα(n) and βα(X*) respectively.
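A brute-force sanity check of the first claim on a toy instance of my own (not from the paper): any surrogate f̂_1 with f̂_1 ≤ f ≤ α f̂_1 hands its factor α over to the constrained minimizer.

```python
import itertools, random

random.seed(0)
n = 6
V = list(range(n))
# constraint family C: all sets of cardinality at least 2
C = [frozenset(s) for k in range(2, n + 1) for s in itertools.combinations(V, k)]

w = [random.uniform(0.5, 2.0) for _ in V]
def f(X):
    # toy monotone submodular function
    return sum(w[j] for j in X) ** 0.5

alpha = 1.5
# surrogate with f1(X) <= f(X) <= alpha * f1(X), tabulated once per feasible set
f1 = {X: f(X) / random.uniform(1.0, alpha) for X in C}

X_hat = min(C, key=lambda X: f1[X])   # minimizer of the surrogate
X_star = min(C, key=f)                # true optimum
assert f(X_hat) <= alpha * f(X_star) + 1e-12
```

The proof is the same two-line chain the assertion exercises: f(X̂_1) ≤ α f̂_1(X̂_1) ≤ α f̂_1(X*) ≤ α f(X*).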
For Lemma 5.1 to be practically useful, it is essential that f̂_1 and f̂_2 be efficiently optimizable
over C. We discuss two general curvature-dependent approximations that work for a large class of
combinatorial constraints. In particular, we use Theorem 3.1: we decompose f into f^κ and a modular
part f^m, and then approximate f^κ while retaining f^m, i.e., f̂ = f̂^κ + f^m. The first approach uses a
simple modular upper bound (MUB) and the second relies on the Ellipsoidal approximation (EA) we
used in Section 3.
MUB: The simplest approximation to a submodular function is the modular approximation
f̂^m(X) := Σ_{j∈X} f(j) ≥ f(X). Since here f̂^κ happens to be equivalent to f^m, we obtain the
overall approximation f̂ = f̂^m. Lemmas 5.1 and 3.1 directly imply a set-dependent approximation
factor for f̂^m:
Corollary 5.1. Let X̂ ∈ C be a β-approximate solution for minimizing Σ_{j∈X} f(j) over C, i.e.
Σ_{j∈X̂} f(j) ≤ β min_{X∈C} Σ_{j∈X} f(j). Then

    f(X̂) ≤ ( β|X*| / (1 + (|X*| − 1)(1 − κ_f(X*))) ) f(X*).    (8)
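The sketch below checks (8) by brute force on a toy instance (β = 1 here; the function and the cardinality constraint are illustrative assumptions of mine): MUB picks the feasible set with the smallest modular upper bound, and its value stays within the curvature-dependent factor of the true optimum.

```python
import itertools

V = list(range(6))
w = [1.0, 1.2, 0.4, 2.0, 0.7, 1.5]
def f(X):
    # toy monotone submodular function
    return sum(w[j] for j in X) ** 0.5

C = [list(s) for s in itertools.combinations(V, 3)]   # constraint: |X| = 3

X_hat = min(C, key=lambda X: sum(f([j]) for j in X))  # exact MUB minimizer (beta = 1)
X_star = min(C, key=f)

def kappa(X):
    # curvature of f restricted to X
    fX = f(X)
    return 1.0 - min((fX - f([i for i in X if i != j])) / f([j]) for j in X)

factor = len(X_star) / (1.0 + (len(X_star) - 1) * (1.0 - kappa(X_star)))
assert f(X_hat) <= factor * f(X_star) + 1e-12
```

For less curved f the factor shrinks toward 1, which is the whole point of the curvature-dependent analysis.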
Corollary 5.1 has also been shown in [13]. Similar to the algorithms in [13], MUB can be extended
to an iterative algorithm yielding performance gains in practice. In particular, Corollary 5.1 implies
improved approximation bounds for practically relevant concave over modular functions, such
as those used in [17]. For instance, for f(X) = Σ_{i=1}^k √( Σ_{j∈X} w_i(j) ), we obtain a worst-case
approximation bound of √|X*| ≤ √n. This is significantly better than the worst case factor of |X*|
for general submodular functions.
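The √|X*| bound for a single square-root-of-modular term is just Cauchy-Schwarz: Σ_j √w_j ≤ √|X| · √(Σ_j w_j). A quick randomized check (my illustration, not from the paper):

```python
import math, random

random.seed(1)
for _ in range(200):
    ws = [random.uniform(0.1, 5.0) for _ in range(random.randint(1, 10))]
    modular_bound = sum(math.sqrt(v) for v in ws)   # sum_j f({j}), used by MUB
    true_value = math.sqrt(sum(ws))                 # f(X) = sqrt(w(X))
    assert modular_bound <= math.sqrt(len(ws)) * true_value + 1e-9
```

Equality is approached when all weights inside X are equal, which is the worst case for the modular approximation.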
EA: Instead of employing a modular upper bound, we can approximate f^κ using the construction
by Goemans et al. [8], as in Corollary 3.2. In that case, f̂(X) = κ_f √(w^f(X)) + (1 − κ_f) f^m(X)
has a special form: a weighted sum of a concave function and a modular function. Minimizing
such a function over constraints C is harder than minimizing a merely modular function, but with
the algorithm in [24] we obtain an FPTAS^5 for minimizing f̂ over C whenever we can minimize a
nonnegative linear function over C.
4 A β-approximation algorithm for minimizing a function g finds a set X: g(X) ≤ β min_{X∈C} g(X).
5 The FPTAS will yield a β = (1 + ε)-approximation through an algorithm polynomial in 1/ε.
Corollary 5.2. For a submodular function with curvature κ_f < 1, algorithm EA will return a
solution X̂ that satisfies

    f(X̂) ≤ O( √(n log n) / ((√(n log n) − 1)(1 − κ_f) + 1) ) f(X*).    (9)
Next, we apply the results of this section to specific optimization problems, for which we show
(mostly tight) curvature-dependent upper and lower bounds. We just state our main results; a more
extensive discussion along with the proofs can be found in [12].
Cardinality lower bounds (SLB). A simple constraint is a lower bound on the cardinality of the
solution, i.e., C = {X ⊆ V : |X| ≥ k}. Svitkina and Fleischer [27] prove that for monotone
submodular functions of arbitrary curvature, it is impossible to find a polynomial-time algorithm
with an approximation factor better than √(n / log n). They show an algorithm which matches this
approximation factor. Corollaries 5.1 and 5.2 immediately imply curvature-dependent approximation
bounds of k / (1 + (k − 1)(1 − κ_f)) and O( √(n log n) / (1 + (√(n log n) − 1)(1 − κ_f)) ). These bounds are improvements over the
results of [27] whenever κ_f < 1. Here, MUB is preferable to EA whenever k is small. Moreover,
the bound of EA is tight up to poly-log factors, in that no polynomial time algorithm can achieve a
general approximation factor better than n^{1/2−ε} / (1 + (n^{1/2−ε} − 1)(1 − κ_f)) for any ε > 0.
In the following problems, our ground set consists of the set of edges of a graph G = (V, E) with
two distinct nodes s, t ∈ V and n = |V|, m = |E|. The submodular function is f : 2^E → R.
Shortest submodular s-t path (SSP). Here, we aim to find an s-t path X of minimum (submodular)
length f(X). Goel et al. [6] show an O(n^{2/3})-approximation with matching curvature-independent
lower bound Ω(n^{2/3}). By Corollary 5.1, the curvature-dependent worst-case bound for MUB is
n / (1 + (n − 1)(1 − κ_f)) since any minimal s-t path has at most n edges. Similarly, the factor for EA is
O( √(m log m) / (1 + (√(m log m) − 1)(1 − κ_f)) ). The bound of EA will be tighter for sparse graphs while MUB provides
better results for dense ones. Our curvature-dependent lower bound for SSP is n^{2/3−ε} / (1 + (n^{2/3−ε} − 1)(1 − κ_f)),
for any ε > 0, which reduces to the result in [6] for κ_f = 1.
Minimum submodular s-t cut (SSC): This problem, also known as the cooperative cut problem [16,
17], asks to minimize a monotone submodular function f such that the solution X ⊆ E is a set of
edges whose removal disconnects s from t in G. Using curvature, we can also refine the
lower bound of [16] to n^{1/2−ε} / (1 + (n^{1/2−ε} − 1)(1 − κ_f)), for any ε > 0. Corollary 5.1 implies an approximation
factor of O( √(m log m) / ((√(m log m) − 1)(1 − κ_f) + 1) ) for EA and a factor of m / (1 + (m − 1)(1 − κ_f)) for MUB, where m = |E|
is the number of edges in the graph. Hence the factor for EA is tight for sparse graphs. Specifically
for cut problems, there is yet another useful surrogate function that is exact on local neighborhoods.
Jegelka and Bilmes [16] demonstrate how this approximation may be optimized via a generalized
maximum flow algorithm that maximizes a polymatroidal network flow [20]. This algorithm still
applies to the combination f̂ = κ_f f̂^κ + (1 − κ_f) f^m, where we only approximate f^κ. We refer to
this approximation as the Polymatroidal Network Approximation (PNA).
Corollary 5.3. Algorithm PNA achieves a worst-case approximation factor of n / (2 + (n − 2)(1 − κ_f)) for the
cooperative cut problem.
For dense graphs, this factor is theoretically tighter than that of the EA approximation.
Minimum submodular spanning tree (SST). Here, C is the family of all spanning trees in a given
graph G. Such constraints occur for example in power assignment problems [30]. Goel et al. [6] show
a curvature-independent optimal approximation factor of O(n) for this problem. Corollary 5.1
refines this bound to n / (1 + (n − 1)(1 − κ_f)) when using MUB; Corollary 5.2 implies a slightly worse bound
for EA. We also show that the bound of MUB is tight: no polynomial-time algorithm can guarantee a
factor better than n^{1−ε} / (1 + (n^{1−ε} − 1)(1 − κ_f) + δκ_f), for any ε, δ > 0.
Minimum submodular perfect matching (SPM): Here, we aim to find a perfect matching in
a graph that minimizes a monotone submodular function. Corollary 5.1 implies that an MUB
approximation will achieve an approximation factor of at most n / (2 + (n − 2)(1 − κ_f)). Similar to the
spanning tree case, the bound of MUB is also tight [12].
[Figure 1: four panels plotting the empirical approximation factor against n; legends vary ε in (a), κ in (b) and (d), and ζ in (c).]
Figure 1: Minimization of f^R_κ for cardinality lower bound constraints. (a) fixed κ = 0, α = n^{1/2+ε}, β = n^{2ε} for varying ε; (b) fixed ε = 0.1, but varying κ; (c) different choices of ζ for ε = 1; (d) varying κ with ζ = n/2, ε = 1. Dashed lines: MUB, dotted lines: EA, solid lines: theoretical bound. The results of EA are not visible in some instances since it obtains a factor of 1.
5.1 Experiments
We end this section by empirically demonstrating the performance of MUB and EA and their precise
dependence on curvature. We focus on cardinality lower bound constraints, C = {X ⊆ V : |X| ≥ ζ},
and the "worst-case" class of functions that has been used throughout this paper to prove lower
bounds, f^R(X) = min{|X ∩ R̄| + β, |X|, α}, where R̄ = V\R and R ⊆ V is a random set such that
|R| = α. We adjust α = n^{1/2+ε} and β = n^{2ε} by a parameter ε. The smaller ε is, the harder the
problem. This function has curvature κ_f = 1. To obtain a function with specific curvature κ, we
define f^R_κ(X) = κ f^R(X) + (1 − κ)|X| as in Equation (3).
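A sketch of this construction (the concrete small sizes are my assumption; the paper scales α and β with n): all marginal gains of f^R on the full set vanish here, so f^R has curvature 1, and the convex combination κ f^R + (1 − κ)|X| has curvature exactly κ.

```python
import random

random.seed(2)
n = 8
V = list(range(n))
alpha, beta = 4, 2
R = set(random.sample(V, alpha))
Rbar = set(V) - R

def fR(X):
    # the worst-case test function f^R(X) = min{|X & Rbar| + beta, |X|, alpha}
    X = set(X)
    return min(len(X & Rbar) + beta, len(X), alpha)

def curvature(g):
    gV = g(V)
    return 1.0 - min((gV - g([i for i in V if i != j])) / g([j]) for j in V)

assert curvature(fR) == 1.0
for kappa in (0.1, 0.5, 0.9):
    g = lambda X, k=kappa: k * fR(X) + (1.0 - k) * len(set(X))
    assert abs(curvature(g) - kappa) < 1e-9
```

Mixing in the modular term |X| is a generic way to dial in any target curvature, which is exactly what the plots in Figure 1(b) and (d) vary.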
In all our experiments, we take the average over 20 random draws of R. We first set ζ = 1 and
vary ε. Figure 1(a) shows the empirical approximation factors obtained using EA and MUB, and the
theoretical bound. The empirical factors follow the theoretical results very closely. Empirically, we
also see that the problem becomes harder as ε decreases. Next we fix ε = 0.1 and vary the curvature
κ in f^R_κ. Figure 1(b) illustrates that the theoretical and empirical approximation factors improve
significantly as κ decreases. Hence, much better approximations than the previous theoretical lower
bounds are possible if κ is not too large. This observation can be very important in practice. Here,
too, the empirical upper bounds follow the theoretical bounds very closely.
Figures 1(c) and (d) show results for larger ζ and ε = 1. In Figure 1(c), as ζ increases, the empirical
factors improve. In particular, as predicted by the theoretical bounds, EA outperforms MUB for large
ζ and, for ζ ≥ n^{2/3}, EA finds the optimal solution. In addition, Figures 1(b) and (d) illustrate the
theoretical and empirical effect of curvature: as n grows, the bounds saturate and approach a
constant 1/(1 − κ); they do not grow polynomially in n. Overall, we see that the empirical results
quite closely follow our theoretical results, and that, as the theory suggests, curvature significantly
affects the approximation factors.
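The saturation is easy to see numerically. A sketch (my illustration, using the EA-style bound from (9)):

```python
import math

def ea_bound(n, kappa):
    # sqrt(n log n) / (1 + (sqrt(n log n) - 1)(1 - kappa))
    s = math.sqrt(n * math.log(n))
    return s / (1.0 + (s - 1.0) * (1.0 - kappa))

kappa = 0.3
limit = 1.0 / (1.0 - kappa)
values = [ea_bound(n, kappa) for n in (10, 10**3, 10**6, 10**9)]
# the bound increases with n but never exceeds 1/(1 - kappa), which it approaches
assert all(v < limit for v in values)
assert values == sorted(values)
assert limit - values[-1] < 1e-4
```

For κ = 1 the same expression degenerates to √(n log n), recovering the curvature-independent polynomial growth.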
6 Conclusion and Discussion
In this paper, we study the effect of curvature on the problems of approximating, learning and
minimizing submodular functions under constraints. We prove tightened, curvature-dependent upper
bounds with almost matching lower bounds. These results complement known results for submodular
maximization [3, 29]. Given that the functional form and effect of the submodularity ratio proposed
in [5] is similar to that of curvature, an interesting extension is the question of whether there is a
single unifying quantity for both of these terms. Another open question is whether a quantity similar
to curvature can be defined for subadditive functions, thus refining the results in [1] for learning
subadditive functions. Finally it also seems that the techniques in this paper could be used to provide
improved curvature-dependent regret bounds for constrained online submodular minimization [15].
Acknowledgments: Special thanks to Kai Wei for pointing out that Corollary 3.4 holds and for
other discussions, to Bethany Herwaldt for reviewing an early draft of this manuscript, and to
the anonymous reviewers. This material is based upon work supported by the National Science
Foundation under Grant No. (IIS-1162606), a Google and a Microsoft award, and by the Intel Science
and Technology Center for Pervasive Computing. Stefanie Jegelka's work is supported by the Office
of Naval Research under contract/grant number N00014-11-1-0688, and gifts from Amazon Web
Services, Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, General
Electric, Hortonworks, Intel, Microsoft, NetApp, Oracle, Samsung, Splunk, VMware and Yahoo!.
References
[1] M. F. Balcan, F. Constantin, S. Iwata, and L. Wang. Learning valuation functions. COLT, 2011.
[2] N. Balcan and N. Harvey. Submodular functions: Learnability, structure, and optimization. arXiv preprint, 2012.
[3] M. Conforti and G. Cornuejols. Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics, 7(3):251-274, 1984.
[4] W. H. Cunningham. Decomposition of submodular functions. Combinatorica, 3(1):53-68, 1983.
[5] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In ICML, 2011.
[6] G. Goel, C. Karande, P. Tripathi, and L. Wang. Approximability of combinatorial problems with multi-agent submodular cost functions. In FOCS, 2009.
[7] G. Goel, P. Tripathi, and L. Wang. Combinatorial problems with discounted price functions in multi-agent systems. In FSTTCS, 2010.
[8] M. Goemans, N. Harvey, S. Iwata, and V. Mirrokni. Approximating submodular functions everywhere. In SODA, pages 535-544, 2009.
[9] R. Hassin, J. Monnot, and D. Segev. Approximation algorithms and hardness results for labeled connectivity problems. Journal of Combinatorial Optimization, 14(4):437-453, 2007.
[10] S. Iwata and K. Nagano. Submodular function minimization under covering constraints. In FOCS, pages 671-680. IEEE, 2009.
[11] R. Iyer and J. Bilmes. Algorithms for approximate minimization of the difference between submodular functions, with applications. In UAI, 2012.
[12] R. Iyer, S. Jegelka, and J. Bilmes. Curvature and Optimal Algorithms for Learning and Optimization of Submodular Functions: Extended arXiv version, 2013.
[13] R. Iyer, S. Jegelka, and J. Bilmes. Fast semidifferential based submodular function optimization. In ICML, 2013.
[14] S. Jegelka. Combinatorial Problems with submodular coupling in machine learning and computer vision. PhD thesis, ETH Zurich, 2012.
[15] S. Jegelka and J. Bilmes. Online submodular minimization for combinatorial structures. ICML, 2011.
[16] S. Jegelka and J. A. Bilmes. Approximation bounds for inference using cooperative cuts. In ICML, 2011.
[17] S. Jegelka and J. A. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In CVPR, 2011.
[18] P. Kohli, A. Osokin, and S. Jegelka. A principled deep random field for image segmentation. In CVPR, 2013.
[19] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proceedings of Uncertainty in Artificial Intelligence (UAI), 2005.
[20] E. Lawler and C. Martel. Computing maximal "polymatroidal" network flows. Mathematics of Operations Research, 7(3):334-347, 1982.
[21] H. Lin and J. Bilmes. Optimal selection of limited vocabulary speech corpora. In Interspeech, 2011.
[22] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In The 49th Meeting of the Assoc. for Comp. Ling. Human Lang. Technologies (ACL/HLT-2011), Portland, OR, June 2011.
[23] H. Lin and J. Bilmes. Learning mixtures of submodular shells with application to document summarization. In UAI, 2012.
[24] E. Nikolova. Approximation algorithms for offline risk-averse combinatorial optimization, 2010.
[25] J. Soto and M. Goemans. Symmetric submodular function minimization under hereditary family constraints. arXiv:1007.2140, 2010.
[26] P. Stobbe and A. Krause. Learning Fourier sparse set functions. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[27] Z. Svitkina and L. Fleischer. Submodular approximation: Sampling-based algorithms and lower bounds. In FOCS, pages 697-706, 2008.
[28] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
[29] J. Vondrák. Submodularity and curvature: the optimal algorithm. RIMS Kokyuroku Bessatsu, 23, 2010.
[30] P.-J. Wan, G. Calinescu, X.-Y. Li, and O. Frieder. Minimum-energy broadcasting in static ad hoc wireless networks. Wireless Networks, 8:607-617, 2002.
[31] P. Zhang, J.-Y. Cai, L.-Q. Tang, and W.-B. Zhao. Approximation and hardness results for label cut and related problems. Journal of Combinatorial Optimization, 21(2):192-208, 2011.
Visual Grammars and their Neural Nets
Eric Mjolsness
Department of Computer Science
Yale University
New Haven, CT 06520-2158
Abstract
I exhibit a systematic way to derive neural nets for vision problems. It
involves formulating a vision problem as Bayesian inference or decision
on a comprehensive model of the visual domain given by a probabilistic
grammar.
1 INTRODUCTION
I show how systematically to derive optimizing neural networks that represent quantitative visual models and match them to data. This involves a design methodology
which starts from first principles, namely a probabilistic model of a visual domain,
and proceeds via Bayesian inference to a neural network which performs a visual
task. The key problem is to find probability distributions sufficiently intricate to
model general visual tasks and yet tractable enough for theory. This is achieved
by probabilistic and expressive grammars which model the image-formation process, including heterogeneous sources of noise each modelled with a grammar rule.
In particular these grammars include a crucial "relabelling" rule that removes the
undetectable internal labels (or indices) of detectable features and substitutes an
uninformed labeling scheme used by the perceiver.
This paper is a brief summary of the contents of [Mjolsness, 1991] .
[Figure 1: schematic of the four grammar stages, from numbered model dots through jittered, unordered, and finally permuted dots.]
Figure 1: Operation of random dot grammar. The first arrow illustrates dot placement; the next shows dot jitter; the next arrow shows the pure, un-numbered feature locations; and the final arrow is the uninformed renumbering scheme of the perceiver.
2 EXAMPLE: A RANDOM-DOT GRAMMAR
The first example grammar is a generative model of pictures consisting of a number
of dots (e.g. a sum of delta functions) whose relative locations are determined by
one out of M stored models. But the dots are subject to unknown independent jitter
and an unknown global translation, and the identities of the dots (their numerical
labels) are hidden from the perceiver by a random permutation operation. For
example each model might represent an imaginary asterism of equally bright stars
whose locations have been corrupted by instrument noise. One useful task would
be to recognize which model generated the image data. The random-dot grammar
is shown in (1).
model and location
    r0:  root  →  instance of model α at x
         E0(x) = |x|² / (2σ²)

dot locations
    r1:  instance(α, x)  →  {dotloc(α, m, x_m = x + u^α_m)},  where ⟨u^α_m⟩_m = 0
         E1({x_m}) = −log Π_m δ(x_m − x − u^α_m)
                   = lim_{σ_δ→0} (1/(2σ_δ²)) Σ_m |x_m − x − u^α_m|² + C(σ_δ)

dot jitter
    r2:  dotloc(α, m, x_m)  →  dot(m, x̃_m)
         E2(x̃_m) = (1/(2σ_jit²)) |x̃_m − x_m|²

scramble all dots
    r3:  {dot(m, x̃_m)}  →  {imagedot(x_i = Σ_m P_{m,i} x̃_m)}
         E3({x_i}) = −log [ Pr(P) Π_i δ(x_i − Σ_m P_{m,i} x̃_m) ],  where P is a permutation

(1)
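The generative process of (1) is short to simulate. A sketch (the parameter values and helper names are my own assumptions, not from the paper):

```python
import random

random.seed(0)
M, N = 3, 5
# stored models: N dot offsets u_m per model, roughly zero-mean
models = {a: [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]
          for a in range(M)}

def generate(sigma_trans=2.0, sigma_jit=0.05):
    # r0: choose a model alpha and a global translation x
    a = random.randrange(M)
    x = (random.gauss(0, sigma_trans), random.gauss(0, sigma_trans))
    # r1 and r2: place each model dot at x + u_m, then jitter it
    dots = [(x[0] + u + random.gauss(0, sigma_jit),
             x[1] + v + random.gauss(0, sigma_jit)) for (u, v) in models[a]]
    # r3: scramble the dot identities with a random permutation
    random.shuffle(dots)
    return a, x, dots

a, x, image = generate()
```

The perceiver sees only `image`; the model index a, translation x, and the permutation are the hidden variables that Bayesian inference must recover.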
The final joint probability distribution for this grammar allows recognition and
other problems to be posed as Bayesian inference and solved by neural network
optimization of the resulting objective.
A sum over all permutations has been approximated by the optimization over near-permutations, as usual for Mean Field Theory networks [Yuille, 1990], resulting in
a neural network implementable as an analog circuit. The fact that P appears
only linearly in E_final makes the optimization problems easier; it is a generalized
"assignment" problem.
APPROXIMATE NEURAL NETWORK WITHOUT MATCH
VARIABLES
Short of approximating a P configuration sum via Mean Field Theory neural nets,
there is a simpler, cheaper, less accurate approximation that we have used on matching problems similar to the model recognition problem (find a and x) for the dotmatching grammar. Under this approximation,
2+ --~
1 IXi -
"
1 (2N
1 21 x l
argmaxa,xIT(a,xl{xi }):::::: argmaxa,x 'L..,.exp
-T
.
~
m,1
2u}t
a
X - uml
2) ,
(3)
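A sketch of this objective for a known-model, translation-only instance (the data and the search grid are illustrative choices of mine; the |x|² prior term is dropped, i.e. σ → ∞):

```python
import math, random

random.seed(3)
model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]   # dot offsets u_m
true_x = (0.7, -0.4)
sigma_jit = 0.02
data = [(u + true_x[0] + random.gauss(0, sigma_jit),
         v + true_x[1] + random.gauss(0, sigma_jit)) for (u, v) in model]
random.shuffle(data)   # hide the dot identities, as in rule r3

def objective(x, sigma=0.1):
    # sum_{m,i} exp(-|x_i - x - u_m|^2 / (2 sigma^2)); translation prior dropped
    return sum(math.exp(-((xi - x[0] - u) ** 2 + (yi - x[1] - v) ** 2)
                        / (2.0 * sigma ** 2))
               for (xi, yi) in data for (u, v) in model)

# crude grid search over candidate translations
grid = [i / 25.0 for i in range(-50, 51)]
best = max(((gx, gy) for gx in grid for gy in grid), key=objective)
assert abs(best[0] - true_x[0]) < 0.1 and abs(best[1] - true_x[1]) < 0.1
```

Note that no explicit permutation variables appear: summing exp terms over all (m, i) pairs is precisely what replaces them.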
for T = 1. This objective function has a simple interpretation when σ → ∞: it
minimizes the Euclidean distance between two Gaussian-blurred images containing
the x_i dots and a shifted version of the u_m dots respectively:

    argmin_{α,x} ∫ dz |G * I1(z) − G * I2(z − x)|²
    = argmin_{α,x} ∫ dz |G_{σ/√2} * Σ_i δ(z − x_i) − G_{σ/√2} * Σ_m δ(z − x − u^α_m)|²
    = argmin_{α,x} [ C1 − 2 Σ_{m,i} ∫ dz exp −(1/σ²)(|z − x_i|² + |z − x − u^α_m|²) ]
    = argmax_{α,x} Σ_{m,i} exp −(1/(2σ²)) |x_i − x − u^α_m|²    (4)
Deterministic annealing from T = ∞ down to T = 1, which is a good strategy for
finding global maxima in equation (3), corresponds to a coarse-to-fine correlation
matching algorithm: the global shift x is computed by repeated local optimization
while gradually decreasing the Gaussian blur parameter σ down to σ_jit.
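That annealing schedule can be sketched directly (noise-free toy data; all parameter values are illustrative assumptions): start with a large blur σ, hill-climb the shift locally, then shrink σ.

```python
import math

model = [(0.0, 0.0), (1.5, 0.2), (0.3, 1.1)]
true_shift = (3.0, -2.0)
data = [(u + true_shift[0], v + true_shift[1]) for (u, v) in model]

def score(x, sigma):
    # blurred-correlation objective: sum_{m,i} exp(-|x_i - x - u_m|^2 / (2 sigma^2))
    return sum(math.exp(-((xi - x[0] - u) ** 2 + (yi - x[1] - v) ** 2)
                        / (2.0 * sigma ** 2))
               for (xi, yi) in data for (u, v) in model)

# coarse-to-fine: local ascent on the shift while shrinking the blur sigma
x = (0.0, 0.0)
for sigma in (4.0, 2.0, 1.0, 0.5, 0.25, 0.1):
    step, improved = sigma / 2.0, True
    while improved:
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = (x[0] + dx, x[1] + dy)
            if score(cand, sigma) > score(x, sigma):
                x, improved = cand, True
assert abs(x[0] - true_shift[0]) < 0.1 and abs(x[1] - true_shift[1]) < 0.1
```

At large σ the objective is a single broad bump, so purely local moves suffice; the fine scales then only refine the shift, which is the point of the coarse-to-fine interpretation.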
The approximation (3) has the effect of eliminating the discrete Pmi variables, rather
than replacing them with continuous versions Vmi. The same can be said for the
"elastic net" method [Durbin and Willshaw, 1987]. Compared to the elastic net, the
present objective function is simpler, more symmetric between rows and columns,
has a nicer interpretation in terms of known algorithms (correlation matching in
scale space), and is expected to be less accurate.
Visual Grammars and their Neural Nets
3 EXPERIMENTS IN IMAGE REGISTRATION
Equation (3) is an objective function for recovering the global two-dimensional (2D)
translation of a model consisting of arbitrarily placed dots, to match up with similar dots with jittered positions. We use it instead to find the best 2D rotation
and horizontal translation, for two images which actually differ by a horizontal
3D translation with roughly constant camera orientation. The images consist of
line segments rather than single dots, some of which are missing or extra data. In
addition, there are strong boundary effects due to parts of the scene being translated outside the camera's field of view. The jitter is replaced by whatever positional inaccuracies come from an actual camera producing an 128 x 128 image
[Williams and Hanson, 1988] which is then processed by a high quality line-segment
finding algorithm [Burns, 1986]. Better results would be expected of objective functions derived from grammars which explicitly model more of these noise processes,
such as the grammars described in Section 4.
We experimented with minimizing this objective function with respect to unknown
global translations and (sometimes) rotations, using the continuation method and
sets of line segments derived from real images. The results are shown in Figures 2,
3 and 4.
4 MORE GRAMMARS
Going beyond the random-dot grammar, we have studied several grammars of increasing complexity. One can add rotation and dot deletion as new sources of noise,
or introduce a two-level hierarchy, in which models are sets of clusters of dots. In
[Mjolsness et al., 1991] we present a grammar for multiple curves in a single image,
each of which is represented in the image as a set of dots that may be hard to
group into their original curves. This grammar illustrates how flexible objects can
be handled in our formalism.
We approach a modest plateau of generality by augmenting the hierarchical version
of the random-dot grammar with multiple objects in a single scene. This degree of
complexity is sufficient to introduce many interesting features of knowledge representation in high-level vision, such as multiple instances of a model in a scene, as
well as requiring segmentation and grouping as part of the recognition process. We
have shown [Mjolsness, 1991] that such a grammar can yield neural networks nearly
identical to the "Frameville" neural networks we have previously studied as a means
of mixing simple Artificial Intelligence frame systems (or semantic networks) with
optimization-based neural networks. What is more, the transformation leading to
Frameville is very natural. It simply pushes the permutation matrix as far back
into the grammar as possible.
Figure 2: A simple image registration problem. (a) Stair image. (b) Long line
segments derived from stair image. (c) Two unregistered line segment images derived from two images taken from two horizontally translated viewpoints in three
dimensions. The images are a pair of successive frames in an image sequence. (d)
Registered viersions of same data: superposed long line segments extracted from
two stair images (taken from viewpoints differing by a small horizontal translation
in three dimensions) that have been optimally registered in two dimensions.
Visual Grammars and their Neural Nets
Figure 3: Continuation method (deterministic annealing). (a) Objective function at σ = .0863. (b) Objective function at σ = .300. (c) Objective function at σ = .105. (d) Objective function at σ = .0364.
Figure 4: Image sequence displacement recovery. Frame 2 is matched to frames 3-8
in the stair image sequence. Horizontal displacements are recovered. Other starting
frames yield similar results except for frame 1, which was much worse. (a) Horizontal displacement recovered, assuming no 2-d rotation. Recovered displacement as
a function of frame number is monotonic. (b) Horizontal displacement recovered,
along with 2-d rotation which is found to be small except for the final frame. Displacements are in qualitative agreement with (a), more so for small displacements.
(c) Objective function before and after displacement is recovered (upper and lower
curves) without rotation. Note gradual decrease in dE with frame number (and
hence with displacement). (d) Objective function before and after displacement is
recovered (upper and lower curves) with rotation.
Acknowledgements
Charles Garrett performed the computer simulations and helped formulate the line-matching objective function used therein.
References
[Burns, 1986] Burns, J. B. (1986). Extracting straight lines. IEEE Trans. PAMI, 8(4):425-455.
[Durbin and Willshaw, 1987] Durbin, R. and Willshaw, D. (1987). An analog approach to the travelling salesman problem using an elastic net method. Nature, 326:689-691.
[Mjolsness, 1991] Mjolsness, E. (1991). Bayesian inference on visual grammars by neural nets that optimize. Technical Report YALEU/DCS/TR-854, Yale University Department of Computer Science.
[Mjolsness et al., 1991] Mjolsness, E., Rangarajan, A., and Garrett, C. (1991). A neural net for reconstruction of multiple curves with a visual grammar. In Seattle International Joint Conference on Neural Networks.
[Williams and Hanson, 1988] Williams, L. R. and Hanson, A. R. (1988). Translating optical flow into token matches and depth from looming. In Second International Conference on Computer Vision, pages 441-448. Staircase test image sequence.
[Yuille, 1990] Yuille, A. L. (1990). Generalized deformable models, statistical physics, and matching problems. Neural Computation, 2(1):1-24.
An Approximate, Efficient Solver for LP Rounding
Srikrishna Sridhar^1, Victor Bittorf^1, Ji Liu^1, Ce Zhang^1,
Christopher Ré^2, Stephen J. Wright^1
^1 Computer Sciences Department, University of Wisconsin-Madison, Madison, WI 53706
^2 Computer Science Department, Stanford University, Stanford, CA 94305
{srikris,vbittorf,ji-liu,czhang,swright}@cs.wisc.edu
[email protected]
Abstract
Many problems in machine learning can be solved by rounding the solution of an
appropriate linear program (LP). This paper shows that we can recover solutions
of comparable quality by rounding an approximate LP solution instead of the exact one. These approximate LP solutions can be computed efficiently by applying
a parallel stochastic-coordinate-descent method to a quadratic-penalty formulation of the LP. We derive worst-case runtime and solution quality guarantees of
this scheme using novel perturbation and convergence analysis. Our experiments
demonstrate that on such combinatorial problems as vertex cover, independent set
and multiway-cut, our approximate rounding scheme is up to an order of magnitude faster than Cplex (a commercial LP solver) while producing solutions of
similar quality.
1 Introduction
A host of machine-learning problems can be solved effectively as approximations of such NP-hard
combinatorial problems as set cover, set packing, and multiway-cuts [8, 11, 16, 22]. A popular
scheme for solving such problems is called LP rounding [22, chs. 12-26], which consists of the
following three-step process: (1) construct an integer (binary) linear program (IP) formulation of a
given problem; (2) relax the IP to an LP by replacing the constraints x ∈ {0, 1} by x ∈ [0, 1]; and
(3) round an optimal solution of the LP to create a feasible solution for the original IP problem. LP
rounding is known to work well on a range of hard problems, and comes with theoretical guarantees
for runtime and solution quality.
The Achilles' heel of LP-rounding is that it requires solutions of LPs of possibly extreme scale.
Despite decades of work on LP solvers, including impressive advances during the 1990s, commercial
codes such as Cplex or Gurobi may not be capable of handling problems of the required scale. In this
work, we propose an approximate LP solver suitable for use in the LP-rounding approach, for very
large problems. Our intuition is that in LP rounding, since we ultimately round the LP to obtain an
approximate solution of the combinatorial problem, a crude solution of the LP may suffice. Hence,
an approach that can find approximate solutions of large LPs quickly may be suitable, even if it is
inefficient for obtaining highly accurate solutions.
This paper focuses on the theoretical and algorithmic aspects of finding approximate solutions to an
LP, for use in LP-rounding schemes. Our three main technical contributions are as follows: First, we
show that one can approximately solve large LPs by forming convex quadratic programming (QP)
approximations, then applying stochastic coordinate descent to these approximations. Second, we
derive a novel convergence analysis of our method, based on Renegar's perturbation theory for linear
programming [17]. Finally, we derive bounds on runtime as well as worst-case approximation ratio
of our rounding schemes. Our experiments demonstrate that our approach, called Thetis, produces
solutions of comparable quality to state-of-the-art approaches on such tasks as noun-phrase chunking
and entity resolution. We also demonstrate, on three different classes of combinatorial problems, that
Thetis can outperform Cplex (a state-of-the-art commercial LP and IP solver) by up to an order of
magnitude in runtime, while achieving comparable solution quality.
Related Work. Recently, there has been some focus on the connection between LP relaxations
and maximum a posteriori (MAP) estimation problems [16, 19]. Ravikumar et al. [16] proposed rounding schemes for iterative LP solvers to facilitate MAP inference in graphical models. In contrast, we propose to use stochastic descent methods to solve a QP relaxation; this allows us to take advantage of recent results on asynchronous parallel methods of this type [12, 14]. Recently, Makari et al. [13] proposed an intriguing parallel scheme for packing and covering problems. In contrast, our results apply to more general LP relaxations, including set-partitioning problems like multiway-cut. Additionally, the runtime of our algorithm is less sensitive to approximation error. For an error ε, the bound on the runtime of the algorithm in [13] grows as ε⁻⁵, while the bound on our algorithm's runtime grows as ε⁻².
2 Background: Approximating NP-hard problems with LP Rounding
In this section, we review the theory of LP-rounding based approximation schemes for NP-hard
combinatorial problems. We use the vertex cover problem as an example, as it is the simplest
nontrivial setting that exposes the main ideas of this approach.
Preliminaries. For a minimization problem Π, an algorithm ALG is an α-factor approximation for Π, for some α > 1, if any solution produced by ALG has an objective value at most α times the value of an optimal (lowest cost) solution. For some problems, such as vertex cover, there is a constant-factor approximation scheme (α = 2). For others, such as set cover, the value of α can be as large as O(log N), where N is the number of sets.
An LP-rounding based approximation scheme for the problem Π first constructs an IP formulation of Π which we denote as "P". This step is typically easy to perform, but the IP formulation P is, in theory, as hard to solve as the original problem Π. In this work, we consider applications in which the only integer variables in the IP formulation are binary variables x ∈ {0, 1}. The second step in LP rounding is a relax / solve step: We relax the constraints in P to obtain a linear program LP(P), replacing the binary variables with continuous variables in [0, 1], then solve LP(P). The third step is to round the solution of LP(P) to an integer solution which is feasible for P, thus yielding a candidate solution to the original problem Π. The focus of this paper is on the relax / solve step, which is usually the computational bottleneck in an LP-rounding based approximation scheme.
Example: An Oblivious-Rounding Scheme For Vertex Cover. Let G(V, E) denote a graph with vertex set V and undirected edges E ⊆ (V × V). Let c_v denote a nonnegative cost associated with each vertex v ∈ V. A vertex cover of a graph is a subset of V such that each edge e ∈ E is incident to at least one vertex in this set. The minimum-cost vertex cover is the one that minimizes the sum of terms c_v, summed over the vertices v belonging to the cover. Let us review the "construct," "relax / solve," and "round" phases of an LP-rounding based approximation scheme applied to vertex cover.

In the "construct" phase, we introduce binary variables x_v ∈ {0, 1}, ∀v ∈ V, where x_v is set to 1 if the vertex v ∈ V is selected in the vertex cover and 0 otherwise. The IP formulation is as follows:

    min_x Σ_{v∈V} c_v x_v   s.t.   x_u + x_v ≥ 1 for (u, v) ∈ E and x_v ∈ {0, 1} for v ∈ V.    (1)
Relaxation yields the following LP:

    min_x Σ_{v∈V} c_v x_v   s.t.   x_u + x_v ≥ 1 for (u, v) ∈ E and x_v ∈ [0, 1] for v ∈ V.    (2)
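For concreteness, the data of (2) can be assembled as plain arrays. A minimal sketch (numpy; the function name and the toy graph are illustrative, not from the paper's code):

```python
import numpy as np

def vertex_cover_lp(vertices, edges, cost):
    """Return (c, A) for LP (2): minimize c @ x subject to
    A @ x >= 1 (one row per edge) and bounds 0 <= x <= 1."""
    idx = {v: i for i, v in enumerate(vertices)}
    c = np.array([cost[v] for v in vertices], dtype=float)
    A = np.zeros((len(edges), len(vertices)))
    for r, (u, v) in enumerate(edges):
        A[r, idx[u]] = 1.0   # row encodes x_u + x_v >= 1
        A[r, idx[v]] = 1.0
    return c, A

# Path graph a-b-c with unit costs.
c, A = vertex_cover_lp(["a", "b", "c"],
                       [("a", "b"), ("b", "c")],
                       {"a": 1.0, "b": 1.0, "c": 1.0})
```

Any LP solver accepting these inequality constraints and box bounds can then produce the fractional solution that the rounding phase consumes.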
A feasible solution of the LP relaxation (2) is called a "fractional solution" of the original problem. In the "round" phase, we generate a valid vertex cover by simply choosing the vertices v ∈ V whose fractional solution x_v ≥ 1/2. It is easy to see that the vertex cover generated by such a rounding scheme costs no more than twice the cost of the fractional solution. If the fractional solution chosen for rounding is an optimal solution of (2), then we arrive at a 2-factor approximation scheme for vertex cover. We note here an important property: The rounding algorithm can generate feasible integral solutions while being oblivious of whether the fractional solution is an optimal solution of (2). We formally define the notion of an oblivious rounding scheme as follows.
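The "round" phase itself is a one-liner over the fractional values. A sketch in plain Python (illustrative names; the fractional point below is hand-picked to be feasible for (2), whereas in practice it would come from an LP solver):

```python
# Round phase of the 2-factor vertex cover scheme: select every vertex
# whose fractional value is at least 1/2.

def round_vertex_cover(x, edges):
    cover = {v for v, xv in x.items() if xv >= 0.5}
    # Feasibility: each edge satisfies x_u + x_v >= 1, so max(x_u, x_v) >= 1/2,
    # hence at least one endpoint of every edge is selected.
    assert all(u in cover or v in cover for (u, v) in edges)
    return cover

# Toy instance: path a-b-c with unit costs and a feasible fractional point.
edges = [("a", "b"), ("b", "c")]
cost = {"a": 1.0, "b": 1.0, "c": 1.0}
x = {"a": 0.5, "b": 0.5, "c": 0.5}     # feasible for LP (2)

cover = round_vertex_cover(x, edges)
frac_cost = sum(cost[v] * xv for v, xv in x.items())
int_cost = sum(cost[v] for v in cover)
assert int_cost <= 2 * frac_cost       # the 2-factor guarantee
```

Selected vertices have x_v ≥ 1/2, so their integral cost is at most twice their fractional cost, which is the whole of the 2-factor argument.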
Definition 1. For a minimization problem Π with an IP formulation P whose LP relaxation is denoted by LP(P), a β-factor "oblivious" rounding scheme converts any feasible point x_f ∈ LP(P) to an integral solution x_I ∈ P with cost at most β times the cost of LP(P) at x_f.
Problem Family     | Approximation Factor | Machine Learning Applications
Set Covering       | log(N) [20]          | Classification [3], Multi-object tracking [24]
Set Packing        | es + o(s) [1]        | MAP-inference [19], Natural language [9]
Multiway-cut       | 3/2 − 1/k [5]        | Computer vision [4], Entity resolution [10]
Graphical Models   | Heuristic            | Semantic role labeling [18], Clustering [21]

Figure 1: LP-rounding schemes considered in this paper. The parameter N refers to the number of sets; s refers to s-column sparse matrices; and k refers to the number of terminals. e is Euler's constant.
Given a β-factor oblivious algorithm ALG for the problem Π, one can construct a β-factor approximation algorithm for Π by using ALG to round an optimal fractional solution of LP(P). When we have an approximate solution for LP(P) that is feasible for this problem, rounding can produce an α-factor approximation algorithm for Π for a factor α slightly larger than β, where the difference between α and β takes account of the inexactness in the approximate solution of LP(P). Many LP-rounding schemes (including the scheme for vertex cover discussed in Section 2) are oblivious. We implemented the oblivious LP-rounding algorithms in Figure 1 and report experimental results in Section 4.
3 Main results
In this section, we describe how we can solve LP relaxations approximately, in less time than traditional LP solvers, while still preserving the formal guarantees of rounding schemes. We first define a
notion of approximate LP solution and discuss its consequences for oblivious rounding schemes. We
show that one can use a regularized quadratic penalty formulation to compute these approximate LP
solutions. We then describe a stochastic-coordinate-descent (SCD) algorithm for obtaining approximate solutions of this QP, and mention enhancements of this approach, specifically, asynchronous
parallel implementation and the use of an augmented Lagrangian framework. Our analysis yields a
worst-case complexity bound for solution quality and runtime of the entire LP-rounding scheme.
3.1 Approximating LP Solutions
Consider the LP in the following standard form

    min_x cᵀx   s.t.   Ax = b,  x ≥ 0,        (3)

where c ∈ ℝⁿ, b ∈ ℝᵐ, and A ∈ ℝ^{m×n}, and its corresponding dual

    max_u bᵀu   s.t.   c − Aᵀu ≥ 0.           (4)
Let x* denote an optimal primal solution of (3). An approximate LP solution x̃ that we use for LP-rounding may be infeasible and have objective value different from the optimum cᵀx*. We quantify the inexactness in an approximate LP solution as follows.

Definition 2. A point x̃ is an (ε, δ)-approximate solution of the LP (3) if x̃ ≥ 0 and there exist constants ε > 0 and δ > 0 such that

    ‖Ax̃ − b‖₁ ≤ ε    and    |cᵀx̃ − cᵀx*| ≤ δ|cᵀx*|.
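Definition 2 can be checked mechanically; a small numpy sketch (illustrative names; the optimal value is supplied explicitly here, since in the analysis it is a fixed but unknown quantity):

```python
import numpy as np

def is_eps_delta_approx(x_t, c, A, b, opt_val, eps, delta):
    """Definition 2: x >= 0, ||A x - b||_1 <= eps, and
    |c^T x - c^T x*| <= delta * |c^T x*|."""
    return bool((x_t >= 0).all()
                and np.abs(A @ x_t - b).sum() <= eps
                and abs(c @ x_t - opt_val) <= delta * abs(opt_val))

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; optimal value is 1.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
ok = is_eps_delta_approx(np.array([0.95, 0.02]), c, A, b, 1.0, 0.05, 0.05)
```

The test point violates feasibility by 0.03 and the objective by 0.01, so it qualifies as a (0.05, 0.05)-approximate solution.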
Using Definitions 1 and 2, it is easy to see that a β-factor oblivious rounding scheme can round a (0, δ)-approximate solution to produce a feasible integral solution whose cost is no more than β(1 + δ) times the optimal solution of P. The factor (1 + δ) arises because the rounding algorithm does not have access to an optimal fractional solution. To cope with the infeasibility, we convert an (ε, δ)-approximate solution to a (0, δ̂)-approximate solution where δ̂ is not too large. For vertex cover (2), we prove the following result in Appendix C. (Here, Π_{[0,1]ⁿ}(·) denotes projection onto the unit hypercube in ℝⁿ.)

Lemma 3. Let x̃ be an (ε, δ)-approximate solution to the linear program (2) with ε ∈ [0, 1). Then x̄ = Π_{[0,1]ⁿ}((1 − ε)⁻¹ x̃) is a (0, (1 − ε)⁻¹δ)-approximate solution.
Since x̄ is a feasible solution for (2), the oblivious rounding scheme in Section 2 results in a 2(1 + (1 − ε)⁻¹δ)-factor approximation algorithm. In general, constructing (0, δ̂)-approximate from (ε, δ)-approximate solutions requires reasoning about the structure of a particular LP. In Appendix C, we establish statements analogous to Lemma 3 for packing, covering, and multiway-cut problems.
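For the vertex-cover LP, the repair map of Lemma 3 amounts to a scale-and-clip; a pure-Python sketch on a toy instance (illustrative data; the violated point is chosen so that scaling by 1/(1 − ε) restores feasibility):

```python
def repair(x, eps):
    """Lemma 3: project (1 - eps)^{-1} * x onto the unit hypercube."""
    return {v: min(1.0, max(0.0, xv / (1.0 - eps))) for v, xv in x.items()}

edges = [("a", "b")]
x_tilde = {"a": 0.45, "b": 0.45}   # violates x_a + x_b >= 1 by 0.1
x_bar = repair(x_tilde, eps=0.1)
feasible = all(x_bar[u] + x_bar[v] >= 1.0 - 1e-9 for (u, v) in edges)
```

After scaling, both coordinates become 0.5 and every edge constraint of (2) holds again, at the price of a (1 − ε)⁻¹ factor in the objective.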
3.2 Quadratic Programming Approximation to the LP
We consider the following regularized quadratic penalty approximation to the LP (3), parameterized by a positive constant β, whose solution is denoted by x(β):

    x(β) := arg min_{x ≥ 0} f(x) := cᵀx − ūᵀ(Ax − b) + (β/2)‖Ax − b‖² + (1/(2β))‖x − x̄‖²,    (5)

where ū ∈ ℝᵐ and x̄ ∈ ℝⁿ are arbitrary vectors. (In practice, ū and x̄ may be chosen as approximations to the dual and primal solutions of (3), or simply set to zero.) The quality of the approximation (5) depends on the conditioning of the underlying linear program (3), a concept that was studied by Renegar [17]. Denoting the data for problem (3) by d := (A, b, c), we consider perturbations Δd := (ΔA, Δb, Δc) such that the linear program defined by d + Δd is primal infeasible. The primal condition number δ_P is the infimum of the ratios ‖Δd‖/‖d‖ over all such vectors Δd. The dual condition number δ_D is defined analogously. (Clearly both δ_P and δ_D are in the range [0, 1]; smaller values indicate poorer conditioning.) We have the following result, which is proven in the supplementary material.
Theorem 4. Suppose that δ_P and δ_D are both positive, and let (x*, u*) be any primal-dual solution pair for (3), (4). If we define C̄ := max(‖x* − x̄‖, ‖u* − ū‖), then the unique solution x(β) of (5) satisfies

    ‖Ax(β) − b‖ ≤ (1/β)(1 + √2)C̄,    ‖x(β) − x*‖ ≤ √6 C̄.

If in addition the parameter β ≥ 10C̄/(‖d‖ min(δ_P, δ_D)), then we have

    |cᵀx* − cᵀx(β)| ≤ (1/β)(25C̄/(2δ_P δ_D) + 6C̄² + 6‖x̄‖C̄).
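The objective in (5) and its gradient are cheap to evaluate, which is what the coordinate method of Section 3.3 exploits. A direct numpy transcription with a finite-difference sanity check (a sketch; variable names follow the text):

```python
import numpy as np

def f_and_grad(x, c, A, b, u_bar, x_bar, beta):
    """Objective (5): c^T x - u_bar^T (Ax - b) + (beta/2)||Ax - b||^2
    + (1/(2 beta))||x - x_bar||^2, and its gradient."""
    r = A @ x - b
    f = (c @ x - u_bar @ r + 0.5 * beta * (r @ r)
         + 0.5 / beta * ((x - x_bar) @ (x - x_bar)))
    g = c - A.T @ u_bar + beta * (A.T @ r) + (x - x_bar) / beta
    return f, g

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
c, b = rng.standard_normal(5), rng.standard_normal(3)
u_bar, x_bar, beta = rng.standard_normal(3), rng.standard_normal(5), 2.0
x = rng.standard_normal(5)
f0, g = f_and_grad(x, c, A, b, u_bar, x_bar, beta)
for i in range(5):                      # finite-difference check of g
    e = np.zeros(5); e[i] = 1e-6
    fd = (f_and_grad(x + e, c, A, b, u_bar, x_bar, beta)[0]
          - f_and_grad(x - e, c, A, b, u_bar, x_bar, beta)[0]) / 2e-6
    assert abs(fd - g[i]) < 1e-4
```

Note that the Hessian of f is βAᵀA + (1/β)I, which is where the strong-convexity modulus 1/β and the diagonal bound Lmax of (7) come from.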
In practice, we solve (5) approximately, using an algorithm whose complexity depends on the threshold τ̄ for which the objective is accurate to within τ̄. That is, we seek x̃ such that

    (1/(2β))‖x̃ − x(β)‖² ≤ f(x̃) − f(x(β)) ≤ τ̄,

where the left-hand inequality follows from the fact that f is strongly convex with modulus 1/β. If we define

    τ̄ := C′₀² / (20β³),    C′₀ := 25C̄ / (2‖d‖ δ_P δ_D),    (6)
then by combining some elementary inequalities with the results of Theorem 4, we obtain the bounds

    |cᵀx̃ − cᵀx*| ≤ (1/β)(25C̄/(2δ_P δ_D) + 6C̄² + 6‖x̄‖C̄),    ‖Ax̃ − b‖ ≤ (1/β)((1 + √2)C̄ + 25C̄/(δ_P δ_D)).

The following result is almost an immediate consequence.
Theorem 5. Suppose that δ_P and δ_D are both positive and let (x*, u*) be any primal-dual optimal pair. Suppose that C̄ is defined as in Theorem 4. Then for any given positive pair (ε, δ), we have that x̃ satisfies the inequalities in Definition 2 provided that β satisfies the following three lower bounds:

    β ≥ 10C̄/(‖d‖ min(δ_P, δ_D)),
    β ≥ (1/(δ|cᵀx*|)) (25C̄/(2δ_P δ_D) + 6C̄² + 6‖x̄‖C̄),
    β ≥ (1/ε) ((1 + √2)C̄ + 25C̄/(2δ_P δ_D)).
For an instance of vertex cover with n nodes and m edges, we can show that δ_P⁻¹ = O(n^{1/2}(m + n)^{1/2}) and δ_D⁻¹ = O((m + n)^{1/2}) (see Appendix D). The values x̄ = 1 and ū = 0 yield C̄ ≤ √m. We therefore obtain β = O(m^{1/2} n^{1/2} (m + n)(min{ε, δ|cᵀx*|})⁻¹).
Algorithm 1 SCD method for (5)
 1: Choose x₀ ∈ ℝⁿ; j ← 0
 2: loop
 3:    Choose i(j) ∈ {1, 2, ..., n} randomly with equal probability;
 4:    Define x_{j+1} from x_j by setting [x_{j+1}]_{i(j)} ← max(0, [x_j]_{i(j)} − (1/Lmax)[∇f(x_j)]_{i(j)}), leaving other components unchanged;
 5:    j ← j + 1;
 6: end loop

3.3 Solving the QP Approximation: Coordinate Descent
We propose the use of a stochastic coordinate descent (SCD) algorithm [12] to solve (5). Each step of SCD chooses a component i ∈ {1, 2, ..., n} and takes a step in the ith component of x along the partial gradient of (5) with respect to this component, projecting if necessary to retain nonnegativity. This simple procedure depends on the following constant Lmax, which bounds the diagonals of the Hessian in the objective of (5):

    Lmax = β (max_{i=1,2,...,n} Aᵀ_{:i} A_{:i}) + 1/β,    (7)

where A_{:i} denotes the ith column of A. Algorithm 1 describes the SCD method. Convergence results for Algorithm 1 can be obtained from [12]. In this result, E(·) denotes expectation over all the random variables i(j) indicating the update indices chosen at each iteration. We need the following quantities:

    l := 1/β,    R := sup_{j=1,2,...} ‖x_j − x(β)‖,    (8)
where x_j denotes the jth iterate of the SCD algorithm. (Note that R bounds the maximum distance that the iterates travel from the solution x(β) of (5).)

Theorem 6. For Algorithm 1 we have

    E‖x_j − x(β)‖² + (2/Lmax) E(f(x_j) − f*) ≤ (1 − l/(n(l + Lmax)))^j (R² + (2/Lmax)(f(x₀) − f*)),

where f* := f(x(β)). We obtain high-probability convergence of f(x_j) to f* in the following sense: For any η ∈ (0, 1) and any small τ̄, we have P(f(x_j) − f* < τ̄) ≥ 1 − η, provided that

    j ≥ (n(l + Lmax)/l) log( (Lmax/(2τ̄η)) (R² + (2/Lmax)(f(x₀) − f*)) ).
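Algorithm 1 is a few lines in practice. A serial numpy sketch (the experiments use a compiled parallel implementation; the 1-d data below is chosen so that the minimizer of (5) has the closed form shown in the comment):

```python
import numpy as np

def scd(c, A, b, u_bar, x_bar, beta, iters, rng):
    """Serial sketch of Algorithm 1 for problem (5) (illustrative)."""
    m, n = A.shape
    Lmax = beta * (A * A).sum(axis=0).max() + 1.0 / beta   # constant (7)
    x = np.zeros(n)
    r = A @ x - b                        # maintain the residual Ax - b
    for _ in range(iters):
        i = rng.integers(n)
        g = (c[i] - A[:, i] @ u_bar + beta * (A[:, i] @ r)
             + (x[i] - x_bar[i]) / beta)          # [grad f(x)]_i
        new_xi = max(0.0, x[i] - g / Lmax)
        r += A[:, i] * (new_xi - x[i])   # O(m) residual update
        x[i] = new_xi
    return x

# 1-d toy: c = 1, A = [[1]], b = 1, beta = 10 gives the unique minimizer
# x(beta) = 9/10.1 of (5) (set the gradient 1 + 10(x - 1) + x/10 to zero).
x = scd(np.array([1.0]), np.array([[1.0]]), np.array([1.0]),
        np.zeros(1), np.zeros(1), 10.0, 50, np.random.default_rng(0))
```

Keeping the residual r up to date is what makes a single coordinate step cost O(m) rather than a full matrix-vector product per iteration.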
Worst-Case Complexity Bounds. We now combine the analysis in Sections 3.2 and 3.3 to derive a worst-case complexity bound for our approximate LP solver. Supposing that the columns of A have norm O(1), we have from (7) and (8) that l = 1/β and Lmax = O(β). Theorem 6 indicates that we require O(nβ²) iterations to solve (5) (modulo a log term). For the values of β described in Section 3.2, this translates to a complexity estimate of O(m³n²/ε²).

In order to obtain the desired accuracy in terms of feasibility and function value of the LP (captured by ε) we need to solve the QP to within the different, tighter tolerance τ̄ introduced in (6). Both tolerances are related to the choice of penalty parameter β in the QP. Ignoring here the dependence on dimensions m and n, we note the relationships β ∼ ε⁻¹ (from Theorem 5) and τ̄ ∼ ε³ (from (6)). Expressing all quantities in terms of ε, and using Theorem 6, we see an iteration complexity of ε⁻² for SCD (ignoring log terms). The linear convergence rate of SCD is instrumental to this favorable value. By contrast, standard variants of stochastic-gradient descent (SGD) applied to the QP yield poorer complexity. For diminishing-step or constant-step variants of SGD, we see complexity of ε⁻⁷, while for robust SGD, we see ε⁻¹⁰. (Besides the inverse dependence on τ̄ or its square in the analysis of these methods, there is a contribution of order ε⁻² from the conditioning of the QP.)
3.4 Enhancements
We mention two important enhancements that improve the efficiency of the approach outlined above.
The first is an asynchronous parallel implementation of Algorithm 1 and the second is the use of an
augmented Lagrangian framework rather than ?one-shot? approximation by the QP in (5).
Task      Formulation       PV    NNZ   |        Thetis         |    Gibbs Sampling
                                        |  P    R    F1   Rank  |  P    R    F1   Rank
CoNLL     Skip-chain CRF    25M   51M   | .87  .90  .89  10/13  | .86  .90  .88  10/13
TAC-KBP   Factor graph      62K   115K  | .79  .79  .79   6/17  | .80  .80  .80   6/17

Figure 2: Solution quality of our LP-rounding approach on two tasks. PV is the number of primal variables and NNZ is the number of non-zeros in the constraint matrix of the LP in standard form. The rank indicates where we would have been placed, had we participated in the competition.
Asynchronous Parallel SCD. An asynchronous parallel version of Algorithm 1, described in
[12], is suitable for execution on multicore, shared-memory architectures. Each core, executing
a single thread, has access to the complete vector x. Each thread essentially runs its own version
of Algorithm 1 independently of the others, choosing and updating one component i(j) of x on
each iteration. Between the time a thread reads x and performs its update, x usually will have been
updated by several other threads. Provided that the number of threads is not too large (according to
criteria that depends on n and on the diagonal dominance properties of the Hessian matrix), and the
step size is chosen appropriately, the convergence rate is similar to the serial case, and near-linear
speedup is observed.
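The shared-vector scheme can be imitated with threads, purely as an illustration (a toy sketch: Python threads give no real parallel speedup here, ū and x̄ are set to zero for brevity, and the step-size and thread-count conditions from [12] that make the real asynchronous analysis sound are ignored):

```python
import threading
import numpy as np

def async_scd(c, A, b, beta, iters_per_thread, n_threads=4):
    """Toy Hogwild-style SCD on (5) with u_bar = x_bar = 0."""
    n = A.shape[1]
    Lmax = beta * (A * A).sum(axis=0).max() + 1.0 / beta
    x = np.zeros(n)                       # shared iterate, updated in place

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(iters_per_thread):
            i = rng.integers(n)
            r = A @ x - b                 # reads of x may be slightly stale
            g = c[i] + beta * (A[:, i] @ r) + x[i] / beta
            x[i] = max(0.0, x[i] - g / Lmax)

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
c = rng.standard_normal(6)
b = rng.standard_normal(4)
x = async_scd(c, A, b, beta=2.0, iters_per_thread=4000)
```

Even with stale reads, the iterate settles at a point where the projected gradient of (5) vanishes, which is what the analysis in [12] formalizes for bounded delays.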
Augmented Lagrangian Framework. It is well known (see for example [2, 15]) that the quadratic-penalty approach can be extended to an augmented Lagrangian framework, in which a sequence of problems of the form (5) is solved, with the primal and dual solution estimates x̄ and ū (and possibly the penalty parameter β) updated between iterations. Such a "proximal method of multipliers" for LP was described in [23]. We omit a discussion of the convergence properties of the algorithm here, but note that the quality of the solution depends on the values of x̄, ū, and β at the last iteration before convergence is declared. By applying Theorem 5, we note that the constant C̄ is smaller when x̄ and ū are close to the primal and dual solution sets, thus improving the approximation and reducing the need to increase β to a larger value to obtain an approximate solution of acceptable accuracy.
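The outer loop is short; a numpy sketch of this proximal-multiplier scheme around (5), with the subproblem solved here by plain projected gradient (illustrative parameters; the toy LP has optimum x* = (1, 0), and the multiplier update ū ← ū − β(Ax − b) matches the sign convention of (5)):

```python
import numpy as np

def solve_lp_approx(c, A, b, beta=10.0, outer=20, inner=2000):
    """Augmented Lagrangian outer loop around subproblem (5)."""
    m, n = A.shape
    u_bar = np.zeros(m)                  # dual estimate
    x_bar = np.zeros(n)                  # primal estimate (prox center)
    L = beta * np.linalg.norm(A, 2) ** 2 + 1.0 / beta
    for _ in range(outer):
        x = x_bar.copy()
        for _ in range(inner):           # approximately minimize (5)
            r = A @ x - b
            g = c - A.T @ u_bar + beta * (A.T @ r) + (x - x_bar) / beta
            x = np.maximum(0.0, x - g / L)
        u_bar = u_bar - beta * (A @ x - b)   # multiplier update
        x_bar = x
    return x

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; optimum x* = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = solve_lp_approx(c, A, b)
```

At a fixed point, the prox term's gradient vanishes (x = x̄), so the outer loop does not bias the limit: x and ū approach a primal-dual optimal pair of (3)-(4).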
4 Experiments
Our experiments address two main questions: (1) Is our approximate LP-rounding scheme useful in
graph analysis tasks that arise in machine learning? and (2) How does our approach compare to a
state-of-the-art commercial solver? We give favorable answers to both questions.
4.1 Is Our Approximate LP-Rounding Scheme Useful in Graph Analysis Tasks?
LP formulations have been used to solve MAP inference problems on graphical models [16], but
general-purpose LP solvers have rarely been used, for reasons of scalability. We demonstrate that
the rounded solutions obtained using Thetis are of comparable quality to those obtained with stateof-the-art systems. We perform experiments on two different tasks: entity linking and text chunking.
For each task, we produce a factor graph [9], which consists of a set of random variables and a set
of factors to describe the correlation between random variables. We then run MAP inference on the
factor graph using the LP formulation in [9] and compare the quality of the solutions obtained by
Thetis with a Gibbs sampling-based approach [26]. We follow the LP-rounding algorithm in [16]
to solve the MAP estimation problem. For entity linking, we use the TAC-KBP 2010 benchmark1 .
The input graphical model has 12K boolean random variables and 17K factors. For text chunking,
we use the CoNLL 2000 shared task2 . The factor graph contained 47K categorical random variables
(with domain size 23) and 100K factors. We use the training sets provided by TAC-KBP 2010 and
CoNLL 2000 respectively. We evaluate the quality of both approaches using the official evaluation
scripts and evaluation data sets provided by each challenge. Figure 2 contains a description of the
three relevant quality metrics, precision (P), recall (R) and F1-scores. Figure 2 demonstrates that our
algorithm produces solutions of quality comparable with state-of-the-art approaches for these graph
analysis tasks.
4.2 How does our proposed approach compare to a state-of-the-art commercial solver?
We conducted numerical experiments on three different combinatorial problems that commonly arise
in graph analysis tasks in machine learning: vertex cover, independent set, and multiway cuts. For
1 http://nlp.cs.qc.cuny.edu/kbp/2010/
2 http://www.cnts.ua.ac.be/conll2000/chunking/
each problem, we compared the performance of our LP solver against the LP and IP solvers of Cplex
(v12.5) (denoted as Cplex-LP and Cplex-IP respectively). The two main goals of this experiment
are to: (1) compare the quality of the integral solutions obtained using LP-rounding with the integral
solutions from Cplex-IP and (2) compare wall-clock times required by Thetis and Cplex-LP to solve
the LPs for the purpose of LP-rounding.
Datasets. Our tasks are based on two families of graphs. The first family of instances (frb59-26-1
to frb59-26-5) was obtained from Bhoslib3 (Benchmark with Hidden Optimum Solutions); they are
considered difficult problems [25]. The instances in this family are similar; the first is reported in the
figures of this section, while the remainder appear in Appendix E. The second family of instances
are social networking graphs obtained from the Stanford Network Analysis Platform (SNAP)4 .
System Setup. Thetis was implemented using a combination of C++ (for Algorithm 1) and Matlab (for the augmented Lagrangian framework). Our implementation of the augmented Lagrangian
framework was based on [6]. All experiments were run on a machine with four Intel Xeon E7-4450 processors (40 cores @ 2 GHz), 256 GB of RAM, and a 15-disk RAID0, running Linux 3.8.4. Cplex used 32 (of the
40) cores available in the machine, and for consistency, our implementation was also restricted to
32 cores. Cplex implements presolve procedures that detect redundancy, and substitute and eliminate variables to obtain equivalent, smaller LPs. Since the aim of this experiment is compare the
algorithms used to solve LPs, we ran both Cplex-LP and Thetis on the reduced LPs generated by
the presolve procedure of Cplex-LP. Both Cplex-LP and Thetis were run to a tolerance of ε = 0.1.
Additional experiments with Cplex-LP run using its default tolerance options are reported in Appendix E. We used the barrier optimizer while running Cplex-LP. All codes were provided with a
time limit of 3600 seconds excluding the time taken for preprocessing as well as the runtime of the
rounding algorithms that generate integral solutions from fractional solutions.
Tasks. We solved the vertex cover problem using the approximation algorithm described in Section 2. We solved the maximum independent set problem using a variant of the es + o(s)-factor
approximation in [1] where s is the maximum degree of a node in the graph (see Appendix C for
details). For the multiway-cut problem (with k = 3) we used the (3/2 − 1/k)-factor approximation
algorithm described in [22]. The details of the transformation from approximate infeasible solutions to feasible solutions are provided in Appendix C. Since the rounding schemes for maximumindependent set and multiway-cut are randomized, we chose the best feasible integral solution from
10 repetitions.
Instance    | VC (min): PV  NNZ   S    Q   | MC (min): PV   NNZ    S    Q  | MIS (max): PV  NNZ    S    Q
frb59-26-1  |      0.12 0.37  2.8 1.04     |      0.75  3.02 53.3 1.01     |       0.12 0.38   5.3 0.36
Amazon      |      0.39 1.17  8.4 1.23     |      5.89 23.2    -  0.42     |       0.39 1.17   7.4 0.82
DBLP        |      0.37 1.13  8.3 1.25     |      6.61 26.1    -  0.33     |       0.37 1.13   8.5 0.88
Google+     |      0.71 2.14  9.0 1.21     |      9.24 36.8    -  0.83     |       0.71 2.14  10.2 0.82
Figure 3: Summary of wall-clock speedup (in comparison with Cplex-LP) and solution quality (in comparison with Cplex-IP) of Thetis on three graph analysis problems. Each code is run with a time limit of one hour and parallelized over 32 cores, with '-' indicating that the code reached the time limit. PV is the number of primal variables while NNZ is the number of nonzeros in the constraint matrix of the LP in standard form (both in millions). S is the speedup, defined as the time taken by Cplex-LP divided by the time taken by Thetis. Q is the ratio of the solution objective obtained by Thetis to that reported by Cplex-IP. For minimization problems (VC and MC) lower Q is better; for maximization problems (MIS) higher Q is better. For MC, a value of Q < 1 indicates that Thetis found a better solution than Cplex-IP found within the time limit.
Results. The results are summarized in Figure 3, with additional details in Figure 4. We discuss
the results for the vertex cover problem. On the Bhoslib instances, the integral solutions from
Thetis were within 4% of the documented optimal solutions. In comparison, Cplex-IP produced
3 http://www.nlsde.buaa.edu.cn/~kexu/benchmarks/graph-benchmarks.htm
4 http://snap.stanford.edu/
[Figure 4 table: per-instance wall-clock times and objective values for the VC, MC (both min) and MIS (max) problems on frb59-26-1, Amazon, DBLP and Google+, comparing Cplex-IP (BFS, gap in %, time in seconds), Cplex-LP (LP objective, RSol, time) and Thetis (LP objective, RSol); see the caption for column definitions.]
Figure 4: Wall-clock time and quality of fractional and integral solutions for three graph analysis problems using Thetis, Cplex-IP and Cplex-LP. Each code was given a time limit of one hour, with '-' indicating a timeout. BFS is the objective value of the best integer feasible solution found by Cplex-IP. The gap is defined as (BFS - BB)/BFS, where BB is the best known solution bound found by Cplex-IP within the time limit. A gap of '-' indicates that the problem was solved to within 0.01% accuracy, and NA indicates that Cplex-IP was unable to find a valid solution bound. LP is the objective value of the LP solution, and RSol is the objective value of the rounded solution.
integral solutions that were within 1% of the documented optimal solutions, but required an hour for each of the instances. Although the LP solutions obtained by Thetis were less accurate than those obtained by Cplex-LP, the rounded solutions from Thetis and Cplex-LP are almost exactly the same. In summary, the LP-rounding approaches using Thetis and Cplex-LP obtain integral solutions of quality comparable to Cplex-IP, but Thetis is about three times faster than Cplex-LP.
We observed a similar trend on the large social networking graphs. We were able to recover integral solutions of comparable quality to Cplex-IP, but seven to eight times faster than using LP-rounding with Cplex-LP. We make two additional observations. First, the difference between the optimal fractional and integral solutions for these instances is much smaller than for frb59-26-1. Second, we recorded unpredictable performance of Cplex-IP on large instances. Notably, Cplex-IP was able to find the optimal solution for the Amazon and DBLP instances, but timed out on Google+, which is of comparable size. On some instances, Cplex-IP outperformed even Cplex-LP in wall-clock time, due to specialized presolve strategies.
5 Conclusion
We described Thetis, an LP rounding scheme based on an approximate solver for LP relaxations of combinatorial problems. We derived worst-case runtime and solution quality bounds for our scheme, and demonstrated that our approach was faster than an alternative based on a state-of-the-art LP solver, while producing rounded solutions of comparable quality.
Acknowledgements
SS is generously supported by ONR award N000141310129. JL is generously supported in part by NSF awards DMS-0914524 and DMS-1216318 and ONR award N000141310129. CR's work on this project is generously supported by NSF CAREER award IIS-1353606, NSF award CCF-1356918, the ONR under awards N000141210041 and N000141310129, a Sloan Research Fellowship, and gifts from Oracle and Google. SJW is generously supported in part by NSF awards DMS-0914524 and DMS-1216318, ONR award N000141310129, DOE award DE-SC0002283, and Subcontract 3F-30222 from Argonne National Laboratory. Any recommendations, findings or opinions expressed in this work are those of the authors and do not necessarily reflect the views of any of the above sponsors.
References
[1] Nikhil Bansal, Nitish Korula, Viswanath Nagarajan, and Aravind Srinivasan. Solving packing integer programs via randomized rounding with alterations. Theory of Computing, 8(1):533–565, 2012.
[2] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[3] Jacob Bien and Robert Tibshirani. Classification by set cover: The prototype vector machine. arXiv preprint arXiv:0908.2284, 2009.
[4] Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26:1124–1137, 2004.
[5] Gruia Călinescu, Howard Karloff, and Yuval Rabani. An improved approximation algorithm for multiway cut. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 48–52. ACM, 1998.
[6] Jonathan Eckstein and Paulo J. S. Silva. A practical relative error criterion for augmented Lagrangians. Mathematical Programming, pages 1–30, 2010.
[7] Dorit S. Hochbaum. Approximation algorithms for the set covering and vertex cover problems. SIAM Journal on Computing, 11(3):555–556, 1982.
[8] V. K. Koval and M. I. Schlesinger. Two-dimensional programming in image analysis problems. USSR Academy of Science, Automatics and Telemechanics, 8:149–168, 1976.
[9] Frank R. Kschischang, Brendan J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[10] Taesung Lee, Zhongyuan Wang, Haixun Wang, and Seung-won Hwang. Web scale entity resolution using relational evidence. Technical report, Microsoft Research, 2011.
[11] Victor Lempitsky and Yuri Boykov. Global optimization for shape fitting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), pages 1–8. IEEE, 2007.
[12] Ji Liu, Stephen J. Wright, Christopher Ré, and Victor Bittorf. An asynchronous parallel stochastic coordinate descent algorithm. Technical report, University of Wisconsin-Madison, October 2013.
[13] F. Manshadi, Baruch Awerbuch, Rainer Gemulla, Rohit Khandekar, Julián Mestre, and Mauro Sozio. A distributed algorithm for large-scale generalized matching. Proceedings of the VLDB Endowment, 2013.
[14] Feng Niu, Benjamin Recht, Christopher Ré, and Stephen J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. arXiv preprint arXiv:1106.5730, 2011.
[15] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2006.
[16] Pradeep Ravikumar, Alekh Agarwal, and Martin J. Wainwright. Message-passing for graph-structured linear programs: Proximal methods and rounding schemes. The Journal of Machine Learning Research, 11:1043–1080, 2010.
[17] J. Renegar. Some perturbation theory for linear programming. Mathematical Programming, Series A, 65:73–92, 1994.
[18] Dan Roth and Wen-tau Yih. Integer linear programming inference for conditional random fields. In Proceedings of the 22nd International Conference on Machine Learning, pages 736–743. ACM, 2005.
[19] Sujay Sanghavi, Dmitry Malioutov, and Alan S. Willsky. Linear programming analysis of loopy belief propagation for weighted matching. In Advances in Neural Information Processing Systems, pages 1273–1280, 2007.
[20] Aravind Srinivasan. Improved approximation guarantees for packing and covering integer programs. SIAM Journal on Computing, 29(2):648–670, 1999.
[21] Jurgen Van Gael and Xiaojin Zhu. Correlation clustering for crosslingual link detection. In IJCAI, pages 1744–1749, 2007.
[22] Vijay V. Vazirani. Approximation Algorithms. Springer, 2004.
[23] Stephen J. Wright. Implementing proximal point methods for linear programming. Journal of Optimization Theory and Applications, 65(3):531–554, 1990.
[24] Zheng Wu, Ashwin Thangali, Stan Sclaroff, and Margrit Betke. Coupling detection and data association for multiple object tracking. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1948–1955. IEEE, 2012.
[25] Ke Xu and Wei Li. Many hard examples in exact phase transitions. Theoretical Computer Science, 355(3):291–302, 2006.
[26] Ce Zhang and Christopher Ré. Towards high-throughput Gibbs sampling at scale: A study across storage managers. In SIGMOD Proceedings, 2013.
Hierarchical Modular Optimization of Convolutional
Networks Achieves Representations Similar to
Macaque IT and Human Ventral Stream
Daniel Yamins*†
McGovern Institute of Brain Research
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Ha Hong†
McGovern Institute of Brain Research
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Charles Cadieu
McGovern Institute of Brain Research
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
James J. DiCarlo
McGovern Institute of Brain Research
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Abstract
Humans recognize visually-presented objects rapidly and accurately. To understand this ability, we seek to construct models of the ventral stream, the series of
cortical areas thought to subserve object recognition. One tool to assess the quality of a model of the ventral stream is the Representational Dissimilarity Matrix
(RDM), which uses a set of visual stimuli and measures the distances produced in
either the brain (i.e. fMRI voxel responses, neural firing rates) or in models (features). Previous work has shown that all known models of the ventral stream fail
to capture the RDM pattern observed in either IT cortex, the highest ventral area,
or in the human ventral stream. In this work, we construct models of the ventral
stream using a novel optimization procedure for category-level object recognition
problems, and produce RDMs resembling both macaque IT and human ventral
stream. The model, while novel in the optimization procedure, further develops
a long-standing functional hypothesis that the ventral visual stream is a hierarchically arranged series of processing stages optimized for visual object recognition.
1 Introduction
Humans recognize visually-presented objects rapidly and accurately even under image distortions
and variations that make this a computationally challenging problem [27]. There is substantial
evidence that the human brain solves this invariant object recognition challenge via a hierarchical
cortical neuronal network called the ventral visual stream [13, 17], which has highly homologous
areas in non-human primates [19, 9]. A core, long-standing hypothesis is that the visual input
captured by the retina is rapidly processed through the ventral stream into an effective, "invariant"
representation of object shape and identity [11, 9, 8]. This hypothesis has been bolstered by recent
developments in neuroscience which have shown that abstract category-level visual information is
accessible in IT (inferotemporal) cortex, the highest ventral cortical area, but much less effectively
accessible in lower areas such as V1, V2 or V4 [23]. This observation has been confirmed both
at the individual neural level, where single-unit responses can be decoded using linear classifiers
* web.mit.edu/yamins; † These authors contributed equally to this work.
Figure 1: A) Heterogeneous hierarchical convolutional neural networks are composed of basic operations that are simple and neurally plausible, including linear reweighting (filtering), thresholding, pooling and normalization. These simple elements are convolutional and are stacked hierarchically to construct non-linear computations of increasingly greater power, ranging through low (L1), medium (L2), and high (L3) complexity structures. B) Several of these elements are combined to produce mixtures capturing heterogeneous neural populations. Each processing stage across the heterogeneous networks (A1, A2, ...) can be considered analogous to a neural visual area.
to yield category predictions [14, 23] and at the population code level, where response vector correlation matrices evidence clear semantic structure [19].
Developing encoding models of visual area IT, models that map the stimulus to the neural response, would likely help us to understand object recognition in humans. Encoding models of lower-level
visual responses (RGC, LGN, V1, V2) have been relatively successful [21, 4] (but cf. [26]). In
higher visual areas, particularly IT, theoretical work has described a compelling framework that we adopt in this work [29]. However, to this point it has not been possible to produce effective
encoding models of IT. This explanatory gap, between model responses and IT responses, is present
at both the level of the individual neuron responses and at the population code level. Of particular
interest for our analysis in this paper, current models of IT, such as HMAX, have been shown to
fail to achieve the specific categorical structures present in neural populations [18]. In other related
work, descriptions of higher areas (V4, IT) responses have been made for very narrow classes of
artificial stimuli and do not define responses to arbitrary natural images [6, 3].
In a step toward bridging this explanatory gap, we describe advances in constructing models that
capture the categorical structures present in IT neural populations and fMRI measurements of humans. We take a top-down functional approach focused on building invariant object representations,
optimizing biologically-plausible computational architectures for high performance on a challenging
object recognition screening task. We then show that these models capture key response properties
of IT, both at the level of individual neuronal responses as well as the neuronal population code, even for entirely new objects and categories never seen in model selection.
2 Methods
2.1 Heterogeneous Hierarchical Convolutional Models
Inspired by previous neuronal modeling work [7, 6], we constructed a model class based on three
basic principles: (i) single layers composed of neurally-plausible basic operations, including filtering, nonlinear activation, pooling and normalization (ii) using hierarchical stacking to construct
more complex operations, and (iii) convolutional weight sharing (fig. 1A). This general type of
model has been successful in describing a variety of phenomenology throughout the ventral stream
[30]. In addition, we allow combinations of multiple hierarchical components each with different
2
parameters (such as pooling size, number of filters, etc.), representing different types of units with
different response properties [5] and refer to this concept as (iv) heterogeneity (fig. 1B).
We will now formally define the class of heterogeneous hierarchical convolutional neural networks, N. First consider a simple neural network function defined by

N_θ = Pool_{θ_P}(Normalize_{θ_N}(Threshold_{θ_T}(Filter_{θ_F}(Input))))    (1)

where the pooling, normalization, thresholding and filterbank convolution operations are as described in [28]. The parameters θ = (θ_P, θ_N, θ_T, θ_F) control the structure of the constituent operations. Each model stage therefore actually represents a large family of possible operations, specified by a set of parameters controlling e.g. fan-in, activation thresholds, pooling exponents, spatial interaction radii, and template structure. Like [28], we use randomly chosen filterbank templates in
all models, but additionally allow the mean and variance of the filterbank to vary as parameters. To
produce deep feedforward networks, single layers are stacked:
P_{θ_{P,ℓ-1}} --Filter--> F_{θ_{F,ℓ}} --Threshold--> T_{θ_{T,ℓ}} --Normalize--> N_{θ_{N,ℓ}} --Pool--> P_{θ_{P,ℓ}}    (2)
We denote such a stacking operation as N(θ_1, ..., θ_k), where the θ_ℓ are parameters chosen separately for each layer, and will refer to networks of this form as "single-stack" networks. Let the set of all depth-k single-stack networks be denoted N_k. Given a sequence of such single-stack networks N(θ_{i1}, θ_{i2}, ..., θ_{in_i}) (possibly of different depths), the combination N ≡ ⊕_{i=1}^{k} N(θ_{i1}, θ_{i2}, ..., θ_{in_i}) is formed by aligning the output layers of these models along the spatial convolutional dimension. These networks N can, of course, also be stacked, just like their single-stack constituents, to form more complicated, deeper heterogeneous hierarchies. By definition, the class N consists of all the iterative chainings and combinations of such networks.
2.2 High-Throughput Screening via Hierarchical Modular Optimization
Our goal is to find models within N that are effective at modeling neural responses to a wide variety
of images. To do this, our basic strategy is to perform high-throughput optimization on a screening
task [28]. By choosing a screening task that is sufficiently representative of the aspects that make
the object recognition problem challenging, we should be able to find network architectures that are
generally applicable. For our screening set, we created a set of 4500 synthetic images: 125 images for each of 36 three-dimensional mesh models of everyday objects, placed on naturalistic backgrounds. The screening task we evaluated was 36-way object recognition. We
trained Maximum Correlation Classifiers (MCC) with 3-fold cross-validated 50%/50% train/test
splits, using testing classification percent-correct as the screening objective function.
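A maximum correlation classifier assigns a test feature vector to the class whose mean training feature vector it is most correlated with. A minimal sketch of our reading of the MCC (the synthetic two-class data below is only for illustration):

```python
import numpy as np

def mcc_train(X, y):
    """One centroid (mean feature vector) per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def mcc_predict(X, classes, centroids):
    """Assign each row of X to the class with the most-correlated centroid."""
    xz = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
    cz = (centroids - centroids.mean(1, keepdims=True)) / centroids.std(1, keepdims=True)
    corr = xz @ cz.T / X.shape[1]  # Pearson correlations
    return classes[np.argmax(corr, axis=1)]

# Two well-separated synthetic "feature" clusters.
rng = np.random.default_rng(1)
X0 = rng.standard_normal((20, 10)) + np.linspace(0, 3, 10)
X1 = rng.standard_normal((20, 10)) + np.linspace(3, 0, 10)
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)
classes, cents = mcc_train(X, y)
acc = (mcc_predict(X, classes, cents) == y).mean()
```

Held-out percent-correct of such predictions, averaged over the cross-validated splits, plays the role of the screening objective described above.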
Because N is a very large space, determining which of the vast number of possible parameter settings produce visual representations that perform well on the screening set is a challenge. We addressed this by applying a novel method we call Hierarchical Modular Optimization (HMO). The intuitive idea of the HMO procedure is that a good multi-stack heterogeneous network will be found by creating mixtures of single-stack components, each of which specializes in a portion of an overall problem. To achieve this, we implemented a version of adaptive hyperparameter boosting, in which rounds of optimization are interleaved with boosting and hierarchical stacking.
Specifically, suppose that N ∈ N and S is a screening stimulus set. Let E be the binary-valued classification correctness indicator, assigning to each stimulus image s a 1 or 0 according to whether the screening task prediction was right or wrong. Let score(N, S) = Σ_{s∈S} E(N(s)). To efficiently find N that maximizes score(N, S), the HMO procedure follows these steps:
1. Optimization: Optimize the score function within the class of single-stack networks, obtaining an optimization trajectory of networks in N (fig. 2A, left). The optimization procedure that we use is the Hyperparameter Tree Parzen Estimator, as described in [1]. This procedure is effective in large parameter spaces that include discrete and continuous parameters.
2. Boosting: Consider the set of networks explored during step 1 as a set of weak learners, and apply a standard boosting algorithm (Adaboost) to identify some number of networks N_{11}, ..., N_{1l_1} whose error patterns are complementary (fig. 2A, right).
3. Combination: Form the multi-stack network N_1 = ⊕_i N_{1i} and evaluate E(N_1(s)) for all s ∈ S.
Figure 2: A) The Hierarchical Modular Optimization procedure is a mechanism for efficiently optimizing neural networks for object recognition performance. The intuitive idea of HMO is that a good multi-stack heterogeneous network will be found by creating mixtures of single-stack components each of which specializes in a portion of an overall problem. The process first identifies complementary performance gradients in the space of single-stack (non-heterogeneous) convolutional neural networks by using a version of adaptive boosting interleaved with hyperparameter optimization. The components identified in this process are then composed nonlinearly using a second convolutional layer to produce a combined output model. B) Top: the 36-way confusion matrices associated with two complementary components identified in the HMO process. Bottom left: the two optimization trajectories from which the single-stack models were drawn that produced the confusion matrices in the top panels. The optimization criterion for the second round (red dots) was defined relative to the errors of the first round (blue dots). Bottom right: the confusion matrix of the heterogeneous model produced by combining the round 1 and round 2 networks.
4. Error-based Reweighting: Repeat step 1, but reweight the scoring to give the j-th stimulus s_j weight 0 if N_1 is correct on s_j, and 1 otherwise. That is, the performance function to be optimized for N is now Σ_{s∈S} (1 - E(N_1(s))) · E(N(s)). Repeat step 2 on the results of the optimization trajectory obtained to get models N_{21}, ..., N_{2k_2}, and repeat step 3. Steps 1, 2, 3 are repeated K times.
After K repeats, we will have obtained a multi-stack network N = ⊕_{i≤K, j≤k_i} N_{ij}. The process can then simply be terminated, or repeated with the output of N as the input to another stacked network. In the latter case, the next layer is chosen using the same model class N to draw from, and using the same adaptive hyperparameter boosting procedure.
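The interleaved loop of steps 1-4 can be condensed into a toy sketch (illustrative only: a precomputed 0/1 correctness matrix stands in for trained candidate networks, and plain weighted-score selection stands in for both the TPE optimization and Adaboost):

```python
import numpy as np

def hmo_select(E, K=3, l=10):
    """E is an (n_models, n_stimuli) 0/1 matrix with E[m, s] = 1 iff
    candidate model m classifies screening stimulus s correctly.
    Each round keeps the l candidates with the best weighted score,
    then zeroes the weight of every stimulus some kept model solves."""
    n_models, n_stimuli = E.shape
    w = np.ones(n_stimuli)
    kept = []
    for _ in range(K):
        scores = E @ w                        # weighted screening score
        round_best = np.argsort(scores)[-l:]  # boosting stand-in
        kept.extend(round_best.tolist())
        solved = E[round_best].max(axis=0)    # stimuli now handled
        w = w * (1.0 - solved)                # reweight toward errors
    return kept

rng = np.random.default_rng(2)
E = (rng.random((100, 500)) < 0.3).astype(int)
components = hmo_select(E, K=3, l=10)
```

The returned indices correspond to the components N_{ij} that would then be combined with ⊕ into the final multi-stack network.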
The meta-parameters of the HMO procedure include the numbers of components l_1, l_2, ... to be selected at each boosting round, the number of times K that the interleaved boosting and optimization is repeated, and the number of times M this procedure is stacked. To constrain this space we fix the meta-parameters l_1 = l_2 = ... = 10, K = 3, and M ≤ 2. With the fixed screening set described above, and these meta-parameter settings, we generated a network N_HMO. We will refer back to this model throughout the rest of the paper. N_HMO produces 1250-dimensional feature vectors for any input stimulus; we will denote N_HMO(s) as the resulting feature vector for stimulus s and N_HMO(s)_k as its k-th component in 1250-dimensional space.
2.3 Predicting IT Neural Responses
Utilizing the NHMO network, we construct models of IT in one of two ways: 1) we estimate a GLM
model predicting individual neural responses or 2) we estimate linear classifiers of object categories
to produce a candidate IT neural space.
To construct models of individual neural responses we estimate a linear mapping from a non-linear
space produced by a model. This procedure is a standard GLM of individual neural responses.
Because IT responses are highly non-linear functions of the input image, successful models must
capture the non-linearity of the IT response. The NHMO network produces a highly-nonlinear transformation of the input image, and we compare the efficacy of this non-linearity against those produced by other models. Specifically, for a neuron ni, we estimate a vector wi to minimize the regression error from NHMO features to ni's responses, over a training set of stimuli. We evaluate goodness of fit by measuring the regression r2 values between the neural response and the GLM
predictions on held-out images, averaged over several train/test splits. Taken over the set of predicted
neurons n1 , n2 , ... nk , the collection of regression weight vectors wi comprise a matrix W that can
be thought of as a final linear top level that forms part of the model of IT. This method evidently
requires the presence of low-level neural data on which to train.
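A minimal sketch of this per-neuron fit follows, under the assumption of ridge-regularized least squares (the paper specifies only a linear GLM fit scored by cross-validated r2 over train/test splits; the ridge penalty and the 50/50 splits here are illustrative choices, not the authors' settings).

```python
import numpy as np

def fit_neuron_weights(F, y, ridge=1.0):
    """Least-squares map from model features F (n_stimuli x n_features)
    to one neuron's responses y (n_stimuli,), with a small ridge penalty."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + ridge * np.eye(d), F.T @ y)

def cross_validated_r2(F, y, n_splits=5, ridge=1.0):
    """Average r^2 between held-out responses and GLM predictions,
    over several random train/test splits."""
    n = len(y)
    idx = np.arange(n)
    rng = np.random.default_rng(0)
    scores = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        train, test = idx[: n // 2], idx[n // 2:]
        w = fit_neuron_weights(F[train], y[train], ridge)
        pred = F[test] @ w
        r = np.corrcoef(pred, y[test])[0, 1]
        scores.append(r ** 2)
    return float(np.mean(scores))
```

Stacking the fitted vectors wi for all recorded neurons column-wise yields the matrix W described in the text.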
We also produce a candidate IT neural space by estimating linear classifiers on an object recognition
task. As we might expect different subregions of IT cortex to have different selectivities for object
categories (for example face, body, and place patches [15, 10]), the output of the linear classifiers
will also respond preferentially to different object categories. We may be able to leverage some
understanding of what a subregion's task specialization might be to produce the weighting matrix
W . Specifically, we estimate a linear mapping W to be the weights of a set of linear classifiers
trained from the NHMO features on a specific set of object recognition tasks. We can then evaluate
this mapping on a novel set of images and compare to measured IT or human ventral stream data.
This method may have traction even when individual neural response data are not available.
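Under the simplifying assumption that each linear classifier is fit by ridge least squares on ±1 task labels (the paper says only "linear classifiers", so the training rule here is hypothetical), the weighting matrix W can be assembled as:

```python
import numpy as np

def task_weight_matrix(F, task_labels, ridge=1.0):
    """Stack one linear readout per binary task into a single matrix W.
    F: (n_stimuli, n_features) model features on a training image set.
    task_labels: (n_tasks, n_stimuli) array of +/-1 labels, one row per task.
    Returns W of shape (n_features, n_tasks); F_new @ W then gives a candidate
    IT-like population with one "unit" per task."""
    d = F.shape[1]
    gram = F.T @ F + ridge * np.eye(d)
    return np.linalg.solve(gram, F.T @ task_labels.T)
```

Applied to a novel image set, F_new @ W produces the candidate neural space whose RDM can be compared against measured IT or fMRI data.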
2.4 Representational Dissimilarity Matrices
Implicit in this discussion is the idea of comparing two different representations (in this case, the model's predicted population versus the real neural population) on a fixed stimulus set. The Representational Dissimilarity Matrix (RDM) is a convenient tool for this comparison [19]. Formally, given a stimulus set S = {s1, . . . , sk} and vectors of neural population responses R = {r1, . . . , rk} in
which rij is the response of the j-th neuron to the i-th stimulus, define
RDM(R)ij = 1 − cov(ri, rj) / √(var(ri) · var(rj)).
The RDM characterizes the layout of the stimuli in high-dimensional neural population space.
Following [19], we measured similarity between population representations as the Spearman rank
correlations between the RDMs for two populations, in which both RDMs are treated as vectors in
k(k − 1)/2-dimensional space. Two populations can have similar RDMs on a given stimulus set,
even if details of the neural responses are different.
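Written out in code, the RDM and the rank-correlation comparison look like the following; note that 1 − np.corrcoef(R) computes exactly the correlation-distance entries defined above, and Spearman is implemented here by correlating ranks, as a minimal stand-in for a library routine.

```python
import numpy as np

def rdm(R):
    """Correlation-distance RDM for a response matrix R (n_stimuli x n_neurons):
    entry (i, j) is 1 minus the Pearson correlation of stimulus i's and
    stimulus j's population response vectors."""
    return 1.0 - np.corrcoef(R)

def upper_triangle(M):
    """The k(k-1)/2 entries above the diagonal, as a flat vector."""
    i, j = np.triu_indices(M.shape[0], k=1)
    return M[i, j]

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

def rdm_similarity(R_a, R_b):
    """Compare two populations (e.g. model vs. neural) on the same stimuli."""
    return spearman(upper_triangle(rdm(R_a)), upper_triangle(rdm(R_b)))
```

Because the RDM depends only on correlations, it is invariant to per-population affine rescaling of responses, which is what makes comparisons across measurement modalities (spikes vs. fMRI) meaningful.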
3 Results
To test the NHMO model, we took two routes, corresponding to the two methods for prediction described above. First (sec. 3.1), we obtained our own neural data on a testing set of our own design and tested the NHMO model's ability to predict individual-level neural responses using the linear regression methodology described above. This approach allowed us to directly test the NHMO model's power in a setting where we had access to low-level neural information. Second (sec. 3.2), we also compared to neural data collected by a different group, but only released at a very coarse level of detail: the RDMs of their measured population. This constraint required us to additionally posit a task blend, and to make the comparison at the population RDM level.
3.1 The Neural Representation Benchmark Image Set
We analyzed neural data collected on the Neural Representation Benchmark (NRB) dataset, which
was originally developed to compare monkey neural and human behavioral responses [23, 2]. The
NRB dataset consists of 5760 images of 64 distinct objects. The objects come from eight "basic" categories (animals, boats, cars, chairs, faces, fruits, planes, tables), with eight exemplars per category (e.g., BMW, Z3, Ford, etc. for cars) (see fig 3B bottom left), with objects varying in position, size, and 3d-pose, and placed on a variety of uncorrelated natural backgrounds. These parameters were varied concomitantly, picked randomly from uniform ranges at three levels of object identity-preserving
variation (low, medium, and high). The NRB set was designed to test explicitly the transformations
of pose, position and size that are at the crux of the invariant object recognition problem. None of the
[Figure 3 panels (graphics): A) performance ratios relative to human at low and high variation; B) response level of sample Unit 104 across the eight categories (Animals, Boats, Cars, Chairs, Faces, Fruits, Planes, Tables); C) median cross-validated R2 and Spearman correlation of each representation's RDM to IT, with IT split-half as reference; D) RDMs for Pixels, V1-like, HMAX, V4 neurons, the HMO model, and IT neurons. Models compared include Pixels, V1-like, SIFT, V2-like, HMAX, and HMO.]
Figure 3: A) 8-way categorization performances. Comparison was made between several existing
models from the literature (cyan bars), the HMO model features, and data from V4 and IT neural
populations. Performances are normalized relative to human behavioral data collected from Amazon Mechanical Turk experiments. High levels of variation strongly separate the HMO model and the
high-level IT neural features from the other representations. B) Top: Actual neural response (black)
vs. prediction (red) for a single sample IT unit. This neuron shows high functional selectivity for
faces, which is effectively mirrored by the predicted unit. Bottom Left: Sample Neural Representation Benchmark images. C) Comparison of Representational Dissimilarity Matrices (RDMs)
for NRB dataset. D) As populations increase in complexity and abstraction power, they become
progressively more like that of IT, as category structure that was blurred out at lower levels by variability becomes abstracted at the higher levels. The HMO model shows similarity to IT both on the
block diagonal structure associated with categorization performance, but also on the off-diagonal
comparisons that characterize the neural representation more precisely.
objects, categories or backgrounds used in the HMO screening set appeared in the NRB set; moreover, the NRB image set was created with different image and lighting parameters, with different
rendering software.
Neural data was obtained via large-scale parallel array electrophysiology recordings in the visual
cortex of awake behaving macaques. Testing set images were presented foveally (central 10 deg)
with a Rapid Serial Visual Presentation (RSVP) paradigm, in which passively viewing animals were shown random stimulus sequences with durations comparable to those of natural primate fixations
(e.g. 200 ms). Electrode arrays were surgically implanted in V4 and IT, and recordings took place
daily over a period of several months. A total of 296 multi-unit responses were recorded from
two animals. For each testing stimulus and neuron, final neuron output responses were obtained
by averaging data from between 25 and 50 repeated trials. With this dataset, we addressed two questions: how well the HMO model was able to perform on the categorization tasks supported by the dataset, and how well it predicted the neural data.
Performance was assessed for three types of tasks, including 8-way basic category classification,
8-way car object identification, and 8-way face object identification. We computed the model?s
predicted outputs in response to each of the testing images, and then tested simple, cross-validated
linear classifiers based on these features. As performance controls, we also computed features on
the test images for a number of models from the literature, including a V1-like model [27], a V2-like model [12], and an HMAX variant [25]. We also compared to a simple computer vision model, SIFT [22], as well as the basic pixel control. Performances were also measured for neural output
features, building on previous results showing that V4 neurons performed less well than IT neurons
at higher variation levels [23], and confirming that the testing tasks meaningfully engaged higher-level vision processing. Figure 3A) compares overall performances, showing that the HMO-selected
model is able to achieve human-level performance at all levels of variation. Critically, the HMO
model performs well not just in low-variation settings in which simple lower-level models can do
so, but is able to achieve near-human performance (within 1 std of the average human) even when
faced with large amounts of variation which caused the other models to perform near chance. Since
the testing set contains entirely different objects in non-overlapping basic categories, with none of
the same backgrounds, this suggests that the nonlinearity identified in the HMO screening phase is
able to achieve significant generalization across image domains.
Given that the model evidenced high transferable performance, we next determined the ability of the
model to explain low-level neuronal responses using regression. The HMO model is able to predict
approximately 48% of the explainable variance in the neural data, more than twice as much as any
competing model (fig. 3B). Using the same transformation matrices W obtained from the regression
fitting, we also computed RDMs, which show significant similarity to IT populations, nearly comparable to the split-half similarity range of the IT population itself (fig. 3C). A key comparison
between models and data shows that as populations ascend the ventral hierarchy and increase in
complexity, they become progressively closer to IT, with category structure that was blurred out at
lower levels by variation becoming properly abstracted away at the higher levels (fig. 3D).
3.2 The Monkeys & Man Image Set
Kriegeskorte et al. analyzed neural recordings made in an anterior patch of macaque IT on a small
number of widely varying naturalistic images of every-day objects, and additionally obtained fMRI
recordings from the analogous region of human visual cortex [19]. These objects included human
and animal faces and body parts, as well as a variety of natural and man-made inanimate objects.
Three striking findings of this work were that (i) the population code (as measured by RDMs) of the
macaque neurons strongly mirrors the structure present in the human fMRI data, (ii) this structure
appears to be dominated by the separation of animate vs inanimate object classes (fig. 4B, lower
right) and (iii) that none of a variety of computational models produced RDMs with this structure.
Individual unit neural response data from these experiments is not publicly available. However, we
were able to obtain a set of approximately 1000 additional training images with roughly similar
categorical distinctions to that of the original target images, including distributions of human and
animal faces and body parts, and a variety of other non-animal objects [16]. We posited that the
population code structure present in the anterior region of IT recorded in the original experiment
is guided by functional goals similar to the task distinctions supported by this dataset. To test
this, we computed linear classifiers from NHMO features for all the binary distinctions possible in the training set (e.g. "human/non-human", "animate/inanimate", "hand/non-hand", "bird/non-bird", etc.). The linear weighting matrix W derived from these linear classifiers was then used to produce
an RDM matrix which could be compared to that measured originally. In fact, the HMO-based
population RDM strongly qualitatively matches that of the monkey IT RDM and, to a significant
but lesser extent, that of the human IT RDM (fig. 4B). This fit is significantly better than that of all
models evaluated by Kriegeskorte, and approaches the human/monkey fit value itself (fig. 4A).
4 Discussion
High consistency with neural data at individual neuronal response and population code levels across
several diverse datasets suggests that the HMO model is a good candidate model of the higher
ventral stream processing. The fact that the model was optimized only for performance, and not
directly for consistency with neural responses, highlights the power of functionally-driven computational approaches in understanding cortical processing.

[Figure 4 panels (graphics): A) Spearman rank correlations of the Pixels, V1-like, HMAX, and HMO representations with Monkey IT (Kriegeskorte, 2008) and Human (Kriegeskorte, 2008) data, alongside the Monkey/Human value; B) RDM matrices.]

Figure 4: A) Comparison of model representations to Monkey IT (solid bars) and Human ventral stream (hatched bars). The HMO model followed by a simple task-blend based linear reweighting (red bars) quantitatively approximates the human/monkey fit value (black bar), and captures both monkey and human ventral stream structure more effectively than any of the large number of models shown in [18], or any of the additional comparison models we evaluated here (cyan bars). B) Representational Dissimilarity Matrices show a clear qualitative similarity between monkey IT and human IT on the one hand [19] and between these and the HMO model representation.

These results further develop a long-standing
functional hypothesis about the ventral visual stream, and show that more rigorous versions of its
architecture and functional constraints can be leveraged using modern computational tools to expose
the transformation of visual information in the ventral stream.
The picture that emerges is of a general-purpose object recognition architecture, approximated by the NHMO network, situated just posterior to a set of several downstream regions that can be thought of as specialized linear projections (the matrices W) from the more general upstream region. These linear projections can, at least in some cases, be characterized effectively as the signature of interpretable functional tasks in which the system is thought to have gained expertise.
This two-step arrangement makes sense if there is a core set of object recognition primitives that
are comparatively difficult to discover, but which, once found, underlie many recognition tasks.
The particular recognition tasks that the system learns to solve can all draw from this upstream
"non-linear reservoir", creating downstream specialists that trade off generality for the ability to make more efficient judgements on new visual data relative to the particular problems on which they specialize. This hypothesis makes testable predictions about how monkey and human visual systems should both respond to certain real-time training interventions (e.g. the effects of "nurture"), while being circumscribed within a range of possible behaviors allowed by the (presumably) harder-to-change upstream network (e.g. the constraints of "nature"). It also suggests that it will
be important to explore recent high-performing computer vision systems, e.g. [20], to determine
whether these algorithms provide further insight into ventral stream mechanisms. Our results show
that behaviorally-driven computational approaches have an important role in understanding the details of cortical processing [24]. This is a fruitful direction of future investigation for such models to
engage with additional neural and behavior experiments.
References
[1] J. Bergstra, D. Yamins, and D. D. Cox. Making a Science of Model Search, 2012.
[2] C. Cadieu, H. Hong, D. Yamins, N. Pinto, N. Majaj, and J. J. DiCarlo. The neural representation benchmark and its evaluation on brain and machine. In International Conference on Learning Representations, May 2013.
[3] C. Cadieu, M. Kouh, A. Pasupathy, C. E. Connor, M. Riesenhuber, and T. Poggio. A model of v4 shape selectivity and invariance. J Neurophysiol, 98(3):1733-50, 2007.
[4] M. Carandini, J. B. Demb, V. Mante, D. J. Tolhurst, Y. Dan, B. A. Olshausen, J. L. Gallant, and N. C. Rust. Do we know what the early visual system does? J Neurosci, 25(46):10577-97, 2005.
[5] M. Churchland and K. Shenoy. Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. Journal of Neurophysiology, 97(6):4235-4257, 2007.
[6] C. E. Connor, S. L. Brincat, and A. Pasupathy. Transformation of shape information in the ventral pathway. Curr Opin Neurobiol, 17(2):140-7, 2007.
[7] S. V. David, B. Y. Hayden, and J. L. Gallant. Spectral receptive field properties explain shape selectivity in area v4. J Neurophysiol, 96(6):3492-505, 2006.
[8] R. Desimone, T. D. Albright, C. G. Gross, and C. Bruce. Stimulus-selective properties of inferior temporal neurons in the macaque. J Neurosci, 4(8):2051-62, 1984.
[9] J. J. DiCarlo, D. Zoccolan, and N. C. Rust. How does the brain solve visual object recognition? Neuron, 73(3):415-34, 2012.
[10] P. E. Downing, Y. Jiang, M. Shuman, and N. Kanwisher. A cortical area selective for visual processing of the human body. Science, 293:2470-2473, 2001.
[11] D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1:1-47, 1991.
[12] J. Freeman and E. Simoncelli. Metamers of the ventral stream. Nature Neuroscience, 14(9):1195-1201, 2011.
[13] K. Grill-Spector, Z. Kourtzi, and N. Kanwisher. The lateral occipital complex and its role in object recognition. Vision Research, 41(10-11):1409-1422, 2001.
[14] C. P. Hung, G. Kreiman, T. Poggio, and J. J. DiCarlo. Fast readout of object identity from macaque inferior temporal cortex. Science, 310(5749):863-6, 2005.
[15] N. Kanwisher, J. McDermott, and M. M. Chun. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci, 17(11):4302-11, 1997.
[16] R. Kiani, H. Esteky, K. Mirpour, and K. Tanaka. Object category structure in response patterns of neuronal population in monkey inferior temporal cortex. J Neurophysiol, 97(6):4296-309, 2007.
[17] Z. Kourtzi and N. Kanwisher. Representation of perceived object shape by the human lateral occipital complex. Science, 293(5534):1506-1509, 2001.
[18] N. Kriegeskorte. Relating population-code representations between man, monkey, and computational models. Frontiers in Neuroscience, 3(3):363, 2009.
[19] N. Kriegeskorte, M. Mur, D. A. Ruff, R. Kiani, J. Bodurka, H. Esteky, K. Tanaka, and P. A. Bandettini. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron, 60(6):1126-41, 2008.
[20] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 2012.
[21] P. Lennie and J. A. Movshon. Coding of color and form in the geniculostriate visual pathway (invited review). J Opt Soc Am A Opt Image Sci Vis, 22(10):2013-33, 2005.
[22] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[23] N. Majaj, H. Najib, E. Solomon, and J. J. DiCarlo. A unified neuronal population code fully explains human object recognition. In Computational and Systems Neuroscience (COSYNE), 2012.
[24] David Marr, Tomaso Poggio, and Shimon Ullman. Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. MIT Press, July 2010.
[25] J. Mutch and D. G. Lowe. Object class recognition and localization using sparse features with limited receptive fields. IJCV, 2008.
[26] Bruno A. Olshausen and David J. Field. How close are we to understanding v1? Neural Computation, 17(8):1665-1699, 2005.
[27] N. Pinto, D. D. Cox, and J. J. DiCarlo. Why is real-world visual object recognition hard? PLoS Comput Biol, 2008.
[28] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11), 2009.
[29] T. Serre, A. Oliva, and T. Poggio. A feedforward architecture accounts for rapid categorization. Proc Natl Acad Sci U S A, 104(15):6424-9, 2007.
[30] Thomas Serre, Gabriel Kreiman, Minjoon Kouh, Charles Cadieu, Ulf Knoblich, and Tomaso Poggio. A quantitative theory of immediate visual recognition. In Prog. Brain Res., volume 165, pages 33-56. Elsevier, 2007.
Bayesian inference for low rank spatiotemporal
neural receptive fields
Jonathan W. Pillow
Center for Perceptual Systems
The University of Texas at Austin
[email protected]
Mijung Park
Electrical and Computer Engineering
The University of Texas at Austin
[email protected]
Abstract
The receptive field (RF) of a sensory neuron describes how the neuron integrates
sensory stimuli over time and space. In typical experiments with naturalistic or
flickering spatiotemporal stimuli, RFs are very high-dimensional, due to the large
number of coefficients needed to specify an integration profile across time and
space. Estimating these coefficients from small amounts of data poses a variety of challenging statistical and computational problems. Here we address these
challenges by developing Bayesian reduced rank regression methods for RF estimation. This corresponds to modeling the RF as a sum of space-time separable
(i.e., rank-1) filters. This approach substantially reduces the number of parameters
needed to specify the RF, from 1K-10K down to mere 100s in the examples we
consider, and confers substantial benefits in statistical power and computational
efficiency. We introduce a novel prior over low-rank RFs using the restriction of
a matrix normal prior to the manifold of low-rank matrices, and use ?localized?
row and column covariances to obtain sparse, smooth, localized estimates of the
spatial and temporal RF components. We develop two methods for inference in
the resulting hierarchical model: (1) a fully Bayesian method using blocked-Gibbs
sampling; and (2) a fast, approximate method that employs alternating ascent of
conditional marginal likelihoods. We develop these methods for Gaussian and
Poisson noise models, and show that low-rank estimates substantially outperform
full rank estimates using neural data from retina and V1.
1
Introduction
A neuron's linear receptive field (RF) is a filter that maps high-dimensional sensory stimuli to a one-dimensional variable underlying the neuron's spike rate. In white noise or reverse-correlation experiments, the dimensionality of the RF is determined by the number of stimulus elements in the spatiotemporal window influencing a neuron's probability of spiking. For a stimulus movie with nx × ny pixels per frame, the RF has nx ny nt coefficients, where nt is the (experimenter-determined) number of movie frames in the neuron's temporal integration window. In typical neurophysiology experiments, this can result in RFs with hundreds to thousands of parameters, meaning we can think of the RF as a vector in a very high dimensional space.
In high dimensional settings, traditional RF estimators like the whitened spike-triggered average
(STA) exhibit large errors, particularly with naturalistic or correlated stimuli. A substantial literature has therefore focused on methods for regularizing RF estimates to improve accuracy in the
face of limited experimental data. The Bayesian approach to regularization involves specifying a
prior distribution that assigns higher probability to RFs with particular kinds of structure. Popular
methods have involved priors to impose smallness, sparsity, smoothness, and localized structure in
RF coefficients [1, 2, 3, 4, 5].
Here we develop a novel regularization method to exploit the fact that neural RFs can be modeled
as low-rank matrices (or tensors). This approach is justified by the observation that RFs can be
well described by summing a small number of space-time separable filters [6, 7, 8, 9]. Moreover,
it can substantially reduce the number of RF parameters: a rank-p receptive field in nx·ny·nt dimensions requires only p(nx·ny + nt − 1) parameters, since a single space-time separable filter has
nx·ny spatial coefficients and nt − 1 temporal coefficients (i.e., for a temporal unit vector). When
p ≪ min(nx·ny, nt), as commonly occurs in experimental settings, this parametrization yields considerable savings.
In the statistics literature, the problem of estimating a low-rank matrix of regression coefficients is
known as reduced rank regression [10, 11]. This problem has received considerable attention in
the econometrics literature, but Bayesian formulations have tended to focus on non-informative or
minimally informative priors [12]. Here we formulate a novel prior for reduced rank regression using
a restriction of the matrix normal distribution [13] to the manifold of low-rank matrices. This results
in a marginally Gaussian prior over RF coefficients, which puts it on equal footing with "ridge",
AR1, and other Gaussian priors. Moreover, under a linear-Gaussian response model, the posteriors
over RF rows and columns are conditionally Gaussian, leading to fast and efficient sampling-based
inference methods. We use a "localized" form for the row and column covariances in the matrix
normal prior, which have hyperparameters governing smoothness and locality of RF components
in space and time [5]. In addition to fully Bayesian sampling-based inference, we develop a fast
approximate inference method using coordinate ascent of the conditional marginal likelihoods for
temporal (column) and spatial (row) hyperparameters. We apply this method under linear-Gaussian
and linear-nonlinear-Poisson encoding models, and show that the latter gives the best performance
on neural data.
The paper is organized as follows. In Sec. 2, we describe the low-rank RF model with localized
priors. In Sec. 3, we describe a fully Bayesian inference method using the blocked-Gibbs sampling
with interleaved Metropolis-Hastings steps. In Sec. 4, we introduce a fast method for approximate
inference using conditional empirical Bayesian hyperparameter estimates. In Sec. 5, we extend our
estimator to the linear-nonlinear Poisson encoding model. Finally, in Sec. 6, we show applications
to simulated and real neural datasets from retina and V1.
2 Hierarchical low-rank receptive field model

2.1 Response model (likelihood)
We begin by defining two probabilistic encoding models that will provide likelihood functions for
RF inference. Let yi denote the number of spikes that occur in response to a (dt × dx) matrix stimulus Xi, where dt and dx denote the number of temporal and spatial elements in the RF, respectively.
Let K denote the neuron's (dt × dx) matrix receptive field.

We will consider, first, a linear Gaussian encoding model:

    yi | Xi ~ N(xi^T k + b, σ),    (1)

where xi = vec(Xi) and k = vec(K) denote the vectorized stimulus and vectorized RF, respectively, σ is the variance of the response noise, and b is a bias term. Second, we will consider a
linear-nonlinear-Poisson (LNP) encoding model

    yi | Xi ~ Poiss(g(xi^T k + b)),    (2)

where g denotes the nonlinearity. Examples of g include the exponential and the soft-rectifying function
log(exp(·) + 1), both of which give rise to a concave log-likelihood [14].
2.2
Prior for low rank receptive field
We can represent an RF of rank p using the factorization

    K = Kt Kx^T,    (3)

where the columns of the matrix Kt ∈ R^(dt×p) contain temporal filters and the columns of the matrix
Kx ∈ R^(dx×p) contain spatial filters.
We define a prior over rank-p matrices using a restriction of the matrix normal distribution
MN(0, Cx, Ct). The prior can be written:

    p(K | Ct, Cx) = (1/Z) exp( −(1/2) Tr[Cx^{-1} K^T Ct^{-1} K] ),    (4)

where the normalizer Z involves integration over the space of rank-p matrices, which has no known
closed-form expression. The prior is controlled by a "column" covariance matrix Ct ∈ R^(dt×dt) and a
"row" covariance matrix Cx ∈ R^(dx×dx), which govern the temporal and spatial RF components,
respectively.

If we express K in factorized form (eq. 3), we can rewrite the prior:

    p(K | Ct, Cx) = (1/Z) exp( −(1/2) Tr[(Kx^T Cx^{-1} Kx)(Kt^T Ct^{-1} Kt)] ).    (5)
This formulation makes it clear that we have conditionally Gaussian priors on Kt and Kx, that is:

    kt | kx, Cx, Ct ~ N(0, Ax^{-1} ⊗ Ct),    kx | kt, Ct, Cx ~ N(0, At^{-1} ⊗ Cx),    (6)

where ⊗ denotes the Kronecker product, kt = vec(Kt) ∈ R^(p·dt × 1), kx = vec(Kx) ∈ R^(p·dx × 1), and
where we define Ax = Kx^T Cx^{-1} Kx and At = Kt^T Ct^{-1} Kt.
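The equivalence between the trace form of the prior (eq. 5) and the Kronecker-structured conditional prior (eq. 6) can be checked numerically. The following sketch (arbitrary small dimensions, illustrative names) verifies that vec(Kt)^T (Ax ⊗ Ct^{-1}) vec(Kt) = Tr[(Kx^T Cx^{-1} Kx)(Kt^T Ct^{-1} Kt)]:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, dx, p = 5, 4, 2

Kt = rng.standard_normal((dt, p))
Kx = rng.standard_normal((dx, p))

def random_spd(n):
    # A random symmetric positive-definite matrix.
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

Ct, Cx = random_spd(dt), random_spd(dx)

# Trace form appearing in eq. 5.
Ax = Kx.T @ np.linalg.inv(Cx) @ Kx
trace_form = np.trace(Ax @ (Kt.T @ np.linalg.inv(Ct) @ Kt))

# Kronecker quadratic form implied by the conditional prior (eq. 6):
# kt ~ N(0, Ax^{-1} kron Ct), with kt = vec(Kt) (column stacking).
kt = Kt.reshape(-1, order="F")            # vec(Kt)
Prec = np.kron(Ax, np.linalg.inv(Ct))     # inverse of Ax^{-1} kron Ct
quad_form = kt @ Prec @ kt

print(np.allclose(trace_form, quad_form))  # True
```

This rests on the identity (A ⊗ B) vec(X) = vec(B X A^T) for column-stacked vec, which is the convention NumPy's `kron` matches when using Fortran-order reshaping.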
We define Ct and Cx to have a parametric form controlled by hyperparameters θt and θx, respectively.
This form is adopted from the "automatic locality determination" (ALD) prior introduced in [5]. In
the ALD prior, the covariance matrix encodes the tendency for RFs to be localized in both space-time
and spatiotemporal frequency.

For the spatial covariance matrix Cx, the hyperparameters are θx = {ρ, μs, μf, Ψs, Ψf}, where ρ is
a scalar determining the overall scale of the covariance; μs and μf are length-D vectors specifying
the center location of the RF support in space and spatial frequency, respectively (where D is the
number of spatial dimensions, e.g., D = 2 for standard 2D visual pixel stimuli); and the D × D positive definite
matrices Ψs and Ψf determine the size of the local region of RF support in space and
spatial frequency, respectively [15]. In the temporal covariance matrix Ct, the hyperparameters θt,
which are directly analogous to θx, determine the localized RF structure in time and temporal
frequency.

Finally, we place a zero-mean Gaussian prior on the (scalar) bias term: b ~ N(0, σ_b²).
3
Posterior inference using Markov Chain Monte Carlo
For a complete dataset D = {X, y}, where X ∈ R^(n × dt·dx) is a design matrix and y is a vector of
responses, our goal is to infer the joint posterior over K and b:

    p(K, b | D) ∝ ∫∫∫ p(D | K, b) p(K | θt, θx) p(b | σ_b²) p(θt, θx, σ_b²) dσ_b² dθt dθx.    (7)
We develop an efficient Markov chain Monte Carlo (MCMC) sampling method using blocked-Gibbs
sampling. Blocked-Gibbs sampling is possible since the closed-form conditional priors in eq. 6
and the Gaussian likelihood yield a closed-form "conditional marginal likelihood" for θt | (kx, θx, D)
and θx | (kt, θt, D), respectively.¹ The blocked-Gibbs step first samples (σ_b², θt, σ) from the conditional evidence and simultaneously samples kt from the conditional posterior. Given the samples
of (σ_b², θt, σ, b, kt), we then sample θx and kx similarly.

For sampling from the conditional evidence, we use the Metropolis-Hastings (MH) algorithm to
sample the low-dimensional space of hyperparameters. For sampling (b, kt) and kx, we use the
closed-form formula (introduced shortly) for the mean of the conditional posterior. The
details of our algorithm are as follows.
Step 1: Given the (i−1)-th samples of (kx, θx), we draw the i-th samples (b, kt, θt, σ_b², σ) from

    p(b^(i), kt^(i), θt^(i), σ_b^2(i), σ^(i) | kx^(i−1), θx^(i−1), D)
        = p(θt^(i), σ_b^2(i), σ^(i) | kx^(i−1), θx^(i−1), D) · p(b^(i), kt^(i) | θt^(i), σ_b^2(i), σ^(i), kx^(i−1), θx^(i−1), D),

¹In this section and Sec. 4, we fix the likelihood to Gaussian (eq. 1). An extension to the Poisson likelihood model (eq. 2) is described in Sec. 5.

which is divided into two parts²:
• We sample (θt, σ_b², σ) from the conditional posterior given by

    p(θt, σ_b², σ | kx, θx, D) ∝ p(θt, σ_b², σ) ∫ p(D | b, kt, kx, σ) p(b, kt | kx, θx, θt) db dkt
                               ∝ p(θt, σ_b², σ) ∫ N(D | Mx′ wt, σI) N(wt | 0, Cwt) dwt,    (8)

where wt = [b, kt^T]^T, Mx′ is the concatenation of a column of ones and the matrix
Mx, which is generated by projecting each stimulus Xi onto Kx and stacking the result row-wise (the i-th row of Mx is [vec(Xi Kx)]^T), and Cwt is a block-diagonal
matrix whose diagonal blocks are σ_b² and Ax^{-1} ⊗ Ct. Using the standard formula for a product of
two Gaussians, we obtain the closed-form conditional evidence:

    p(D | θt, σ_b², σ, kx, θx) ∝ ( |2πΛt|^{1/2} / ( |2πσI|^{1/2} |2πCwt|^{1/2} ) ) exp[ (1/2) μt^T Λt^{-1} μt − (1/(2σ)) y^T y ],    (9)

where the mean and covariance of the conditional posterior over wt given kx are given by

    μt = (1/σ) Λt Mx′^T y,   and   Λt = (Cwt^{-1} + (1/σ) Mx′^T Mx′)^{-1}.    (10)

We use the MH algorithm to search over the low-dimensional hyperparameter space, with
the conditional evidence (eq. 9) as the target distribution, under a uniform hyperprior on
(θt, σ_b², σ).

• We sample (b, kt) from the conditional posterior given in eq. 10.
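Eq. 10 is a standard conjugate Gaussian (ridge-like) update. A minimal sketch, with a made-up design matrix standing in for Mx′ and an identity prior covariance standing in for Cwt (both are illustrative assumptions, not the paper's structured covariances):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 6            # observations, dimension of w_t = [b, k_t]
sigma = 0.5              # response noise variance

Mx = rng.standard_normal((n, d))   # stands in for Mx' (ones column + projected stimuli)
Cw = np.eye(d)                     # stands in for the block-diagonal prior covariance Cwt
w_true = rng.standard_normal(d)
y = Mx @ w_true + np.sqrt(sigma) * rng.standard_normal(n)

# Eq. 10: Lambda = (Cw^{-1} + Mx^T Mx / sigma)^{-1},  mu = Lambda Mx^T y / sigma
Lam = np.linalg.inv(np.linalg.inv(Cw) + Mx.T @ Mx / sigma)
mu = Lam @ Mx.T @ y / sigma

print(mu.shape, np.allclose(Lam, Lam.T))  # (6,) True
```

Within the blocked-Gibbs step, one would then draw wt ~ N(mu, Lam) rather than use the mean directly.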
Step 2: Given the i-th samples of (b, kt, θt, σ_b², σ), we draw the i-th samples (kx, θx) from

    p(kx^(i), θx^(i) | b^(i), kt^(i), σ_b^2(i), θt^(i), σ^(i), D)
        = p(θx^(i) | b^(i), kt^(i), σ_b^2(i), θt^(i), σ^(i), D) · p(kx^(i) | θx^(i), b^(i), kt^(i), σ_b^2(i), θt^(i), σ^(i), D),

which is divided into two parts:

• We sample θx from the conditional posterior given by

    p(θx | b, kt, θt, σ_b², σ, D) ∝ p(θx) ∫ p(D | b, kt, kx, σ) p(kx | kt, θt, θx) dkx
                                  ∝ p(θx) ∫ N(D | Mt kx + b·1, σI) N(kx | 0, At^{-1} ⊗ Cx) dkx,    (11)

where the matrix Mt is generated by projecting each stimulus Xi onto Kt and stacking the result row-wise (the i-th row of Mt is [vec(Xi^T Kt)]^T). Using the standard
formula for a product of two Gaussians, we obtain the closed-form conditional evidence:

    p(D | θx, kt, b) = ( |2πΛx|^{1/2} / ( |2πσI|^{1/2} |2π(At^{-1} ⊗ Cx)|^{1/2} ) ) exp[ (1/2) μx^T Λx^{-1} μx − (1/(2σ)) (y − b·1)^T (y − b·1) ],

where the mean and covariance of the conditional posterior over kx given (b, kt) are given by

    μx = (1/σ) Λx Mt^T (y − b·1),   and   Λx = (At ⊗ Cx^{-1} + (1/σ) Mt^T Mt)^{-1}.    (12)

As in Step 1, with a uniform hyperprior on θx, the conditional evidence is the target distribution in the MH algorithm.

• We sample kx from the conditional posterior given in eq. 12.

A summary of this algorithm is given in Algorithm 1.

²We omit the sample index (the superscripts i and i−1) for notational cleanness.
Algorithm 1: fully Bayesian low-rank RF inference using blocked-Gibbs sampling
Given data D, conditioned on samples for the other variables, iterate the following:
1. Sample (b, kt, σ_b², θt, σ) from the conditional evidence for (θt, σ_b², σ) (eq. 8) and the
conditional posterior over (b, kt) (eq. 10).
2. Sample (kx, θx) from the conditional evidence for θx (eq. 11) and the conditional
posterior over kx (eq. 12).
Until convergence.
4
Approximate algorithm for fast posterior inference
Here we develop an alternative, approximate algorithm for fast posterior inference. Instead of integrating over hyperparameters, we attempt to find point estimates that maximize the conditional
marginal likelihood. This resembles empirical Bayesian inference, where the hyperparameters are
set by maximizing the full marginal likelihood. In our model, the evidence has no closed form; however, the conditional evidence for (θt, σ_b², σ) given (kx, θx) and the conditional evidence for θx given
(b, kt, θt, σ_b², σ) are available in closed form (eq. 8 and eq. 11). Thus, we alternate between (1) maximizing the
conditional evidence to set (θt, σ_b², σ) and finding the MAP estimates of (b, kt), and (2) maximizing
the conditional evidence to set θx and finding the MAP estimate of kx; that is,

    (θ̂t, σ̂_b², σ̂) = arg max_{θt, σ_b², σ} p(D | θt, σ_b², σ, k̂x, θ̂x),    (13)

    (b̂, k̂t) = arg max_{b, kt} p(b, kt | θ̂t, σ̂_b², σ̂, k̂x, θ̂x, D),    (14)

    θ̂x = arg max_{θx} p(D | θx, b̂, k̂t, θ̂t, σ̂_b², σ̂),    (15)

    k̂x = arg max_{kx} p(kx | θ̂x, b̂, k̂t, θ̂t, σ̂_b², σ̂, D).    (16)

The approximate algorithm works well if the conditional evidence is tightly concentrated around its
maximum. Note that if the hyperparameters are fixed, the iterative updates of (b, kt) and kx given
above amount to alternating coordinate ascent on the posterior over (b, K).
5
Extension to Poisson likelihood
When the likelihood is non-Gaussian, blocked-Gibbs sampling is not tractable, because we do not
have a closed-form expression for the conditional evidence. Here, we introduce a fast, approximate
inference algorithm for the low-rank RF model under the LNP likelihood. The basic steps are the
same as those in the approximate algorithm (Sec. 4). However, we make a Gaussian approximation to
the conditional posterior over (b, kt) given kx via the Laplace approximation. We then approximate
the conditional evidence for (θt, σ_b²) given kx at the posterior mode of (b, kt) given kx. The details
are as follows.

The conditional evidence for θt given kx is

    p(D | θt, σ_b², kx, θx) ∝ ∫ Poiss(y | g(Mx′ wt)) N(wt | 0, Cwt) dwt.    (17)

The integrand is proportional to the conditional posterior over wt given kx, which we approximate
by a Gaussian distribution via the Laplace approximation:

    p(wt | θt, σ_b², kx, D) ≈ N(ŵt, Λt),    (18)

where ŵt is the conditional MAP estimate of wt, obtained by numerically maximizing the log conditional posterior for wt (e.g., using Newton's method; see Appendix A),

    log p(wt | θt, σ_b², kx, D) = y^T log(g(Mx′ wt)) − g(Mx′ wt) − (1/2) wt^T Cwt^{-1} wt + c,    (19)

and Λt is the covariance of the conditional posterior, obtained from the second derivative of the log
conditional posterior around its mode: Λt^{-1} = Ht + Cwt^{-1}, where Ht = −∂²/∂wt² log p(D | wt, Mx′)
denotes the Hessian of the negative log-likelihood.
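With the exponential nonlinearity g(·) = exp(·), the gradient and Hessian of eq. 19 have simple closed forms, so the Newton update is straightforward. The following is a sketch on simulated data (dimensions and names are illustrative, and the isotropic prior precision stands in for Cwt^{-1}):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 5
M = 0.3 * rng.standard_normal((n, d))   # stands in for Mx' (illustrative)
Cw_inv = np.eye(d)                      # stands in for the prior precision Cwt^{-1}
w_true = rng.standard_normal(d)
y = rng.poisson(np.exp(M @ w_true))     # simulated spike counts

w = np.zeros(d)                         # Newton ascent on eq. 19 with g = exp
for _ in range(20):
    r = np.exp(M @ w)                   # conditional intensity g(M w)
    grad = M.T @ (y - r) - Cw_inv @ w   # gradient of the log posterior
    H = M.T @ (M * r[:, None]) + Cw_inv # Ht + Cwt^{-1} (negative Hessian)
    w = w + np.linalg.solve(H, grad)

Lam = np.linalg.inv(H)                  # Laplace covariance near the mode (eq. 18)
r = np.exp(M @ w)
grad = M.T @ (y - r) - Cw_inv @ w       # gradient at the final iterate
print(np.linalg.norm(grad))
```

Because the log posterior is concave for this g, the Newton iterates converge to the unique MAP estimate ŵt, and Lam approximates Λt.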
[Figure 1 graphics: panel A shows the true rank-2 RF and estimates (ML, full-rank, low-rank fast, low-rank Gibbs) for 250 and 2000 samples; panel B plots MSE against the number of training samples.]
Figure 1: Simulated data. Data generated from the linear Gaussian response model with a rank-2 RF
(16 by 64 pixels: 1024 parameters for full-rank model; 160 for rank-2 model). A. True rank-2 RF
(left). Estimates obtained by ML, full-rank ALD, low-rank approximate method, and blocked-Gibbs
sampling, using 250 samples (top), and using 2000 samples (bottom), respectively. B. Average mean
squared error of the RF estimate by each method (average over 10 independent repetitions).
Under the Gaussian posterior (eq. 18), the log conditional evidence (log of eq. 17) at the posterior
? t is simply
mode wt = w
log p(D|?t , ?b2 , kx ) ?
?1
? t> Cw
?t ?
log p(D|w?t , Mx0 ) ? 12 w
w
t
1
2
log |Cwt ??1
t |,
which we maximize to set ?t and ?b2 . Due to space limit, we omit the derivations for the conditional
posterior for kx and the conditional evidence for ?x given (b, kt ). (See Appendix B).
6
Results
6.1
Simulations
We first tested the performance of the blocked-Gibbs sampling and the fast approximate algorithm
on a simulated Gaussian neuron with a rank-2 RF of 16 temporal bins and 64 spatial pixels shown in
Fig. 1A. We compared these methods with the maximum likelihood estimate and the full-rank ALD
estimate. Fig. 1 shows that the low-rank RF estimates obtained by the blocked-Gibbs sampling
and the approximate algorithm perform similarly, and achieve lower mean squared error than the
full-rank RF estimates.
[Figure 2 graphics: panel A shows RF estimates under the linear Gaussian and LNP models; panel B plots MSE against the number of training samples.]
Figure 2: Simulated data. Data generated from the linear-nonlinear Poisson (LNP) response model
with a rank-2 RF (shown in Fig. 1A) and "softrect" nonlinearity. A. Estimates obtained by ML, full-rank ALD, low-rank approximate method under the linear Gaussian model, and the methods under
the LNP model, using 250 (top) and 2000 (bottom) samples, respectively. B. Average mean squared
error of the RF estimate (from 10 independent repetitions). The low-rank RF estimates under the
LNP model perform better than those under the linear Gaussian model.
We then tested the performance of the above methods on a simulated linear-nonlinear Poisson (LNP)
neuron with the same RF and the softrect nonlinearity. We estimated the RF using each method
under the linear Gaussian model as well as under the LNP model. Fig. 2 shows that the low-rank RF
[Figure 3 graphics: relative test likelihood per stimulus versus rank, and low-rank RF estimates (Gibbs, fast, STA) for two V1 simple cells.]
Figure 3: Comparison of low-rank RF estimates for V1 simple cells (using white noise flickering
bars stimuli [16]). A: Relative likelihood per test stimulus (left) and low-rank RF estimates for
three different ranks (right). Relative likelihood is the ratio of the test likelihood of rank-1 STA to
that of other estimates. Using 1 minutes of training data, the rank-2 RF estimates obtained by the
blocked-Gibbs sampling and the approximate method achieve the highest test likelihood (estimates
are shown in the top row), while rank-1 STA achieves the highest test likelihood, since more noise is
added to the low-rank STA as the rank increases (estimates are shown in the bottom row). Relative
likelihood under full rank ALD is 2.25. B: Similar plot for another V1 simple cell. The rank-4
estimates obtained by the blocked-Gibbs sampling and the approximate method achieve the highest
test likelihood for this cell. Relative likelihood under full rank ALD is 2.17.
estimates perform better than full-rank estimates regardless of the model, and that the low-rank RF
estimates under the LNP model achieved the lowest MSE.
6.2
Application to neural data
We applied our methods to estimate the RFs of V1 simple cells and retinal ganglion cells (RGCs).
The details of data collection are described in [16, 9]. We performed 10-fold cross-validation using
1 minute of training and 2 minutes of test data. In Fig. 3 and Fig. 4, we show the average test
likelihood as a function of RF rank under the linear Gaussian model. We also show the low-rank
RF estimates obtained by our methods as well as the low-rank STA. The low-rank STA (rank p) is
computed as K̂_{STA,p} = Σ_{i=1}^{p} d_i u_i v_i^T, where d_i is the i-th singular value, and u_i and v_i are the i-th left
and right singular vectors, respectively. If the stimulus distribution is non-Gaussian, the low-rank
STA will have larger bias than the low-rank ALD estimate.
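The rank-p STA amounts to truncating the SVD of the full STA matrix. A sketch (the synthetic "STA" here is illustrative, not from the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)
dt, dx, p = 25, 100, 3

# Synthetic "full" STA: a rank-3 matrix plus a little noise.
sta = sum(np.outer(rng.standard_normal(dt), rng.standard_normal(dx))
          for _ in range(3))
sta = sta + 0.01 * rng.standard_normal((dt, dx))

U, d, Vt = np.linalg.svd(sta, full_matrices=False)

# K_hat_{STA,p} = sum_{i<=p} d_i u_i v_i^T  (keep the top-p singular triples)
sta_lowrank = U[:, :p] @ np.diag(d[:p]) @ Vt[:p, :]

print(sta_lowrank.shape)                   # (25, 100)
print(np.linalg.matrix_rank(sta_lowrank))  # 3
```

By the Eckart-Young theorem this truncation is the best rank-p approximation of the STA in Frobenius norm, though (as noted above) it inherits the STA's bias under non-Gaussian stimuli.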
[Figure 4 graphics: relative test likelihood per stimulus versus rank, and top singular vectors of the estimated RFs for an off-RGC and an on-RGC cell.]
Figure 4: Comparison of low-rank
RF estimates for retinal data (using
binary white noise stimuli [9]). The
RF consists of 10 by 10 spatial pixels
and 25 temporal bins (2500 RF coefficients). A: Relative likelihood per
test stimulus (left), top three left singular vectors (middle) and right singular vectors (right) of estimated RF
for an off-RGC cell. The sampling-based RF estimate benefits from a
rank-3 representation, making use
of three distinct spatial and temporal components, whereas the performance of the low-rank STA degrades
above rank 1. Relative likelihood
under full rank ALD is 1.0146. B:
Similar plot for on-RGC cell. Relative likelihood under full rank ALD
is 1.006. Both estimates perform best
with rank 1.
[Figure 5 graphics: panel A shows RF estimates; panel B plots prediction error and panel C runtime against minutes of training data.]
Figure 5: RF estimates for a V1 simple cell. (Data from [16]). A: RF estimates obtained by ML
(left) and low-rank blocked-Gibbs sampling under the linear Gaussian model (middle), and low-rank
approximate algorithm under the LNP model (right), for two different amounts of training data (30
sec. and 2 min.). The RF consists of 16 temporal and 16 spatial dimensions (256 RF coefficients).
B: Average prediction (on spike count) error across 10-subset of available data. The low-rank RF
estimates under the LNP model achieved the lowest prediction error among all other methods. C:
Runtime of each method. The low-rank approximate algorithms took less than 10 sec., while the
full-rank inference methods took 10 to 100 times longer.
Finally, we applied our methods to estimate the RF of a V1 simple cell with four different amounts
of training data (0.25, 0.5, 1, and 2 minutes) and computed the prediction error of each estimate
under the linear Gaussian and the LNP models. In Fig. 5, we show the estimates using 30 sec. and 2
min. of training data. We computed the test likelihood of each estimate to set the RF rank and found
that the rank-2 RF estimates achieved the highest test likelihood. In terms of the average prediction
error, the low-rank RF estimates obtained by our fast approximate algorithm achieved the lowest
error, while the runtime of the algorithm was significantly lower than full-rank inference methods.
7
Conclusion
We have described a new hierarchical model for low-rank RFs. We introduced a novel prior for
low-rank matrices based on a restricted matrix normal distribution, which has the feature of preserving a marginally Gaussian prior over the regression coefficients. We used a "localized" form to
define row and column covariance matrices in the matrix normal prior, which allows the model to
flexibly learn smooth and sparse structure in RF spatial and temporal components. We developed
two inference methods: an exact one based on MCMC with blocked-Gibbs sampling and an approximate one based on alternating evidence optimization. We applied the model to neural data using
both Gaussian and Poisson noise models, and found that the Poisson (or LNP) model performed
best despite the increased reliance on approximate inference. Overall, we found that low-rank estimates achieved higher prediction accuracy with significantly lower computation time compared to
full-rank estimates.
We believe our localized, low-rank RF model will be especially useful in high-dimensional settings,
particularly in cases where the stimulus covariance matrix does not fit in memory. In future work, we
will develop fully Bayesian inference methods for low-rank RFs under the LNP noise model, which
will allow us to quantify the accuracy of our approximate method. Secondly, we will examine
methods for inferring the RF rank, so that the number of space-time separable components can be
determined automatically from the data.
Acknowledgments
We thank N. C. Rust and J. A. Movshon for V1 data, and E. J. Chichilnisky, J. Shlens, A. .M. Litke,
and A. Sher for retinal data. This work was supported by a Sloan Research Fellowship, McKnight
Scholar's Award, and NSF CAREER Award IIS-1150186.
8
References
[1] F. Theunissen, S. David, N. Singh, A. Hsu, W. Vinje, and J. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network: Computation in Neural Systems, 12:289–316, 2001.
[2] D. Smyth, B. Willmore, G. Baker, I. Thompson, and D. Tolhurst. The receptive-field organization of simple cells in primary visual cortex of ferrets under natural scene stimulation. Journal of Neuroscience, 23:4746–4759, 2003.
[3] M. Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, 15, 2003.
[4] S. V. David and J. L. Gallant. Predicting neuronal responses during natural vision. Network: Computation in Neural Systems, 16(2):239–260, 2005.
[5] M. Park and J. W. Pillow. Receptive field inference with localized priors. PLoS Comput Biol, 7(10):e1002219, 2011.
[6] Jennifer F. Linden, Robert C. Liu, Maneesh Sahani, Christoph E. Schreiner, and Michael M. Merzenich. Spectrotemporal structure of receptive fields in areas AI and AAF of mouse auditory cortex. Journal of Neurophysiology, 90(4):2660–2675, 2003.
[7] Anqi Qiu, Christoph E. Schreiner, and Monty A. Escabi. Gabor analysis of auditory midbrain receptive fields: spectro-temporal and binaural composition. Journal of Neurophysiology, 90(1):456–476, 2003.
[8] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: an information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6(4):414–428, 2006.
[9] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatiotemporal correlations and visual signaling in a complete neuronal population. Nature, 454:995–999, 2008.
[10] A. J. Izenman. Reduced-rank regression for the multivariate linear model. Journal of Multivariate Analysis, 5(2):248–264, 1975.
[11] Gregory C. Reinsel and Rajabather Palani Velu. Multivariate Reduced-Rank Regression: Theory and Applications. Springer, New York, 1998.
[12] John Geweke. Bayesian reduced rank regression in econometrics. Journal of Econometrics, 75(1):121–146, 1996.
[13] A. P. Dawid. Some matrix-variate distribution theory: notational considerations and a Bayesian application. Biometrika, 68(1):265, 1981.
[14] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[15] M. Park and J. W. Pillow. Bayesian active learning with localized priors for fast receptive field characterization. In NIPS, pages 2357–2365, 2012.
[16] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945–956, 2005.
Spectral methods for neural characterization using
generalized quadratic models
Il Memming Park?123 , Evan Archer?13 , Nicholas Priebe14 , & Jonathan W. Pillow123
1. Center for Perceptual Systems, 2. Dept. of Psychology,
3. Division of Statistics & Scientific Computation, 4. Section of Neurobiology,
The University of Texas at Austin
{memming@austin., earcher@, nicholas@, pillow@mail.} utexas.edu
Abstract
We describe a set of fast, tractable methods for characterizing neural responses
to high-dimensional sensory stimuli using a model we refer to as the generalized
quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron?s stimulus selectivity in terms of a set linear receptive
fields followed by a quadratic combination rule, and the invertible nonlinearity
maps this output to the desired response range. Special cases of the GQM include
the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson
model [3]. Here we show that for "canonical form" GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximum-likelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance and, in the Gaussian noise case, provides closed-form estimators under a
large class of non-Gaussian stimulus distributions. We show that these estimators
are fast and provide highly accurate estimates with far lower computational cost
than full maximum likelihood. Moreover, the GQM provides a natural framework
for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data
using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains.
1
Introduction
Although sensory stimuli are high-dimensional, sensory neurons are typically sensitive to only a
small number of stimulus features. Linear dimensionality-reduction methods seek to identify these
features in terms of a subspace spanned by a small number of spatiotemporal filters. These filters,
which describe how the stimulus is integrated over space and time, can be considered the first stage in
a "cascade" model of neural responses. In the well-known linear-nonlinear-Poisson (LNP) cascade
model, filter outputs are combined via a nonlinear function to produce an instantaneous spike rate,
which generates spikes via an inhomogeneous Poisson process [4, 5].
The most popular methods for dimensionality reduction with spike train data involve the first two
moments of the spike-triggered stimulus distribution: (1) the spike-triggered average (STA) [7–9];
and (2) the major and minor eigenvectors of the spike-triggered covariance (STC) matrix [10, 11].^1 STC
analysis can be described as a spectral method because the estimate is obtained by eigenvector
* These authors contributed equally.
^1 Related moment-based estimators have also appeared in the statistics literature under the names "inverse regression" and "sufficient dimensionality reduction", although the connection to STA and STC analysis does not appear to have been noted previously [12, 13].
Figure 1: Schematic of the generalized quadratic model (GQM) for analog or spike train data. The stimulus passes through a bank of linear filters, a quadratic function, a nonlinearity, and an exponential-family noise model; recurrent filters can feed the (analog or spiking) response back into the model.
decomposition of an appropriately defined matrix. Compared to likelihood-based methods, spectral
methods are generally computationally efficient and devoid of (non-global) local optima.
Recently, Park and Pillow [3] described a connection between STA/STC analysis and maximum
likelihood estimators based on a quantity called the expected log-likelihood (EL). The EL results
from replacing the nonlinear term in the log-likelihood and with its expectation over the stimulus
distribution. When the stimulus is Gaussian, the EL depends only on moments (mean spike rate,
STA, STC, and stimulus mean and covariance) and leads to a closed-form spectral estimate for LNP
filters, which has STC analysis as a special case. More recently, Ramirez and Paninski derived EL-based estimators for the linear Gaussian model and proposed fast EL-based inference methods for
generalized linear models (GLMs) [14].
Here, we show that the EL framework can be extended to a more general class that we refer to
as the generalized quadratic model (GQM). The GQM represents a straightforward extension of
the generalized linear model GLM [15, 16] wherein the linear predictor is replaced by a quadratic
function (Fig. 1). For Gaussian and Poisson GQMs, we derive computationally efficient EL-based
estimators that apply to a variety of non-Gaussian stimulus distributions; this substantially extends
previous work on the conditions of validity for moment-based estimators [7,17?19]. In the Gaussian
case, the EL-based estimator has a closed form solution that relies only on the first two responseweighted moments and the first four stimulus moments. In the Poisson case, GQMs provide a
natural synthesis of models that have multiple filters (i.e., where the response depends on multiple
projections of the stimulus) and dependencies on spike history. We show that spectral estimates of a
low-dimensional feature space are nearly as accurate as maximum likelihood estimates (for GQMs
without spike-history), and demonstrate the applicability of GQMs for both analog and spiking data.
2
Generalized Quadratic Models
We begin by briefly reviewing the class of models known as GLMs, which includes the single-filter LNP model and the Wiener model from the systems identification literature. A GLM has three
basic components: a linear stimulus filter, an invertible nonlinearity (or ?inverse link? function),
and an exponential-family noise model. The GLM describes the conditional response y to a vector
stimulus x as:
y|x ~ P(f(w^T x)),   (1)
where w is the filter, f is the nonlinearity, and P(λ) denotes a noise distribution with
mean λ. From the standpoint of dimensionality reduction, the GLM makes the strong modeling
assumption that response y depends upon x only via its one-dimensional projection onto w.
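As a concrete illustration of eq. 1, the following sketch simulates a Poisson GLM with the canonical exponential nonlinearity. This is not the authors' code; the filter w, sample count, and stimulus dimension are made-up values chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5000, 16                            # hypothetical sample count and stimulus dimension
w = rng.standard_normal(d) / np.sqrt(d)    # linear filter (made up)
X = rng.standard_normal((N, d))            # Gaussian stimuli x_i

rate = np.exp(X @ w)                       # f = exp, canonical inverse link for Poisson noise
y = rng.poisson(rate)                      # y | x ~ Poiss(f(w^T x))

# under the GLM, the response depends on x only through the 1D projection w^T x
proj = X @ w
```

Note that `rate` is a deterministic function of `proj` alone, which is exactly the dimensionality-reduction assumption the surrounding text describes.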
At the other end of the modeling spectrum sits the very general "multiple filter" linear-nonlinear
(LN) cascade model, which posits that the response depends on a p-dimensional projection of
the stimulus, represented by a bank of filters {w_i}_{i=1}^p, combined via some arbitrary multidimensional function f : R^p -> R:
y|x ~ P(f(w_1^T x, . . . , w_p^T x)).   (2)
Spike-triggered covariance analysis and related methods provide low-cost estimates of the filters
{w_i} under Poisson or Bernoulli noise models, but only under restrictive conditions on the stimulus
distribution (e.g., elliptical symmetry) and some weak conditions on f [17, 19]. Semi-parametric
estimators like "maximally informative dimensions" (MID) eliminate these restrictions [20], but do
not practically scale beyond two or three filters without additional modeling assumptions [21].
The generalized quadratic model (GQM) provides a tractable middle ground between the GLM and
general multi-filter LN models. The GQM allows for multi-dimensional stimulus dependence, yet
restricts the nonlinearity to be a transformed quadratic function [22–25]. The GQM can be written:
y|x ~ P(f(Q(x))),   (3)
where Q(x) = x^T C x + b^T x + a denotes a quadratic function of x, governed by a (possibly low-rank) symmetric matrix C, a vector b, and a scalar a. Note that the GQM may be regarded as a
GLM in the space of quadratically transformed stimuli [6], although this approach does not allow
Q(x) to be parametrized directly in terms of a projection onto a small number of linear filters.
In the following, we show that the elliptical-LNP model [3] is a GQM with Poisson noise, and make
a detailed study of canonical GQMs with Gaussian noise. We show that the maximum-EL estimates
for C, b, and a have similar forms for both Gaussian and Poisson GQMs, and that the eigenspectrum
of C provides accurate estimates of a neuron?s low-dimensional feature space. Finally, we show that
the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity with
dependencies on spike train history or other response covariates.
3
Estimation with expected log-likelihoods
The expected log-likelihood is a quantity that approximates log-likelihood but can be computed very
efficiently using moments. It exists for any GQM or GLM with "canonical" nonlinearity (or link
function). The canonical nonlinearity for an exponential-family noise distribution has the special
property that it allows the log-likelihood to be written as the sum of two terms: a term that depends
linearly on the responses {yi }, and a second (nonlinear) term that depends only on the stimuli
{x_i} and parameters θ. The expected log-likelihood (EL) results from replacing the nonlinear term
with its expectation over the stimulus distribution P (x), which in neurophysiology settings is often
known a priori to the experimenter. Maximizing the EL results in maximum expected log-likelihood
(MEL) estimators that have very low computational cost while achieving nearly the accuracy of
full maximum likelihood (ML) estimators. Spectral decompositions derived from the EL provide
estimators that generalize STA/STC analysis. In the following, we derive MEL estimators for three
special cases?two for the Gaussian noise model, and one for the Poisson noise model.
3.1
Gaussian GQMs
Gaussian noise provides a natural model for analog neural response variables like membrane potential or fluorescence. The canonical nonlinearity for Gaussian noise is the identity function, f(x) = x.
The canonical-form Gaussian GQM can therefore be written: y|x ~ N(Q(x), σ^2). Given a
dataset {x_i, y_i}_{i=1}^N, the log-likelihood per sample is:

L = -(1/(2σ^2)) (1/N) Σ_i (Q(x_i) - y_i)^2
  = -(1/(2σ^2)) (1/N) Σ_i ( -2 Q(x_i) y_i + Q(x_i)^2 ) + const
  = -(1/(2σ^2)) [ -2 ( Tr(CΛ) + μ^T b + a ȳ ) + (1/N) Σ_i Q(x_i)^2 ] + const,   (4)

where σ^2 is the noise variance, const is a parameter-independent constant, ȳ = (1/N) Σ_i y_i is the mean
response, and μ and Λ denote cross-correlation statistics that we will refer to (in a slight abuse of
terminology) as the response-triggered average and response-triggered covariance:

μ = (1/N) Σ_{i=1}^N y_i x_i   ("RTA"),        Λ = (1/N) Σ_{i=1}^N y_i x_i x_i^T   ("RTC").^2   (5)

The expected log-likelihood results from replacing the troublesome nonlinear term (1/N) Σ_i Q(x_i)^2
by its expectation over the stimulus distribution. This is justified by the law of large numbers, which
asserts that (1/N) Σ_i Q(x_i)^2 converges to E_{P(x)}[Q(x)^2] asymptotically. Leaving off the const term,
this leads to the per-sample expected log-likelihood [3, 14], which is defined:

L̃ = -(1/(2σ^2)) [ -2 ( Tr(CΛ) + μ^T b + a ȳ ) + E[Q(x)^2] ].   (6)

^2 When responses y_i are spike counts, these correspond to the STA and STC.
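The law-of-large-numbers substitution behind the EL can be checked numerically. The sketch below (with made-up parameters a, b, C and whitened Gaussian stimuli, so Φ = I) compares the empirical nonlinear term (1/N) Σ_i Q(x_i)^2 against its closed-form Gaussian expectation 2 Tr((CΦ)^2) + b^T Φ b + (Tr(CΦ) + a)^2:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 100000, 3
X = rng.standard_normal((N, d))          # x ~ N(0, I), i.e. Phi = I

# made-up GQM parameters (symmetric C)
C = np.array([[0.5, 0.1, 0.0],
              [0.1, -0.2, 0.0],
              [0.0, 0.0, 0.0]])
b = np.array([0.3, -0.1, 0.2])
a = 0.4

# Q(x_i) = x_i^T C x_i + b^T x_i + a, evaluated for all samples at once
Q = np.einsum('ni,ij,nj->n', X, C, X) + X @ b + a

# empirical nonlinear term vs. its expectation under P(x) = N(0, I)
empirical = np.mean(Q**2)
expected = 2 * np.trace(C @ C) + b @ b + (np.trace(C) + a)**2
```

With 10^5 samples the two quantities agree to a few decimal places, which is what makes the moment-based EL a faithful stand-in for the full log-likelihood.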
Gaussian stimuli
If the stimuli are drawn from a Gaussian distribution, x ~ N(0, Φ), then we have (from [26]):

E[Q(x)^2] = 2 Tr((CΦ)^2) + b^T Φ b + (Tr(CΦ) + a)^2.   (7)

The EL is concave in the parameters a, b, C, so we can obtain the MEL estimates by finding the
stationary point:

∂L̃/∂a = -(1/(2σ^2)) ( -2 ȳ + 2 (Tr(CΦ) + a) ) = 0    =>    a_mel = ȳ - Tr(C_mel Φ)   (8)
∂L̃/∂b = -(1/(2σ^2)) ( -2 μ + 2 Φ b ) = 0             =>    b_mel = Φ^{-1} μ   (9)
∂L̃/∂C = -(1/(2σ^2)) ( -2 Λ + 4 Φ C Φ + 2 ȳ Φ ) = 0   =>    C_mel = (1/2) ( Φ^{-1} Λ Φ^{-1} - ȳ Φ^{-1} )   (10)
Note that this coincides with the moment-based estimate for the 2nd-order Volterra model [2].
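The closed forms in eqs. 8–10 can be exercised end to end. The sketch below simulates a Gaussian GQM with made-up parameters and whitened stimuli (Φ = I, so the matrix inverses drop out), computes the moment statistics of eq. 5, and applies the MEL formulas:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 100000, 4
X = rng.standard_normal((N, d))              # x ~ N(0, Phi) with Phi = I

# made-up ground-truth GQM parameters
a_true = 0.5
b_true = np.array([1.0, -0.5, 0.0, 0.2])
C_true = np.diag([0.8, -0.3, 0.0, 0.0])

Q = np.einsum('ni,ij,nj->n', X, C_true, X) + X @ b_true + a_true
y = Q + 0.1 * rng.standard_normal(N)         # Gaussian GQM response

# moment statistics (eq. 5)
ybar = y.mean()
mu = (y[:, None] * X).mean(axis=0)           # response-triggered average
Lam = (X.T * y) @ X / N                      # response-triggered covariance

# closed-form MEL estimates (eqs. 8-10), specialized to Phi = I
C_mel = 0.5 * (Lam - ybar * np.eye(d))
b_mel = mu
a_mel = ybar - np.trace(C_mel)
```

The estimates converge to (a, b, C) at the usual 1/sqrt(N) rate, with no iterative optimization: only a single pass over the data to accumulate ȳ, μ, and Λ.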
Axis-symmetric stimuli
More generally, we can derive the MEL estimator for stimuli with arbitrary axis-symmetric distributions with finite 4th-order moments. Axis-symmetric distributions exhibit invariance under
reflections around each axis, that is, P(x_1, . . . , x_d) = P(ε_1 x_1, . . . , ε_d x_d) for any ε_i ∈ {-1, 1}.
The class of axis-symmetric distributions subsumes both radially symmetric and independent product distributions. However, axis symmetry is a strictly weaker condition; significantly, marginals
need not be identically distributed.

To simplify derivation of the MEL estimator for axis-symmetric stimuli, we take the derivative of
Q(x) with respect to (a, b, C) before taking the expectation. Derivatives with respect to model
parameters are given by ∂E[Q(x)^2]/∂θ_i = E[2 Q(x) ∂Q(x)/∂θ_i]. For each θ_i, we solve the equation

∂L̃/∂θ_i = -2 ∂(Tr(CΛ) + μ^T b + a ȳ)/∂θ_i + 2 E[Q(x) ∂Q(x)/∂θ_i] = 0.

From derivatives w.r.t. a, b, and C, respectively, we obtain conditions for the MEL estimates:

ȳ = E[Q(x)] = a + b^T E[x] + Tr(C E[x x^T])
μ = E[Q(x) x] = a E[x] + E[x x^T] b + Σ_{i,j} C_ij E[x_i x_j x]
Λ = E[Q(x) x x^T] = a E[x x^T] + Σ_i b_i E[x_i x x^T] + Σ_{i,j} C_ij E[x_i x_j x x^T],

where the subindices within the sums run over components. Due to axis symmetry, E[x], E[x_i x_j x_k]
and E[x_i x_j^3] are all zero for distinct indices. Thus, the MEL estimates for a and b are identical to the
Gaussian case given above. If we further assume that the stimulus is whitened so that E[x x^T] = I, the
sufficient stimulus statistics are the 4th-order even moments, which we represent with the matrix
M_ij = E[x_i^2 x_j^2].

In general, when the marginals are not identical but the joint distribution is axis-symmetric,

Σ_{i,j} C_ij E[x_i x_j x x^T] = Σ_i C_ii diag(M_i1, · · · , M_id) + Σ_{i≠j} C_ij M_ij e_i e_j^T
                             = diag(1^T (I ∘ C) M) + C ∘ M ∘ (1 1^T - I),   (11)

where 1 is a vector of 1's, e_i is the standard basis, and ∘ denotes the Hadamard product. We can solve
these sets of linear equations for the diagonal terms and off-diagonal terms separately, obtaining

[C_mel]_ij = Λ_ij / (2 M_ij),              i ≠ j
[C_mel]_ii = [Δ (M ∘ 1 1^T)^{-1}]_ii,      i = j    (12)
Figure 2: Maximum expected log-likelihood (MEL) estimators for a Gaussian GQM under different
assumptions about the stimulus distribution. (left) Axis-symmetric stimulus distribution in 2D. The
horizontal axis is a (symmetric) mixture of Gaussians, and the vertical axis is a uniform distribution.
Red dots indicate samples from the distribution. (right) Response prediction based on various Ĉ
estimated using eq. 10 (Gaussian assumption, r^2 = 0.424), eq. 14 (iid axis-symmetric, r^2 = 0.894), and eq. 12 (general axis-symmetric, r^2 = 0.99). Performance is evaluated on a cross-validation test set
with no noise for each Ĉ, and we see a huge loss in performance as a result of an incorrect assumption
about the stimulus distribution.
where Δ = diag(1^T (I ∘ Λ) - ȳ 1^T).
For the special case when the marginal distributions are identical, we note that

E[x^T C x (x x^T)] = μ_22 Tr(C) I + (μ_4 - μ_22) C ∘ I + 2 μ_22 C ∘ (1 1^T - I),   (13)

where μ_22 = E[x_1^2 x_2^2] = M_12 and μ_4 = E[x_1^4] = M_11. This gives the simplified formula (also
given in [27]):

[C_mel]_ij = Λ_ij / (2 μ_22),              i ≠ j
[C_mel]_ii = (Λ_ii - ȳ) / (μ_4 - μ_22),    i = j    (14)
When the stimulus is not Gaussian or the marginals not identical, the estimates obtained from
(eq. 10) and (eq. 14) are not consistent. In this case, the general axis-symmetric estimate (eq. 12)
gives much better performance, as we illustrate with a simulated example in Fig. 2.
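The same contrast can be reproduced in a few lines. The sketch below uses made-up parameters and iid uniform stimuli on [-sqrt(3), sqrt(3)], an axis-symmetric distribution with identical marginals, unit variance, μ_4 = 9/5, and μ_22 = 1. The axis-symmetric formula (eq. 14) recovers C, while the Gaussian-stimulus formula (eq. 10) is biased on the diagonal:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 200000, 3
half = np.sqrt(3.0)
X = rng.uniform(-half, half, size=(N, d))    # iid uniform: E[x^2] = 1, axis-symmetric

# made-up ground-truth parameters
C_true = np.array([[0.6, 0.2, 0.0],
                   [0.2, -0.4, 0.0],
                   [0.0, 0.0, 0.0]])
b_true = np.array([0.5, 0.0, -0.3])
a_true = 0.2
y = (np.einsum('ni,ij,nj->n', X, C_true, X) + X @ b_true + a_true
     + 0.1 * rng.standard_normal(N))

ybar = y.mean()
Lam = (X.T * y) @ X / N

# analytic moments of this uniform marginal: mu4 = E[x^4] = 9/5, mu22 = E[x1^2 x2^2] = 1
mu4, mu22 = 9.0 / 5.0, 1.0

# eq. 14: correct estimate for identical axis-symmetric marginals
C_mel = Lam / (2 * mu22)
np.fill_diagonal(C_mel, (np.diag(Lam) - ybar) / (mu4 - mu22))

# eq. 10 (Gaussian-stimulus assumption) mis-scales the diagonal for these stimuli
C_gauss = 0.5 * (Lam - ybar * np.eye(d))
```

Here the Gaussian formula shrinks each diagonal entry by a factor (μ_4 - 1)/2 = 0.4, mirroring the performance gap shown in Fig. 2.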
3.2
Poisson GQM
Poisson noise provides a natural model for discrete events like spike counts, and extends easily to
point process models for spike trains. The canonical nonlinearity for Poisson noise is exponential,
f(x) = exp(x), so the canonical-form Poisson GQM is: y|x ~ Poiss(exp(Q(x))). Ignoring
irrelevant constants, the log-likelihood per sample is

L = (1/N) Σ_i y_i log(exp(Q(x_i))) - (1/N) Σ_i exp(Q(x_i))
  = Tr(CΛ) + μ^T b + a ȳ - (1/N) Σ_i exp(Q(x_i)),   (15)

where ȳ, μ and Λ denote the mean response, STA, and STC, as given above (eq. 5). We obtain the
EL for a Poisson GQM by replacing the term (1/N) Σ_i exp(Q(x_i)) by its expectation with respect to
P(x). Under a zero-mean Gaussian stimulus distribution with covariance Φ, the closed-form MEL
estimates are (from [3]):

b_mel = (Λ - (1/ȳ) μ μ^T)^{-1} μ,        C_mel = (1/2) [ Φ^{-1} - ȳ (Λ - (1/ȳ) μ μ^T)^{-1} ],   (16)

where we assume that Λ - (1/ȳ) μ μ^T is invertible. Note that the MEL estimator combines information
from μ and Λ, unlike standard STA- and STC-based estimates, which maximize the EL only when either
b or C is zero (respectively). Park and Pillow (2011) used the Poisson EL in conjunction with a log-prior
to obtain approximate Bayesian estimates, an approach referred to as Bayesian STC [3].
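A minimal sketch of eq. 16, with made-up parameters and whitened Gaussian stimuli (Φ = I). The eigenvalues of C are kept below 1/2 so that E[exp(Q(x))] is finite, and the moment statistics are the same ȳ, μ, Λ of eq. 5:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 400000, 3
X = rng.standard_normal((N, d))                  # x ~ N(0, Phi), Phi = I

C_true = np.diag([0.15, -0.2, 0.0])              # eigenvalues < 1/2 (made up)
b_true = np.array([0.3, 0.0, -0.2])
a_true = -1.0

rate = np.exp(np.einsum('ni,ij,nj->n', X, C_true, X) + X @ b_true + a_true)
y = rng.poisson(rate)

ybar = y.mean()
mu = (y[:, None] * X).mean(axis=0)
Lam = (X.T * y) @ X / N

# eq. 16 with Phi = I: M is the (scaled) spike-triggered covariance about the STA
M = Lam - np.outer(mu, mu) / ybar
b_mel = np.linalg.solve(M, mu)
C_mel = 0.5 * (np.eye(d) - ybar * np.linalg.inv(M))
```

As in the Gaussian case, the estimator is a single pass over the data plus two small matrix inversions, with no likelihood optimization.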
Figure 3: Rank-1 quadratic filter reconstruction performance for three estimators (MELE, 1st eigenvector; rank-1 MELE; rank-1 ML). Both rank-1 models were optimized using conjugate gradient descent. (Left) l1 distance from the ground-truth filter as a function of the number of samples. (Right) Computation time for the optimization.
Mixture-of-Gaussians stimuli
Results for Gaussian stimuli extend naturally to mixtures of Gaussians, which can be used to approximate arbitrary stimulus distributions. The EL for mixture-of-Gaussian stimuli can be computed
simply via the linearity of expectation. For stimuli drawn from a mixture Σ_j α_j N(μ_j, Φ_j) with
mixing weights Σ_j α_j = 1, the EL is

L̃ = Tr(CΛ) + μ^T b + a ȳ - Σ_j α_j E_{N(μ_j, Φ_j)}[e^{Q(x)}],   (17)

where the Gaussian expectation terms are given by

E_{N(μ_j, Φ_j)}[e^{Q(x)}] = |I - 2CΦ_j|^{-1/2} exp( a + μ_j^T C μ_j + b^T μ_j + (1/2) (b + 2Cμ_j)^T (Φ_j^{-1} - 2C)^{-1} (b + 2Cμ_j) ).   (18)
Although the MEL estimator does not have a closed analytic form in this case, the EL can be efficiently optimized numerically, as it still depends on the responses only via the spike-triggered
moments ȳ, μ and Λ, and on the stimuli only via the mean, covariance, and mixing weight of each
Gaussian.
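The per-component expectation in eq. 18 can be verified directly against Monte Carlo sampling. The sketch below uses a single made-up mixture component (C, b, a, μ_j, Φ_j chosen so that Φ_j^{-1} - 2C is positive definite, which the formula requires):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 2
C = np.array([[0.1, 0.05],
              [0.05, -0.2]])
b = np.array([0.3, -0.1])
a = -0.5
mu_j = np.array([0.5, -0.2])
Phi_j = np.array([[1.0, 0.3],
                  [0.3, 0.8]])

# closed-form E_{N(mu_j, Phi_j)}[exp(Q(x))] (eq. 18)
A = np.linalg.inv(np.linalg.inv(Phi_j) - 2 * C)
u = b + 2 * C @ mu_j
closed = (np.linalg.det(np.eye(d) - 2 * C @ Phi_j) ** -0.5
          * np.exp(a + mu_j @ C @ mu_j + b @ mu_j + 0.5 * u @ A @ u))

# Monte Carlo estimate of the same expectation
Xs = rng.multivariate_normal(mu_j, Phi_j, size=400000)
mc = np.exp(np.einsum('ni,ij,nj->n', Xs, C, Xs) + Xs @ b + a).mean()
```

For a true mixture one would sum such terms with weights α_j; since each term depends on the data only through ȳ, μ, Λ, evaluating the EL during numerical optimization stays cheap.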
4 Spectral estimation for low-dimensional models
4.1 Low-rank parameterization
We have so far focused upon MEL estimators for the parameters a, b, and C. These results have a
natural mapping to dimensionality reduction methods. Under the GQM, a low-dimensional stimulus
dependence is equivalent to having a low-rank C. If C = BB^T for some d × p matrix B, we have a
p-filter model (or p + 1 filter model if the linear term b is not spanned by the columns of B). We can
obtain spectral estimates of a low-dimensional GQM by performing an eigenvector decomposition
of Cmel and selecting the eigenvectors corresponding to the largest p eigenvalues. The eigenvectors
of Cmel also make natural initializers for maximization of the full GQM likelihood.
In Fig. 3, we show the results of three different methods for recovering a simulated rank-1 GQM
with Poisson noise: (1) the largest eigenvector of Cmel , (2) numerically maximizing the expected
log-likelihood for a rank-1 GQM (i.e., with C parametrized as a rank-1 matrix), and (3) maximizing
the (full) likelihood for a rank-1 GQM. Although the difference in performance between expected
and full GQM log-likelihood is negligible, there is a drastic difference in optimization time complexity between the full and expected log-likelihood. The expected log-likelihood only requires
computation of the sufficient statistics, while the full ML estimate requires a full pass through the
dataset for each evaluation of the log-likelihood. Thus, the expected log-likelihood offers a fast yet
accurate estimate for C. In the following section we show that, asymptotically, the eigenvectors of
C_mel span the "correct" (in an appropriate sense) low-dimensional subspace.
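The spectral route described above is short in code. The sketch below (made-up rank-1 filter, whitened Gaussian stimuli, Gaussian noise) builds C_mel from eq. 10 and takes its dominant eigenvector as the filter estimate:

```python
import numpy as np

rng = np.random.default_rng(6)
N, d = 100000, 8
w = np.zeros(d)
w[0], w[1] = 1.0, 0.5
w /= np.linalg.norm(w)                       # made-up unit-norm rank-1 filter
C_true = 0.5 * np.outer(w, w)                # rank-1 quadratic kernel

X = rng.standard_normal((N, d))              # whitened Gaussian stimuli
y = np.einsum('ni,ij,nj->n', X, C_true, X) + 0.2 * rng.standard_normal(N)

ybar = y.mean()
Lam = (X.T * y) @ X / N
C_mel = 0.5 * (Lam - ybar * np.eye(d))       # eq. 10 with Phi = I

# spectral estimate: eigenvector of C_mel with largest-magnitude eigenvalue
evals, evecs = np.linalg.eigh(C_mel)
w_hat = evecs[:, np.argmax(np.abs(evals))]
```

The recovered `w_hat` can then initialize a numerical ascent of the full (or expected) rank-1 GQM likelihood, as done for Fig. 3.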
4.2
Consistency of subspace estimates
If the conditional probability satisfies y|x = y|B^T x for a matrix B, the neural feature space is spanned by the
columns of B. As a generalization of STC, we introduce moment-based dimensionality reduction
Figure 4: GQM fit and prediction for an intracellular recording in cat V1 with a trinary noise stimulus.
(A) Top: estimated linear (b) and quadratic (w_1 and w_2) filters for the GQM, lagged by 20 ms.
Bottom: the empirical marginal nonlinearities along each dimension (black) and model prediction
(red). (B) Cross-validated model prediction (red) and n = 94 recordings with repeats of identical
stimulus (light grey) along with their mean (black). Reported performance metric (r^2 = 0.55) is for
prediction of the mean response.
techniques that recover (portions of) B and show the relationship of these techniques to the MEL
estimators of the GQM.
We propose to use Φ^{-1/2} μ and the eigenvectors of Φ^{-1/2} Λ Φ^{-1/2} (whose eigenvalues are significantly
smaller or larger than 1) as the feature space basis. When the response is binary, this coincides
with the traditional STA/STC analysis, which is provably consistent only in the case of stimuli
drawn from a spherically symmetric (for STA) or independent Gaussian distribution (for STC) [5].
Below, we argue that this procedure can identify the subspace when y has mean f(B^T x) with finite
variance, f is some function, and the stimulus distribution is zero-mean with white covariance, i.e.,
E[x] = 0 and E[x x^T] = I.

First, note that by the law of large numbers, Λ → E[y x x^T] = E[ y E[x x^T | y] ]. Let P = B B^T (with B
taken to have orthonormal columns) be the projection operator onto the feature space, and P⊥ = I - P the
projection onto the perpendicular space. We follow the
discussion in [12, 13] regarding the related "sliced regression" literature. Recalling that E[x] = 0,
we can exploit the independence of P⊥ x and y to find

E[x x^T | y = η] = E[ (P + P⊥) x x^T (P + P⊥) | y = η ]
                = P E[x x^T | y = η] P + P⊥ E[x x^T] P⊥ = P E[x x^T | y = η] P + P⊥;

thus, E[y x x^T] = P E[y x x^T] P + E[y] P⊥, and therefore the eigenvectors of E[y x x^T] whose
eigenvalues significantly differ from E[y] span a subspace of the range of P. Effective estimation
of the subspace depends critically on both the stimulus distribution and the form of f. Under the
GQM, the eigenvectors of E[y x x^T] are closely related to the expected log-likelihood estimators we
derived earlier. Indeed, those eigenvectors of eq. 10, eq. 12 and eq. 16 whose associated eigenvalues
differ significantly from zero span precisely the same space.
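The argument above can be illustrated with a deliberately non-quadratic f (all choices below are made up for the example): eigenvalues of E[y x x^T] off the feature space cluster at E[y], while the feature direction stands out.

```python
import numpy as np

rng = np.random.default_rng(7)
N, d = 200000, 5
X = rng.standard_normal((N, d))              # E[x] = 0, E[x x^T] = I
Bv = np.zeros(d)
Bv[0] = 1.0                                  # one-column feature matrix B

y = np.abs(X @ Bv) ** 1.5                    # y = f(B^T x), f not quadratic

ybar = y.mean()
Lam = (X.T * y) @ X / N                      # sample estimate of E[y x x^T]

evals, evecs = np.linalg.eigh(Lam)
# the feature direction is the eigenvector whose eigenvalue deviates most from E[y]
idx = np.argmax(np.abs(evals - ybar))
v_hat = evecs[:, idx]
others = np.delete(evals, idx)
```

Despite f being neither quadratic nor monotone-linked, the deviating eigenvector aligns with the column of B, as the projection argument predicts.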
5 Results
5.1 Intracellular membrane potential
We fit a Gaussian GQM to intracellular recordings of membrane potential from a neuron in cat V1,
using a 2D spatiotemporal "flickering bars" stimulus aligned with the cell's preferred orientation
(Fig. 4). The recorded time series is a continuous signal, so the Gaussian GQM provides an appropriate noise model. The recorded voltage was median-filtered (to remove spikes) and down-sampled
to a 10 ms sample rate. We fit the GQM to a 21.6-minute recording of responses to a non-repeating
trinary noise stimulus. We validated the model using responses to 94 repeats of a 1-second frozen
noise stimulus. Panel (B) of Fig. 4 illustrates the GQM prediction on cross-validation data.
Although the cell was classified as "simple", meaning that its response is predominantly linear, the
GQM fit reveals two quadratic filters that also influence the membrane potential response. The GQM
captures a substantial percentage of the variance in the mean response, systematically outperforming
the GLM in terms of r^2 (GQM: 55% vs. GLM: 50%).
Figure 5: (left) GLM and GQM filters fit to spike responses of a retinal ganglion cell stimulated
with a 120 Hz binary full-field noise stimulus [28]. The GLM has only linear stimulus and spike
history filters (top left) while the GQM contains all four filters. Each plot shows the exponentiated
filter, so the ordinate has units of gain, and filters interact multiplicatively. Quadratic filter outputs are
squared and then subtracted from other inputs, giving them a suppressive effect on spiking (although
quadratic excitation is also possible). (right) Cross-validated rate prediction averaged over 167
repeated trials.
5.2
Retinal ganglion spike train
The Poisson GLM provides a popular model for neural spike trains due to its ability to incorporate
dependencies on spike history (e.g., refractoriness, bursting, and adaptation). These dependencies
cannot be captured by models with inhomogeneous Poisson output like the multi-filter LNP model
(which is also implicit in information-theoretic methods like MID [21]). The GLM achieves this
by incorporating a one-dimensional linear projection of spike history as an input to the model. In
general, however, a spike train may exhibit dependencies on more than one linear projection of spike
history.
The GQM extends the GLM by allowing multiple stimulus filters and multiple spike-history filters.
It can therefore capture multi-dimensional stimulus sensitivity (e.g., as found in complex cells) and
produce dynamic spike patterns unachievable by GLMs. We fit a Poisson GQM with a quadratic
history filter to data recorded from a retinal ganglion cell driven by a full-field white noise stimulus [28]. For ease of comparison, we fit a Poisson GLM, then added quadratic stimulus and history
filters, initialized using a spectral decomposition of the MEL estimate (eq. 16) and then optimized by
numerical ascent of the full log-likelihood. Both quadratic filters (which enter with negative sign),
have a suppressive effect on spiking (Fig. 5). The quadratic stimulus filter induces strong suppression at a delay of 5 frames, while the quadratic spike history filter induces strong suppression during
a 50 ms window after a spike.
6
Conclusion
The GQM provides a flexible class of probabilistic models that generalizes the GLM, the 2nd-order Volterra model, the Wiener model, and the elliptical-LNP model [3]. Unlike the GLM, the
GQM allows multiple stimulus and history filters and yet remains tractable for likelihood-based
inference. We have derived expected log-likelihood estimators in a general form that reveals a deep
connection between likelihood-based and moment-based inference methods. We have shown that
GQM performs well on neural data, both for discrete (spiking) and analog (voltage) data. Although
we have discussed the GQM in the context of neural systems, we believe it (and EL-based
inference methods) will find applications in other areas such as signal processing and psychophysics.
Acknowledgments
We thank the L. Paninski and A. Ramirez for helpful discussions and V. J. Uzzell and E. J. Chichilnisky for
retinal data. This work was supported by Sloan Research Fellowship (JP), McKnight Scholar?s Award (JP),
NSF CAREER Award IIS-1150186 (JP), NIH EY019288 (NP), and Pew Charitable Trust (NP).
References
[1] P. Z. Marmarelis and V. Marmarelis. Analysis of physiological systems: the white-noise approach. Plenum Press, New York, 1978.
[2] Taiho Koh and E. Powers. Second-order Volterra filtering and its application to nonlinear system identification. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(6):1445–1455, 1985.
[3] Il Memming Park and Jonathan W. Pillow. Bayesian spike-triggered covariance analysis. Advances in Neural Information Processing Systems 24, pp 1692–1700, 2011.
[4] E. P. Simoncelli, J. W. Pillow, L. Paninski, and O. Schwartz. Characterization of neural responses with stochastic stimuli. The Cognitive Neurosciences, III, chapter 23, pp 327–338. MIT Press, Cambridge, MA, October 2004.
[5] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[6] S. Gerwinn, J. H. Macke, M. Seeger, and M. Bethge. Bayesian inference for spiking neuron models with a sparsity prior. Advances in Neural Information Processing Systems, pp 529–536, 2008.
[7] J. Bussgang. Crosscorrelation functions of amplitude-distorted Gaussian signals. RLE Technical Reports, 216, 1952.
[8] E. deBoer and P. Kuyper. Triggered correlation. IEEE Transact. Biomed. Eng., 15, pp 169–179, 1968.
[9] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[10] R. R. de Ruyter van Steveninck and W. Bialek. Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transmission in short spike sequences. Proc. R. Soc. Lond. B, 234:379–414, 1988.
[11] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. J. Vision, 6(4):484–507, 2006.
[12] R. D. Cook and S. Weisberg. Comment on "Sliced inverse regression for dimension reduction" by K.-C. Li. Journal of the American Statistical Association, 86:328–332, 1991.
[13] Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316–327, 1991.
[14] Alexandro D. Ramirez and Liam Paninski. Fast inference in generalized linear models via expected log-likelihoods. Journal of Computational Neuroscience, pp 1–20, 2013.
[15] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. J. Neurophysiol, 93(2):1074–1089, 2005.
[16] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatiotemporal correlations and visual signaling in a complete neuronal population. Nature, 454:995–999, 2008.
[17] L. Paninski. Convergence properties of some spike-triggered analysis techniques. Network: Computation in Neural Systems, 14:437–464, 2003.
[18] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. J. Vision, 6(4):414–428, 2006.
[19] Inés Samengo and Tim Gollisch. Spike-triggered covariance: geometric proof, symmetry properties, and extension beyond Gaussian stimuli. Journal of Computational Neuroscience, 34(1):137–161, 2013.
[20] Tatyana Sharpee, Nicole C. Rust, and William Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Comput, 16(2):223–250, Feb 2004.
[21] R. S. Williamson, M. Sahani, and J. W. Pillow. Equating information-theoretic and likelihood-based methods for neural dimensionality reduction. arXiv:1308.3542 [q-bio.NC], 2013.
[22] J. D. Fitzgerald, R. J. Rowekamp, L. C. Sincich, and T. O. Sharpee. Second order dimensionality reduction using minimum and maximum mutual information models. PLoS Comput Biol, 7(10):e1002249, 2011.
[23] K. Rajan and W. Bialek. Maximally informative "stimulus energies" in the analysis of neural responses to natural signals. arXiv:1201.0321v1 [q-bio.NC], 2012.
[24] James M. McFarland, Yuwei Cui, and Daniel A. Butts. Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLoS Comput Biol, 9(7):e1003143, July 2013.
[25] L. Theis, A. M. Chagas, D. Arnstein, C. Schwarz, and M. Bethge. Beyond GLMs: A generative mixture modeling approach to neural system identification. PLoS Computational Biology, Nov 2013. In press.
[26] A. M. Mathai and S. B. Provost. Quadratic forms in random variables: theory and applications. M. Dekker, 1992.
[27] Y. S. Cho and E. J. Powers. Estimation of quadratically nonlinear systems with an i.i.d. input. [Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing, pp 3117–3120, vol. 5. IEEE, 1991.
[28] V. J. Uzzell and E. J. Chichilnisky. Precision of spike trains in primate retinal ganglion cells. Journal of Neurophysiology, 92:780–789, 2004.
exhibit:2 gradient:1 subspace:6 distance:1 link:2 thank:1 simulated:2 parametrized:2 mail:1 argue:1 eigenspectrum:1 index:1 relationship:1 multiplicatively:1 nc:2 october:1 cij:4 negative:1 lagged:1 contributed:1 allowing:1 vertical:1 neuron:6 finite:2 descent:1 neurobiology:1 extended:1 y1:1 frame:3 provost:1 arbitrary:3 tatyana:1 ordinate:1 chichilnisky:4 connection:3 optimized:3 acoustic:2 quadratically:2 beyond:3 bar:4 mcfarland:1 below:1 pattern:1 appeared:1 sparsity:1 power:2 event:1 natural:9 x2i:3 axis:15 sher:1 sahani:1 prior:2 literature:3 geometric:1 theis:1 law:2 loss:1 filtering:1 validation:2 kuyper:1 sufficient:3 consistent:2 bank:1 systematically:1 charitable:1 pi:1 austin:2 repeat:2 supported:1 sym:2 allow:1 weaker:1 exponentiated:1 characterizing:1 distributed:1 van:1 dimension:5 pillow:9 sensory:3 author:1 simplified:1 far:2 transaction:1 bb:1 approximate:3 nov:1 preferred:1 ml:3 global:1 deg:3 reveals:2 butt:1 assumed:1 xi:21 spectrum:1 continuous:1 physiologically:1 stimulated:1 nature:1 ruyter:1 career:1 ignoring:1 symmetry:4 obtaining:1 rta:1 interact:1 williamson:1 complex:1 stc:14 diag:3 intracellular:4 linearly:1 noise:26 repeated:1 sliced:3 x1:2 neuronal:3 fig:6 referred:1 e1003143:1 en:2 precision:1 inferring:1 exponential:4 comput:3 governed:1 perceptual:1 formula:1 down:1 minute:1 covariate:1 r2:5 physiological:1 exists:1 incorporating:1 illustrates:1 cx:2 paninski:7 simply:1 ramirez:3 ganglion:4 visual:2 scalar:1 mij:3 truth:1 relies:1 ma:1 conditional:2 identity:1 flickering:1 rle:1 rtc:1 called:2 x2j:1 pas:1 invariance:1 e:1 w2x:1 sharpee:2 uzzell:2 maximumlikelihood:1 jonathan:2 incorporate:1 dept:1 biol:2 |
Fisher-Optimal Neural Population Codes for
High-Dimensional Diffeomorphic Stimulus
Representations
Alan A. Stocker
Department of Psychology
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Zhuo Wang
Department of Mathematics
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Daniel D. Lee
Department of Electrical and Systems Engineering
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Abstract
In many neural systems, information about stimulus variables is often represented
in a distributed manner by means of a population code. It is generally assumed that
the responses of the neural population are tuned to the stimulus statistics, and most
prior work has investigated the optimal tuning characteristics of one or a small
number of stimulus variables. In this work, we investigate the optimal tuning for
diffeomorphic representations of high-dimensional stimuli. We analytically derive
the solution that minimizes the L2 reconstruction loss. We compared our solution
with other well-known criteria such as maximal mutual information. Our solution
suggests that the optimal weights do not necessarily decorrelate the inputs, and the
optimal nonlinearity differs from the conventional equalization solution. Results
illustrating these optimal representations are shown for some input distributions
that may be relevant for understanding the coding of perceptual pathways.
1 Introduction
There has been much work investigating how information about stimulus variables is represented by
a population of neurons in the brain [1]. Studies on motion perception [2, 3] and sound localization
[4, 5] have demonstrated that these representations adapt to the stimulus statistics on various time
scales [6, 7, 8, 9]. This raises a natural question: what encoding scheme underlies this adaptive process?
To address this question, several assumptions about the neural representation and its overall objective
need to be made. In the case of a one-dimensional stimulus, a number of theoretical approaches have
previously been investigated. Some work has focused on the scenario with a single neuron [10, 11, 12, 13, 14, 15], while other work has focused on the population level [16, 17, 18, 19, 20, 21, 22, 23],
with different model and noise assumptions. However, the question becomes more difficult when
considering adaptation to high dimensional stimuli. An interesting class of solutions to this question
is related to independent component analysis (ICA) [24, 25, 26], which considers maximizing the
amount of information in the encoding given a distribution of stimulus inputs. The use of mutual
information as a metric to measure neural coding quality has also been discussed in [27].
In this paper, we study Fisher-optimal population codes for the diffeomorphic encoding of stimuli
with multivariate Gaussian distributions. Using Fisher information, we investigate the properties of
representations that would minimize the L2 reconstruction error assuming an optimal decoder. The
optimization problem is derived under a diffeomorphic assumption, i.e. the number of encoding
neurons matches the dimensionality of the input and the nonlinearity is monotonic. In this case, the
optimal solution can be found analytically and can be given a geometric interpretation. Qualitative
differences between this solution and the previously studied information maximization solutions are
demonstrated and discussed.
2 Model and Methods

2.1 Encoding and Decoding Model
We consider an n-dimensional stimulus input s = (s_1, . . . , s_n) with prior distribution p(s). In general, a population with m neurons can have m individual activation functions, h_1(s), . . . , h_m(s), which determine the average firing rate of each neuron in response to the stimulus. However, the encoding process is affected by neural noise. Two commonly used models are the Poisson noise model and the constant Gaussian noise model, for which the observed firing rate vector r = (r_1, . . . , r_m) follows the probabilistic distribution p(r|s), where

r_k T ~ Poisson(h_k(s) T)          (Poisson noise)    (1)
r_k T ~ Gaussian(h_k(s) T, V T)    (Gaussian noise)   (2)
As opposed to encoding, the decoding process involves constructing an estimator ŝ(r), which deterministically maps the response r to an estimate ŝ of the true stimulus s. We choose the maximum likelihood estimator ŝ_MLE(r) = arg max_s p(r|s) because it simplifies the calculation due to its nice statistical properties, as discussed in Section 2.3.
2.2 Fisher Information Matrix
The Fisher information is a key concept widely used in optimal coding theory. For multiple dimensions, the Fisher information matrix is defined element-wise for each s, as in [28],

I_F(s)_{i,j} = ⟨ (∂/∂s_i) log p(r|s) · (∂/∂s_j) log p(r|s) | s ⟩_r    (3)
In the supplementary section A we prove that the Fisher information matrix for a population of m neurons is

I_F(s) = T · Σ_{k=1}^m h_k(s)^{-1} ∇h_k(s) ∇h_k(s)^T    (Poisson noise)     (4)

I_F(s) = T · Σ_{k=1}^m V^{-1} ∇h̃_k(s) ∇h̃_k(s)^T        (Gaussian noise)    (5)
where T is the length of the encoding time window and V represents the variance of the constant Gaussian noise. The equivalence of the two noise models can be established via the variance-stabilizing transformation h̃_k = 2√h_k [29]. Without loss of generality, throughout the paper we assume the
Gaussian noise model for mathematical convenience. Also we will simply assume V = 1, T = 1
because they do not change the optimal solution for any Fisher information-related quantities.
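This equivalence can be illustrated with a quick Monte Carlo check. The sketch below (the rates and the window length T are our own illustrative choices, not values from the paper) verifies that the transformed observation 2√(r_k) has variance approximately 1/T regardless of the underlying rate, matching a constant-variance Gaussian channel:

```python
import numpy as np

# Monte Carlo illustration of the variance-stabilizing transformation
# h_tilde_k = 2*sqrt(h_k) [29]: for Poisson counts with mean h*T, the
# transformed observation 2*sqrt(r_k) has variance ~ 1/T regardless of
# the rate h.  (Rates and T below are our own illustrative choices.)
rng = np.random.default_rng(3)
T = 200.0
ratios = []
for h in [0.5, 2.0, 8.0]:
    counts = rng.poisson(h * T, size=100_000)
    v = np.var(2.0 * np.sqrt(counts / T))   # variance of 2*sqrt(r_k)
    ratios.append(v * T)                    # should be ~1 for every rate
print(ratios)
```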
2.3 Cramer-Rao Lower Bound
Ideally, a good neural population code should produce estimates ŝ that are close to the true value of the stimulus s. However, multiple measures exist for how well an estimate matches the true value. One possibility is the L2 loss, which is related to the Fisher information matrix via the Cramer-Rao lower bound [28]. For any unbiased estimator ŝ, including the MLE,

cov[ŝ − s] ⪰ I_F(s)^{-1}    (6)
in the sense that cov[ŝ − s] − I_F(s)^{-1} is a positive semidefinite matrix. Being only a lower bound, the Cramer-Rao bound can be attained by the MLE ŝ because it is asymptotically efficient. The local L2 decoding error ⟨‖ŝ − s‖² | s⟩_r = tr(cov(ŝ − s)) ≥ tr(I_F(s)^{-1}). In order to minimize the overall L2 decoding error, one should minimize the attainable lower bound on the right side of Eq.(7), under appropriate constraints on h_k(·).

⟨‖ŝ − s‖²⟩_s ≥ ⟨tr(I_F(s)^{-1})⟩_s    (7)
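The attainability of this bound by the MLE can be checked by simulation. The sketch below uses a single neuron with a logistic tuning curve and small Gaussian noise (all parameter values are our own illustrative choices) and compares the empirical variance of ŝ_MLE with the Cramer-Rao bound 1/I_F(s):

```python
import numpy as np

# Monte Carlo check of the Cramer-Rao bound (Eqs. 6-7) for a single neuron
# with Gaussian noise: r = h(s) + noise, noise ~ N(0, V).  The Fisher
# information is I(s) = h'(s)^2 / V, and the MLE s_hat = h^{-1}(r) attains
# the bound 1/I(s) in the small-noise limit.  (Illustrative sketch; the
# logistic h and the parameter values are our own choices.)
rng = np.random.default_rng(0)
V = 1e-4                                   # noise variance
s_true = 0.3                               # true stimulus
h = lambda s: 1.0 / (1.0 + np.exp(-s))     # sigmoidal tuning curve
h_inv = lambda r: np.log(r / (1.0 - r))    # its inverse
h_prime = lambda s: h(s) * (1.0 - h(s))    # derivative

r = h(s_true) + rng.normal(0.0, np.sqrt(V), size=200_000)
s_hat = h_inv(np.clip(r, 1e-9, 1 - 1e-9))  # MLE: invert the tuning curve

empirical_var = np.var(s_hat)
cramer_rao = V / h_prime(s_true) ** 2      # 1 / I(s)
print(empirical_var, cramer_rao)           # nearly coincide
```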
2.4 Mutual Information Limit
Another possible measurement of neural coding quality is the mutual information. This quantity does not explicitly rely on an estimator ŝ(r) but directly measures the mutual information between the response and the stimulus.
The link between mutual information and the Fisher information matrix was established in [16]. One goal (infomax) is to maximize the mutual information I(r, s) = H(r) − H(r|s). Assuming perfect integration, the first term H(r) asymptotically converges to a constant H(s) for long encoding time because the noise is Gaussian. The second term H(r|s) = ⟨H(r|s*)⟩_{s*} because the noise is independent. For each s*, the conditional entropy H(r|s = s*) ≈ −(1/2) log det I_F(s*) + const, since r|s* is asymptotically a Gaussian variable with covariance I_F(s*)^{-1}. Therefore the mutual information is

I(r, s) = const + (1/2) ⟨log det I_F(s)⟩_s    (8)

2.5 Diffeomorphic Population
Before one can formalize the optimal coding problem, some assumptions about the neural population need to be made. Under a diffeomorphic assumption, the number of neurons (m) in the population matches the dimensionality (n) of the input stimulus. Each neuron projects the signal s onto its basis w_k and passes the one-dimensional projection t_k = w_k^T s through a sigmoidal tuning curve h_k(·) which is bounded 0 ≤ h_k(·) ≤ 1. The tuning curve is

r_k = h_k(w_k^T s).    (9)

We would like to optimize the nonlinear functions h_1(·), . . . , h_n(·) and the basis {w_k}_{k=1}^n simultaneously. We may assume ‖w_k‖ = 1 since the scale can be compensated by the nonlinearity. Such an encoding scheme is called diffeomorphic because the population establishes a smooth and invertible mapping from the stimulus space s ∈ S to the rate space r ∈ R. An arbitrary observation of the firing rate r can first be inverted to calculate the hidden variables t_k = h_k^{-1}(r_k) and then linearly decoded to obtain ŝ_MLE.

Fig. 1a shows how the encoding scheme is implemented by a neural network. Fig. 1b illustrates explicitly how a 2D stimulus s is encoded by two neurons with basis w_1, w_2 and nonlinear mappings h_1, h_2.
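The encode/decode pipeline described above can be sketched in a few lines. The random basis and logistic tuning curves below are our own illustrative choices; in the noiseless case the stimulus is recovered exactly by inverting both stages:

```python
import numpy as np

# Minimal sketch of the diffeomorphic encode/decode pipeline (Eq. 9):
# project onto unit basis vectors, pass through monotone sigmoids, then
# invert the sigmoids and solve the linear system to recover the stimulus.
rng = np.random.default_rng(4)
n = 3
W = rng.standard_normal((n, n))
W /= np.linalg.norm(W, axis=0)            # columns w_k with ||w_k|| = 1

h = lambda t: 1.0 / (1.0 + np.exp(-t))    # sigmoidal tuning curves
h_inv = lambda r: np.log(r / (1.0 - r))

s = rng.standard_normal(n)                # stimulus
r = h(W.T @ s)                            # encode: r_k = h_k(w_k^T s)
s_hat = np.linalg.solve(W.T, h_inv(r))    # decode: invert h, then W^T
print(np.allclose(s_hat, s))              # exact recovery without noise
```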
Figure 1: (a) Illustration of a neural network with diffeomorphic encoding. (b) The Linear-Nonlinear (LN) encoding process for a 2D stimulus s.
3 Review of One Dimensional Solution
In the case of encoding a one-dimensional stimulus, the diffeomorphic population is just one neuron with sigmoidal tuning curve r = h(w · s). The only two options, w = ±1, are determined by whether the sigmoidal tuning curve is increasing or decreasing. Here we simply assume w = 1.
For the L2-minimization problem, we want to minimize ⟨tr(I_F(s)^{-1})⟩ = ⟨h'(s)^{-2}⟩ because of Eq.(5) and (7). Now apply Hölder's inequality [30] to the non-negative functions p(s)/h'(s)² and h'(s),

∫ p(s)/h'(s)² ds · ( ∫ h'(s) ds )² ≥ ( ∫ p(s)^{1/3} ds )³    (10)

where the first factor is the overall L2 loss and the second factor equals 1 because h rises from 0 to 1. The minimum L2 loss is attained by the optimal h*(s) ∝ ∫_{−∞}^s p(t)^{1/3} dt. For a one-dimensional Gaussian with variance Var[s], the right side of Eq.(10) is 6√3 π Var[s]. This preliminary result will be useful for the high-dimensional case discussed in Sections 4 and 5.
On the other hand, for the infomax problem we want to maximize I(r, s) because of Eq.(5) and (8). Note that ⟨log det I_F(s)⟩ = 2⟨log h'(s)⟩. By treating the sigmoidal activation function h(s) as a cumulative probability distribution [10], we have

∫ p(s) log h'(s) ds ≤ ∫ p(s) log p(s) ds    (11)

because the KL-divergence D_KL(p‖h') = ∫ p(s) log p(s) ds − ∫ p(s) log h'(s) ds is non-negative. The optimal solution is h*(s) = ∫_{−∞}^s p(t) dt, and the optimal value is −2H(p), where H(p) is the differential entropy of the distribution p(s). This h*(s) is exactly obtained by equalizing the output probability to maximize the entropy. For a one-dimensional Gaussian with variance Var[s], the optimal value is −log Var[s] + const.
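The closed-form value in Eq.(10) can be verified numerically. The sketch below (σ chosen arbitrarily for illustration) compares (∫ p^{1/3} ds)³ with 6√3 π Var[s] and confirms that a slope h'(s) ∝ p(s)^{1/3} attains the bound:

```python
import numpy as np

# Numerical check of the 1D result around Eq. (10): for a Gaussian prior,
# the minimal overall L2 loss equals (integral p(s)^(1/3) ds)^3, which is
# 6*sqrt(3)*pi*Var[s], attained by h'(s) proportional to p(s)^(1/3).
sigma = 1.7
s = np.linspace(-12 * sigma, 12 * sigma, 400_001)
ds = s[1] - s[0]
p = np.exp(-s**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

Z = np.sum(p ** (1.0 / 3.0)) * ds
lhs = Z**3                              # (integral p^(1/3) ds)^3
rhs = 6.0 * np.sqrt(3.0) * np.pi * sigma**2
print(lhs, rhs)

hp = (p ** (1.0 / 3.0)) / Z             # optimal slope, normalized: sum = 1
loss_opt = np.sum(p / hp**2) * ds       # <h'(s)^(-2)>, the overall L2 loss
print(loss_opt)                         # equals lhs
```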
4 Optimal Diffeomorphic Population
In the case of encoding a high-dimensional random stimulus using a diffeomorphic population code, n neurons encode n stimulus dimensions. The gradient of the k-th neuron's tuning curve is ∇h_k(s) = h'_k(w_k^T s) w_k, and the Fisher information matrix is thus

I_F(s) = Σ_{k=1}^n ∇h_k(s) ∇h_k(s)^T = Σ_{k=1}^n h'_k(w_k^T s)² w_k w_k^T = W H² W^T    (12)

where W = (w_1, . . . , w_n) and H = diag(h'_1(w_1^T s), . . . , h'_n(w_n^T s)). Using the fact that tr(AB) = tr(BA) for any matrices A, B, we know tr(I_F(s)^{-1}) = tr((W^T)^{-1} H^{-2} W^{-1}) = tr((W^T W)^{-1} H^{-2}). Because H^{-2} is diagonal, the L2-min problem is simplified as
minimize over {w_k, h_k(·)}, k = 1, . . . , n:

L(W, H) = ⟨tr(I_F(s)^{-1})⟩ = Σ_{k=1}^n [(W^T W)^{-1}]_{kk} ∫ p(s) / h'_k(w_k^T s)² ds    (13)

If we define the marginal distribution

p_k(t) = ∫ p(s) δ(t − w_k^T s) ds    (14)
then the optimization over w_k and h_k can be decoupled in the following way. For any fixed W, the integral term can be evaluated by marginalizing out all those directions perpendicular to w_k. As discussed in Section 3, the optimal value ( ∫ p_k(t)^{1/3} dt )³ is attained when h*_k'(t) ∝ p_k(t)^{1/3}. The optimization problem is now

minimize over {w_k}, k = 1, . . . , n:   L_{h*}(W) = Σ_{k=1}^n [(W^T W)^{-1}]_{kk} ( ∫ p_k(t)^{1/3} dt )³    (15)

In general, analytically optimizing such a term for an arbitrary prior distribution p(s) is intractable. However, if p(s) is multivariate Gaussian, then the optimization can be further simplified and solved analytically, as discussed in the following section.
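Eq.(12) and the trace identity used above can be checked numerically with random data:

```python
import numpy as np

# Sanity check of Eq. (12): for random unit-norm basis vectors w_k and
# slopes h'_k, the Fisher matrix sum_k h'_k^2 w_k w_k^T equals W H^2 W^T,
# and tr(I_F^{-1}) = tr((W^T W)^{-1} H^{-2}).  (Numerical sketch.)
rng = np.random.default_rng(1)
n = 4
W = rng.standard_normal((n, n))
W /= np.linalg.norm(W, axis=0)          # columns w_k with ||w_k|| = 1
hp = rng.uniform(0.5, 2.0, size=n)      # slopes h'_k(w_k^T s) at some s
H = np.diag(hp)

I_F = sum(hp[k]**2 * np.outer(W[:, k], W[:, k]) for k in range(n))
assert np.allclose(I_F, W @ H @ H @ W.T)

lhs = np.trace(np.linalg.inv(I_F))
rhs = np.trace(np.linalg.inv(W.T @ W) @ np.diag(hp ** -2.0))
print(lhs, rhs)                         # equal
```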
5 Stimulus with Gaussian Prior
We consider the case when the stimulus prior is Gaussian N(0, Σ). This assumption allows us to calculate the marginal distribution along any direction w_k as a one-dimensional Gaussian with mean zero and variance w_k^T Σ w_k = (W^T Σ W)_{kk}. By plugging in the Gaussian density p_k(t) and using the fact we derived in Section 3, we can further simplify the L2-optimization problem as

minimize over {w_k}, k = 1, . . . , n:   L_{h*}(W) = 6√3 π · Σ_{k=1}^n [(W^T W)^{-1}]_{kk} (W^T Σ W)_{kk}    (16)

5.1 Geometric Interpretation
Geometric Interpretation
In the above optimization problem, (W T ?W )kk has a clear and simple meaning ? it is the variance
of the marginal distribution pk (t). For term [(W T W )?1 ]kk , notice that W T W is the inner product
matrix of the basis {wk }nk=1 , i.e. (W T W )ij = wiT wj . Using the adjoint method we can calculate
the diagonal elements of (W T W )?1 ,
[(W T W )?1 ]kk =
det(WkT Wk )
det(W T W )
(17)
where WkT Wk is the inner product matrix of leave-wk -out basis {w1 , . . . , wk?1 , wk+1 , . . . , wn }.
Let ?k be the angle between wk and the hyperplane spanned by all other basis vectors (see Fig.2).
The diagonal element is just [(W T W )?1 ]kk = (det Wk / det W )2 = (sin ?k )?2 simply because
Volume ({w1 , . . . , wn }) = Volume ({w1 , . . . , wk?1 , wk+1 , . . . , wn }) ? |wk | ? sin ?k ,
{z
}
{z
} |
{z
}
|
|
n dim parallelogram
n?1 dim base parallelogram
(18)
height
Figure 2: Illustration of θ_k. In this example, w_1 and w_2 are on the s_1-s_2 plane. θ_3 is just the angle between w_3 and its projection on the s_1-s_2 plane.
The optimization involves two competing parts. Minimizing (W^T Σ W)_{kk} makes all those directions with small variance favorable. Meanwhile, minimizing [(W^T W)^{-1}]_{kk} = (sin θ_k)^{-2} strongly penalizes neurons whose tuning directions are similar to the rest of the population. To qualitatively summarize, the optimal population tends to encode those directions with small variance while keeping a certain degree of population diversity.
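The identity [(W^T W)^{-1}]_{kk} = (sin θ_k)^{-2} can be checked numerically for random unit-norm bases:

```python
import numpy as np

# Check the geometric identity of Eqs. (17)-(18): theta_k is the angle
# between the unit vector w_k and the hyperplane spanned by the remaining
# basis vectors, computed here via an orthogonal projection.
rng = np.random.default_rng(2)
n = 3
W = rng.standard_normal((n, n))
W /= np.linalg.norm(W, axis=0)                     # unit-norm columns
G_inv = np.linalg.inv(W.T @ W)

sin2 = []
for k in range(n):
    others = np.delete(W, k, axis=1)
    P = others @ np.linalg.pinv(others)            # projector onto their span
    height = np.linalg.norm(W[:, k] - P @ W[:, k]) # = sin(theta_k), ||w_k||=1
    sin2.append(height**2)
print(np.diag(G_inv), 1.0 / np.array(sin2))        # equal componentwise
```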
5.2 General Solution
Due to space limitations, we will only present the optimal solution here; the derivation can be found in Appendix C in the supplementary notes. For any covariance matrix Σ, the optimal solution for Eq.(16) is

W* = Σ^{−1/4} U,   where U^T U = I and (U^T Σ^{1/2} U)_{kk} = (1/n) tr(Σ^{1/2}) for all k = 1, . . . , n    (19)

Such a unitary matrix U is guaranteed to exist yet may not be unique. See Appendix D for a detailed discussion. In general, for dimension n the solution has a manifold structure with dimension not less than (n − 1)(n − 2)/2. For n = 2 the solution can be easily derived. Let Σ = diag(σ_x², σ_y²). Then the optimal solution is given by

U = (1/√2) [[1, −1], [1, 1]],   W*_{L2} = Σ^{−1/4} U = (1/√2) [[σ_x^{−1/2}, −σ_x^{−1/2}], [σ_y^{−1/2}, σ_y^{−1/2}]]    (20)

This 2D solution is special and is unique under reflection and permutation unless the prior distribution is spherically symmetric, i.e. Σ = aI.
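The 2D solution of Eq.(20) can be constructed and checked directly. The sketch below (with σ_x = 3, σ_y = 1, our own illustrative choice) verifies the condition of Eq.(19) and confirms that the objective of Eq.(16) is smaller than for a whitening (infomax) basis:

```python
import numpy as np

# Build the 2D L2-optimal basis of Eq. (20) for Sigma = diag(sx^2, sy^2),
# verify the optimality condition of Eq. (19), and compare the objective of
# Eq. (16) against an infomax (whitening) basis.  (Numerical sketch.)
sx, sy = 3.0, 1.0
Sigma = np.diag([sx**2, sy**2])
U = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)
W_l2 = np.diag([sx**-0.5, sy**-0.5]) @ U        # Sigma^{-1/4} U

# Eq. (19): diag of U^T Sigma^{1/2} U is constant, = tr(Sigma^{1/2}) / n
cond = np.diag(U.T @ np.diag([sx, sy]) @ U)
print(cond)                                     # both entries = (sx + sy)/2

def L(W):  # objective of Eq. (16), dropping the 6*sqrt(3)*pi prefactor
    G_inv = np.linalg.inv(W.T @ W)
    M = W.T @ Sigma @ W
    return float(np.sum(np.diag(G_inv) * np.diag(M)))

W_info = np.diag([1.0 / sx, 1.0 / sy])          # Sigma^{-1/2} with U = I
print(L(W_l2), L(W_info))                       # L2-optimal loss is smaller
```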
6 Comparison with Infomax Solution
Previous studies have focused on finding solutions that maximize the mutual information (infomax) between the stimulus and the neural population response. This is related to independent component analysis (ICA) [24]. Mutual information can be maximized if and only if each neuron encodes an independent component of the stimulus and uses the proper nonlinear tuning curve. Ideally, the joint distribution p(s) can be decomposed as the product of n one-dimensional components ∏_{k=1}^n p_k(W_k(s)). For a Gaussian prior with covariance Σ, the infomax solution is

W*_info = Σ^{−1/2} U,   cov(W*_info^T s) = U^T Σ^{−1/2} · Σ · Σ^{−1/2} U = I    (21)
where Σ^{−1/2} is the whitening matrix and U is an arbitrary unitary matrix. The derivation can be found in Appendix E. In the same 2D example where Σ = diag(σ_x², σ_y²), the family of optimal solutions is parametrized by an angular variable θ:

U(θ) = [[cos θ, −sin θ], [sin θ, cos θ]],   W*_info(θ) = Σ^{−1/2} U(θ) = [[cos θ/σ_x, −sin θ/σ_x], [sin θ/σ_y, cos θ/σ_y]]    (22)
In Fig. 3 we compare W*_info(θ) and W*_{L2} for different prior covariances. One observation is that L2-optimal neurons do not fully decorrelate input signals unless the Gaussian prior is spherical. By correlating the input signals and encoding redundant information, the channel signal-to-noise ratio (SNR) can be balanced to reduce the vulnerability of those independent channels with low SNR. As a consequence, the overall L2 performance is improved at the cost of transferring a suboptimal amount of information. Another important observation is that the infomax solution allows a greater degree of symmetry: Eq.(21) holds for arbitrary unitary matrices while Eq.(19) holds only for a subset of them.
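The decorrelation contrast can be made concrete. The sketch below (σ_x = 2, σ_y = 1, our own illustrative choice) shows the infomax basis whitening the signal while the L2-optimal basis of Eq.(20) leaves equal-variance but correlated channels:

```python
import numpy as np

# Covariance of the filtered signal under the two bases: the infomax basis
# of Eq. (21) yields cov = I, while the L2-optimal basis of Eq. (20) yields
# U^T Sigma^{1/2} U, which has residual off-diagonal correlation.
sx, sy = 2.0, 1.0
Sigma = np.diag([sx**2, sy**2])
U = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)
W_l2 = np.diag([sx**-0.5, sy**-0.5]) @ U        # Sigma^{-1/4} U
W_info = np.diag([1.0 / sx, 1.0 / sy])          # Sigma^{-1/2} with U = I

cov_info = W_info.T @ Sigma @ W_info            # identity: fully whitened
cov_l2 = W_l2.T @ Sigma @ W_l2                  # = U^T Sigma^{1/2} U
print(cov_info)
print(cov_l2)   # equal variances on the diagonal, nonzero off-diagonal
```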
Figure 3: Comparison of the L2-min and infomax optimal solutions for the 2D case. Each row represents the result for a different ratio σ_x/σ_y of the prior distribution. (a) The optimal pair of basis vectors w_1, w_2 for L2-min with the prior covariance ellipse; it is unique unless the prior distribution has rotational symmetry. (b) The loss function with '+' marking the optimal solution shown in (a). (c) One pair of optimal basis vectors w_1, w_2 for infomax with the prior covariance ellipse. (d) The loss function with '+' marking the optimal solution shown in (c).
7 Application: 16-by-16 Gaussian Images
In this section we apply our diffeomorphic coding scheme to an image representation problem. We assume that the intensity values of all pixels from a set of 16-by-16 images follow a 256-D Gaussian distribution. Instead of directly defining the pairwise covariance between pixels of s, we calculate its real Fourier components s̃:

s̃ = F^T s,   s = F s̃    (23)

where the real Fourier matrix is F = (f_1, . . . , f_256) with each filter f_a having spatial frequency k_a. The covariance of those Fourier components s̃ is typically assumed to be diagonal, with power decaying following some power law:

cov(s̃) = D = diag(σ_1², . . . , σ_n²),   where σ_a² ∝ |k_a|^{−α},  α > 0    (24)

Therefore the original stimulus s has covariance cov(s) = Σ = F D F^T. Such image statistics are called stationary because the covariance between a pair of pixels is fully determined by their relative position. For the stimulus s with covariance Σ, one naive choice of L2-optimal filter is simply

W*_{L2} = Σ^{−1/4} · I = F D^{−1/4} F^T    (25)

because Σ^{1/2} = F D^{1/2} F^T has constant diagonal terms (see Appendix F for a detailed calculation) and U = I qualifies for Eq.(19). The covariance matrix and one sample image generated from Σ are plotted in Fig. 4(a)-(c) below.
Figure 4: For α = 2.5 in the power law: (a) The 256 × 256 covariance matrix Σ. (b) One column of Σ reshaped to a 16 × 16 matrix, representing the covariance between all pixels and a fixed pixel in the center. (c) A random sample from the Gaussian distribution with covariance Σ.
In addition, we have numerically computed the L2 loss using a family of filters

W_β = F D^{−β} F^T,   β ∈ [0, 1/2]    (26)

Note that when β = 0, we have the naive filter W_0 = F F^T = I, which does nothing to the input stimulus; when β = 1/4 or 1/2, we revisit the L2-optimal filter or the infomax filter, respectively. As we can see from Fig. 5(a)-(d), the L2-optimal filter half-decorrelates the input stimulus channels to keep the balance between the simplicity of the filters and the simplicity of the correlation structure.
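The optimality of β = 1/4 within this family can be checked directly against the objective of Eq.(16). The sketch below uses a 1D orthonormal real-Fourier basis on 16 points as a stand-in for the paper's 2D real Fourier matrix; the 1D setting and the treatment of the DC power are our own illustrative choices (the paper does not specify them), while α = 2.5 follows Fig. 4:

```python
import numpy as np

# Evaluate the objective of Eq. (16) over the filter family of Eq. (26),
# W_beta = F D^{-beta} F^T, for a 1D stationary Gaussian signal, and check
# that beta = 1/4 minimizes the loss.  The DC power is set to 1 here as an
# illustrative assumption.
n, alpha = 16, 2.5
x = np.arange(n)
cols, freqs = [np.ones(n) / np.sqrt(n)], [1.0]      # DC component
for a in range(1, n // 2):
    cols.append(np.sqrt(2.0 / n) * np.cos(2 * np.pi * a * x / n))
    freqs.append(float(a))
    cols.append(np.sqrt(2.0 / n) * np.sin(2 * np.pi * a * x / n))
    freqs.append(float(a))
cols.append(np.cos(np.pi * x) / np.sqrt(n))         # Nyquist component
freqs.append(n / 2.0)
F = np.stack(cols, axis=1)
assert np.allclose(F.T @ F, np.eye(n))              # orthonormal basis

d = np.array(freqs) ** -alpha                       # power law sigma_a^2
Sigma = F @ np.diag(d) @ F.T                        # stationary covariance
assert np.allclose(np.diag(Sigma), Sigma[0, 0])     # constant diagonal

def loss(beta):                                     # objective of Eq. (16)
    W = F @ np.diag(d ** -beta) @ F.T
    G_inv = np.linalg.inv(W.T @ W)
    M = W.T @ Sigma @ W
    return 6 * np.sqrt(3) * np.pi * float(np.sum(np.diag(G_inv) * np.diag(M)))

betas = np.linspace(0.0, 0.5, 51)
losses = [loss(b) for b in betas]
best = float(betas[int(np.argmin(losses))])
print(best)
```

Since Σ here has constant diagonal, W_{1/4} = Σ^{−1/4} satisfies Eq.(19) with U = I, so the minimum over the whole family lands exactly at β = 1/4.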
In each simulation run, a set of 10,000 16-by-16 images is randomly sampled from the multivariate Gaussian distribution with zero mean and covariance matrix Σ. For each stimulus image s, we calculate y = W_β^T s and z_k = h_k(y_k) + ξ_k to simulate the encoding process. Here h_k(y) ∝ ∫_{−∞}^y p_k(t)^{1/3} dt, and p_k(t) is Gaussian N(0, (W_β^T Σ W_β)_{kk}). The additive noise ξ_k is independent Gaussian N(0, 10^{−4}). To decode, we calculate ŷ_k = h_k^{-1}(z_k) and ŝ = (W_β^T)^{−1} ŷ. Then we measure the L2 loss ‖ŝ − s‖². This procedure is repeated 20 times and the result is plotted in Fig. 5(e).
8 Discussion and Conclusions
In this paper, we have studied an optimal diffeomorphic neural population code which minimizes the L2 reconstruction error. The population of neurons is assumed to have sigmoidal activation functions encoding linear combinations of a high-dimensional stimulus with a multivariate Gaussian
Figure 5: (a) The 2D filter W_β of one specific neuron for β = 0, 1/4, 1/2 from top to bottom. (b) The cross-section of the filter W_β on one specific row boxed in (a), plotted as a function. (c) The correlation of the 2D filtered stimulus, between one specific neuron and all neurons. (d) The cross-section of the 2D correlation of the filtered stimulus, between the neuron and other neurons on the same row. (e) The simulation result of the L2 loss for different filters W_β with the optimal nonlinearity h; the vertical bar shows the ±3σ interval across trials.
distribution. The optimal solution is provided and compared with solutions which maximize the
mutual information.
In order to derive the optimal solution, we first show that the Poisson noise model is equivalent to the constant Gaussian noise model under the variance-stabilizing transformation. Then we relate the L2 reconstruction error to the trace of the inverse Fisher information matrix via the Cramer-Rao bound. Minimizing this bound leads to the globally optimal solution in the asymptotic limit of long integration time. The general L2-minimization problem can be simplified and the optimal solution analytically derived when the stimulus distribution is Gaussian.

Compared to the infomax solutions, a careful evaluation and calculation of the Fisher information matrix is needed for L2 minimization. The manifold of L2-optimal solutions possesses a lower-dimensional structure compared to the infomax solution. Instead of decorrelating the input statistics, the L2-min solution maintains a certain degree of correlation across the channels. Our result suggests that maximizing mutual information and minimizing the overall decoding loss are not the same in general: encoding redundant information can be beneficial for improving reconstruction accuracy. This principle may explain the existence of correlations at many layers in biological perception systems.
As an example, we have applied our theory to 16-by-16 images with stationary pixel statistics. The
optimal solution exhibits center-surround receptive fields, but with a decay differing from those
found by decorrelating solutions. We speculate that these solutions may better explain observed
correlations measured in certain neural areas of the brain. Finally, we acknowledge the support of
the Office of Naval Research.
References
[1] K Kang, RM Shapley, and H Sompolinsky. Information tuning of populations of neurons in
primary visual cortex. Journal of neuroscience, 24(15):3726?3735, 2004.
[2] AP Georgopoulos, AB Schwartz, and RE Kettner. Adaptation of the motion-sensitive neuron
h1 is generated locally and governed by contrast frequency. Science, 233:1416?1419, 1986.
[3] FE Theunissen and JP Miller. Representation of sensory information in the cricket cercal
sensory system. II. information theoretic calculation of system accuracy and optimal tuningcurve widths of four primary interneurons. J Neurophysiol, 66(5):1690?1703, November 1991.
[4] DC Fitzpatrick, R Batra, TR Stanford, and S Kuwada. A neuronal population code for sound
localization. Nature, 388:871?874, 1997.
8
[5] NS Harper and D McAlpine. Optimal neural population coding of an auditory spatial cue.
Nature, 430:682?686, 2004.
[6] N Brenner, W Bialek, and R de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26:695?702, 2000.
[7] Tvd Twer and DIA MacLeod. Optimal nonlinear codes for the perception of natural colours.
Network: Computation in Neural Systems, 12(3):395?407, 2001.
[8] I Dean, NS Harper, and D McAlpine. Neural population coding of sound level adapts to
Robust learning of low-dimensional dynamics from
large neural ensembles
David Pfau
Eftychios A. Pnevmatikakis
Liam Paninski
Center for Theoretical Neuroscience
Department of Statistics
Grossman Center for the Statistics of Mind
Columbia University, New York, NY
[email protected]
{eftychios,liam}@stat.columbia.edu
Abstract
Recordings from large populations of neurons make it possible to search for hypothesized low-dimensional dynamics. Finding these dynamics requires models
that take into account biophysical constraints and can be fit efficiently and robustly. Here, we present an approach to dimensionality reduction for neural data
that is convex, does not make strong assumptions about dynamics, does not require
averaging over many trials and is extensible to more complex statistical models
that combine local and global influences. The results can be combined with spectral methods to learn dynamical systems models. The basic method extends PCA
to the exponential family using nuclear norm minimization. We evaluate the effectiveness of this method using an exact decomposition of the Bregman divergence
that is analogous to variance explained for PCA. We show on model data that
the parameters of latent linear dynamical systems can be recovered, and that even
if the dynamics are not stationary we can still recover the true latent subspace.
We also demonstrate an extension of nuclear norm minimization that can separate
sparse local connections from global latent dynamics. Finally, we demonstrate
improved prediction on real neural data from monkey motor cortex compared to
fitting linear dynamical models without nuclear norm smoothing.
1 Introduction
Progress in neural recording technology has made it possible to record spikes from ever larger populations of neurons [1]. Analysis of these large populations suggests that much of the activity can
be explained by simple population-level dynamics [2]. Typically, this low-dimensional activity is
extracted by principal component analysis (PCA) [3, 4, 5], but in recent years a number of extensions have been introduced in the neuroscience literature, including jPCA [6] and demixed principal
component analysis (dPCA) [7]. A downside of these methods is that they do not treat either the
discrete nature of spike data or the positivity of firing rates in a statistically principled way. Standard
practice smooths the data substantially or averages it over many trials, losing information about fine
temporal structure and inter-trial variability.
One alternative is to fit a more complex statistical model directly from spike data, where temporal
dependencies are attributed to latent low dimensional dynamics [8, 9]. Such models can account for
the discreteness of spikes by using point-process models for the observations, and can incorporate
temporal dependencies into the latent state model. State space models can include complex interactions such as switching linear dynamics [10] and direct coupling between neurons [11]. These
methods have drawbacks too: they are typically fit by approximate EM [12] or other methods that
are prone to local minima, the number of latent dimensions is typically chosen ahead of time, and a
certain class of possible dynamics must be chosen before doing dimensionality reduction.
In this paper we attempt to combine the computational tractability of PCA and related methods with
the statistical richness of state space models. Our approach is convex and based on recent advances
in system identification using nuclear norm minimization [13, 14, 15], a convex relaxation of matrix
rank minimization. Compared to recent work on spectral methods for fitting state space models
[16], our method more easily generalizes to handle different nonlinearities, non-Gaussian, nonlinear, and non-stationary latent dynamics, and direct connections between observed neurons. When
applied to model data, we find that: (1) low-dimensional subspaces can be accurately recovered,
even when the dynamics are unknown and nonstationary (2) standard spectral methods can robustly
recover the parameters of state space models when applied to data projected into the recovered
subspace (3) the confounding effects of common input for inferring sparse synaptic connectivity can
be ameliorated by accounting for low-dimensional dynamics. In applications to real data we find
comparable performance to models trained by EM with less computational overhead, particularly as
the number of latent dimensions grows.
The paper is organized as follows. In section 2 we introduce the class of models we aim to fit,
which we call low-dimensional generalized linear models (LD-GLM). In section 3 we present a
convex formulation of the parameter learning problem for these models, as well as a generalization
of variance explained to LD-GLMs used for evaluating results. In section 4 we show how to fit these
models using the alternating direction method of multipliers (ADMM). In section 5 we present
results on real and artificial neural datasets. We discuss the results and future directions in section 6.
2 Low dimensional generalized linear models
Our model is closely related to the generalized linear model (GLM) framework for neural data [17].
Unlike the standard GLM, where the inputs driving the neurons are observed, we assume that the
driving activity is unobserved, but lies on some low dimensional subspace. This can be a useful
way of capturing spontaneous activity, or accounting for strong correlations in large populations of
neurons. Thus, instead of fitting a linear receptive field, the goal of learning in low-dimensional
GLMs is to accurately recover the latent subspace of activity.
Let $x_t \in \mathbb{R}^m$ be the value of the dynamics at time $t$. To turn this into spiking activity, we project this into the space of neurons: $y_t = Cx_t + b$ is a vector in $\mathbb{R}^n$, $n \gg m$, where each dimension of $y_t$ corresponds to one neuron. $C \in \mathbb{R}^{n \times m}$ denotes the subspace of the neural population and $b \in \mathbb{R}^n$ the bias vector for all the neurons. As $y_t$ can take on negative values, we cannot use this directly as a firing rate, and so we pass each element of $y_t$ through some convex and log-concave increasing point-wise nonlinearity $f : \mathbb{R} \to \mathbb{R}_+$. Popular choices for nonlinearities include $f(x) = \exp(x)$ and $f(x) = \log(1 + \exp(x))$. To account for biophysical effects such as refractory periods, bursting, and direct synaptic connections, we include a linear dependence on spike history before the nonlinearity. The firing rate $f(y_t)$ is used as the rate for some point process $\Pi$, such as a Poisson process, to generate a vector of spike counts $s_t$ for all neurons at that time:
$$y_t = Cx_t + \sum_{\tau=1}^{k} D_\tau s_{t-\tau} + b \qquad (1)$$
$$s_t \sim \Pi(f(y_t)) \qquad (2)$$
Much of this paper is focused on estimating $y_t$, which is the natural parameter for the Poisson distribution in the case $f(\cdot) = \exp(\cdot)$, and so we refer to $y_t$ as the natural rate to avoid confusion with the actual rate $f(y_t)$. We will see that our approach works with any point process with a log-concave likelihood, not only Poisson processes.
We can extend this simple model by adding dynamics to the low-dimensional latent state, including input-driven dynamics. In this case the model is closely related to the common input model used in neuroscience [11], the difference being that the observed input is added to $x_t$ rather than being directly mapped to $y_t$. The case without history terms and with linear Gaussian dynamics is a well-studied state space model for neural data, usually fit by EM [19, 12, 20], though a consistent spectral method has been derived [16] for the case $f(\cdot) = \exp(\cdot)$. Unlike these methods, our approach largely decouples the problem of dimensionality reduction and learning dynamics: even in the case of nonstationary, non-Gaussian dynamics where $A$, $B$ and $\mathrm{Cov}[\epsilon_t]$ change over time, we can still robustly recover the latent subspace spanned by $x_t$.
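As a concrete illustration, the generative model of Eqs. (1)-(2) can be simulated directly. The sketch below is ours, not the authors' code: it assumes the softplus nonlinearity $f(y) = \log(1 + \exp(y))$, Poisson spiking, no history terms ($D_\tau = 0$), and arbitrary illustrative sizes; the function name `simulate_ld_glm` is hypothetical.

```python
import numpy as np

def simulate_ld_glm(n=20, m=3, T=500, seed=0):
    """Sample spikes from the LD-GLM of Eqs. (1)-(2) with no history terms:
    stable linear latent dynamics x_t, natural rates y_t = C x_t + b,
    softplus rate f(y) = log(1 + exp(y)), and Poisson spike counts."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(m, m))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # keep |eigenvalues| < 1
    C = rng.normal(scale=1 / 3, size=(n, m))         # loading matrix
    b = rng.normal(loc=-4, scale=1, size=n)          # per-neuron bias
    x = np.zeros(m)
    Y = np.zeros((n, T))
    S = np.zeros((n, T), dtype=int)
    for t in range(T):
        x = A @ x + rng.normal(size=m)               # latent innovation
        Y[:, t] = C @ x + b                          # natural rates
        S[:, t] = rng.poisson(np.log1p(np.exp(Y[:, t])))  # spike counts
    return Y, S
```

Because $y_t = Cx_t + b$ with no history terms, the row-centered matrix of natural rates has rank exactly $m$, which is the structure the nuclear norm penalty of Section 3 exploits.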
3 Learning
3.1 Nuclear norm minimization
In the case that the spike history terms $D_{1:k}$ are zero, the natural rate at time $t$ is $y_t = Cx_t + b$, so all $y_t$ are elements of some $m$-dimensional affine space given by the span of the columns of $C$ offset by $b$. Ideally, our estimate of $y_{1:T}$ would trade off between making the dimension of this affine space as low as possible and the likelihood of $y_{1:T}$ as high as possible. Let $Y = [y_1, \ldots, y_T]$ be the $n \times T$ matrix of natural rates and let $\mathcal{A}(\cdot)$ be the row mean centering operator $\mathcal{A}(Y) = Y - \frac{1}{T} Y 1_T 1_T^T$. Then $\mathrm{rank}(\mathcal{A}(Y)) = m$. Ideally we would minimize $\lambda \sqrt{nT}\, \mathrm{rank}(\mathcal{A}(Y)) - \sum_{t=1}^{T} \log p(s_t|y_t)$, where $\lambda$ controls how much we trade off between a simple solution and the likelihood of the data; however, general rank minimization is a hard non-convex problem. Instead we replace the matrix rank with its convex envelope: the sum of singular values, or nuclear norm $\|\cdot\|_*$ [13], which can be seen as the analogue of the $\ell_1$ norm for vector sparsity. Our problem then becomes:
$$\min_Y \; \lambda \sqrt{nT}\, \|\mathcal{A}(Y)\|_* - \sum_{t=1}^{T} \log p(s_t|y_t) \qquad (3)$$
Since the log likelihood scales linearly with the size of the data, and the singular values scale with the square root of the size, we also add a factor of $\sqrt{nT}$ in front of the nuclear norm term. In the examples in this paper, we assume spikes are drawn from a Poisson distribution:
$$\log p(s_t|y_t) = \sum_{i=1}^{N} s_{it} \log f(y_{it}) - f(y_{it}) - \log s_{it}! \qquad (4)$$
However, this method can be used with any point process with a log-concave likelihood. This can be viewed as a convex formulation of exponential family PCA [21, 22] which does not fix the number of principal components ahead of time.
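For reference, the objective of Eq. (3) is straightforward to evaluate. This is a minimal sketch under the assumption $f(y) = \exp(y)$; it drops the $\log s_{it}!$ term of Eq. (4), which is constant in $Y$, and the function name is ours.

```python
import numpy as np

def ld_glm_objective(Y, S, lam):
    """Evaluate the objective of Eq. (3) for f(y) = exp(y).

    Omits the log(s_it!) term of Eq. (4), which is constant in Y."""
    n, T = Y.shape
    A_Y = Y - Y.mean(axis=1, keepdims=True)   # row mean centering A(Y)
    nuc = np.linalg.norm(A_Y, ord='nuc')      # nuclear norm: sum of singular values
    loglik = np.sum(S * Y - np.exp(Y))        # Poisson log likelihood up to a constant
    return lam * np.sqrt(n * T) * nuc - loglik
```

Setting `lam = 0` recovers the negative Poisson log likelihood alone; larger `lam` increasingly penalizes the nuclear norm of the centered natural rates.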
3.2 Stable principal component pursuit
The model above is appropriate for cases where the spike history terms $D_\tau$ are zero, that is, the observed data can entirely be described by some low-dimensional global dynamics. In real data neurons exhibit history-dependent behavior like bursting and refractory periods. Moreover, if the recorded neurons are close to each other some may have direct synaptic connections. In this case $D_\tau$ may have full column rank, so from Eq. 1 it is clear that $y_t$ is no longer restricted to a low-dimensional affine space. In most practical cases we expect $D_\tau$ to be sparse, since most neurons are not connected to one another. In this case the natural rates matrix combines a low-rank term and a sparse term, and we can minimize a convex function that trades off between the rank of one term via the nuclear norm, the sparsity of another via the $\ell_1$ norm, and the data log likelihood:
$$\min_{Y, D_{1:k}, L} \; \lambda \sqrt{nT}\, \|\mathcal{A}(L)\|_* + \gamma \frac{T}{n} \sum_{\tau=1}^{k} \|D_\tau\|_1 - \sum_{t=1}^{T} \log p(s_t|y_t) \qquad (5)$$
$$\text{s.t. } Y = L + \sum_{\tau=1}^{k} D_\tau S_\tau, \text{ with } S_\tau = [0_{n,\tau}, s_1, \ldots, s_{T-\tau}],$$
where $0_{n,\tau}$ is a matrix of zeros of size $n \times \tau$, used to account for boundary effects. This is an extension of stable principal component pursuit [23], which separates sparse and low-rank components of a noise-corrupted matrix. Again, to ensure that every term in the objective function of Eq. 5 has roughly the same scaling $O(nT)$ we have multiplied each $\ell_1$ norm with $T/n$. One can also consider the use of a group sparsity penalty where each group collects a specific synaptic weight across all the $k$ time lags.
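The shifted spike-history matrices $S_\tau$ in the constraint of Eq. (5) can be assembled directly from the spike counts; a minimal sketch (function name ours):

```python
import numpy as np

def shifted_history(S, k):
    """Build the shifted spike-count matrices of Eq. (5):
    S_tau = [0_{n,tau}, s_1, ..., s_{T-tau}] for tau = 1..k,
    where S is the n x T matrix of spike counts."""
    n, T = S.shape
    return [np.hstack([np.zeros((n, tau)), S[:, :T - tau]])
            for tau in range(1, k + 1)]
```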
3.3 Evaluation through Bregman divergence decomposition
We need a way to evaluate the model on held out data, without assuming a particular form for the
dynamics. As we recover a subspace spanned by the columns of Y rather than a single parameter,
this presents a challenge. One option is to compute the marginal likelihood of the data integrated
over the entire subspace, but this is computationally difficult. For the case of PCA, we can project
the held out data onto a subspace spanned by principal components and compute what fraction of
total variance is explained by this subspace. We extend this approach beyond the linear Gaussian
case by use of a generalized Pythagorean theorem.
For any exponential family with natural parameters $\theta$, link function $g$, function $F$ such that $\nabla F = g^{-1}$, and sufficient statistic $T$, the log likelihood can be written as $D_F[\theta \| g(T(x))] - h(x)$, where $D_F[\cdot\|\cdot]$ is a Bregman divergence [24]: $D_F[x\|y] = F(x) - F(y) - (x - y)^T \nabla F(y)$. Intuitively, the Bregman divergence between $x$ and $y$ is the difference between the value of $F(x)$ and the value of the best linear approximation around $y$. Bregman divergences obey a generalization of the Pythagorean theorem: for any affine set $\Omega$ and points $x \notin \Omega$ and $y \in \Omega$, it follows that $D_F[x\|y] = D_F[x\|\Pi_\Omega(x)] + D_F[\Pi_\Omega(x)\|y]$, where $\Pi_\Omega(x) = \arg\min_{\omega \in \Omega} D_F[x\|\omega]$ is the projection of $x$ onto $\Omega$. In the case of squared error this is just a linear projection, and for the case of GLM log likelihoods this is equivalent to maximum likelihood estimation when the natural parameters are restricted to $\Omega$.
Given a matrix of natural rates recovered from training data, we compute the fraction of Bregman divergence explained by a sequence of subspaces as follows. Let $u_i$ be the $i$th singular vector of the recovered natural rates. Let $b$ be the mean natural rate, and let $y_t^{(q)}$ be the maximum likelihood natural rates restricted to the space spanned by $u_1, \ldots, u_q$:
$$y_t^{(q)} = \sum_{i=1}^{q} u_i v_{it}^{(q)} + \sum_{\tau=1}^{k} D_\tau s_{t-\tau} + b$$
$$v_t^{(q)} = \arg\max_v \log p\left(s_t \,\Big|\, \sum_{i=1}^{q} u_i v_{it} + \sum_{\tau=1}^{k} D_\tau s_{t-\tau} + b\right) \qquad (6)$$
Here $v_t^{(q)}$ is the projection of $y_t$ onto the singular vectors. Then the divergence from the mean explained by the $q$th dimension is given by
$$\frac{\sum_t D_F\left[y_t^{(q-1)} \,\middle\|\, y_t^{(q)}\right]}{\sum_t D_F\left[y_t^{(0)} \,\middle\|\, g(s_t)\right]} \qquad (7)$$
where $y_t^{(0)}$ is the bias $b$ plus the spike history terms. The sum of divergences explained over all $q$ is equal to one by virtue of the generalized Pythagorean theorem. For Gaussian noise $g(x) = x$ and $F(x) = \frac{1}{2}\|x\|^2$, and this is exactly the variance explained by each principal component, while for Poisson noise $g(x) = \log(x)$ and $F(x) = \sum_i \exp(x_i)$. This decomposition is only exact if $f = g^{-1}$ in Eq. 4, that is, if the nonlinearity is exponential. However, for other nonlinearities this may still be a useful approximation, and gives us a principled way of evaluating the goodness of fit of a learned subspace.
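For the Poisson case, $F(x) = \sum_i \exp(x_i)$, so $\nabla F(y) = \exp(y)$ and the Bregman divergence used in Eq. (7) has a simple closed form. A small sketch (function name ours):

```python
import numpy as np

def bregman_poisson(x, y):
    """Bregman divergence D_F[x||y] for F(x) = sum_i exp(x_i), the Poisson
    case: D_F = sum(exp(x) - exp(y) - (x - y) * exp(y))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(np.exp(x) - np.exp(y) - (x - y) * np.exp(y)))
```

The divergence is nonnegative and vanishes only at $x = y$, which is what makes the fraction in Eq. (7) behave like variance explained.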
4 Algorithms
Minimizing Eq. 3 and Eq. 5 is difficult, because the nuclear and $\ell_1$ norms are not differentiable everywhere. By using the alternating direction method of multipliers (ADMM), we can turn these problems into a sequence of tractable subproblems [25]. While not always the fastest method for solving a particular problem, we use it for its simplicity and generality. We describe the algorithm below, with more details in the supplemental materials.
4.1 Nuclear norm minimization
To find the optimal $Y$ we alternate between minimizing an augmented Lagrangian with respect to $Y$, minimizing with respect to an auxiliary variable $Z$, and performing gradient ascent on a Lagrange multiplier $\Lambda$. The augmented Lagrangian is
$$\mathcal{L}_\rho(Y, Z, \Lambda) = \lambda \sqrt{nT}\, \|Z\|_* - \sum_t \log p(s_t|y_t) + \langle \Lambda, \mathcal{A}(Y) - Z \rangle + \frac{\rho}{2} \|\mathcal{A}(Y) - Z\|_F^2 \qquad (8)$$
which is a smooth function of $Y$ and can be minimized by Newton's method. The gradient and Hessian of $\mathcal{L}_\rho$ with respect to $Y$ at iteration $k$ are
$$\nabla_Y \mathcal{L}_\rho = -\nabla_Y \sum_t \log p(s_t|y_t) + \rho \mathcal{A}(Y) - \mathcal{A}^T(\rho Z_k - \Lambda_k) \qquad (9)$$
$$\nabla_Y^2 \mathcal{L}_\rho = -\nabla_Y^2 \sum_t \log p(s_t|y_t) + \rho I_{nT} - \frac{\rho}{T} (1_T \otimes I_n)(1_T \otimes I_n)^T \qquad (10)$$
where $\otimes$ is the Kronecker product. Note that the first two terms of the Hessian are diagonal and the third is low-rank, so the Newton step can be computed in $O(nT)$ time by using the Woodbury matrix inversion lemma.
The minimum of Eq. 8 with respect to $Z$ is given exactly by singular value thresholding:
$$Z_{k+1} = U S_{\lambda\sqrt{nT}/\rho}(\Sigma) V^T, \qquad (11)$$
where $U \Sigma V^T$ is the singular value decomposition of $\mathcal{A}(Y_{k+1}) + \Lambda_k/\rho$, and $S_t(\cdot)$ is the (pointwise) soft thresholding operator $S_t(x) = \mathrm{sgn}(x)\max(0, |x| - t)$. Finally, the update to $\Lambda$ is a simple gradient ascent step: $\Lambda_{k+1} = \Lambda_k + \eta(\mathcal{A}(Y_{k+1}) - Z_{k+1})$, where $\eta$ is a step size that can be chosen.
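The $Z$-update of Eq. (11) is a standard singular value thresholding step, which is also the proximal operator of the nuclear norm. A minimal sketch (function name ours):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values
    of M by tau, as in the Z-update of Eq. (11). Equivalently, the
    proximal operator of tau * ||.||_* evaluated at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Applying `svt` with a large threshold drives small singular values to zero, which is how the ADMM iterations produce a low-rank $Z$.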
4.2 Stable principal component pursuit
To extend ADMM to the problem in Eq. 5 we only need to add one extra step, taking the minimum over the connectivity matrices with the other parameters held fixed. To simplify the notation, we group the connectivity matrices into a single matrix $D = (D_1, \ldots, D_k)$, and stack the different time-shifted matrices of spike histories on top of one another to form a single spike history matrix $H$. The objective then becomes
$$\min_{Y,D} \; \lambda \sqrt{nT}\, \|\mathcal{A}(Y - DH)\|_* + \gamma \frac{T}{n} \|D\|_1 - \sum_t \log p(s_t|y_t) \qquad (12)$$
where we have substituted $Y - DH$ for the variable $L$, and the augmented Lagrangian is
$$\mathcal{L}_\rho(Y, Z, D, \Lambda) = \lambda \sqrt{nT}\, \|Z\|_* + \gamma \frac{T}{n} \|D\|_1 - \sum_t \log p(s_t|y_t) + \langle \Lambda, \mathcal{A}(Y - DH) - Z \rangle + \frac{\rho}{2} \|\mathcal{A}(Y - DH) - Z\|_F^2 \qquad (13)$$
The updates for $\Lambda$ and $Z$ are almost unchanged, except that $\mathcal{A}(Y)$ becomes $\mathcal{A}(Y - DH)$. Likewise for $Y$ the only change is one additional term in the gradient:
$$\nabla_Y \mathcal{L}_\rho = -\nabla_Y \sum_t \log p(s_t|y_t) + \rho \mathcal{A}(Y) - \mathcal{A}^T(\rho Z + \rho \mathcal{A}(DH) - \Lambda) \qquad (14)$$
Minimizing over $D$ requires solving:
$$\arg\min_D \; \gamma \frac{T}{n} \|D\|_1 + \frac{\rho}{2} \|\mathcal{A}(DH) + Z - \mathcal{A}(Y) - \Lambda/\rho\|_F^2 \qquad (15)$$
This objective has the same form as LASSO regression. We solve this using ADMM as well, but any method for LASSO regression can be substituted.
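Since Eq. (15) has LASSO form, any proximal-gradient solver can stand in for the inner loop. The sketch below runs ISTA on a simplified stand-in that drops the centering operator $\mathcal{A}$ (an assumption made here for brevity); `soft` is the pointwise operator $S_t$ defined after Eq. (11), and the function names are ours.

```python
import numpy as np

def soft(x, t):
    """Pointwise soft thresholding S_t(x) = sgn(x) * max(0, |x| - t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(H, R, alpha, rho, iters=500):
    """ISTA for min_D alpha*||D||_1 + (rho/2)*||D H - R||_F^2, a simplified
    stand-in for the D-update of Eq. (15) with the centering operator A
    dropped. H: history matrix, R: residual target."""
    D = np.zeros((R.shape[0], H.shape[0]))
    L = rho * np.linalg.norm(H @ H.T, 2)      # Lipschitz constant of the smooth part
    for _ in range(iters):
        grad = rho * (D @ H - R) @ H.T        # gradient of the quadratic term
        D = soft(D - grad / L, alpha / L)     # proximal gradient step
    return D
```

When $H = I$ the solution reduces to elementwise soft thresholding of $R$, a useful sanity check for the step size and threshold.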
5 Experiments
We demonstrate our method on a number of artificial datasets and one real dataset. First, we show
in the absence of spike history terms that the true low dimensional subspace can be recovered in
the limit of large data, even when the dynamics are nonstationary. Second, we show that spectral
methods can accurately recover the transition matrix when dynamics are linear. Third, we show
that local connectivity can be separated from low-dimensional common input. Lastly, we show that
nuclear-norm penalized subspace recovery leads to improved prediction on real neural data recorded
from macaque motor cortex.
Model data was generated with 8 latent dimensions and 200 neurons, without any external input. For linear dynamical systems, the transition matrix was sampled from a Gaussian distribution, and the
[Figure 1 graphics: latent trajectory, firing rate, and covariance panels (left); subspace angle and divergence explained versus $\lambda$ (right), with legends for $T$ = 1000/10000, Spikes/NN/True $Y$, and 1/5/10 Dim.]
Figure 1: Recovering low-dimensional subspaces from nonstationary model data. While the subspace remains
the same, the dynamics switch between 5 different linear systems. Left top: one dimension of the latent
trajectory, switching from one set of dynamics to another (red line). Left middle: firing rates of a subset of
neurons during the same switch. Left bottom: covariance between spike counts for different neurons during
each epoch of linear dynamics. Right top: Angle between the true subspace and top principal components
directly from spike data, from natural rates recovered by nuclear norm minimization, and from the true natural
rates. Right bottom: fraction of Bregman divergence explained by the top 1, 5 or 10 dimensions from nuclear
norm minimization. Dotted lines are variance explained by the same number of principal components. For
$\lambda < 0.1$ the divergence explained by a given number of dimensions exceeds the variance explained by the
same number of PCs.
eigenvalues rescaled so the magnitude fell between .9 and .99 and the angle between $\pm \pi/10$, yielding slow and stable dynamics. The linear projection $C$ was a random Gaussian matrix with standard deviation 1/3, and the biases $b_i$ were sampled from $\mathcal{N}(-4, 1)$, which we found gave reasonable firing rates with nonlinearity $f(x) = \log(1 + \exp(x))$. To investigate the variance of our estimates, we generated multiple trials of data with the same parameters but different innovations.
We first sought to show that we could accurately recover the subspace in which the dynamics take
place even when those dynamics are not stationary. We split each trial into 5 epochs and in each
epoch resampled the transition matrix $A$ and set the covariance of innovations $\epsilon_t$ to $QQ^T$, where $Q$ is a random Gaussian matrix. We performed nuclear norm minimization on data generated from this model, varying the smoothing parameter $\lambda$ from $10^{-3}$ to $10$, and compared the subspace angle
between the top 8 principal components and the true matrix C. We repeated this over 10 trials to
compute the variance of our estimator. We found that when smoothing was optimized the recovered
subspace was significantly closer to the true subspace than the top principal components taken directly from spike data. Increasing the amount of data from 1000 to 10000 time bins significantly
reduced the average subspace angle at the optimal $\lambda$. The top PCs of the true natural rates $Y$, while
not spanning exactly the same space as C due to differences between the mean column and true bias
b, was still closer to the true subspace than the result of nuclear norm minimization.
We also computed the fraction of Bregman divergence explained by the sequence of spaces spanned
by successive principal components, solving Eq. 6 by Newton's method. We did not find a clear
drop at the true dimensionality of the subspace, but we did find that a larger share of the divergence
could be explained by the top dimensions than by PCA directly on spikes. Results are presented in
Fig. 1.
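The subspace angles reported above can be computed from principal angles between column spans; this is a standard construction, not the authors' code:

```python
import numpy as np

def subspace_angle(A, B):
    """Largest principal angle (radians) between the column spans of A and B,
    computed from the singular values of Qa^T Qb after orthonormalizing."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s.min(), -1.0, 1.0))
```

The angle is 0 when the spans coincide and $\pi/2$ when some direction of one span is orthogonal to all of the other.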
To show that the parameters of a latent dynamical system can be recovered, we investigated the
performance of spectral methods on model data with linear Gaussian latent dynamics. As the model
is a linear dynamical system with GLM output, we call this a GLM-LDS model. After estimating
natural rates by nuclear norm minimization with $\lambda = 0.01$ on 10 trials of 10000 time bins with unit-variance innovations $\epsilon_t$, we fit the transition matrix $A$ by subspace identification (SSID) [26].
The transition matrix is only identifiable up to a change of coordinates, so we evaluated our fit by
comparing the eigenvalues of the true and estimated A. Results are presented in Fig. 2. As expected,
SSID directly on spikes led to biased estimates of the transition. By contrast, SSID on the output of
[Figure 2 graphics: three eigenvalue scatter plots (real versus imaginary component), panels (a)-(c), comparing true and recovered eigenvalues.]
Figure 2: Recovered eigenvalues for the transition matrix of a linear dynamical system from model neural data.
Black: true eigenvalues. Red: recovered eigenvalues. (2a) Eigenvalues recovered from the true natural rates.
(2b) Eigenvalues recovered from subspace identification directly on spike counts. (2c) Eigenvalues recovered
from subspace identification on the natural rates estimated by nuclear norm minimization.
nuclear norm minimization had little bias, and seemed to perform almost as well as SSID directly
on the true natural rates. We found that other methods for fitting linear dynamical systems from the
estimated natural rates were biased, as was SSID on the result of nuclear norm minimization without
mean-centering (see the supplementary material for more details).
We incorporated spike history terms into our model data to see whether local connectivity and global
dynamics could be separated. Our model network consisted of 50 neurons, randomly connected with
95% sparsity, and synaptic weights sampled from a unit variance Gaussian. Data were sampled from
10000 time bins. The parameters $\lambda$ and $\gamma$ were both varied from $10^{-10}$ to $10^4$. We found that we could recover synaptic weights with an $r^2$ up to .4 on this data by combining both a nuclear norm and $\ell_1$ penalty, compared to at most .25 for an $\ell_1$ penalty alone, or 0.33 for a nuclear norm penalty alone. Somewhat surprisingly, at the extreme of either no nuclear norm penalty or a dominant nuclear norm penalty, increasing the $\ell_1$ penalty never improved estimation. This suggests that in a regime with
strong common inputs, some kind of correction is necessary not only for sparse penalties to achieve
optimal performance, but to achieve any improvement over maximum likelihood. It is also of interest
that the peak in $r^2$ is near a sharp transition to total sparsity.
[Figure 3 panels: left, a heat map of r² between true and recovered synaptic weights over a log-scale grid of the two regularization parameters; right, scatter plots of recovered versus true synaptic weights at the optimal, small, and large nuclear norm penalty settings.]
Figure 3: Connectivity matrices recovered by SPCP on model data. Left: r² between true and recovered
synaptic weights across a range of parameters. The position in parameter space of the data to the right is
highlighted by the stars. Axes are on a log scale. Right: scatter plot of true versus recovered synaptic weights,
illustrating the effect of the nuclear norm term.
Finally, we demonstrated the utility of our method on real recordings from a large population of
neurons. The data consists of 125 well-isolated units from a multi-electrode recording in macaque
motor cortex while the animal was performing a pinball task in two dimensions. Previous studies on
this data [27] have shown that information about arm velocity can be reliably decoded. As the electrodes are spaced far apart, we do not expect any direct connections between the units, and so leave
out the ℓ1 penalty term from the objective. We used 800 seconds of data binned every 100 ms for
training and 200 seconds for testing. We fit linear dynamical systems by subspace identification as in
Fig. 2, but as we did not have access to a "true" linear dynamical system for comparison, we evaluated our model fits by approximating the held out log likelihood by Laplace-Gaussian filtering [28].
We also fit the GLM-LDS model by running randomly initialized EM for 50 iterations for models
with up to 30 latent dimensions (beyond which training was prohibitively slow). We found that a strong
nuclear norm penalty improved prediction by several
hundred bits per second, and that fewer dimensions
were needed for optimal prediction as the nuclear
norm penalty was increased. The best fit models predicted held out data nearly as well as models trained
via EM, even though nuclear norm minimization is
not directly maximizing the likelihood of a linear dynamical system.
[Figure 4: held out log likelihood (bits/s) versus number of latent dimensions (0–50), with one curve per nuclear norm penalty strength (1.00e−04, 1.00e−03, 1.00e−02, 3.16e−02) and one for EM.]
Figure 4: Log likelihood of held out motor cortex data versus number of latent dimensions for different
latent linear dynamical systems. Prediction improves as λ increases, until it is comparable to EM.

6 Discussion

The method presented here has a number of straightforward extensions. If the dimensionality of the
latent state is greater than the dimensionality of the data, for instance when there are long-range history
dependencies in a small population of neurons, we would extend the natural rate matrix Y so that each
column contains multiple time steps of data. Y is then a block-Hankel matrix. Constructing the
block-Hankel matrix is also a linear operation, so the objective is still convex and can be efficiently
minimized [15]. If there are also observed inputs u_t then the term inside the nuclear norm should
also include a projection orthogonal to the row space of the inputs. This could enable joint learning
of dynamics and receptive fields for small populations of neurons with high dimensional inputs.
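The block-Hankel extension above amounts to a linear restacking of Y, which is why convexity is preserved. A minimal sketch (the block size k and row layout are illustrative assumptions, not the exact construction of [15]):

```python
import numpy as np

def block_hankel(Y, k):
    """Stack k consecutive time steps of Y (n x T) into each column,
    giving a (k*n) x (T-k+1) block-Hankel matrix. Because this is a
    linear map of Y, a nuclear norm penalty on it keeps the problem convex."""
    n, T = Y.shape
    cols = T - k + 1
    H = np.empty((k * n, cols))
    for i in range(k):
        H[i * n:(i + 1) * n, :] = Y[:, i:i + cols]
    return H
```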
Our model data results on connectivity inference have important implications for practitioners working with highly correlated data. GLM models with sparsity penalties have been used to infer connectivity in real neural networks [29], and in most cases these networks are only partially observed and
have large amounts of common input. We offer one promising route to removing the confounding
influence of unobserved correlated inputs, which explicitly models the common input rather than
conditioning on it [30].
It remains an open question what kinds of dynamics can be learned from the recovered natural
parameters. In this paper we have focused on linear systems, but nuclear norm minimization could
just as easily be combined with spectral methods for switching linear systems and general nonlinear
systems. We believe that the techniques presented here offer a powerful, extensible and robust
framework for extracting structure from neural activity.
Acknowledgments
Thanks to Zhang Liu, Michael C. Grant, Lars Buesing and Maneesh Sahani for helpful discussions,
and Nicho Hatsopoulos for providing data. This research was generously supported by an NSF
CAREER grant.
References
[1] I. H. Stevenson and K. P. Kording, "How advances in neural recording affect data analysis," Nature Neuroscience, vol. 14, no. 2, pp. 139–142, 2011.
[2] M. Okun, P. Yger, S. L. Marguet, F. Gerard-Mercier, A. Benucci, S. Katzner, L. Busse, M. Carandini, and K. D. Harris, "Population rate dynamics and multineuron firing patterns in sensory cortex," The Journal of Neuroscience, vol. 32, no. 48, pp. 17108–17119, 2012.
[3] K. L. Briggman, H. D. I. Abarbanel, and W. B. Kristan, "Optical imaging of neuronal populations during decision-making," Science, vol. 307, no. 5711, pp. 896–901, 2005.
[4] C. K. Machens, R. Romo, and C. D. Brody, "Functional, but not anatomical, separation of 'what' and 'when' in prefrontal cortex," The Journal of Neuroscience, vol. 30, no. 1, pp. 350–360, 2010.
[5] M. Stopfer, V. Jayaraman, and G. Laurent, "Intensity versus identity coding in an olfactory system," Neuron, vol. 39, no. 6, pp. 991–1004, 2003.
[6] M. M. Churchland, J. P. Cunningham, M. T. Kaufman, J. D. Foster, P. Nuyujukian, S. I. Ryu, and K. V. Shenoy, "Neural population dynamics during reaching," Nature, 2012.
[7] W. Brendel, R. Romo, and C. K. Machens, "Demixed principal component analysis," Advances in Neural Information Processing Systems, vol. 24, pp. 1–9, 2011.
[8] L. Paninski, Y. Ahmadian, D. G. Ferreira, S. Koyama, K. R. Rad, M. Vidne, J. Vogelstein, and W. Wu, "A new look at state-space models for neural data," Journal of Computational Neuroscience, vol. 29, no. 1-2, pp. 107–126, 2010.
[9] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, "Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity," Journal of Neurophysiology, vol. 102, no. 1, pp. 614–635, 2009.
[10] B. Petreska, B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, "Dynamical segmentation of single trials from population neural data," Advances in Neural Information Processing Systems, vol. 24, 2011.
[11] J. E. Kulkarni and L. Paninski, "Common-input models for multiple neural spike-train data," Network: Computation in Neural Systems, vol. 18, no. 4, pp. 375–407, 2007.
[12] A. Smith and E. Brown, "Estimating a state-space model from point process observations," Neural Computation, vol. 15, no. 5, pp. 965–991, 2003.
[13] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," Proceedings of the American Control Conference, vol. 6, pp. 4734–4739, 2001.
[14] Z. Liu and L. Vandenberghe, "Interior-point method for nuclear norm approximation with application to system identification," SIAM Journal on Matrix Analysis and Applications, vol. 31, pp. 1235–1256, 2009.
[15] Z. Liu, A. Hansson, and L. Vandenberghe, "Nuclear norm system identification with missing inputs and outputs," Systems & Control Letters, vol. 62, no. 8, pp. 605–612, 2013.
[16] L. Buesing, J. Macke, and M. Sahani, "Spectral learning of linear dynamics from generalised-linear observations with application to neural population data," Advances in Neural Information Processing Systems, vol. 25, 2012.
[17] L. Paninski, J. Pillow, and E. Simoncelli, "Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model," Neural Computation, vol. 16, no. 12, pp. 2533–2561, 2004.
[18] E. Chornoboy, L. Schramm, and A. Karr, "Maximum likelihood identification of neural point process systems," Biological Cybernetics, vol. 59, no. 4-5, pp. 265–275, 1988.
[19] J. Macke, J. Cunningham, M. Byron, K. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations," Advances in Neural Information Processing Systems, vol. 24, 2011.
[20] M. Collins, S. Dasgupta, and R. E. Schapire, "A generalization of principal component analysis to the exponential family," Advances in Neural Information Processing Systems, vol. 14, 2001.
[21] V. Solo and S. A. Pasha, "Point-process principal components analysis via geometric optimization," Neural Computation, vol. 25, no. 1, pp. 101–122, 2013.
[22] Z. Zhou, X. Li, J. Wright, E. Candes, and Y. Ma, "Stable principal component pursuit," Proceedings of the IEEE International Symposium on Information Theory, pp. 1518–1522, 2010.
[23] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, "Clustering with Bregman divergences," The Journal of Machine Learning Research, vol. 6, pp. 1705–1749, 2005.
[24] S. P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[25] P. Van Overschee and B. De Moor, "Subspace identification for linear systems: theory, implementation, applications," 1996.
[26] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski, "Population decoding of motor cortical activity using a generalized linear model with hidden states," Journal of Neuroscience Methods, vol. 189, no. 2, pp. 267–280, 2010.
[27] S. Koyama, L. Castellanos Pérez-Bolde, C. R. Shalizi, and R. E. Kass, "Approximate methods for state-space models," Journal of the American Statistical Association, vol. 105, no. 489, pp. 170–180, 2010.
[28] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, "Spatiotemporal correlations and visual signalling in a complete neuronal population," Nature, vol. 454, no. 7207, pp. 995–999, 2008.
[29] M. Harrison, "Conditional inference for learning the network structure of cortical microcircuits," in 2012 Joint Statistical Meeting, (San Diego, CA), 2012.
4,414 | 4,996 | Sparse nonnegative deconvolution for compressive
calcium imaging: algorithms and phase transitions
Eftychios A. Pnevmatikakis and Liam Paninski
Department of Statistics, Center for Theoretical Neuroscience
Grossman Center for the Statistics of Mind, Columbia University, New York, NY
{eftychios, liam}@stat.columbia.edu
Abstract
We propose a compressed sensing (CS) calcium imaging framework for monitoring
large neuronal populations, where we image randomized projections of the spatial
calcium concentration at each timestep, instead of measuring the concentration at
individual locations. We develop scalable nonnegative deconvolution methods for
extracting the neuronal spike time series from such observations. We also address
the problem of demixing the spatial locations of the neurons using rank-penalized
matrix factorization methods. By exploiting the sparsity of neural spiking we
demonstrate that the number of measurements needed per timestep is significantly
smaller than the total number of neurons, a result that can potentially enable
imaging of larger populations at considerably faster rates compared to traditional
raster-scanning techniques. Unlike traditional CS setups, our problem involves a
block-diagonal sensing matrix and a non-orthogonal sparse basis that spans multiple
timesteps. We provide tight approximations to the number of measurements needed
for perfect deconvolution for certain classes of spiking processes, and show that
this number undergoes a "phase transition," which we characterize using modern
tools relating conic geometry to compressed sensing.
1
Introduction
Calcium imaging methods have revolutionized data acquisition in experimental neuroscience; we
can now record from large neural populations to study the structure and function of neural circuits
(see e.g. Ahrens et al. (2013)), or from multiple locations on a dendritic tree to examine the detailed
computations performed at a subcellular level (see e.g. Branco et al. (2010)). Traditional calcium
imaging techniques involve a raster-scanning protocol where at each cycle/timestep the microscope
scans the image in a voxel-by-voxel fashion, or some other predetermined pattern, e.g. through
random access multiphoton (RAMP) microscopy (Reddy et al., 2008), and thus the number of
measurements per timestep is equal to the number of voxels of interest. Although this protocol
produces "eye-interpretable" measurements, it introduces a tradeoff between the size of the imaged
field and the imaging frame rate; very large neural populations can be imaged only with a relatively
low temporal resolution.
This unfavorable situation can potentially be overcome by noticing that many acquired measurements
are redundant; voxels can be "void" in the sense that no neurons are located there, and active voxels
at nearby locations or timesteps will be highly correlated. Moreover, neural activity is typically
sparse; most neurons do not spike at every timestep. During recent years, imaging practitioners have
developed specialized techniques to leverage this redundancy. For example, Nikolenko et al. (2008)
describe a microscope that uses a spatial light modulator and allows for the simultaneous imaging
of different (predefined) image regions. More broadly, the advent of compressed sensing (CS) has
found many applications in imaging such as MRI (Lustig et al., 2007), hyperspectral imaging (Gehm
et al., 2007), sub-diffraction microscopy (Rust et al., 2006) and ghost imaging (Katz et al., 2009),
1
with available hardware implementations (see e.g. Duarte et al. (2008)). Recently, Studer et al.
(2012) presented a fluorescence microscope based on the CS framework, where each measurement
is obtained by projection of the whole image on a random pattern. This framework can lead to
significant undersampling ratios for biological fluorescence imaging.
In this paper we propose the application of the imaging framework of Studer et al. (2012) to the case
of neural population calcium imaging to address the problem of imaging large neural populations with
high temporal resolution. The basic idea is to not measure the calcium at each location individually,
but rather to take a smaller number of "mixed" measurements (based on randomized projections of
the data). Then we use convex optimization methods that exploit the sparse structure in the data in
order to simultaneously demix the information from the randomized projection observations and
deconvolve the effect of the slow calcium indicator to recover the spikes. Our results indicate that the
number of required randomized measurements scales merely with the number of expected spikes
rather than the ambient dimension of the signal (number of voxels/neurons), allowing for the fast
monitoring of large neural populations. We also address the problem of estimating the (potentially
overlapping) spatial locations of the imaged neurons and demixing these locations using methods for
nuclear norm minimization and nonnegative matrix factorization. Our methods scale linearly with
the experiment length and are largely parallelizable, ensuring computational tractability. Our results
indicate that calcium imaging can be potentially scaled up to considerably larger neuron populations
and higher imaging rates by moving to compressive signal acquisition.
In the traditional static compressive imaging paradigm the sensing matrix is dense; every observation
comes from the projection of all the image voxels to a random vector/matrix. Moreover, the underlying
image can be either directly sparse (most of the voxels are zero) or sparse in some orthogonal basis
(e.g. Fourier, or wavelet). In our case the sensing matrix has a block-diagonal form (we can only
observe the activity at one specific time in each measurement) and the sparse basis (which corresponds
to the inverse of the matrix implementing the convolution of the spikes from the calcium indicator) is
non-orthogonal and spans multiple timelags. We analyze the effect of these distinctive features in
Sec. 3 in a noiseless setting. We show that as the number of measurements increases, the probability of
successful recovery undergoes a phase transition, and study the resulting phase transition curve (PTC),
i.e., the number of measurements per timestep required for accurate deconvolution as a function of
the number of spikes. Our analysis uses recent results that connect CS with conic geometry through
the "statistical dimension" (SD) of descent cones (Amelunxen et al., 2013). We demonstrate that in
many cases of interest, the SD provides a very good estimate of the PTC.
2 Model description and approximate maximum-a-posteriori inference
See e.g. Vogelstein et al. (2010) for background on statistical models for calcium imaging data. Here
we assume that at every timestep an image or light field (either two- or three-dimensional) is observed
for a duration of T timesteps. Each observed field contains a total number of d voxels and can be
vectorized in a single column vector. Thus all the activity can be described by d ? T matrix F . Now
assume that the field contains a total number of N neurons, where N is in general unknown. Each
spike causes a rapid increase in the calcium concentration which then decays with a time constant
that depends on the chemical properties of the calcium indicator. For each neuron i we assume that
the "calcium activity" c_i can be described as a stable autoregressive AR(1) process¹ that
filters the neuron's spikes s_i(t) according to the fast-rise slow-decay procedure described before:

c_i(t) = γ c_i(t − 1) + s_i(t),    (1)

where γ is the discrete time constant which satisfies 0 < γ < 1 and can be approximated as
γ = 1 − exp(−Δt/τ), where Δt is the length of each timestep and τ is the continuous time constant
of the calcium indicator. In general we assume that each s_i(t) is binary due to the small length of
the timestep in the proposed compressive imaging setting, and we use an i.i.d. prior for each neuron,
p(s_i(t) = 1) = π_i.² Moreover, let a_i ∈ R^d_+ be the (nonnegative) location vector for neuron i, and
b ∈ R^d_+ the (nonnegative) vector of baseline concentration for all the voxels. The spatial calcium
concentration profile at time t can be described as

f(t) = Σ_{i=1}^N a_i c_i(t) + b.    (2)
¹ Generalization to general AR(p) processes is straightforward, but we keep p = 1 for simplicity.
² This choice is merely for simplicity; more general prior distributions can be incorporated in our framework.
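As a concrete illustration of eqs. (1)-(2), the following sketch simulates a single neuron's Bernoulli spikes and AR(1) calcium trace (the values of γ and the firing probability are illustrative, matching the simulation settings reported later; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_calcium(T, gamma=0.95, p_spike=0.04):
    """Simulate one neuron's AR(1) calcium trace (eq. 1):
    c(t) = gamma * c(t-1) + s(t), with i.i.d. Bernoulli(p_spike) spikes."""
    s = (rng.random(T) < p_spike).astype(float)
    c = np.zeros(T)
    for t in range(T):
        c[t] = (gamma * c[t - 1] if t > 0 else 0.0) + s[t]
    return s, c
```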
In conventional raster-scanning experiments, at each timestep we observe a noisy version of the d-dimensional image f(t). Since d is typically large, the acquisition of this vector can take a significant
amount of time, leading to a lengthy timestep Δt and low temporal resolution. Instead, we propose
to observe the projections of f(t) onto a random matrix B_t ∈ R^{n×d} (e.g. each entry of B_t could be
chosen as 0 or 1 with probability 0.5):

y(t) = B_t f(t) + ε_t,    ε_t ∼ N(0, Σ_t),    (3)

where ε_t denotes measurement noise (Gaussian, with diagonal covariance Σ_t, for simplicity). If
n = dim(y(t)) satisfies n ≪ d, then y(t) represents a compression of f(t) that can potentially be
obtained more quickly than the full f (t). Now if we can use statistical methods to recover f (t) (or
equivalently the location ai and spikes si of each neuron) from the compressed measurements y(t),
the total imaging throughput will be increased by a factor proportional to the undersampling ratio
d/n. Our assumption here is that the random projection matrices Bt can be constructed quickly.
Recent technological innovations have enabled this fast construction by using digital micromirror
devices that enable spatial light modulation and can construct different excitation patterns with a high
frequency (order of kHz). The total fluorescence can then be detected with a single photomultiplier
tube. For more details we refer to Duarte et al. (2008); Nikolenko et al. (2008); Studer et al. (2012).
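The compressed acquisition model of eq. (3) can be sketched as follows (an illustrative simulation assuming 0/1 Bernoulli sensing matrices and i.i.d. Gaussian noise, as in the text; the function and argument names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def compress(F, n, sigma=0.0):
    """Take n randomized projections of each column f(t) of the d x T
    fluorescence matrix F (eq. 3), drawing a fresh 0/1 Bernoulli matrix
    B_t per timestep. Returns the n x T measurements and the list of B_t."""
    d, T = F.shape
    Y = np.empty((n, T))
    Bs = []
    for t in range(T):
        Bt = rng.integers(0, 2, size=(n, d)).astype(float)
        Y[:, t] = Bt @ F[:, t] + sigma * rng.standard_normal(n)
        Bs.append(Bt)
    return Y, Bs
```

Note the block-diagonal structure: each y(t) depends only on f(t), which is what distinguishes this setup from dense static CS sensing matrices.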
We discuss the statistical recovery problem next. For future reference, note that eqs. (1)-(3) can be
written in matrix form as (vec(·) denotes the vectorizing operator)

F = AC + b 1_T^T,    S = C G^T,    vec(Y) = B vec(F) + ε,    (4)

where G is the T × T lower bidiagonal matrix with ones on the main diagonal and −γ on the first
subdiagonal, and B = blkdiag{B_1, . . . , B_T}.
2.1 Approximate MAP inference with an interior point method
For now we assume that A is known. In general MAP inference of S is difficult due to the discrete
nature of S. Following Vogelstein et al. (2010) we relax S to take continuous values in the interval
[0, 1] (remember that we assume binary spikes), and appropriately modify the prior for s_i(t) to
log p(s_i(t)) ∝ −λ_i s_i(t) 1(0 ≤ s_i(t) ≤ 1), where λ_i is chosen such that the relaxed prior has the
same mean π_i. To exploit the banded structure of G we seek the MAP estimate of C (instead of S)
by solving the following convex quadratic problem (we let ỹ(t) = y(t) − B_t b):

minimize_C  Σ_{t=1}^T (1/2) (ỹ(t) − B_t A c(t))^T Σ_t^{−1} (ỹ(t) − B_t A c(t)) − log p(C)
subject to  0 ≤ C G^T ≤ 1,  c(1) ≥ 0.    (P-QP)

Using the prior on S and the relation S = C G^T, the log-prior of C can be written as log p(C) ∝
−λ^T C G^T 1_T. We can solve (P-QP) efficiently using an interior point method with a log-barrier
(Vogelstein et al., 2010). The contribution of the likelihood term to the Hessian is a block-diagonal
matrix, whereas the barrier-term will contribute a block-tridiagonal matrix where each non-zero
block is diagonal. As a result the Newton search direction −H^{−1}∇ can be computed efficiently in
O(T N³) time using a block version of standard forward-backward methods for tridiagonal systems
of linear equations. We note that if N is large this can be inefficient. In this case we can use an
augmented Lagrangian method (Boyd et al., 2011) to derive a fully parallelizable first order method,
with O(T N ) complexity per iteration. We refer to the supplementary material for additional details.
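The forward-backward solve for the block-tridiagonal Newton system can be sketched with a block Thomas algorithm (an illustrative implementation, not the authors' code; it assumes the eliminated diagonal blocks remain well-conditioned):

```python
import numpy as np

def block_thomas(D, L, U, b):
    """Solve a block-tridiagonal system H x = b by block forward
    elimination and back substitution. D: list of T diagonal blocks
    (N x N); L, U: lists of T-1 sub- and super-diagonal blocks;
    b: list of T right-hand-side vectors. Cost is O(T N^3)."""
    T = len(D)
    Dp = [None] * T  # eliminated diagonal blocks
    bp = [None] * T  # updated right-hand sides
    Dp[0], bp[0] = D[0], b[0]
    for t in range(1, T):
        W = L[t - 1] @ np.linalg.inv(Dp[t - 1])
        Dp[t] = D[t] - W @ U[t - 1]
        bp[t] = b[t] - W @ bp[t - 1]
    x = [None] * T
    x[T - 1] = np.linalg.solve(Dp[T - 1], bp[T - 1])
    for t in range(T - 2, -1, -1):
        x[t] = np.linalg.solve(Dp[t], bp[t] - U[t] @ x[t + 1])
    return x
```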
As a first example we consider a simple setup where all the parameters are assumed to be known.
We consider N = 50 neurons observed over T = 1000 timesteps. We assume that A, b are known,
with A = IN (corresponding to non-overlapping point neurons, with one neuron in each voxel) and
b = 0, respectively. This case of known point neurons can be thought as the compressive analog
of RAMP microscopy where the neuron locations are predetermined and then imaged in a serial
manner. (We treat the case of unknown and possibly overlapping neuron locations in section 2.2.)
Each neuron was assumed to fire in an i.i.d. fashion with probability per timestep p = 0.04. Each
measurement was obtained by projecting the spatial fluorescence vector at time t, f(t), onto a random
matrix B_t. Each row of B_t is taken as an i.i.d. normalized vector 2χ/√N, where χ has i.i.d. entries
following a fair Bernoulli distribution. For each set of measurements we assume that Σ_t = σ² I_n, and
3
[Figure 1 panels A–E: raster-style plots of neuron id versus timestep for the true and estimated calcium traces, a spike comparison for one neuron, and a log-scale plot of relative error versus number of measurements per timestep for SNR levels from 0 dB to 30 dB plus the noiseless case.]
Figure 1: Performance of proposed algorithm under different noise levels. A: True traces. B:
Estimated traces with n = 5 (10× undersampling), SNR = 20dB. C: Estimated traces with n = 20
(2.5× undersampling), SNR = 20dB. D: True and estimated spikes from the traces shown in panels
B and C for a randomly selected neuron. E: Relative error between true and estimated traces for
different number of measurements per timestep under different noise levels. The error decreases with
the number of observations and the reconstruction is stable with respect to noise.
the signal-to-noise ratio (SNR) in dB is defined as SNR = 10 log10(Var[θ^T f(t)]/Nσ²); a quick
calculation reveals that SNR = 10 log10(p(1 − p)/((1 − γ²)σ²)).
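As a concrete sketch of this generative setup, the following simulates Bernoulli spiking, AR(1) calcium dynamics, and one set of random ±1 projections, and evaluates the closed-form SNR expression. The row scaling and the value of σ here are illustrative choices, not necessarily the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, p, gamma, sigma = 50, 1000, 0.04, 0.99, 0.05

# Bernoulli spikes and AR(1) calcium dynamics: c(t) = gamma*c(t-1) + s(t).
S = (rng.random((N, T)) < p).astype(float)
C = np.zeros((N, T))
for t in range(T):
    C[:, t] = (gamma * C[:, t - 1] if t > 0 else 0.0) + S[:, t]

# One set of n compressive measurements at time t: project f(t) = c(t)
# (A = I, b = 0) onto random +/-1 rows (scaled by 1/sqrt(N)), plus Gaussian noise.
n = 5
B = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(N)
y = B @ C[:, T - 1] + sigma * rng.standard_normal(n)

# Nominal SNR from the closed-form expression in the text.
snr_db = 10 * np.log10(p * (1 - p) / ((1 - gamma**2) * sigma**2))
```

With these parameters the closed-form SNR comes out near 29 dB, in the range explored in Fig. 1.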
Fig. 1 examines the solution of (P-QP) when the number of measurements per timestep n is varied from
1 to N and for 8 different SNR values 0, 5, . . . , 30 plus the noiseless case (SNR = ∞). Fig. 1A
shows the noiseless traces for all the neurons and panels B and C show the reconstructed traces for
SNR = 20dB and n = 5, 20 respectively. Fig. 1D shows the estimated spikes for these cases for a
randomly picked neuron. For a very small number of measurements (n = 5, i.e., 10× undersampling)
the inferred calcium traces (Fig. 1B) already closely resemble the true traces. However, the inferred
MAP values of the spikes (computed by S = CG^T, essentially a differencing operation here) lie
in the interior of [0, 1], and the results are not directly interpretable at a high temporal resolution.
As n increases (n = 20, red) the estimated spikes lie very close to {0, 1} and a simple thresholding
procedure can recover the true spike times. In Fig. 1E the relative error between the estimated and
true traces (‖Ĉ − C‖_F/‖C‖_F, with ‖·‖_F denoting the Frobenius norm) is plotted. In general the
error decreases with the number of observations and the reconstruction is robust to noise. Finally,
by observing the noiseless case (dashed curve) we see that when n ≥ 13 the error becomes practically
zero, indicating fully compressed acquisition of the calcium traces with a roughly 4× undersampling
factor. We will see below that this undersampling factor is inversely proportional to the firing rate:
we can recover highly sparse spike signals S using very few measurements n.
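The differencing step is easy to make concrete: under the AR(1) model c(t) = γc(t−1) + s(t), the matrix G is lower bidiagonal (1 on the diagonal, −γ on the subdiagonal), so S = CG^T recovers the spikes exactly from noiseless traces. A minimal sketch, with γ and the spike train chosen arbitrarily for illustration:

```python
import numpy as np

gamma, T = 0.95, 8
# G encodes AR(1) deconvolution: (C G^T)[:, t] = c(t) - gamma * c(t-1) = s(t).
G = np.eye(T) - gamma * np.eye(T, k=-1)

s_true = np.array([0, 1, 0, 0, 1, 1, 0, 0], dtype=float)
c = np.zeros(T)
for t in range(T):
    c[t] = (gamma * c[t - 1] if t > 0 else 0.0) + s_true[t]

s_hat = c @ G.T                      # exact differencing on noiseless traces
s_rec = (s_hat > 0.5).astype(float)  # simple threshold for binary spikes
```

On noisy estimates of C, s_hat lies in the interior of [0, 1] and the thresholding step does the rounding, matching the behavior described for Fig. 1D.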
2.2 Estimation of the spatial matrix A
The above algorithm assumes that the underlying neurons have known locations, i.e., the matrix
A is known. In some cases A can be estimated a-priori by running a conventional raster-scanning
experiment at a high spatial resolution and locating the active voxels. However this approach is
expensive and can still be challenging due to noise and possible spatial overlap between different
neurons. To estimate A within the compressive framework we note that the baseline-subtracted
spatiotemporal calcium matrix F (see eqs. (2) and (4)) can be written as F̃ = F − b1_T^T = AC; thus
rank(F̃) ≤ N where N is the number of underlying neurons, with typically N ≪ d. Since N is also
in general unknown we estimate F̃ by solving a nuclear norm penalized problem (Recht et al., 2010)
minimize over F̃:  Σ_{t=1}^{T} (1/2)(y(t) − B_t f̃(t))^T Σ_t^{−1} (y(t) − B_t f̃(t)) − log p(F̃) + λ_NN ‖F̃‖_*
subject to  F̃G^T ≥ 0,  f̃(1) ≥ 0,        (P-NN)
where ‖·‖_* denotes the nuclear norm (NN) of a matrix (i.e., the sum of its singular values), which is
a convex approximation to the nonconvex rank function (Fazel, 2002). The prior of F̃ can be chosen
in a similar fashion as log p(C), i.e., log p(F̃) ∝ −α_F^T F̃G^T 1_T, where α_F ∈ R^d. Although more
complex than (P-QP), (P-NN) is again convex and can be solved efficiently using e.g. the ADMM
method of Boyd et al. (2011). From the solution of (P-NN) we can estimate N by appropriately
thresholding the singular values of the estimated F̃.3 Having N we can then use appropriately
constrained nonnegative matrix factorization (NMF) methods to alternately estimate A and C. Note
that during this NMF step the baseline vector b can also be estimated jointly with A. Since NMF
methods are nonconvex, and thus prone to local optima, informative initialization is important. We
can use the solution of (P-NN) to initialize the spatial component A using clustering methods, similar
to methods typically used in neuronal extracellular spike sorting (Lewicki, 1998). Details are given
in the supplement (along with some discussion of the estimation of the other parameters in this
problem); we refer to Pnevmatikakis et al. (2013) for full details.
Figure 2: Estimating locations and calcium concentration from compressive calcium imaging measurements. A: True spatiotemporal concentration, B: estimate by solving (P-NN), C: estimate by using
NMF methods. D: Logarithmic plot of the first singular values of the solution of (P-NN), E: Estimation of baseline vector, F: true spatial locations, G: estimated spatial locations. The NN-penalized
method estimates the number of neurons and the NMF algorithm recovers the spatial and temporal
components with high accuracy.
In Fig. 2 we present an application of this method to an example with N = 8 spatially overlapping
neurons. For simplicity we consider neurons in a one-dimensional field with total number of voxels
d = 128 and spatial positions shown in Fig. 2F. At each timestep we obtain just n = 5 noisy
measurements using random projections on binary masks. From the solution to the NN-penalized
problem (P-NN) (Fig. 2B) we threshold the singular values (Fig. 2D) and estimate the number of
underlying neurons (note the logarithmic gap between the 8th and 9th largest singular values that
enables this separation). We then use the NMF approach to obtain final estimates of the spatial
locations (Fig. 2G), the baseline vector (Fig. 2E), and the full spatiotemporal concentration (Fig. 2C).
The estimates match well with the true values. Note that n < N ≪ d, showing that compressive
imaging with significant undersampling factors is possible, even in the case of classical raster scanning
protocol where the spatial locations are unknown.
3 Estimation of the phase transition curve in the noiseless case
The results presented above indicate that reconstruction of the spikes is possible even with significant
undersampling. In this section we study this problem from a compressed sensing (CS) perspective
in the idealized case where the measurements are noiseless. For simplicity, we also assume that
A = I (similar to a RAMP setup). Unlike the traditional CS setup, where a sparse signal (in some
basis) is sensed with a dense fully supported random matrix, in our case the sensing matrix B has a
block-diagonal form. A standard justification of CS approaches proceeds by establishing that the
sensing matrix satisfies the “restricted isometry property” (RIP) for certain classes of sparse signals
3 To reduce potential shrinkage but promote low-rank solutions this “global” NN penalty can be replaced by a
series of “local” NN penalties on spatially overlapping patches.
with high probability (w.h.p.); this property in turn guarantees the correct recovery of the parameters
of interest (Candes and Tao, 2005). Yap et al. (2011) showed that for signals that are sparse in some
orthogonal basis, the RIP holds for random block-diagonal matrices w.h.p. with a number of sufficient
measurement that scales with the squared coherence between the sparse basis and the elementary
(identity) basis. For non-orthogonal basis the RIP property has only been established for fully dense
sensing matrices (Candes et al., 2011). For signals with sparse variations Ba et al. (2012) established
perfect and stable recovery conditions under the assumption that the sensing matrix at each timestep
satisfies certain RIPs, and the sparsity level at each timestep has known upper bounds.
While the RIP is a valuable tool for the study of convex relaxation approaches to compressed sensing
problems, its estimates are usually up to a constant and can be relatively loose (Blanchard et al.,
2011). An alternative viewpoint is offered from conic geometric arguments (Chandrasekaran et al.,
2012; Amelunxen et al., 2013) that examine how many measurements are required such that the
convex relaxed program will have a unique solution which coincides with the true sparse solution.
We use this approach to study the theoretical properties of our proposed compressed calcium imaging
framework in an idealized noiseless setting. When noise is absent, the quadratic program (P-QP) for
the approximate MAP estimate converges to a linear program4:

minimize_C  f(C),  subject to:  B vec(C) = vec(Y),        (P-LP)

with f(C) = (v ⊗ 1_N)^T vec(C) if (G ⊗ I_d)vec(C) ≥ 0, f(C) = ∞ otherwise, and v = G^T 1_T.
Here ⊗ denotes the Kronecker product and we used the identity vec(CG^T) = (G ⊗ I_d)vec(C). To
examine the properties of (P-LP) we follow the approach of Amelunxen et al. (2013): For a fully
dense sensing i.i.d. Gaussian (or random rotation) matrix B, the linear program (P-LP) will succeed
w.h.p. to reconstruct the true solution C0 , if the total number of measurements nT satisfies
nT ≥ δ(D(f, C0)) + O(√(T N)).        (5)
D(f, C0 ) is the descent cone of f at C0 , induced by the set of non-increasing directions from C0 , i.e.,
D(f, C0) = ∪_{τ≥0} { y ∈ R^{N×T} : f(C0 + τy) ≤ f(C0) },        (6)
and δ(C) is the “statistical dimension” (SD) of a convex cone C ⊆ R^m, defined as the expected
squared length of a standard normal Gaussian vector projected onto the cone:
δ(C) = E_g ‖Π_C(g)‖²,  with g ∼ N(0, I_m).
Eq. (5), and the analysis of Amelunxen et al. (2013), state that as T N → ∞, the probability that
(P-LP) will succeed to find the true solution undergoes a phase transition, and that the phase transition
curve (PTC), i.e., the number of measurements required for perfect reconstruction normalized by
the ambient dimension N T (Donoho and Tanner, 2009), coincides with the normalized SD. In our
case B is a block-diagonal matrix (not a fully-dense Gaussian matrix), and the SD only provides an
estimate of the PTC. However, as we show below, this estimate is tight in most cases of interest.
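The vec/Kronecker identity used above, vec(CG^T) = (G ⊗ I)vec(C), holds under column-major vectorization and is easy to check numerically; a small sanity-check sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, gamma = 4, 6, 0.9

C = rng.standard_normal((N, T))
G = np.eye(T) - gamma * np.eye(T, k=-1)

def vec(M):
    # Column-major (Fortran-order) vectorization, the convention under which
    # vec(A X B) = (B^T kron A) vec(X).
    return M.flatten(order="F")

lhs = vec(C @ G.T)
rhs = np.kron(G, np.eye(N)) @ vec(C)
assert np.allclose(lhs, rhs)
```

The same convention is what makes the block-diagonal structure of B line up with the per-timestep measurements y(t) = B_t f(t).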
3.1 Computing the statistical dimension
Using a result from Amelunxen et al. (2013) the statistical dimension can also be expressed as the
expected squared distance of a standard normal vector from the cone induced by the subdifferential
(Rockafellar, 1970) ∂f of f at the true solution C0:
δ(D(f, C0)) = E_g inf_{τ>0} min_{u ∈ τ ∂f(C0)} ‖g − u‖²,  with g ∼ N(0, I_{NT}).        (7)
Although in general (7) cannot be solved in closed form, it can be easily estimated numerically; in the
supplementary material we show that the subdifferential ∂f(C0) takes the form of a convex polytope,
i.e., an intersection of linear half spaces. As a result, the distance of any vector g from ∂f(C0) can
be found by solving a simple quadratic program, and the statistical dimension can be estimated with
a simple Monte-Carlo simulation (details are presented in the supplement). The characterization
of (7) also explains the effect of the sparsity pattern on the SD. In the case where the sparse basis
4 To illustrate the generality of our approach we allow for arbitrary nonnegative spike values in this analysis,
but we also discuss the binary case that is of direct interest to our compressive calcium framework.
is the identity then the cone induced by the subdifferential can be decomposed as the union of the
respective subdifferential cones induced by each coordinate. It follows that the SD is invariant to
coordinate permutations and depends only on the sparsity level, i.e., the number of nonzero elements.
However, this result is in general not true for a nonorthogonal sparse basis, indicating that the precise
location of the spikes (sparsity pattern) and not just their number has an effect on the SD. In our case
the calcium signal is sparse in the non-orthogonal basis described by the matrix G from (4).
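The Monte-Carlo idea is easy to illustrate on a cone whose projection is available in closed form. For the nonnegative orthant, Π_C simply clips negative coordinates and δ(R^m_+) = m/2 exactly; the cones arising here instead require a small quadratic program per sample, but the estimator is the same:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n_samples = 100, 20000

# delta(C) = E ||Pi_C(g)||^2 with g ~ N(0, I_m). For the nonnegative orthant
# the projection clips negatives, and the exact answer is m / 2.
g = rng.standard_normal((n_samples, m))
proj = np.maximum(g, 0.0)
delta_hat = np.mean(np.sum(proj**2, axis=1))
```

With 20000 samples the estimate concentrates tightly around m/2 = 50, illustrating why a few hundred samples suffice for the SD curves below.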
3.2 Relation with the phase transition curve
In this section we examine the relation of the SD with the PTC for our compressive calcium imaging
problem. Let S denote the set of spikes, Ω = supp(S), and C the induced calcium traces C = SG^{−T}.
As we argued, the statistical dimension of the descent cone D(f, C) depends both on the cardinality
of the spike set |?| (sparsity level) and the location of the spikes (sparsity pattern). To examine
the effects of the sparsity level and pattern we define the normalized expected statistical dimension
(NESD) with respect to a certain distribution (e.g. Bernoulli) π from which the spikes S are drawn:
δ̄(k/NT, π) = E_{Ω∼π} [δ(D(f, C))/NT],  with supp(S) = Ω, and E_{Ω∼π}|Ω| = k.
In Fig. 3 we examine the relation of the NESD with the phase transition curve of the noiseless problem
(P-LP). We consider a setup with N = 40 point neurons (A = Id , d = N ) observed over T = 50
timesteps and chose discrete time constant γ = 0.99. The spike-times of each neuron came from
the same distribution and we considered two different distributions: (i) Bernoulli spiking, i.e., each
neuron fires i.i.d. spikes with the probability k/T , and (ii) desynchronized periodic spiking where
each neuron fires spikes deterministically with discrete frequency k/T timesteps^{−1}, and each neuron
has a random phase. We considered two forms of spikes: (i) with nonnegative values (si(t) ≥ 0),
and (ii) with binary values (si(t) ∈ {0, 1}), and we also considered two forms of sensing matrices:
(i) with time-varying matrix Bt , and (ii) with constant, fully supported matrices B1 = . . . = BT .
The entries of each Bt are again drawn from an i.i.d. fair Bernoulli distribution. For each of these
8 different conditions we varied the expected number of spikes per neuron k from 1 to T and the
number of observed measurements n from 1 to N . Fig. 3 shows the empirical probability that the
program (P-LP) will succeed in reconstructing the true solution averaged over a 100 repetitions.
Success is declared when the reconstructed spike signal Ŝ satisfies5 ‖Ŝ − S‖_F/‖S‖_F < 10^{−3}. We
also plot the empirical PTC (purple dashed line), i.e., the empirical 50% success probability line, and
the NESD (solid blue line), approximated with a Monte Carlo simulation using 200 samples, for each
of the four distinct cases (note that the SD does not depend on the structure of the sensing matrix B).
In all cases, our problem undergoes a sharp phase transition as the number of measurements per
timestep varies: in the white regions of Fig. 3, S is recovered essentially perfectly, with a transition
to a high probability of at least some errors in the black regions. Note that the phase transitions are
defined as functions of the sparsity index k/T ; the signal sparsity sets the compressibility of the data.
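This sharp-recovery behavior can be reproduced in miniature with an off-the-shelf LP solver. The sketch below is a hypothetical small instance using scipy's linprog (not the solver used for Fig. 3): it recovers a very sparse spike pattern from nT = 40 block-diagonal ±1 measurements of an NT = 60-dimensional signal.

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.optimize import linprog

rng = np.random.default_rng(4)
N, T, n, gamma = 3, 20, 2, 0.9

G = np.eye(T) - gamma * np.eye(T, k=-1)
S = np.zeros((N, T))
S[0, 3] = S[1, 10] = S[2, 15] = 1.0   # a few isolated spikes
C = np.linalg.solve(G, S.T).T         # C = S G^{-T}, so that S = C G^T

vec = lambda M: M.flatten(order="F")
Bt = [rng.choice([-1.0, 1.0], size=(n, N)) for _ in range(T)]
B = block_diag(*Bt)                   # block-diagonal sensing matrix
y = B @ vec(C)                        # noiseless measurements, nT = 40 < NT = 60

# (P-LP): minimize (v kron 1_N)^T vec(C) subject to B vec(C) = y and
# (G kron I_N) vec(C) >= 0 (nonnegative spikes).
v = G.T @ np.ones(T)
c_obj = np.kron(v, np.ones(N))
A_ub = -np.kron(G, np.eye(N))
res = linprog(c_obj, A_ub=A_ub, b_ub=np.zeros(N * T),
              A_eq=B, b_eq=y, bounds=(None, None), method="highs")
C_hat = res.x.reshape((N, T), order="F")
rel_err = np.linalg.norm(C_hat - C) / np.linalg.norm(C)
```

At this sparsity level (k/T = 0.05) and undersampling index (n/N ≈ 0.67) the instance sits well inside the white region of Fig. 3, and the LP recovers C essentially exactly.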
In addition, in the case of time-varying Bt , the NESD provides a surprisingly good estimate of the
PTC, especially in the binary case or when the spiking signal is actually sparse (k/T < 0.5), a result
that justifies our overall approach. Although using time-varying sensing matrices Bt leads to better
results, compression is also possible with a constant B. This is an important result for implementation
purposes where changing the sensing matrix might be a costly or slow operation. On a more technical
side we also observe the following interesting properties:
• Periodic spiking requires more measurements for accurate deconvolution, a property again
predicted by the SD. This comes from the fact that the sparse basis is not orthogonal and
shows that for a fixed sparsity level k/T the sparsity pattern also affects the number of required
measurements. This difference depends on the time constant γ. As γ → 0, G → I; the problem
becomes equivalent to a standard nonnegative CS problem, where the spike pattern is irrelevant.
• In the Bernoulli spiking nonnegative case, the SD is numerically very close to the PTC of the
standard nonnegative CS problem (not shown here), adding to the growing body of evidence for
universal behavior of convex relaxation approaches to CS (Donoho and Tanner, 2009).
• In the binary case the results exhibit a symmetry around the axis k/T = 0.5. In fact this symmetry
becomes exact as γ → 1. In the supplement we prove that this result is predicted by the SD.
5 When calculating this error we excluded the last 10 timesteps. As every spike is filtered by the AR process
it has an effect for multiple timelags in the future and an optimal encoder has to sense it over multiple timelags.
This number depends only on γ and not on the length T, and thus this behavior becomes negligible as T → ∞.
Figure 3: Relation of the statistical dimension with the phase transition curve for two different
spiking distributions (Bernoulli, periodic), two different spike values (nonnegative, binary), and two
classes of sensing matrices (time-varying, constant). For each panel: x-axis normalized sparsity k/T ,
y-axis undersampling index n/N . Each panel shows the empirical success probability for each pair
(k/T, n/N ), the empirical 50%-success line (dashed purple line) and the SD (blue solid line). When
B is time-varying the SD provides a good estimate of the empirical PTC.
As mentioned above, our analysis is only approximate since B is block-diagonal and not fully
dense. However, this approximation is tight in the time-varying case. Still, it is possible to construct
adversarial counterexamples where the SD approach fails to provide a good estimate of the PTC.
For example, if all neurons fire in a completely synchronized manner then the required number
of measurements grows at a rate that is not predicted by (5). We present such an example in the
supplement and note that more research is needed to understand such extreme cases.
4 Conclusion
We proposed a framework for compressive calcium imaging. Using convex relaxation tools from
compressed sensing and low rank matrix factorization, we developed an efficient method for extracting
neurons’ spatial locations and the temporal locations of their spikes from a limited number of
measurements, enabling the imaging of large neural populations at potentially much higher imaging
rates than currently available. We also studied a noiseless version of our problem from a compressed
sensing point of view using newly introduced tools involving the statistical dimension of convex
cones. Our analysis can in certain cases capture the number of measurements needed for perfect
deconvolution, and helps explain the effects of different spike patterns on reconstruction performance.
Our approach suggests potential improvements over the standard raster scanning protocol (unknown
locations) as well as the more efficient RAMP protocol (known locations). However our analysis is
idealistic and neglects several issues that can arise in practice. The results of Fig. 1 suggest a tradeoff
between effective compression and SNR level. In the compressive framework the cycle length can be
relaxed more easily due to the parallel nature of the imaging (each location is targeted during the
whole “cycle”). The summed activity is then collected by the photomultiplier tube that introduces the
noise. While the nature of this addition has to be examined in practice, we expect that the observed
SNR will allow for significant compression. Another important issue is motion correction for brain
movement, especially in vivo conditions. While new approaches have to be derived for this problem,
the novel approach of Cotton et al. (2013) could be adaptable to our setting. We hope that our work
will inspire experimentalists to leverage the proposed advanced signal processing methods to develop
more efficient imaging protocols.
Acknowledgements
LP is supported by an NSF CAREER award. This work is also supported by ARO MURI W911NF-121-0594.
References
Ahrens, M. B., M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller (2013). Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods 10(5), 413–420.
Amelunxen, D., M. Lotz, M. B. McCoy, and J. A. Tropp (2013). Living on the edge: A geometric theory of phase transitions in convex optimization. arXiv preprint arXiv:1303.6672.
Ba, D., B. Babadi, P. Purdon, and E. Brown (2012). Exact and stable recovery of sequences of signals with sparse increments via differential l1-minimization. In Advances in Neural Information Processing Systems 25, pp. 2636–2644.
Blanchard, J. D., C. Cartis, and J. Tanner (2011). Compressed sensing: How sharp is the restricted isometry property? SIAM Review 53(1), 105–125.
Boyd, S., N. Parikh, E. Chu, B. Peleato, and J. Eckstein (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3(1), 1–122.
Branco, T., B. A. Clark, and M. Häusser (2010). Dendritic discrimination of temporal input sequences in cortical neurons. Science 329, 1671–1675.
Candes, E. J., Y. C. Eldar, D. Needell, and P. Randall (2011). Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis 31(1), 59–73.
Candes, E. J. and T. Tao (2005). Decoding by linear programming. Information Theory, IEEE Transactions on 51(12), 4203–4215.
Chandrasekaran, V., B. Recht, P. A. Parrilo, and A. S. Willsky (2012). The convex geometry of linear inverse problems. Foundations of Computational Mathematics 12(6), 805–849.
Cotton, R. J., E. Froudarakis, P. Storer, P. Saggau, and A. S. Tolias (2013). Three-dimensional mapping of microcircuit correlation structure. Frontiers in Neural Circuits 7.
Donoho, D. and J. Tanner (2009). Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367(1906), 4273–4293.
Duarte, M. F., M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk (2008). Single-pixel imaging via compressive sampling. Signal Processing Magazine, IEEE 25(2), 83–91.
Fazel, M. (2002). Matrix rank minimization with applications. Ph.D. thesis, Stanford University.
Gehm, M., R. John, D. Brady, R. Willett, and T. Schulz (2007). Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 15(21), 14013–14027.
Katz, O., Y. Bromberg, and Y. Silberberg (2009). Compressive ghost imaging. Applied Physics Letters 95(13).
Lewicki, M. (1998). A review of methods for spike sorting: the detection and classification of neural action potentials. Network: Computation in Neural Systems 9, R53–R78.
Lustig, M., D. Donoho, and J. M. Pauly (2007). Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58(6), 1182–1195.
Nikolenko, V., B. Watson, R. Araya, A. Woodruff, D. Peterka, and R. Yuste (2008). SLM microscopy: Scanless two-photon imaging and photostimulation using spatial light modulators. Frontiers in Neural Circuits 2, 5.
Pnevmatikakis, E., T. Machado, L. Grosenick, B. Poole, J. Vogelstein, and L. Paninski (2013). Rank-penalized nonnegative spatiotemporal deconvolution and demixing of calcium imaging data. In Computational and Systems Neuroscience Meeting COSYNE. (journal paper in preparation for PLoS Computational Biology).
Recht, B., M. Fazel, and P. Parrilo (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review 52(3), 471–501.
Reddy, G., K. Kelleher, R. Fink, and P. Saggau (2008). Three-dimensional random access multiphoton microscopy for functional imaging of neuronal activity. Nature Neuroscience 11(6), 713–720.
Rockafellar, R. (1970). Convex Analysis. Princeton University Press.
Rust, M. J., M. Bates, and X. Zhuang (2006). Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nature Methods 3(10), 793–796.
Studer, V., J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan (2012). Compressive fluorescence microscopy for biological and hyperspectral imaging. Proceedings of the National Academy of Sciences 109(26), E1679–E1687.
Vogelstein, J., A. Packer, T. Machado, T. Sippy, B. Babadi, R. Yuste, and L. Paninski (2010). Fast non-negative deconvolution for spike train inference from population calcium imaging. Journal of Neurophysiology 104(6), 3691–3704.
Yap, H. L., A. Eftekhari, M. B. Wakin, and C. J. Rozell (2011). The restricted isometry property for block diagonal matrices. In Information Sciences and Systems (CISS), 2011 45th Annual Conference on, pp. 1–6.
9
| 4996 |@word neurophysiology:1 version:3 mri:2 compression:4 norm:5 c0:13 seek:1 sensed:1 simulation:2 covariance:1 solid:2 shot:1 series:2 contains:2 woodruff:1 denoting:1 recovered:1 nt:2 si:11 universality:1 chu:1 written:3 john:1 informative:1 predetermined:2 enables:1 cis:1 plot:2 interpretable:2 discrimination:1 half:1 selected:1 device:1 record:1 filtered:1 provides:4 characterization:1 contribute:1 location:26 mathematical:1 along:1 constructed:1 direct:1 differential:1 mousavi:1 prove:1 manner:2 bobin:1 acquired:1 mask:1 expected:5 rapid:2 behavior:2 roughly:1 examine:6 growing:1 brain:2 decomposed:1 cardinality:1 increasing:1 becomes:4 estimating:2 moreover:3 underlying:4 circuit:3 panel:4 advent:1 kg:1 developed:2 compressive:16 brady:1 guarantee:1 temporal:7 remember:1 every:4 fink:1 scaled:1 rm:1 k2:1 slm:1 before:1 negligible:1 engineering:1 local:2 modify:1 sd:16 treat:1 limit:1 id:4 establishing:1 firing:1 modulation:1 black:1 plus:1 chose:1 initialization:1 k:1 might:1 studied:1 suggests:1 challenging:1 examined:1 factorization:4 kckf:1 liam:2 limited:1 averaged:1 fazel:3 unique:1 union:1 block:11 practice:2 procedure:2 empirical:7 universal:1 significantly:1 thought:1 projection:8 boyd:3 studer:4 suggest:1 onto:3 interior:3 close:2 operator:1 cannot:1 deconvolve:1 sheet:1 r78:1 conventional:2 map:5 lagrangian:1 center:2 quick:1 equivalent:1 straightforward:1 duration:1 convex:14 keller:1 resolution:6 simplicity:5 recovery:5 needell:1 examines:1 nuclear:4 enabled:1 population:10 variation:1 justification:1 coordinate:2 increment:1 construction:1 rip:5 exact:2 programming:1 magazine:1 us:2 element:1 yap:2 approximated:2 expensive:1 located:1 trend:1 rozell:1 muri:1 observed:7 preprint:1 solved:2 capture:1 region:3 cycle:3 sun:1 plo:1 decrease:2 technological:1 movement:1 valuable:1 mentioned:1 complexity:1 depend:1 tight:3 solving:4 distinctive:1 basis:12 completely:1 easily:2 train:1 distinct:1 fast:4 describe:1 effective:1 monte:2 modulators:1 
detected:1 larger:2 solve:1 supplementary:2 stanford:1 ramp:4 relax:1 compressed:13 otherwise:1 reconstruct:1 statistic:2 encoder:1 storer:1 grosenick:1 jointly:1 noisy:2 final:1 sequence:2 propose:3 reconstruction:6 aro:1 product:1 bernouli:2 photostimulation:1 subcellular:1 academy:1 description:1 frobenius:1 exploiting:1 optimum:1 demix:1 produce:1 perfect:4 converges:1 help:1 derive:1 develop:2 ac:4 stat:1 illustrate:1 eq:3 orger:1 ddimensional:1 c:11 involves:1 indicate:3 come:2 resemble:1 synchronized:1 direction:3 predicted:3 closely:1 correct:1 filter:1 stochastic:1 enable:2 material:2 implementing:1 explains:1 argued:1 generalization:1 opt:1 dendritic:2 biological:2 elementary:1 im:1 frontier:2 correction:1 hold:1 practically:1 around:1 considered:3 normal:2 exp:1 branco:2 nonorthogonal:1 mapping:1 bromberg:1 dictionary:1 purpose:1 idealistic:1 estimation:5 robson:1 currently:1 fluorescence:5 individually:1 largest:1 pnevmatikakis:3 repetition:1 tf:1 tool:4 minimization:4 hope:1 gaussian:4 rather:2 ck:1 shrinkage:1 varying:8 mccoy:1 derived:1 improvement:1 rank:8 likelihood:1 bernoulli:5 adversarial:1 cg:1 baseline:6 sense:2 duarte:3 posteriori:1 inference:4 amelunxen:6 dim:1 nn:13 typically:4 bt:17 relation:5 kc:1 lotz:1 schulz:1 tao:2 pixel:1 overall:1 issue:2 dual:1 classification:1 eldar:1 priori:1 resonance:1 spatial:19 constrained:1 initialize:1 summed:1 laska:1 equal:1 construct:2 field:5 having:1 sampling:1 biology:1 represents:1 throughput:1 promote:1 future:2 few:1 modern:2 randomly:2 simultaneously:1 national:1 packer:1 individual:1 replaced:1 phase:15 geometry:4 fire:4 detection:1 interest:5 highly:2 introduces:2 extreme:1 light:5 predefined:1 accurate:2 ambient:2 implication:1 edge:1 purdon:1 respective:1 orthogonal:7 skf:1 tree:1 plotted:1 theoretical:2 increased:1 column:1 ar:3 w911nf:1 measuring:1 tractability:1 entry:3 snr:13 successful:1 tridiagonal:2 characterize:1 connect:1 scanning:6 spatiotemporal:4 periodic:4 varies:1 considerably:2 
Generalized Method-of-Moments for Rank Aggregation
Hossein Azari Soufiani
SEAS
Harvard University
[email protected]
William Z. Chen
Statistics Department
Harvard University
[email protected]
David C. Parkes
SEAS
Harvard University
[email protected]
Lirong Xia
Computer Science Department
Rensselaer Polytechnic Institute
Troy, NY 12180, USA
[email protected]
Abstract
In this paper we propose a class of efficient Generalized Method-of-Moments
(GMM) algorithms for computing parameters of the Plackett-Luce model, where
the data consists of full rankings over alternatives. Our technique is based on
breaking the full rankings into pairwise comparisons, and then computing parameters that satisfy a set of generalized moment conditions. We identify conditions
for the output of GMM to be unique, and identify a general class of consistent
and inconsistent breakings. We then show by theory and experiments that our algorithms run significantly faster than the classical Minorize-Maximization (MM)
algorithm, while achieving competitive statistical efficiency.
1 Introduction
In many applications, we need to aggregate the preferences of agents over a set of alternatives to
produce a joint ranking. For example, in systems for ranking the quality of products, restaurants, or
other services, we can generate an aggregate rank through feedback from individual users. This idea
of rank aggregation also plays an important role in multiagent systems, meta-search engines [4],
belief merging [5], crowdsourcing [15], and many other e-commerce applications.
A standard approach towards rank aggregation is to treat input rankings as data generated from
a probabilistic model, and then learn the MLE of the input data. This idea has been explored in
both the machine learning community and the (computational) social choice community. The most
popular statistical models are the Bradley-Terry-Luce model (BTL for short) [2, 13], the Plackett-Luce model (PL for short) [17, 13], the random utility model [18], and the Mallows (Condorcet)
model [14, 3]. In machine learning, researchers have focused on designing efficient algorithms to
estimate parameters for popular models; e.g. [8, 12, 1]. This line of research is sometimes referred
to as learning to rank [11].
Recently, Negahban et al. [16] proposed a rank aggregation algorithm, called Rank Centrality (RC),
based on computing the stationary distribution of a Markov chain whose transition matrix is defined
according to the data (pairwise comparisons among alternatives). The authors describe the approach
as being model independent, and prove that for data generated according to BTL, the output of RC
converges to the ground truth, and the performance of RC is almost identical to the performance of
MLE for BTL. Moreover, they characterized the convergence rate and showed experimental comparisons.
Our Contributions. In this paper, we take a generalized method-of-moments (GMM) point of view
towards rank aggregation. We first reveal a new and natural connection between the RC algorithm [16] and the BTL model by showing that the RC algorithm can be interpreted as a GMM estimator
applied to the BTL model.
The main technical contribution of this paper is a class of GMMs for parameter estimation under
the PL model, which generalizes BTL and the input consists of full rankings instead of pairwise
comparisons as in the case of BTL and RC algorithm.
Our algorithms first break full rankings into pairwise comparisons, and then solve the generalized
moment conditions to find the parameters. Each of our GMMs is characterized by a way of breaking
full rankings. We characterize conditions for the output of the algorithm to be unique, and we also
obtain some general characterizations that help us to determine which method of breaking leads to
a consistent GMM. Specifically, full breaking (which uses all pairwise comparisons in the ranking)
is consistent, but adjacent breaking (which only uses pairwise comparisons in adjacent positions) is
inconsistent.
We characterize the computational complexity of our GMMs, and show that the asymptotic complexity is better than for the classical Minorize-Maximization (MM) algorithm for PL [8]. We also
compare statistical efficiency and running time of these methods experimentally using both synthetic
and real-world data, showing that all GMMs run much faster than the MM algorithm.
For the synthetic data, we observe that many consistent GMMs converge as fast as the MM algorithm, while there exists a clear tradeoff between computational complexity and statistical efficiency
among consistent GMMs.
Technically our technique is related to the random walk approach [16]. However, we note that
our algorithms aggregate full rankings under PL, while the RC algorithm aggregates pairwise comparisons. Therefore, it is quite hard to directly compare our GMMs and RC fairly since they are
designed for different types of data. Moreover, by taking a GMM point of view, we prove the consistency of our algorithms on top of theories for GMMs, while Negahban et al. proved the consistency
of RC directly.
2 Preliminaries
Let $\mathcal{C} = \{c_1, \ldots, c_m\}$ denote the set of $m$ alternatives. Let $D = \{d_1, \ldots, d_n\}$ denote the data, where each $d_j$ is a full ranking over $\mathcal{C}$. The PL model is a parametric model where each alternative $c_i$ is parameterized by $\gamma_i \in (0, 1)$, such that $\sum_{i=1}^m \gamma_i = 1$. Let $\vec\gamma = (\gamma_1, \ldots, \gamma_m)$ and $\Omega$ denote the parameter space. Let $\bar\Omega$ denote the closure of $\Omega$. That is, $\bar\Omega = \{\vec\gamma : \forall i,\ \gamma_i \ge 0 \text{ and } \sum_{i=1}^m \gamma_i = 1\}$.

Given $\vec\gamma \in \Omega$, the probability for a ranking $d = [c_{i_1} \succ c_{i_2} \succ \cdots \succ c_{i_m}]$ is defined as follows.
$$\Pr\nolimits_{\mathrm{PL}}(d \mid \vec\gamma) = \frac{\gamma_{i_1}}{\sum_{l=1}^m \gamma_{i_l}} \times \frac{\gamma_{i_2}}{\sum_{l=2}^m \gamma_{i_l}} \times \cdots \times \frac{\gamma_{i_{m-1}}}{\gamma_{i_{m-1}} + \gamma_{i_m}}$$

In the BTL model, the data is composed of pairwise comparisons instead of rankings, and the model is parameterized in the same way as PL, such that $\Pr_{\mathrm{BTL}}(c_{i_1} \succ c_{i_2} \mid \vec\gamma) = \frac{\gamma_{i_1}}{\gamma_{i_1} + \gamma_{i_2}}$. BTL can be thought of as a special case of PL via marginalization, since $\Pr_{\mathrm{BTL}}(c_{i_1} \succ c_{i_2} \mid \vec\gamma) = \sum_{d : c_{i_1} \succ c_{i_2}} \Pr_{\mathrm{PL}}(d \mid \vec\gamma)$. In the rest of the paper, we denote $\Pr = \Pr_{\mathrm{PL}}$.
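The sequential-choice structure of the PL probability can be made concrete with a short sketch. The function name `pl_prob` and the dictionary-based parameterization below are illustrative choices of mine, not from the paper; the code just multiplies the factors of the product above.

```python
import itertools

def pl_prob(ranking, gamma):
    """Plackett-Luce probability of a full ranking: the top alternative is
    chosen with probability proportional to its weight, then removed, and
    the process repeats on the remaining alternatives."""
    prob, remaining = 1.0, list(ranking)
    for c in ranking[:-1]:
        prob *= gamma[c] / sum(gamma[r] for r in remaining)
        remaining.remove(c)
    return prob

gamma = {"c1": 0.5, "c2": 0.3, "c3": 0.2}
# Pr([c1 > c2 > c3]) = (0.5 / 1.0) * (0.3 / 0.5) = 0.3
p = pl_prob(["c1", "c2", "c3"], gamma)
```

A quick sanity check on the formula is that the probabilities of all $m!$ rankings sum to one.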
Generalized Method-of-Moments (GMM) provides a wide class of algorithms for parameter estimation. In GMM, we are given a parametric model whose parameter space is $\Omega \subseteq \mathbb{R}^m$, an infinite series of $q \times q$ matrices $\mathcal{W} = \{W_t : t \ge 1\}$, and a column-vector-valued function $g(d, \vec\gamma) \in \mathbb{R}^q$. For any vector $\vec a \in \mathbb{R}^q$ and any $q \times q$ matrix $W$, we let $\|\vec a\|_W = (\vec a)^T W \vec a$. For any data $D$, let $g(D, \vec\gamma) = \frac{1}{n} \sum_{d \in D} g(d, \vec\gamma)$, and the GMM method computes parameters $\vec\gamma' \in \Omega$ that minimize $\|g(D, \vec\gamma')\|_{W_n}$, formally defined as follows:
$$\mathrm{GMM}_g(D, \mathcal{W}) = \{\vec\gamma' \in \Omega : \|g(D, \vec\gamma')\|_{W_n} = \inf_{\vec\gamma \in \Omega} \|g(D, \vec\gamma)\|_{W_n}\} \quad (1)$$
Since $\Omega$ may not be compact (as is the case for PL), the set of parameters $\mathrm{GMM}_g(D, \mathcal{W})$ can be empty. A GMM is consistent if and only if for any $\vec\gamma^* \in \Omega$, $\mathrm{GMM}_g(D, \mathcal{W})$ converges in probability to $\vec\gamma^*$ as $n \to \infty$ and the data is drawn i.i.d. given $\vec\gamma^*$. Consistency is a desirable property for GMMs.
It is well-known that $\mathrm{GMM}_g(D, \mathcal{W})$ is consistent if it satisfies some regularity conditions plus the following condition [7]:
Condition 1. $E_{d|\vec\gamma^*}[g(d, \vec\gamma)] = 0$ if and only if $\vec\gamma = \vec\gamma^*$.
Example 1. MLE as a consistent GMM: Suppose the likelihood function is twice-differentiable; then the MLE is a consistent GMM where $g(d, \vec\gamma) = \nabla_{\vec\gamma} \log \Pr(d \mid \vec\gamma)$ and $W_n = I$.
Example 2. Negahban et al. [16] proposed the Rank Centrality (RC) algorithm that aggregates pairwise comparisons $D_P = \{Y_1, \ldots, Y_n\}$.¹ Let $a_{ij}$ denote the number of $c_i \succ c_j$ in $D_P$, and it is assumed that for any $i \ne j$, $a_{ij} + a_{ji} = k$. Let $d_{\max}$ denote the maximum pairwise defeats for an alternative. RC first computes the following $m \times m$ column-stochastic matrix:
$$P_{\mathrm{RC}}(D_P)_{ij} = \begin{cases} a_{ij}/(k\, d_{\max}) & \text{if } i \ne j \\ 1 - \sum_{l \ne i} a_{li}/(k\, d_{\max}) & \text{if } i = j \end{cases}$$
Then, RC computes $(P_{\mathrm{RC}}(D_P))^T$'s stationary distribution $\vec\gamma$ as the output.
Let
$$X^{c_i \succ c_j}(Y) = \begin{cases} 1 & \text{if } Y = [c_i \succ c_j] \\ 0 & \text{otherwise} \end{cases} \quad \text{and} \quad P^*_{\mathrm{RC}}(Y)_{ij} = \begin{cases} X^{c_i \succ c_j}(Y) & \text{if } i \ne j \\ -\sum_{l \ne i} X^{c_l \succ c_i}(Y) & \text{if } i = j \end{cases}$$
Let $g_{\mathrm{RC}}(d, \vec\gamma) = P^*_{\mathrm{RC}}(d) \cdot \vec\gamma$. It is not hard to check that the output of RC is the output of $\mathrm{GMM}_{g_{\mathrm{RC}}}$. Moreover, $\mathrm{GMM}_{g_{\mathrm{RC}}}$ satisfies Condition 1 under the BTL model, and as we will show later in Corollary 4, $\mathrm{GMM}_{g_{\mathrm{RC}}}$ is consistent for BTL.
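Example 2 can be sketched numerically as follows. The function name, the choice of eigen-solver, and the use of $d_{\max} = m$ as the scaling constant are my own illustrative choices, not the paper's implementation. When the win counts match the BTL winning probabilities exactly, the stationary distribution recovers $\vec\gamma$.

```python
import numpy as np

def rank_centrality(a, k, d_max):
    """RC estimate from win counts a[i, j] = #(c_i beats c_j), with
    a[i, j] + a[j, i] = k for all i != j (Example 2)."""
    m = a.shape[0]
    P = a / (k * d_max)
    for i in range(m):
        # Diagonal entry makes each column sum to 1 (column-stochastic).
        P[i, i] = 1.0 - sum(P[l, i] for l in range(m) if l != i)
    # Stationary distribution of P^T = right eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

gamma = np.array([0.5, 0.3, 0.2])
m, k = len(gamma), 1000
a = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if i != j:
            a[i, j] = k * gamma[i] / (gamma[i] + gamma[j])  # "ideal" BTL counts
est = rank_centrality(a, k, d_max=m)
```

With these idealized counts, `est` equals `gamma` up to numerical precision, illustrating why RC satisfies Condition 1 under BTL.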
3 Generalized Method-of-Moments for the Plackett-Luce model
In this section we introduce our GMMs for rank aggregation under PL. In our methods, $q = m$, $W_n = I$, and $g$ is linear in $\vec\gamma$. We start with a simple special case to illustrate the idea.
Example 3. For any full ranking $d$ over $\mathcal{C}$, we let
- $X^{c_i \succ c_j}(d) = \begin{cases} 1 & \text{if } c_i \succ_d c_j \\ 0 & \text{otherwise} \end{cases}$
- $P(d)$ be an $m \times m$ matrix where $P(d)_{ij} = \begin{cases} X^{c_i \succ c_j}(d) & \text{if } i \ne j \\ -\sum_{l \ne i} X^{c_l \succ c_i}(d) & \text{if } i = j \end{cases}$
- $g_F(d, \vec\gamma) = P(d) \cdot \vec\gamma$ and $P(D) = \frac{1}{n} \sum_{d \in D} P(d)$
For example, let $m = 3$ and $D = \{[c_1 \succ c_2 \succ c_3], [c_2 \succ c_3 \succ c_1]\}$. Then
$$P(D) = \begin{bmatrix} -1 & 1/2 & 1/2 \\ 1/2 & -1/2 & 1 \\ 1/2 & 0 & -3/2 \end{bmatrix}.$$
The corresponding GMM seeks to minimize $\|P(D) \cdot \vec\gamma\|_2^2$ for $\vec\gamma \in \Omega$.
It is not hard to verify that
$$(E_{d|\vec\gamma^*}[P(d)])_{ij} = \begin{cases} \frac{\gamma_i^*}{\gamma_i^* + \gamma_j^*} & \text{if } i \ne j \\ -\sum_{l \ne i} \frac{\gamma_l^*}{\gamma_l^* + \gamma_i^*} & \text{if } i = j \end{cases}$$
which means that $E_{d|\vec\gamma^*}[g_F(d, \vec\gamma^*)] = E_{d|\vec\gamma^*}[P(d)] \cdot \vec\gamma^* = 0$. It is not hard to verify that $\vec\gamma^*$ is the only solution to $E_{d|\vec\gamma^*}[g_F(d, \vec\gamma)] = 0$. Therefore, $\mathrm{GMM}_{g_F}$ satisfies Condition 1. Moreover, we will show in Corollary 3 that $\mathrm{GMM}_{g_F}$ is consistent for PL.
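The construction of $P(d)$ in Example 3 can be checked mechanically; the sketch below (function and variable names are mine) reproduces the matrix for the two-ranking example in the text.

```python
import numpy as np

def full_breaking_P(ranking, alternatives):
    """P(d) from Example 3: P_ij = 1 if c_i is ranked above c_j in d (i != j),
    and P_ii = -(number of alternatives ranked above c_i)."""
    m = len(alternatives)
    pos = {c: ranking.index(c) for c in alternatives}
    P = np.zeros((m, m))
    for i, ci in enumerate(alternatives):
        for j, cj in enumerate(alternatives):
            if i != j and pos[ci] < pos[cj]:
                P[i, j] = 1.0
        P[i, i] = -float(pos[ci])  # c_i loses exactly pos[ci] comparisons
    return P

alts = ["c1", "c2", "c3"]
D = [["c1", "c2", "c3"], ["c2", "c3", "c1"]]
P_D = sum(full_breaking_P(d, alts) for d in D) / len(D)
# P_D -> [[-1, 1/2, 1/2], [1/2, -1/2, 1], [1/2, 0, -3/2]]
```

Note that every column of $P(d)$ sums to zero by construction, since each counted win adds $+1$ off the diagonal and $-1$ on the loser's diagonal.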
In the above example, we count all pairwise comparisons in a full ranking $d$ to build $P(d)$, and define $g = P(D) \cdot \vec\gamma$ to be linear in $\vec\gamma$. In general, we may consider some subset of pairwise comparisons. This leads to the definition of our class of GMMs based on the notion of breakings. Intuitively, a breaking is an undirected graph over the $m$ positions in a ranking, such that for any full ranking $d$, the pairwise comparisons between alternatives in the $i$th position and $j$th position are counted to construct $P_G(d)$ if and only if $\{i, j\} \in G$.
Definition 1. A breaking is a non-empty undirected graph $G$ whose vertices are $\{1, \ldots, m\}$. Given any breaking $G$, any full ranking $d$ over $\mathcal{C}$, and any $c_i, c_j \in \mathcal{C}$, we let
- $X_G^{c_i \succ c_j}(d) = \begin{cases} 1 & \text{if } \{\mathrm{Pos}(c_i, d), \mathrm{Pos}(c_j, d)\} \in G \text{ and } c_i \succ_d c_j \\ 0 & \text{otherwise} \end{cases}$, where $\mathrm{Pos}(c_i, d)$ is the position of $c_i$ in $d$.
- $P_G(d)$ be an $m \times m$ matrix where $P_G(d)_{ij} = \begin{cases} X_G^{c_i \succ c_j}(d) & \text{if } i \ne j \\ -\sum_{l \ne i} X_G^{c_l \succ c_i}(d) & \text{if } i = j \end{cases}$
- $g_G(d, \vec\gamma) = P_G(d) \cdot \vec\gamma$
- $\mathrm{GMM}_G(D)$ be the GMM method that solves Equation (1) for $g_G$ and $W_n = I$.²
¹ The BTL model in [16] is slightly different from that in this paper. Therefore, in this example we adopt an equivalent description of the RC algorithm.
In this paper, we focus on the following breakings, illustrated in Figure 1.
- Full breaking: $G_F$ is the complete graph. Example 3 is the GMM with full breaking.
- Top-$k$ breaking: for any $k \le m$, $G_T^k = \{\{i, j\} : i \le k, j \ne i\}$.
- Bottom-$k$ breaking: for any $k \ge 2$, $G_B^k = \{\{i, j\} : i, j \ge m + 1 - k, j \ne i\}$.³
- Adjacent breaking: $G_A = \{\{1, 2\}, \{2, 3\}, \ldots, \{m - 1, m\}\}$.
- Position-$k$ breaking: for any $k \ge 2$, $G_P^k = \{\{k, i\} : i \ne k\}$.
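Definition 1 specializes to any of these breakings by changing the edge set. A minimal sketch (function names are mine; positions are 1-based as in the text): only position pairs that are edges of the breaking graph $G$ contribute comparisons to $P_G(d)$.

```python
import numpy as np

def breaking_P(ranking, G):
    """P_G(d) from Definition 1. `ranking` lists alternative indices from the
    1st to the m-th position; G is a set of frozensets of positions."""
    m = len(ranking)
    P = np.zeros((m, m))
    for p in range(1, m + 1):
        for q in range(p + 1, m + 1):
            if frozenset((p, q)) in G:
                winner, loser = ranking[p - 1], ranking[q - 1]
                P[winner, loser] += 1.0
                P[loser, loser] -= 1.0  # diagonal collects -X^{c_l > c_i}
    return P

m = 4
# Top-2 breaking for m = 4: every pair involving position 1 or 2.
top2 = {frozenset((i, j)) for i in range(1, 3)
        for j in range(1, m + 1) if j != i}
P = breaking_P([0, 1, 2, 3], top2)
```

With the identity ranking and the top-2 breaking, the pair of positions (3, 4) is not counted, so `P[2, 3]` stays zero while all comparisons involving the top two alternatives are recorded.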
[Figure 1: Example breakings for m = 6. Panels: (a) Full breaking. (b) Top-3 breaking. (c) Bottom-3 breaking. (d) Adjacent breaking. (e) Position-2 breaking.]
Intuitively, the full breaking contains all the pairwise comparisons that can be extracted from each agent's full rank information in the ranking; the top-$k$ breaking contains all pairwise comparisons that can be extracted from the rank provided by an agent when she only reveals her top $k$ alternatives and the ranking among them; the bottom-$k$ breaking can be computed when an agent only reveals her bottom $k$ alternatives and the ranking among them; and the position-$k$ breaking can be computed when the agent only reveals the alternative that is ranked at the $k$th position and the set of alternatives ranked in lower positions.
We note that $G_T^m = G_B^m = G_F$, $G_T^1 = G_P^1$, and for any $k \le m - 1$, $G_T^k \cup G_B^{m-k} = G_F$ and $G_T^k = \bigcup_{l=1}^k G_P^l$.
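These set identities are easy to verify mechanically for a small $m$. The sketch below (helper names are my own) encodes each breaking as a set of unordered position pairs and checks the identities stated above.

```python
from itertools import combinations

def full_b(m):
    return {frozenset(p) for p in combinations(range(1, m + 1), 2)}

def top_b(m, k):
    return {frozenset((i, j)) for i in range(1, k + 1)
            for j in range(1, m + 1) if j != i}

def bottom_b(m, k):
    return {frozenset(p) for p in combinations(range(m + 1 - k, m + 1), 2)}

def pos_b(m, k):
    return {frozenset((k, i)) for i in range(1, m + 1) if i != k}

m = 6
# G_T^m = G_B^m = G_F;  G_T^1 = G_P^1;
# for k <= m-1:  G_T^k ∪ G_B^{m-k} = G_F  and  G_T^k = ∪_{l<=k} G_P^l
```

The same encoding also confirms the footnoted remark that the bottom-$k$ breaking is empty for $k = 1$.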
We are now ready to present our GMM algorithm (Algorithm 1) parameterized by a breaking G.
² To simplify notation, we use $\mathrm{GMM}_G$ instead of $\mathrm{GMM}_{g_G}$.
³ We need $k \ge 2$ since $G_B^1$ is empty.
Algorithm 1: $\mathrm{GMM}_G(D)$
Input: A breaking $G$ and data $D = \{d_1, \ldots, d_n\}$ composed of full rankings.
Output: Estimation $\mathrm{GMM}_G(D)$ of parameters under PL.
1. Compute $P_G(D) = \frac{1}{n} \sum_{d \in D} P_G(d)$ as in Definition 1.
2. Compute $\mathrm{GMM}_G(D)$ according to (1).
3. Return $\mathrm{GMM}_G(D)$.
Step 2 can be further simplified according to the following theorem. Due to the space constraints,
most proofs are relegated to the supplementary materials.
Theorem 1. For any breaking $G$ and any data $D$, there exists $\vec\gamma \in \bar\Omega$ such that $P_G(D) \cdot \vec\gamma = 0$.
Theorem 1 implies that in Equation (1), $\inf_{\vec\gamma \in \Omega} g(D, \vec\gamma)^T W_n\, g(D, \vec\gamma) = 0$. Therefore, Step 2 can be replaced by: 2*. Let $\mathrm{GMM}_G = \{\vec\gamma \in \Omega : P_G(D) \cdot \vec\gamma = 0\}$.
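Step 2* is a null-space computation on the simplex. A sketch of one way to carry it out (the choice of SVD as the solver is mine; the paper's implementation may differ), checked against the $P(D)$ matrix from Example 3:

```python
import numpy as np

def solve_step2(P_D):
    """Find gamma on the simplex with P_G(D) @ gamma = 0 (step 2*), via the
    right singular vector for the smallest singular value of P_G(D)."""
    _, _, Vt = np.linalg.svd(P_D)
    v = np.abs(Vt[-1])  # null direction; take the nonnegative representative
    return v / v.sum()

P_D = np.array([[-1.0, 0.5, 0.5],
                [0.5, -0.5, 1.0],
                [0.5, 0.0, -1.5]])  # P(D) from Example 3
gamma = solve_step2(P_D)            # -> [1/3, 5/9, 1/9]
```

Here the null space is one-dimensional, so by Theorem 2 (the associated Markov chain is irreducible) the solution is unique.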
3.1 Uniqueness of Solution
It is possible that for some data $D$, $\mathrm{GMM}_G(D)$ is empty or non-unique. Our next theorem characterizes conditions for $|\mathrm{GMM}_G(D)| = 1$ and $\mathrm{GMM}_G(D) \ne \emptyset$. A Markov chain (row-stochastic matrix) $M$ is irreducible if any state can be reached from any other state. That is, $M$ only has one communicating class.
Theorem 2. Among the following three conditions, 1 and 2 are equivalent for any breaking G and
any data D. Moreover, conditions 1 and 2 are equivalent to condition 3 if and only if G is connected.
1. $(I + P_G(D)/m)^T$ is irreducible.
2. $|\mathrm{GMM}_G(D)| = 1$.
3. $\mathrm{GMM}_G(D) \ne \emptyset$.
Corollary 1. For the full breaking, adjacent breaking, and any top-$k$ breaking, the three statements in Theorem 2 are equivalent for any data $D$. For any position-$k$ (with $k \ge 2$) and any bottom-$k$ (with $k \le m - 1$) breaking, 1 and 2 are not equivalent to 3 for some data $D$.
Ford, Jr. [6] identified a necessary and sufficient condition on data D for the MLE under PL to be
unique, which is equivalent to condition 1 in Theorem 2. Therefore, we have the following corollary.
Corollary 2. For the full breaking $G_F$, $|\mathrm{GMM}_{G_F}(D)| = 1$ if and only if $|\mathrm{MLE}_{\mathrm{PL}}(D)| = 1$.
3.2 Consistency
We say a breaking $G$ is consistent (for PL) if $\mathrm{GMM}_G$ is consistent (for PL). Below, we show that some breakings defined in the last subsection are consistent. We start with general results.
Theorem 3. A breaking $G$ is consistent if and only if $E_{d|\vec\gamma^*}[g(d, \vec\gamma^*)] = 0$, which is equivalent to the following equalities:
$$\frac{\Pr(c_i \succ c_j \mid \{\mathrm{Pos}(c_i, d), \mathrm{Pos}(c_j, d)\} \in G)}{\Pr(c_j \succ c_i \mid \{\mathrm{Pos}(c_i, d), \mathrm{Pos}(c_j, d)\} \in G)} = \frac{\gamma_i^*}{\gamma_j^*} \quad \text{for all } i \ne j. \quad (2)$$
Theorem 4. Let $G_1, G_2$ be a pair of consistent breakings.
1. If $G_1 \cap G_2 = \emptyset$, then $G_1 \cup G_2$ is also consistent.
2. If $G_1 \subsetneq G_2$ and $G_2 \setminus G_1 \ne \emptyset$, then $G_2 \setminus G_1$ is also consistent.
Continuing, we show that position-k breakings are consistent, then use this and Theorem 4 as building blocks to prove additional consistency results.
Proposition 1. For any $k \ge 1$, the position-$k$ breaking $G_P^k$ is consistent.
We recall that $G_T^k = \bigcup_{l=1}^k G_P^l$, $G_F = G_T^m$, and $G_B^{m-k} = G_F \setminus G_T^k$. Therefore, we have the following corollary.
Corollary 3. The full breaking $G_F$ is consistent; for any $k$, $G_T^k$ is consistent; and for any $k \ge 2$, $G_B^k$ is consistent.
Theorem 5. The adjacent breaking $G_A$ is consistent if and only if all components of $\vec\gamma^*$ are the same.
Lastly, the technique developed in this section can also provide an independent proof that the RC
algorithm is consistent for BTL, which is implied by the main theorem in [16]:
Corollary 4. [16] The RC algorithm is consistent for BTL.
RC is equivalent to $\mathrm{GMM}_{g_{\mathrm{RC}}}$, which satisfies Condition 1. By checking similar conditions as we did in the proof of Theorem 3, we can prove that $\mathrm{GMM}_{g_{\mathrm{RC}}}$ is consistent for BTL.
The results in this section suggest that if we want to learn the parameters of PL, we should use
consistent breakings, including full breaking, top-k breakings, bottom-k breakings, and position-k
breakings. The adjacent breaking seems quite natural, but it is not consistent, thus will not provide a
good estimate to the parameters of PL. This will also be verified by experimental results in Section 4.
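For small $m$, the consistency condition of Theorem 3 can be checked by exact enumeration rather than sampling (all function names below are mine): the moment condition holds exactly for the full breaking but fails for the adjacent breaking at a non-uniform $\vec\gamma^*$, illustrating Theorem 5.

```python
import itertools
import numpy as np

def pl_prob(ranking, gamma):
    """Plackett-Luce probability of one full ranking."""
    p, rest = 1.0, list(ranking)
    for c in ranking[:-1]:
        p *= gamma[c] / sum(gamma[r] for r in rest)
        rest.remove(c)
    return p

def expected_P(gamma, edges):
    """E_{d|gamma}[P_G(d)] by enumerating all m! rankings (m is tiny here)."""
    m = len(gamma)
    E = np.zeros((m, m))
    for d in itertools.permutations(range(m)):
        w = pl_prob(d, gamma)
        for p, q in itertools.combinations(range(1, m + 1), 2):
            if frozenset((p, q)) in edges:
                i, j = d[p - 1], d[q - 1]
                E[i, j] += w
                E[j, j] -= w
    return E

gamma = np.array([0.5, 0.3, 0.2])
full = {frozenset((1, 2)), frozenset((1, 3)), frozenset((2, 3))}
adjacent = {frozenset((1, 2)), frozenset((2, 3))}
g_full = expected_P(gamma, full) @ gamma      # = 0 (consistent)
g_adj = expected_P(gamma, adjacent) @ gamma   # != 0 (inconsistent)
```

Because the computation is an exact expectation, the nonzero residual for the adjacent breaking is a genuine violation of Condition 1, not sampling noise.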
3.3 Complexity
We first characterize the computational complexity of our GMMs.
Proposition 2. The computational complexity of the MM algorithm for PL [8] and our GMMs are listed below.
- MM: $O(m^3 n)$ per iteration.
- GMM (Algorithm 1) with full breaking: $O(m^2 n + m^{2.376})$, with $O(m^2 n)$ for breaking and $O(m^{2.376})$ for computing step 2* in Algorithm 1 (matrix inversion).
- GMM with adjacent breaking: $O(mn + m^{2.376})$, with $O(mn)$ for breaking and $O(m^{2.376})$ for computing step 2* in Algorithm 1.
- GMM with top-$k$ breaking: $O((m + k)kn + m^{2.376})$, with $O((m + k)kn)$ for breaking and $O(m^{2.376})$ for computing step 2* in Algorithm 1.
It follows that the asymptotic complexity of the GMM algorithms is better than for the classical MM algorithm. In particular, the GMMs with adjacent breaking and with top-$k$ breaking for constant $k$ are the fastest. However, we recall that the GMM with adjacent breaking is not consistent, while the other algorithms are consistent. We would expect that as the data size grows, the GMM with adjacent breaking will provide a relatively poor estimate of $\vec\gamma^*$ compared to the other methods.
Moreover, in the statistical setting, consistency requires regimes with $m = o(n)$, and large $n$ leads to major computational bottlenecks. All the above algorithms (MM and the different GMMs) have complexity linear in $n$; hence, the coefficient of $n$ is essential in determining the tradeoffs between these methods. As can be seen above, the coefficient of $n$ is linear in $m$ for top-$k$ breaking and quadratic in $m$ for full breaking, while it is cubic in $m$ for the MM algorithm. This difference is illustrated through experiments in Figure 5.
Among GMMs with top-$k$ breakings, the larger $k$ is, the more information we use in a single ranking, which comes at a higher computational cost. Therefore, it is natural to conjecture that for the same data, $\mathrm{GMM}_{G_T^k}$ with large $k$ converges faster than $\mathrm{GMM}_{G_T^k}$ with small $k$. In other words, we expect to see the following time-efficiency tradeoff among $\mathrm{GMM}_{G_T^k}$ for different $k$, which is verified by the experimental results in the next section.
Conjecture 1 (time-efficiency tradeoff). For any $k_1 < k_2$, $\mathrm{GMM}_{G_T^{k_1}}$ runs faster, while $\mathrm{GMM}_{G_T^{k_2}}$ provides a better estimate of the ground truth.
4 Experiments
The running time and statistical efficiency of MM and our GMMs are examined for both synthetic data and a real-world sushi dataset [9]. The synthetic datasets are generated as follows.
- Generating the ground truth: for $m \le 300$, the ground truth $\vec\gamma^*$ is generated from the Dirichlet distribution $\mathrm{Dir}(\vec 1)$.
- Generating data: given a ground truth $\vec\gamma^*$, we generate up to 1000 full rankings from PL.
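The data-generation procedure can be sketched as follows. The names, the use of Python instead of the paper's R, and the construction of Dirichlet$(1, \ldots, 1)$ draws via normalized unit-rate Gamma variates are my own choices for illustration.

```python
import random

def sample_pl_ranking(gamma, rng):
    """Draw one full ranking from PL: repeatedly pick the next alternative
    with probability proportional to its weight among those remaining."""
    remaining = list(range(len(gamma)))
    ranking = []
    while remaining:
        pick = rng.choices(remaining,
                           weights=[gamma[i] for i in remaining], k=1)[0]
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

rng = random.Random(0)
m, n = 5, 10
draws = [rng.gammavariate(1.0, 1.0) for _ in range(m)]
gamma_star = [x / sum(draws) for x in draws]  # ground truth ~ Dir(1, ..., 1)
data = [sample_pl_ranking(gamma_star, rng) for _ in range(n)]
```

Each draw is a permutation of the $m$ alternatives, matching the sequential-choice definition of PL given in the Preliminaries.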
We implemented MM [8] for 1, 3, and 10 iterations, as well as GMMs with full breaking, adjacent breaking, and top-$k$ breaking for all $k \le m - 1$.
6
We focus on the following representative criteria. Let $\vec\gamma$ denote the output of the algorithm.
- Mean Squared Error: $\mathrm{MSE} = E(\|\vec\gamma - \vec\gamma^*\|_2^2)$.
- Kendall Rank Correlation Coefficient: Let $K(\vec\gamma, \vec\gamma^*)$ denote the Kendall tau distance between the ranking over components in $\vec\gamma$ and the ranking over components in $\vec\gamma^*$. The Kendall correlation is $1 - 2\,\frac{K(\vec\gamma, \vec\gamma^*)}{m(m-1)/2}$.
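Both criteria are straightforward to implement. The sketch below (names are mine) computes the squared error for a single trial — the MSE reported in the paper averages this quantity over repeated trials — and the normalized Kendall correlation between the rankings induced by two parameter vectors.

```python
from itertools import combinations

def squared_error(est, truth):
    """Single-trial squared error ||est - truth||_2^2."""
    return sum((e - t) ** 2 for e, t in zip(est, truth))

def kendall_correlation(est, truth):
    """1 - 2*K/(m(m-1)/2), where K counts pairs of components ordered
    differently by the two vectors (the Kendall tau distance)."""
    m = len(est)
    K = sum(1 for i, j in combinations(range(m), 2)
            if (est[i] - est[j]) * (truth[i] - truth[j]) < 0)
    return 1.0 - 2.0 * K / (m * (m - 1) / 2)
```

Identical vectors give a correlation of 1, and a fully reversed ordering gives −1, matching the normalization in the definition above.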
All experiments are run on a 1.86 GHz Intel Core 2 Duo MacBook Air. The multiple repetitions for the statistical efficiency experiments in Figure 3 and the experiments for sushi data in Figure 5 have been done using the Odyssey cluster. All the code is written in R and is available as part of the package "StatRank".
4.1 Synthetic Data
In this subsection we focus on comparisons among MM, GMM-F (full breaking), and GMM-A
(adjacent breaking). The running time is presented in Figure 2. We observe that GMM-A (adjacent
breaking) is the fastest and MM is the slowest, even for one iteration.
The statistical efficiency is shown in Figure 3. We observe that in regard to the MSE criterion,
GMM-F (full breaking) performs as well as MM for 10 iterations (which converges), and that these
are both better than GMM-A (adjacent breaking). For the Kendall correlation criterion, GMM-F (full
breaking) has the best performance and GMM-A (adjacent breaking) has the worst performance.
Statistics are calculated over 1840 trials. In all cases except one, GMM-F (full breaking) outperforms
MM which outperforms GMM-A (adjacent breaking) with statistical significance at 95% confidence.
The only exception is between GMM-F (full breaking) and MM for Kendall correlation at n = 1000.
[Figure 2: The running time of MM (one iteration), GMM-F (full breaking), and GMM-A (adjacent breaking), plotted in log-scale. On the left, m is fixed at 10. On the right, n is fixed at 10. 95% confidence intervals are too small to be seen. Times are calculated over 20 datasets.]
[Figure 3: The MSE and Kendall correlation of MM (10 iterations), GMM-F (full breaking), and GMM-A (adjacent breaking). Error bars are 95% confidence intervals.]
4.2 Time-Efficiency Tradeoff among Top-k Breakings
Results on the running time and statistical efficiency for top-$k$ breakings are shown in Figure 4. We recall that top-1 is equivalent to position-1, and top-$(m-1)$ is equivalent to the full breaking.
For n = 100, MSE comparisons between successive top-k breakings are statistically significant at
95% level from (top-1, top-2) to (top-6, top-7). The comparisons in running time are all significant at
95% confidence level. On average, we observe that top-k breakings with smaller k run faster, while
top-k breakings with larger k have higher statistical efficiency in both MSE and Kendall correlation.
This justifies Conjecture 1.
7
4.3 Experiments for Real Data
In the sushi dataset [9], there are 10 kinds of sushi (m = 10) and the amount of data n is varied,
randomly sampling with replacement. We set the ground truth to be the output of MM applied to
all 5000 data points. For the running time, we observe the same as for the synthetic data: GMM
(adjacent breaking) runs faster than GMM (full breaking), which runs faster than MM (The results
on running time can be found in supplementary material B).
Comparisons for MSE and Kendall correlation are shown in Figure 5. In both figures, 95% confidence intervals are plotted but too small to be seen. Statistics are calculated over 1970 trials.
[Figure 4: Comparison of GMM with top-k breakings as k is varied, showing MSE, Kendall correlation, and running time for n = 100. The x-axis represents k in the top-k breaking. Error bars are 95% confidence intervals and m = 10, n = 100.]
[Figure 5: The MSE and Kendall correlation criteria and computation time for MM (10 iterations), GMM-F (full breaking), and GMM-A (adjacent breaking) on sushi data.]
For MSE and Kendall correlation, we observe that MM converges fastest, followed by GMM (full
breaking), which outperforms GMM (adjacent breaking) which does not converge. Differences between performances are all statistically significant with 95% confidence (with exception of Kendall
correlation and both GMM methods for n = 200, where p = 0.07). This is different from the comparisons for synthetic data (Figure 3). We believe that the main reason is that PL does not fit the sushi data well, a fact recently observed by Azari et al. [1]. Therefore, we cannot expect GMM to converge to the output of MM on the sushi dataset, since the consistency results (Corollary 3) assume that the data is generated under PL.
5 Future Work
We plan to work on the connection between consistent breakings and preference elicitation. For example, even though the theory in this paper is developed for full rankings, the notions of top-$k$ and bottom-$k$ breaking implicitly allow some partial-order settings. More specifically, the top-$k$ breaking can be computed from partial orders that include full rankings of the top $k$ alternatives.
Acknowledgments
This work is supported in part by NSF Grants No. CCF-0915016 and No. AF-1301976. Lirong Xia acknowledges NSF under Grant No. 1136996 to the Computing Research Association for the CIFellows project and an RPI startup fund. We thank Joseph K. Blitzstein, Edoardo M. Airoldi, Ryan P. Adams, Devavrat Shah, Yiling Chen, Gábor Csárdi, and members of the Harvard EconCS group for their comments on different aspects of this work. We thank the anonymous NIPS-13 reviewers for helpful comments and suggestions.
References
[1] Hossein Azari Soufiani, David C. Parkes, and Lirong Xia. Random utility theory for social choice. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), pages 126–134, Lake Tahoe, NV, USA, 2012.
[2] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
[3] Marquis de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Paris: L'Imprimerie Royale, 1785.
[4] Cynthia Dwork, Ravi Kumar, Moni Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings of the 10th World Wide Web Conference, pages 613–622, 2001.
[5] Patricia Everaere, Sébastien Konieczny, and Pierre Marquis. The strategy-proofness landscape of merging. Journal of Artificial Intelligence Research, 28:49–105, 2007.
[6] Lester R. Ford, Jr. Solution of a ranking problem from binary comparisons. The American Mathematical Monthly, 64(8):28–33, 1957.
[7] Lars Peter Hansen. Large sample properties of generalized method of moments estimators. Econometrica, 50(4):1029–1054, 1982.
[8] David R. Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32:384–406, 2004.
[9] Toshihiro Kamishima. Nantonac collaborative filtering: Recommendation based on order responses. In Proceedings of the Ninth International Conference on Knowledge Discovery and Data Mining (KDD), pages 583–588, Washington, DC, USA, 2003.
[10] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008.
[11] Tie-Yan Liu. Learning to Rank for Information Retrieval. Springer, 2011.
[12] Tyler Lu and Craig Boutilier. Learning Mallows models with pairwise preferences. In Proceedings of the Twenty-Eighth International Conference on Machine Learning (ICML 2011), pages 145–152, Bellevue, WA, USA, 2011.
[13] Robert Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[14] Colin L. Mallows. Non-null ranking model. Biometrika, 44(1/2):114–130, 1957.
[15] Andrew Mao, Ariel D. Procaccia, and Yiling Chen. Better human computation through principled voting. In Proceedings of the National Conference on Artificial Intelligence (AAAI), Bellevue, WA, USA, 2013.
[16] Sahand Negahban, Sewoong Oh, and Devavrat Shah. Iterative ranking from pair-wise comparisons. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), pages 2483–2491, Lake Tahoe, NV, USA, 2012.
[17] Robin L. Plackett. The analysis of permutations. Journal of the Royal Statistical Society, Series C (Applied Statistics), 24(2):193–202, 1975.
[18] Louis Leon Thurstone. A law of comparative judgement. Psychological Review, 34(4):273–286, 1927.
Generalized Random Utility Models with Multiple Types
Hossein Azari Soufiani, SEAS, Harvard University ([email protected])
Hansheng Diao, Mathematics Department, Harvard University ([email protected])
Zhenyu Lai, Economics Department, Harvard University ([email protected])
David C. Parkes, SEAS, Harvard University ([email protected])
Abstract
We propose a model for demand estimation in multi-agent, differentiated product settings and present an estimation algorithm that uses reversible jump MCMC techniques to classify agents' types. Our model extends the popular setup in Berry, Levinsohn and Pakes (1995) to allow for the data-driven classification of agents' types using agent-level data. We focus on applications involving data on agents' ranking over alternatives, and present theoretical conditions that establish the identifiability of the model and uni-modality of the likelihood/posterior. Results on both real and simulated data provide support for the scalability of our approach.
1 Introduction
Random utility models (RUM), which presume agent utility to be composed of a deterministic component and a stochastic unobserved error component, are frequently used to model choices by individuals over alternatives. In this paper, we focus on applications where the data is rankings by individuals over alternatives. Examples from economics include the popular random coefficients logit model [7], where the data may involve a (partial) consumer ranking of products [9]. In a RUM, each agent receives an intrinsic utility that is common across all agents for a given choice of alternative, a pairwise-specific utility that varies with the interaction between agent characteristics and the characteristics of the agent's chosen alternative, as well as an agent-specific taste shock (noise) for his chosen alternative. These ingredients are used to construct a posterior/likelihood function of specific data moments, such as the fraction of agents of each type that choose each alternative.
To estimate preferences across heterogeneous agents, one approach, as allowed by prior work [20, 24], is to assume a mixture of agents with a finite number of types. We build upon this work by developing an algorithm to endogenously learn the classification of agent types within this mixture. Empirical researchers are increasingly being presented with rich data on the choices made by individuals, and asked to classify these agents into different types [28, 29] and to estimate the preferences of each type [10, 23]. Examples of individual-level data used in economics include household purchases from supermarket-scanner data [1, 21], and patients' hospital or treatment choices from healthcare data [22].
The partitioning of agents into latent, discrete sets (or "types") allows for the study of the underlying distribution of preferences across a population of heterogeneous agents. For example, preferences may be correlated with an agent characteristic, such as income, and the true classification of each agent's type, such as his income bracket, may be unobserved. By using a model of demand to estimate the elasticity in behavioral response of each type of agent and by aggregating these responses over the different types of agents, it is possible to simulate the impact of a social or public policy [8], or simulate the counterfactual outcome of changing the options available to agents [19].
1.1 Our Contributions
This paper focuses on estimating generalized random utility models (GRUMs¹) when the observed data is partial orders of agents' rankings over alternatives and when latent types are present.
We build on recent work [3, 4] on estimating GRUMs by allowing for an interaction between agent characteristics and the characteristics of the agent's chosen alternative. The interaction term helps us to avoid unrealistic substitution patterns due to the independence of irrelevant alternatives [26] by allowing agent utilities to be correlated across alternatives with similar characteristics. For example, this prevents a situation where removing the top choices of both a rich household and a poor household leads them to become equally likely to substitute to the same alternative choice. Our model also allows the marginal utilities associated with the characteristics of alternatives to vary across agent types.
To classify agents' types and estimate the parameters associated with each type, we propose an algorithm involving a novel application of reversible jump Markov Chain Monte Carlo (RJMCMC) techniques. RJMCMC can be used for model selection and learning a posterior on the number of types in a mixture model [31]. Here, we use RJMCMC to cluster agents into different types, where each type exhibits demand for alternatives based on different preferences; i.e., different interaction terms between agent and alternative characteristics.

[Figure 1: A GRUM with multiple types of agents. The figure shows alternatives (sake, ebi, and tako sushi) with characteristics (heaviness, sale volume, price), agents of four types with characteristics (gender, customer loyalty, age), the alternatives' intrinsic effects, and expected utilities δ_ij = δ_j + x_i W^s (z_j)^T for agent types s = 1, ..., 4.]
We apply the approach to a real-world dataset involving consumers' preference rankings and also conduct experiments on synthetic data to perform coverage analysis of RJMCMC. The results show that our method is scalable, and that the clustering of types provides a better fit to real world data.
The proposed learning algorithm is based on Bayesian methods to find posteriors on the parameters. This differentiates us from previous estimation approaches in econometrics, which rely on techniques based on the generalized method of moments.²
The main theoretical contribution establishes identifiability of mixture models over data consisting
of partial orders. Previous theoretical results have established identifiability for data consisting of
vectors of real numbers [2, 18], but not for data consisting of partial orders. We establish conditions
under which the GRUM likelihood function is uni-modal for the case of observable types. We do
not provide results on the log concavity of the general likelihood problem with unknown types and
leave it for future studies.
1.2 Related work
Prior work in econometrics has focused on developing models that use data aggregated across types of agents, such as at the level of a geographic market, and that allow heterogeneity by using random coefficients on either agents' preference parameters [7, 9] or on a set of dummy variables that define types of agents [6, 27], or by imposing additional structure on the covariance matrix of idiosyncratic taste shocks [16]. In practice, this approach typically relies on restrictive functional assumptions about the distribution of consumer taste shocks that enter the RUM in order to reduce computational burden. For example, the logit model [26] assumes i.i.d. draws from a Type I extreme value distribution. This may lead to biased estimates, in particular when the number of alternatives grows large [5].

¹ Defined in [4] as a RUM with a generalized linear model for the regression of the mean parameters on the interaction of characteristics data, as in Figure 1.
² There are alternative methods to RJMCMC, such as the saturation method [13]. However, the memory required to keep track of former sampled memberships in the saturation method quickly becomes infeasible given the combinatorial nature of our problem.
Previous work on clustering ranking data for variations of the Plackett-Luce (PL) model [28, 29] has been restricted to settings without agent and alternative characteristics. Moreover, Gormley et al. [28] and Chu et al. [14] performed clustering for RUMs with normal distributions, but this was limited to pairwise comparisons. Inference of GRUMs for partial ranks involves the computational hardness addressed in [3]. In mixture models, assuming an arbitrary number of types can lead to biased results, and reduces the statistical efficiency of the estimators [15].
To the best of our knowledge, we are the first to study the identifiability and inference of GRUMs with multiple types. Inference for GRUMs has been generalized in [4]; however, Azari et al. [4] do not consider the existence of multiple types. Our method applies to data involving individual-level observations, and partial orders with more than two alternatives. The inference method establishes a posterior on the number of types, resolving the common issue of how the researcher should select the number of types.
2 Model
Suppose we have N agents and M alternatives {c_1, ..., c_M}, and there are S types (subgroups) of agents; s(n) is agent n's type.
Agent characteristics are observed and defined as an N × K matrix X, and alternative characteristics are observed and defined as an L × M matrix Z, where K and L are the number of agent and alternative characteristics respectively.
Let u_nm be agent n's perceived utility for alternative m, and let W^{s(n)} be a K × L real matrix that models the linear relation between the attributes of alternatives and the attributes of agents. We have

    u_nm = δ_m + x_n W^{s(n)} (z_m)^T + ε_nm,    (1)

where x_n is the nth row of the matrix X and z_m is the mth column of the matrix Z. In words, agent n's utility for alternative m consists of the following three parts:
1. δ_m: The intrinsic utility of alternative m, which is the same across all agents;
2. x_n W^{s(n)} (z_m)^T: The agent-specific utility, which is unique to all agents of type s(n), and where W^{s(n)} has at least one nonzero element;
3. ε_nm: The random noise (agent-specific taste shock), which is generated independently across agents and alternatives.
The number of parameters for each type is P = KL + M.
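As a concrete illustration, the generative process in equation (1) can be simulated directly. The sketch below is an assumption-laden toy version (the dimensions, the random seed, and the unit-variance normal noise are illustrative choices, not values from the paper): it draws characteristics, assigns each agent a latent type, and reads off each agent's ranking from its perceived utilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: N agents, M alternatives, S types,
# K agent characteristics, L alternative characteristics.
N, M, S, K, L = 6, 4, 2, 3, 2
X = rng.normal(size=(N, K))        # agent characteristics (N x K)
Z = rng.normal(size=(L, M))        # alternative characteristics (L x M)
W = rng.normal(size=(S, K, L))     # one K x L interaction matrix per type
delta = rng.normal(size=M)         # intrinsic utilities, shared by all agents
s = rng.integers(S, size=N)        # latent type s(n) of each agent

# Equation (1): u_nm = delta_m + x_n W^{s(n)} z_m + eps_nm
U = delta + np.einsum('nk,nkl,lm->nm', X, W[s], Z) + rng.normal(size=(N, M))

# Each agent ranks the alternatives by perceived utility, best first.
sigma = np.argsort(-U, axis=1)
```

Each row of `sigma` is one agent's full ranking over the M alternatives, which is exactly the partial-order data the estimation procedure consumes.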
See Figure 2 for an illustration of the model. In order to write the model as a linear regression, we define a matrix A^(n) of size M × P such that A^(n)_{m,KL+m} = 1 for 1 ≤ m ≤ M, A^(n)_{m,KL+m′} = 0 for m′ ≠ m, and A^(n)_{m,(k−1)L+l} = x_n(k) z_m(l) for 1 ≤ l ≤ L and 1 ≤ k ≤ K. We also need to shuffle the parameters for all types into a P × S matrix Θ, such that Θ_{KL+m,s} = δ_m and Θ_{(k−1)L+l,s} = W^s_{kl} for 1 ≤ k ≤ K and 1 ≤ l ≤ L. We adopt an S × 1 matrix B^(n) to indicate the type of agent n, with B^(n)_{s(n),1} = 1 and B^(n)_{s,1} = 0 for all s ≠ s(n). We also define an M × 1 matrix U^(n) as U^(n)_{m,1} = u_nm. We can now rewrite (1) as:

    U^(n) = A^(n) Θ B^(n) + ε.    (2)

Suppose that an agent has type s with probability π_s. Given this, the random utility model can be written as Pr(U^(n)|X^(n), Z, Θ, π) = Σ_{s=1}^S π_s Pr(U^(n)|X^(n), Z, Θ_s), where Θ_s is the sth column of the matrix Θ. An agent ranks the alternatives according to her perceived utilities for the alternatives. Define rank order σ^n as a permutation (σ^n(1), ..., σ^n(M)) of {1, ..., M}. σ^n represents the full ranking [c_{σ^n(1)} ≻_n c_{σ^n(2)} ≻_n ··· ≻_n c_{σ^n(M)}] of the alternatives {c_1, ..., c_M}. That is, for agent n, c_{m1} ≻_n c_{m2} if and only if u_{nm1} > u_{nm2} (in this model, situations with tied perceived utilities have zero probability measure).
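The flattening of equation (1) into the regression form of equation (2) is easy to sanity-check numerically. In this sketch (all dimensions and the particular column ordering inside A^(n) are illustrative assumptions), the mean utility computed directly from equation (1) coincides with the matrix product A^(n) Θ B^(n):

```python
import numpy as np

rng = np.random.default_rng(1)
M, S, K, L = 4, 2, 3, 2
P = K * L + M                       # parameters per type, as in the text

x = rng.normal(size=K)              # characteristics of one agent
Z = rng.normal(size=(L, M))         # alternative characteristics
W = rng.normal(size=(S, K, L))      # per-type interaction matrices
delta = rng.normal(size=M)          # intrinsic utilities
s_n = 1                             # this agent's type

# Mean utility directly from equation (1), without the noise term.
mean_direct = delta + x @ W[s_n] @ Z

# Flattened form of equation (2): U = A Theta B (+ noise).
A = np.zeros((M, P))
for m in range(M):
    for k in range(K):
        for l in range(L):
            A[m, k * L + l] = x[k] * Z[l, m]   # interaction entries x_n(k) z_m(l)
    A[m, K * L + m] = 1.0                      # picks out delta_m
Theta = np.zeros((P, S))
for s in range(S):
    Theta[:K * L, s] = W[s].reshape(-1)        # vectorized W^s
    Theta[K * L:, s] = delta                   # intrinsic utilities, shared
B = np.zeros(S)
B[s_n] = 1.0                                   # type-indicator vector B^(n)

mean_flat = A @ Theta @ B
```

The two vectors agree entry by entry, confirming that (2) is just (1) rewritten as a linear regression in the stacked parameter matrix Θ.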
The model for observed data σ^(n) can be written as:

    Pr(σ^(n)|X^(n), Z, Θ, π) = ∫_{σ^(n) = order(U^(n))} Pr(U^(n)|X^(n), Z, Θ, π) dU^(n) = Σ_{s=1}^S π_s Pr(σ^(n)|X^(n), Z, Θ_s)

Note that X^(n) and Z are observed characteristics, while Θ and π are unknown parameters. σ = order(U) is the ranking implied by U, and σ(i) is the ith largest utility in U. D = {σ^1, ..., σ^N} denotes the collection of all data for different agents. We have that

    Pr(D|X, Z, Θ, π) = Π_{n=1}^N Pr(σ^(n)|X^(n), Z, Θ, π)
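For small M, a rank probability Pr(σ^(n)|·) of this kind can be approximated by simulation: draw many utility vectors, apply order(·), and count. A minimal sketch with standard normal noise and an illustrative mean-utility vector (not values from the paper):

```python
from itertools import permutations
import numpy as np

rng = np.random.default_rng(2)

def rank_probs_mc(delta_n, draws=50000):
    """Estimate Pr(sigma | delta_n) for every ranking sigma, where
    u = delta_n + standard normal noise and sigma = order(u), best first."""
    U = delta_n + rng.normal(size=(draws, len(delta_n)))
    orders = np.argsort(-U, axis=1)
    return {sig: float((orders == sig).all(axis=1).mean())
            for sig in permutations(range(len(delta_n)))}

probs = rank_probs_mc(np.array([1.0, 0.0, -1.0]))
```

Because every draw falls into exactly one ranking, the estimates sum to one, and the most probable ranking follows the ordering of the mean utilities.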
3 Strict Log-concavity and Identifiability
In this section, we establish conditions for identifiability of the types and parameters for the model. Identifiability is a necessary property in order for researchers to be able to infer economically relevant parameters from an econometric model. Establishing identifiability in a model with multiple types and ranking data requires a different approach from classical identifiability results for mixture models [e.g., 2, 18].
Moreover, we establish conditions for uni-modality of the likelihood for the parameters Θ and π when the types are observed. Although our main focus is on data with unobservable types, establishing the conditions for uni-modality conditioned on known types remains an essential step because of the sampling and optimization aspects of RJMCMC. We sample from the parameters conditional on the algorithm's specification of types.
The uni-modality result establishes that the sampling approach is exploring a uni-modal distribution conditional on its specified types. Despite adopting a Bayesian point of view in presenting the model, we adopt a uniform prior on the parameter set, and only impose nontrivial priors on the number of types in order to obtain some regularization. Given this, we present the theory with regards to the likelihood function from the data rather than the posterior on parameters.
[Figure 2: Graphical representation of the multiple type GRUM generative process, linking the parameters W, δ, and π, each agent's type indicator B^(n), the observed characteristics X^(n) and Z, the matrix A^(n), the utilities u^(n), and the observed ranking σ^(n), with a plate over the N agents.]

3.1 Strict Log-concavity of the Likelihood Function
For agent n, we define a set G^n of functions g^n whose positivity is equivalent to giving an order σ^n. More precisely, we define

    g^n_m(δ, ε) = [δ_{nσ^n(m)} + ε_{nσ^n(m)}] − [δ_{nσ^n(m+1)} + ε_{nσ^n(m+1)}]

for m = 1, ..., M−1, where δ_nj = δ_j + Σ_{k,l} x_n(k) W^{s(n)}_{kl} z_j(l) for 1 ≤ j ≤ M. Here, δ is a vector of KL + M variables consisting of all the δ_j's and W_{kl}'s. We have L(δ, σ^n) = L(δ, G^n) = Pr(g^n_1(δ, ε) ≥ 0, ..., g^n_{M−1}(δ, ε) ≥ 0). This is because g^n_m(δ, ε) ≥ 0 is equivalent to saying alternative σ^n(m) is preferred to alternative σ^n(m+1) in the RUM sense.
Then using the result in [3] and [30], L(δ) = L(δ, σ) is logarithmically concave in the sense that

    L(αδ + (1−α)δ′) ≥ L(δ)^α L(δ′)^{1−α}

for any 0 < α < 1 and any two vectors δ, δ′ ∈ R^{LK+M}.
The detailed statement and proof of this result are contained in the Appendix. Let's consider all n agents together. We study the function l(Θ, D) = Σ_{n=1}^N log Pr(σ^n | δ^{s(n)}). By log-concavity of L(δ, σ) and using the fact that a sum of concave functions is concave, we know that l(Θ, D) is concave in Θ, viewed as a vector in R^{SKL+M}. To show uni-modality, we need to prove that this concave function has a unique maximum. Namely, we need to be able to establish the conditions for when the equality holds. If our data is subject to some mild condition, which implies boundedness of the parameter set that maximizes l(Θ, D), Theorem 1 below tells us when the equality holds. This condition has been explained in [3] as condition (1).
Before stating the main result, we define the following auxiliary (M−1)N′ × (SKL+M−1) matrix Ã = Ã_{N′} (here, let N′ ≤ N be a positive number that we will specify later) such that Ã_{(M−1)(n−1)+m, (s−1)KL+(K−1)l+k} is equal to x_n(k)(z_m(l) − z_M(l)) if s = s(n) and is equal to 0 if s ≠ s(n), for all 1 ≤ n ≤ N′, 1 ≤ m ≤ M−1, 1 ≤ s ≤ S, 1 ≤ k ≤ K, and 1 ≤ l ≤ L. Also, Ã_{(M−1)(n−1)+m, SKL+m′} is equal to 1 if m = m′ and is equal to 0 if m ≠ m′, for all 1 ≤ m, m′ ≤ M−1 and 1 ≤ n ≤ N′.

Theorem 1. Suppose there is an N′ ≤ N such that rank Ã_{N′} = SKL + M − 1. Then l(Θ) = l(Θ, D) is strictly concave up to δ-shift, in the sense that

    l(αΘ + (1−α)Θ′) ≥ αl(Θ) + (1−α)l(Θ′),    (3)

for any 0 < α < 1 and any Θ, Θ′ ∈ R^{SKL+M}, and the equality holds if and only if there exists c ∈ R such that:

    δ_m = δ′_m + c for all 1 ≤ m ≤ M
    W^s_{kl} = W′^s_{kl} for all s, k, l
The proof of this theorem is in the appendix.
Remark 1. We remark that the strictness "up to δ-shift" is natural. A δ-shift results in a shift in the intrinsic utilities of all the products, which does not change the utility difference between products. So such a shift does not affect our outcome. In practice, we may set one of the δ's to be 0 and then our algorithm will converge to a single maximum.
Remark 2. It is easy to see that N′ must be larger than or equal to 1 + SKL/(M−1). The reason we introduce N′ is to avoid cumbersome calculations involving N.
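The rank condition of Theorem 1 is straightforward to check numerically for a given dataset. The sketch below uses random illustrative data; the exact ordering of the SKL interaction columns is an implementation choice that does not affect the rank, and the deterministic type assignment just guarantees that every type has some agents.

```python
import numpy as np

rng = np.random.default_rng(3)

def build_A_tilde(X, Z, s, S):
    """Auxiliary matrix of Theorem 1: one row per (agent n, position m),
    S*K*L interaction columns followed by M-1 intrinsic-utility columns."""
    N, K = X.shape
    L, M = Z.shape
    A = np.zeros(((M - 1) * N, S * K * L + M - 1))
    for n in range(N):
        for m in range(M - 1):
            row = (M - 1) * n + m
            for k in range(K):
                for l in range(L):
                    col = s[n] * K * L + l * K + k   # nonzero only for type s(n)
                    A[row, col] = X[n, k] * (Z[l, m] - Z[l, M - 1])
            A[row, S * K * L + m] = 1.0              # intrinsic-utility block
    return A

N, M, S, K, L = 12, 4, 2, 2, 2
X = rng.normal(size=(N, K))
Z = rng.normal(size=(L, M))
s = np.arange(N) % S                  # ensure every type has agents
A_tilde = build_A_tilde(X, Z, s, S)
full_rank = np.linalg.matrix_rank(A_tilde) == S * K * L + M - 1
```

With continuously distributed characteristics and enough agents of every type, the matrix generically attains the full rank SKL + M − 1, so the strict-concavity (up to δ-shift) condition holds.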
3.2 Identifiability of the Model
In this section, we show that, for the case of unobserved types, our model is identifiable for a certain class of cdfs for the noise in random utility models. Let's first specify this class of "nice" cdfs:
Definition 1. Let φ(x) be a smooth pdf defined on R or [0, ∞), and let Φ(x) be the associated cdf. For each i ≥ 1, we write φ^(i)(x) for the i-th derivative of φ(x). Let g_i(x) = φ^(i+1)(x)/φ^(i)(x). The function φ is called nice if it satisfies one of the following two mutually exclusive conditions:
(a) φ(x) is defined on R. For any x_1, x_2 ∈ R, the sequence g_i(x_1)/g_i(x_2) converges to some value in R (as i → ∞) only if either x_1 = x_2; or x_1 = −x_2 and g_i(x_1)/g_i(x_2) → −1 as i → ∞.
(b) φ(x) is defined on [0, ∞). For any x_1, x_2 ≥ 0, the ratio φ^(i)(x_1)/φ^(i)(x_2) is independent of i for i sufficiently large. Moreover, we require that φ(x_1) = φ(x_2) if and only if x_1 = x_2.
This class of nice functions contains normal distributions and exponential distributions. A proof of this fact is included in the appendix.
Identifiability is formalized as follows: Let C = {{π_s}_{s=1}^S | S ∈ Z_{>0}, π_s ∈ R_{>0}, Σ_{s=1}^S π_s = 1}. Suppose, for two sequences {π_s}_{s=1}^S and {π′_s}_{s=1}^{S′}, we have:

    Σ_{s=1}^S π_s Pr(σ|X^(n), Z, Θ) = Σ_{s=1}^{S′} π′_s Pr(σ|X^(n), Z, Θ′)    (4)

for all possible orders σ of M products, and for all agents n. Then, we must have S = S′ and (up to a permutation of indices {1, ..., S}) π_s = π′_s and Θ = Θ′ (up to δ-shift).
For now, let's fix the number of agent characteristics, K. One observation is that the number x_n(k), for any characteristic k, reflects certain characteristics of agent n. Varying the agent n, this amount x_n(k) lies in a bounded interval in R. Suppose the collection of data D is sufficiently large. Based on this, assuming that N can be arbitrarily large, we can assume that the x_n(k)'s form a dense subset in a closed interval I_k ⊂ R. Hence, (4) should hold for any X ∈ I_k, leading to the following theorem:
Theorem 2. Define an (M−1) × L matrix Z̃ by setting Z̃_{m,l} = z_m(l) − z_M(l). Suppose the matrix Z̃ has rank L, and suppose

    Σ_{s=1}^S π_s Pr(σ|X, Z, Θ) = Σ_{s=1}^{S′} π′_s Pr(σ|X, Z, Θ′),    (5)

for all x(k) ∈ I_k and all possible orders σ of M products. Here, the probability measure is associated with a nice cdf. Then we must have S = S′ and (up to a permutation of indices {1, ..., S}) π_s = π′_s and Θ = Θ′ (up to δ-shift).
The proof of this theorem is provided in the appendix. Here, we illustrate the idea for the simple case with two alternatives (M = 2) and no agent or alternative characteristics (K = L = 1). Equation (5) is merely a single identity. Unwrapping the definition, we obtain:

    Σ_{s=1}^S π_s Pr(ε_1 − ε_2 > δ_1 − δ_2 + xW^s(z_1 − z_2)) = Σ_{s=1}^{S′} π′_s Pr(ε_1 − ε_2 > δ′_1 − δ′_2 + xW′^s(z_1 − z_2)).    (6)

Without loss of generality, we may assume z_1 = 1, z_2 = 0, and δ_2 = 0. We may further assume that the interval I = I_1 contains 0. (Otherwise, we just need to shift I and δ accordingly.) Given this, the problem reduces to the following lemma:
Lemma 1. Let Φ(x) be a nice cdf. Suppose

    Σ_{s=1}^S π_s Φ(δ + xW^s) = Σ_{s=1}^{S′} π′_s Φ(δ′ + xW′^s),    (7)

for all x in a closed interval I containing 0. Then we must have S = S′, δ = δ′ and (up to a permutation of {1, ..., S}) π_s = π′_s, W^s = W′^s.
The proof of this lemma is in the appendix. By applying it to (6), we can show identifiability for the simple case of M = 2 and K = L = 1.
Theorem 2 guarantees identifiability in the limit case that we observe agents with characteristics that are dense in an interval. Beyond the theoretical guarantee, we would in practice expect (6) to have a unique solution with enough agents with different characteristics. Lemma 1 itself is a new identifiability result for scalar observations from a set of truncated distributions.
4 RJMCMC for Parameter Estimation
We use a uniform prior for the parameter space and regularize the number of types with a geometric prior. We use a Gibbs sampler, as detailed in the appendix (supplementary material Algorithm (1)), to sample from the posterior. In each of T iterations, we sample utilities u^n for each agent, the matrix Θ_s for each type, and the set of assignments of agents to types S(n). The utility of each agent for each alternative, conditioned on the data and other parameters, is sampled from a truncated exponential family (e.g. normal) distribution. In order to sample agent i's utility for alternative j (u_ij), we set thresholds for lower and upper truncation based on agent i's former samples of utility for the two alternatives that are ranked one below and one above alternative j, respectively.
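The truncated sampling step can be sketched as follows in pure Python (standard library only). The unit-variance normal noise and the inverse-CDF-by-bisection quantile are illustrative implementation choices, not the authors' code; the key point is that each resampled utility is constrained between the current utilities of its ranked neighbors, so the observed ranking is preserved.

```python
import math
import random

def normal_cdf(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_truncated_normal(mean, lo, hi, rng=random):
    """Inverse-CDF sample from N(mean, 1) truncated to the interval (lo, hi)."""
    a, b = normal_cdf(lo - mean), normal_cdf(hi - mean)
    u = a + (b - a) * rng.random()
    x_lo, x_hi = -12.0, 12.0          # bisect for the standard normal quantile
    for _ in range(60):
        mid = 0.5 * (x_lo + x_hi)
        if normal_cdf(mid) < u:
            x_lo = mid
        else:
            x_hi = mid
    return mean + 0.5 * (x_lo + x_hi)

def resample_utilities(u, sigma, mean, rng=random):
    """One Gibbs sweep: resample each utility truncated between the current
    utilities of the alternatives ranked just above and just below it, so
    that the agent's observed ranking sigma (best first) is preserved."""
    for pos, j in enumerate(sigma):
        hi = u[sigma[pos - 1]] if pos > 0 else float('inf')
        lo = u[sigma[pos + 1]] if pos + 1 < len(sigma) else float('-inf')
        u[j] = sample_truncated_normal(mean[j], lo, hi, rng)
    return u
```

Sweeping the alternatives in rank order, every new draw stays strictly between its neighbors, so the latent utilities remain consistent with the observed partial order throughout the chain.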
We use reversible-jump MCMC [17] for sampling from conditional distributions of the assignment function (see Algorithm 1). We consider three possible moves for sampling from the assignment function S(n):
(1) Increasing the number of types by one, through moving a random agent to a new type of its own. The acceptance ratio for this move is:

    Pr_split = min{1, [Pr(S+1)/Pr(S)] · [Pr(M^(t+1)|D)/Pr(M^(t)|D)] · [(1/(S+1))/(1/S)] · (p_−1/p_+1) · (1/p(γ)) · J_(t)→(t+1)},

where M^(t) = {u, Θ, B, S, π}^(t), J_(t)→(t+1) = 2^P is the Jacobian of the transformation from the previous state to the proposed state, and Pr(S) is the prior (regularizer) for the number of types.
(2) Decreasing the number of types by one, through merging two random types. The acceptance ratio for the merge move is:

    Pr_merge = min{1, [Pr(S−1)/Pr(S)] · [Pr(M^(t+1)|D)/Pr(M^(t)|D)] · [(1/(S−1))/(1/S)] · (p_+1/p_−1) · J_(t)→(t+1)}.
(3) Keeping the number of types unchanged, and moving one random agent from one type to another. This case reduces to a standard Metropolis-Hastings move, where, because of the normal symmetric proposal distribution, the proposal is accepted with probability:

    Pr_mh = min{1, Pr(M^(t+1)|D)/Pr(M^(t)|D)}.
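In an implementation the split acceptance is usually computed in log space to avoid underflow. The sketch below is an assumption-laden illustration (the function signature and bookkeeping are our own, not the authors' MATLAB code): the caller supplies the log posteriors of the current and proposed states, the move probabilities, the log prior ratio Pr(S+1)/Pr(S), and the log proposal density of the drawn vector γ.

```python
import math
import random

def accept_split(logpost_new, logpost_old, S, p_split, p_merge,
                 log_prior_ratio, log_q_gamma, P, rng=random):
    """Accept/reject a split move S -> S+1: posterior ratio, prior-on-S
    ratio, type-choice ratio (1/(S+1))/(1/S), move-probability ratio
    p_merge/p_split, proposal density term 1/p(gamma), and Jacobian 2^P."""
    log_ratio = (logpost_new - logpost_old     # Pr(M^(t+1)|D) / Pr(M^(t)|D)
                 + log_prior_ratio             # Pr(S+1) / Pr(S)
                 + math.log(S / (S + 1.0))     # (1/(S+1)) / (1/S)
                 + math.log(p_merge / p_split) # reverse vs forward move prob
                 - log_q_gamma                 # 1 / p(gamma)
                 + P * math.log(2.0))          # Jacobian 2^P
    if log_ratio >= 0.0:
        return True
    return rng.random() < math.exp(log_ratio)
```

The merge move is the mirror image (drop the 1/p(γ) term, invert the other ratios), and the within-S move needs only the posterior ratio, matching Pr_mh above.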
Algorithm 1 RJMCMC to update S^(t+1)(n) from S^(t)(n)
  Set p_−1, p_0, p_+1. Find S: the number of distinct types in S^(t)(n).
  Propose a move η from {−1, 0, +1} with probabilities p_−1, p_0, p_+1, respectively.
  case η = +1:
    Select random type M_s and agent n ∈ M_s uniformly. Assign n to module M_s1 and the remainder to M_s2. Draw vector γ ∼ N(0, 1). Propose Θ_s1 = Θ_s − γ and Θ_s2 = Θ_s + γ, and compute the proposal {u^n, σ^n}^(t+1).
    Accept S^(t+1)(M_s1) = S+1, S^(t+1)(M_s2) = s with Pr_split; update S = S+1.
  case η = −1:
    Select two random types M_s1 and M_s2 and merge them into one type M_s. Propose Θ_s = (Θ_s1 + Θ_s2)/2 and compute the proposed {u^n, σ^n}^(t+1).
    Accept S^(t+1)(n) = s1 for all n s.t. S^(t)(n) = s2 with Pr_merge; update S = S−1.
  case η = 0:
    Select two random types M_s1 and M_s2 and move a random agent n from M_s1 to M_s2. Compute the proposed {u^(n), σ^(n)}^(t+1).
    Accept S^(t+1)(n) = s2 with probability Pr_mh.
  end switch

5 Experimental Study

We evaluate the performance of the algorithm on synthetic data, and for a real world data set in which we observe agents' characteristics and their orderings on alternatives. For the synthetic data, we generate data with different numbers of types and perform RJMCMC in order to estimate the parameters and number of types. The algorithm is implemented in MATLAB and scales linearly in the number of samples and agents. It takes on average 60 ± 5 seconds to generate 50 samples for N = 200, M = 10, K = 4 and L = 3 on an Intel(R) i5 2.70GHz.

Coverage Analysis for the number of types S for Synthetic Data: In this experiment, the data is generated from a randomly chosen number of clusters S for N = 200, K = 3, L = 3 and M = 10, and the posterior on S is estimated using RJMCMC. The prior is chosen to be Pr(S) ∝ exp(−3SKL). We consider a noisy regime by generating data from a noise level of σ = 1, where all the characteristics (X, Z) are generated from N(0, 1). We repeat the experiment 100 times. Given this, we estimate 60%, 90% and 95% confidence intervals for the number of types from the posterior samples. We also estimate the coverage percentage, which is defined to be the percentage of samples which include the true number of types in the interval. The simulations show 61%, 73%, 88%, 93% for the intervals 60%, 75%, 90%, 95% respectively, which indicates that the method is providing reliable intervals for the number of types.
Performance for Synthetic Data: We generate data randomly from a model with between 1 and 4 types. N is set to 200, and M is set to 10 for K = 4 and L = 3. We draw 10,000 samples from the stationary posterior distribution. The prior for S is chosen to be exp(−λSKL), where λ is uniformly chosen in (0, 10). We repeat the experiment 5 times. Table 1 shows that the algorithm successfully provides larger log posterior when the number of types is the number of true types.

Clustering Performance for Real World Data: We have tested our algorithm on a sushi dataset, where 5,000 users provide rankings on M = 10 different kinds of sushi [25]. We fit the multi-type GRUM for different numbers of types, on 100 randomly chosen subsets of the sushi data with size N = 200, using the same prior we used in the synthetic case, and provide the performance on the sushi data in Table 1. It can be seen that GRUM with 3 types has significantly better performance in terms of log posterior (with the prior that we chose, log posterior can be seen as log likelihood penalized for the number of parameters) than GRUM with one, two or four types. We have taken non-categorical features as K = 4 features for agents (age, time for filling the questionnaire, region ID, prefecture ID) and L = 3 features for sushi (price, heaviness, sales volume).

[Figure 3: Left panel: 10000 samples for S in synthetic data, where the true S is 5 (x-axis: Iterations; y-axis: Number of Subgroups (S)). Right panel: histogram of the samples for S, with max at 5 and mean at 4.56 (x-axis: Number of Subgroups (S); y-axis: Frequency).]

Table 1: Performance of the method for different numbers of true types and numbers of types in the algorithm, in terms of log posterior. All the standard deviations are between 15 and 20. Bold numbers indicate the best performance in their column with statistical significance of 95%.

                  Synthetic true types            Sushi
    Type          One      Two      Three   Four  sushi
    one type     -2069    -2631    -2780   -2907  -2880
    two types    -2755    -2522    -2545   -2692  -2849
    three types  -2796    -2642    -2582   -2790  -2819
    four types   -2778    -2807    -2803   -2593  -2850

6 Conclusions

In this paper, we have proposed an extension of GRUMs in which we allow agents to adopt heterogeneous types. We develop a theory establishing the identifiability of the mixture model when we observe ranking data. Our theoretical results for identifiability show that the number of types and the parameters associated with them can be identified. Moreover, we prove uni-modality of the likelihood (or posterior) function when types are observable. We propose a scalable algorithm for inference, which can be parallelized for use on very large data sets. Our experimental results show that models with multiple types provide a significantly better fit in real-world data. By clustering agents into multiple types, our estimation algorithm allows choices to be correlated across agents of the same type, without making any a priori assumptions on how types of agents are to be
partitioned. This use of machine learning techniques complements various approaches in economics [11, 7, 8] by allowing the researcher to have
additional flexibility in dealing with missing data or unobserved agent characteristics. We expect
the development of these techniques to grow in importance as large, individual-level datasets become increasingly available. In future research we intend to pursue applications of this method to
problems of economic interest.
Acknowledgments
This work is supported in part by NSF Grants No. CCF-0915016 and No. AF-1301976. We thank
Elham Azizi for helping in the design and implementation of RJMCMC algorithm. We thank Simon
Lunagomez for helpful discussion on RJMCMC. We thank Lirong Xia, Gregory Lewis, Edoardo
Airoldi, Ryan Adams and Nikhil Agarwal for comments on the modeling and algorithmic aspects of
this paper. We thank anonymous NIPS-13 reviewers, for helpful comments and suggestions.
Speedup Matrix Completion with Side Information:
Application to Multi-Label Learning
Miao Xu¹        Rong Jin²        Zhi-Hua Zhou¹
¹ National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
² Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824
{xum, zhouzh}@lamda.nju.edu.cn        [email protected]
Abstract
In standard matrix completion theory, it is required to have at least O(n ln² n) observed entries to perfectly recover a low-rank matrix M of size n × n, leading to a large number of observations when n is large. In many real tasks, side information in addition to the observed entries is often available. In this work, we develop a novel theory of matrix completion that explicitly explores the side information to reduce the requirement on the number of observed entries. We show that, under appropriate conditions, with the assistance of side information matrices, the number of observed entries needed for a perfect recovery of matrix M can be dramatically reduced to O(ln n). We demonstrate the effectiveness of the proposed approach for matrix completion in transductive incomplete multi-label learning.
1 Introduction
Matrix completion concerns the problem of recovering a low-rank matrix from a limited number
of observed entries. It has broad applications including collaborative filtering [35], dimensionality
reduction [41], multi-class learning [4, 31], clustering [15, 42], etc. Recent studies show that, with a
high probability, we can efficiently recover a matrix M ∈ R^{n×m} of rank r from O(r(n + m) ln²(n + m)) observed entries when the observed entries are uniformly sampled from M [11, 12, 34].
Although the sample complexity for matrix completion, i.e., the number of observed entries required
for perfectly recovering a low rank matrix, is already near optimal (up to a logarithmic factor), its
linear dependence on n and m requires a large number of observations for recovering large matrices, significantly limiting its application to real-world problems. Moreover, current techniques for
matrix completion require solving an optimization problem that can be computationally prohibitive
when the size of the matrix is very large. In particular, although a number of algorithms have been
developed for matrix completion [10, 22, 23, 25, 27, 28, 39], most of them require updating the full
matrix M at each iteration of optimization, leading to a high computational cost and a large storage
requirement when both n and m are large. Several recent efforts [5, 19] try to address this issue, at
a price of losing performance guarantee in recovering the target matrix.
On the other hand, in several applications of matrix completion, besides the observed entries, side
information is often available that can potentially benefit the process of matrix completion. Below
we list a few examples:
• Collaborative filtering aims to predict ratings of individual users based on the ratings from
other users [35]. Besides the ratings provided by users, side information, such as the textual
description of items and the demographical information of users, is often available and can
be used to facilitate the prediction of missing ratings.
• Link prediction, which aims to predict missing links between users in a social network based on the existing ones, can be viewed as a matrix completion problem [20], where side information, such as attributes of users (e.g., browsing patterns and interactions among users), can be used to assist in completing the user-user link matrix.
Although several studies exploit side information for matrix recovery [1, 2, 3, 16, 29, 32, 33], most
of them focus on matrix factorization techniques, which usually result in non-convex optimization
problems without guarantee of perfectly recovering the target matrix. In contrast, matrix completion deals with convex optimization problems and perfect recovery is guaranteed under appropriate
conditions.
In this work, we focus on exploiting side information to improve the sample complexity and scalability of matrix completion. We assume that besides the observed entries in the matrix M, there exist two side information matrices A ∈ R^{n×r_a} and B ∈ R^{m×r_b}, where r ≤ r_a ≤ n and r ≤ r_b ≤ m. We further assume the target matrix and the side information matrices share the same latent information; that is, the column and row vectors in M lie in the subspaces spanned by the column vectors in A and B, respectively. Unlike the standard theory of matrix completion, which needs to find the optimal matrix M of size n × m, our optimization problem is reduced to searching for an optimal matrix of size r_a × r_b, making the recovery significantly more efficient both computationally and storage-wise provided r_a ≪ n and/or r_b ≪ m. We show that, with the assistance of side information matrices, with a high probability, we can perfectly recover M with O(r(r_a + r_b) ln(r_a + r_b) ln(n + m)) observed entries, a sample complexity that is sublinear in n and m.
We demonstrate the effectiveness of matrix completion with side information in transductive incomplete multi-label learning [17], which aims to assign multiple labels to individual instances in
a transductive learning setting. We formulate transductive incomplete multi-label learning as a matrix completion problem, i.e., completing the instance-label matrix based on the observed entries
that correspond to the given label assignments. Both the feature vectors of instances and the class
correlation matrix can be used as side information. Our empirical study shows that the proposed
approach is particularly effective when the number of given label assignments is small, verifying
our theoretical result, i.e., side information can be used to reduce the sample complexity.
The rest of the paper is organized as follows: Section 2 briefly reviews some related work. Section 3
presents our main contribution. Section 4 presents our empirical study. Finally Section 5 concludes
with future issues.
2 Related work
Matrix Completion The objective of matrix completion is to fill out the missing entries of a matrix
based on the observed ones. Early work on matrix completion, also referred to as maximum margin
matrix factorization [37], was developed for collaborative filtering. Theoretical studies show that, it
is sufficient to perfectly recover a matrix M ? Rn?m of rank r when the number of observed entries
is O(r(n + m) ln2 (n + m)) [11, 12, 34]. A more general matrix recovery problem, referred to as
matrix regression, was examined in [30, 36]. Unlike these studies, our proposed approach reduces
the sample complexity with the help of side information matrices.
Several computational algorithms [10, 22, 23, 25, 27, 28, 39] have been developed to efficiently
solve the optimization problem of matrix completion. The main problem with these algorithms lies
in the fact that they have to explicitly update the full matrix of size n × m, which is expensive both
computationally and storage wise for large matrices. This issue has been addressed in several recent
studies [5, 19], where the key idea is to store and update the low rank factorization of the target
matrix. A preliminary convergence analysis is given in [19], however, none of these approaches
guarantees perfect recovery of the target matrix, even with significantly large number of observed
entries. In contrast, our proposed approach reduces the computational cost by explicitly exploring
the side information matrices and still delivers the promise of perfect recovery.
Several recent studies involve matrix recovery with side information. [2, 3, 29, 33] are based on
graphical models by assuming special distribution of latent factors; these algorithms, as well as [16]
and [32], consider side information in matrix factorization. The main limitation lies in the fact that
they have to solve non-convex optimization problems, and do not have theoretical guarantees on
matrix recovery. Matrix completion with infinite dimensional side information was exploited in [1],
yet lacking guarantee of perfect recovery. In contrast, our work is based on matrix completion theory
that deals with a general convex optimization problem and is guaranteed to make a perfect recovery
of the target matrix.
Multi-label Learning Multi-label learning allows each instance to be assigned to multiple classes
simultaneously, making it more challenging than multi-class learning. The simplest approach for
multi-label learning is to train one binary model for each label, which is also referred to as BR
(Binary Relevance) [7]. Many advanced algorithms have been developed to explicitly explore the
dependence among labels ( [44] and references therein).
In this work, we will evaluate our proposed approach by transductive incomplete multi-label learning [17]. Let X = (x_1, …, x_n)^T ∈ R^{n×d} be the feature matrix with x_i ∈ R^d, where n is the number of instances and d is the dimension. Let C_1, …, C_m denote the m labels, and let T ∈ {−1, +1}^{n×m} be the instance-label matrix, where T_{i,j} = +1 when x_i is associated with the label C_j, and T_{i,j} = −1 when x_i is not associated with the label C_j. Let Ω denote the subset of the observed entries in T that corresponds to the given label assignments of instances. The objective of transductive incomplete multi-label learning is to predict the missing entries in T based on the feature matrix X and the given label assignments in Ω. The main challenge lies in the fact that only
a partial label assignment is given for each training instance. This is in contrast to many studies on
common semi-supervised or transductive multi-label learning [18, 24, 26, 43] where each labeled
instance receives a complete set of label assignments. This is also different from multi-label learning with weak labels [8, 38] which assumes that only the positive labels can be observed. Here we
assume the observed labels can be either positive or negative.
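As a concrete illustration of this setup, the sketch below builds a toy feature matrix X, a partially observed instance-label matrix T, and the observed set Ω (the sizes and the 40% observation rate are our own choices, not values from any study discussed here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 6, 4, 3                          # instances, features, labels (toy sizes)

X = rng.standard_normal((n, d))            # feature matrix X in R^{n x d}
T = rng.choice([-1, 1], size=(n, m))       # full instance-label matrix in {-1,+1}^{n x m}

# Omega: the observed entries of T, i.e., the given label assignments.
omega = rng.random((n, m)) < 0.4           # here roughly 40% of entries are observed
T_partial = np.where(omega, T, 0)          # 0 marks a missing label assignment

# The transductive task: predict the zero entries of T_partial from X and the
# observed +-1 entries; note both positive and negative labels can be observed.
```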
In [17], a matrix completion based approach was proposed for transductive incomplete multi-label
learning. To effectively exploit the information in the feature matrix X, the authors proposed to
complete the matrix T ? = [X, T ] that combines the input features with label assignments into a
single matrix. Two algorithms MC-b and MC-1 were presented there, differing only in the treatment
of bias term, whereas the convergence of MC-1 was examined in [9]. The main limitation of both
algorithms lies in their high computational cost when both the number of instances and features are
large. Unlike MC-1 and MC-b, our proposed approach does not need to deal with the big matrix
T ? , and is computationally more efficient. Besides the computational advantage, we show that our
proposed approach significantly improves the sample complexity of matrix completion by exploiting
side information matrices.
3 Speedup Matrix Completion with Side Information
We first describe the framework of matrix completion with side information, and then present its
theoretical guarantee and application to multi-label learning.
3.1 Matrix Completion using Side Information
Let M ∈ R^{n×m} be the target matrix of rank r to be recovered. Without loss of generality, we assume n ≥ m. Let σ_k, k ∈ {1, …, r}, be the k-th largest singular value of M, and let u_k ∈ R^n and v_k ∈ R^m be the corresponding left and right singular vectors, i.e., M = UΣV^T, where Σ = diag(σ_1, …, σ_r), U = (u_1, …, u_r) and V = (v_1, …, v_r).
Let Ω ⊆ {1, …, n} × {1, …, m} be the subset of indices of observed entries sampled uniformly from all entries in M. Given Ω, we define a linear operator R_Ω(M) : R^{n×m} → R^{n×m} as

    [R_Ω(M)]_{i,j} = M_{i,j} if (i, j) ∈ Ω,  and  0 if (i, j) ∉ Ω.

Using R_Ω(·), the standard matrix completion problem is

    min_{M̂ ∈ R^{n×m}} ‖M̂‖_tr   s.t.   R_Ω(M̂) = R_Ω(M),    (1)

where ‖·‖_tr is the trace norm.
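In code, R_Ω is simply an entrywise mask; a minimal NumPy sketch (the variable names are ours):

```python
import numpy as np

def R_omega(M, omega):
    """Sampling operator R_Omega: keep the observed entries of M, zero the rest.

    `omega` is a boolean mask with True at the observed positions (i, j).
    """
    out = np.zeros_like(M)
    out[omega] = M[omega]
    return out

# Example: a rank-1 matrix with roughly half of its entries observed.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(5), rng.standard_normal(4))
omega = rng.random(M.shape) < 0.5

# Any candidate feasible for problem (1) must satisfy R_omega(M_hat) = R_omega(M).
M_hat = R_omega(M, omega)   # trivially feasible, but typically not low rank
```

The trace-norm objective in (1) is what steers the search toward a low-rank completion among all feasible candidates.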
Let A = (a_1, …, a_{r_a}) ∈ R^{n×r_a} and B = (b_1, …, b_{r_b}) ∈ R^{m×r_b} be the side information matrices, where r ≤ r_a ≤ n and r ≤ r_b ≤ m. Without loss of generality, we assume that r_a ≥ r_b and that both A and B are orthonormal matrices, i.e., a_i^T a_j = δ_{i,j} and b_i^T b_j = δ_{i,j} for any i and j, where δ_{i,j} is the Kronecker delta function that outputs 1 if i = j and 0 otherwise. In case the side information is not available, A and B will be set to the identity matrix.
The objective is to complete a matrix M of rank r with the side information matrices A and B. We
make the following assumption in order to fully exploit the side information:
Assumption A: the column vectors in M lie in the subspace spanned by the column vectors in A,
and the row vectors in M lie in the subspace spanned by the column vectors in B.
To understand the implication of this assumption, let us consider the problem of transductive incomplete multi-label learning [17], where the objective is to complete the instance-label matrix based on
the observed entries corresponding to the given label assignments, and the side information matrices
A and B are given by the feature vectors of instances and the label correlation matrix, respectively.
Assumption A essentially implies that all the label assignments can be accurately predicted by a
linear combination of feature vectors of instances.
Using Assumption A, we can write M as M = AZ_0B^T and therefore, our goal is to learn Z_0 ∈ R^{r_a×r_b}. Following the standard theory for matrix completion [11, 12, 34], we can cast the matrix completion task into the following optimization problem:

    min_{Z ∈ R^{r_a×r_b}} ‖Z‖_tr   s.t.   R_Ω(AZB^T) = R_Ω(M).    (2)
Unlike the standard algorithm for matrix completion, which requires solving an optimization problem involving a matrix of size n × m, the optimization problem given in (2) only deals with a matrix Z of size r_a × r_b, and therefore can be solved significantly more efficiently if r_a ≪ n and r_b ≪ m.
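The size reduction can be checked directly: under Assumption A the unknown is an r_a × r_b matrix rather than an n × m one, and for orthonormal A and B the parametrization is exact. A small sketch (all dimensions illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, ra, rb, r = 200, 150, 10, 8, 3

# Orthonormal side information matrices A (n x ra) and B (m x rb).
A, _ = np.linalg.qr(rng.standard_normal((n, ra)))
B, _ = np.linalg.qr(rng.standard_normal((m, rb)))

# A rank-r target satisfying Assumption A: M = A Z0 B^T.
Z0 = rng.standard_normal((ra, r)) @ rng.standard_normal((r, rb))
M = A @ Z0 @ B.T

# Problem (2) searches over ra*rb unknowns instead of n*m.
print(Z0.size, "unknowns instead of", M.size)

# Orthonormality of A and B means Z0 can be read back exactly as A^T M B.
assert np.allclose(A.T @ M @ B, Z0)
```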
3.2 Theoretical Result
We define μ_0 and μ_1, the coherence measurements for matrix M, as

    μ_0 = max( (n/r) max_{1≤i≤n} ‖P_U e_i‖²,  (m/r) max_{1≤j≤m} ‖P_V e_j‖² ),
    μ_1 = (mn/r) max_{i,j} ([UV^T]_{i,j})²,

where e_i is the vector with the i-th entry equal to 1 and all others equal to 0, and P_U and P_V project a vector onto the subspace spanned by the column vectors of U and V, respectively. We also define the coherence measure for matrices A and B as

    μ_AB = max( max_{1≤i≤n} n‖A_{i,·}‖²/r_a,  max_{1≤j≤m} m‖B_{j,·}‖²/r_b ),

where A_{i,·} and B_{j,·} stand for the i-th rows of A and B, respectively.
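These quantities are straightforward to compute for moderate sizes; a sketch of the three measures in NumPy (our code, not the authors'):

```python
import numpy as np

def coherences(M, A, B, tol=1e-10):
    """Return (mu_0, mu_1, mu_AB) for a matrix M and side matrices A, B."""
    n, m = M.shape
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))            # numerical rank of M
    U, V = U[:, :r], Vt[:r, :].T
    # ||P_U e_i||^2 is the squared norm of row i of U (similarly for V).
    mu0 = max(n / r * (U**2).sum(axis=1).max(),
              m / r * (V**2).sum(axis=1).max())
    mu1 = m * n / r * ((U @ V.T) ** 2).max()
    mu_ab = max(n / A.shape[1] * (A**2).sum(axis=1).max(),
                m / B.shape[1] * (B**2).sum(axis=1).max())
    return mu0, mu1, mu_ab
```

Both μ_0 and μ_AB are at least 1 (the squared row norms of U, V, A and B sum to the respective ranks), and small values indicate singular vectors that are spread out rather than spiky.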
Theorem 1. Let μ = max(μ_0, μ_AB). Define q_0 = ½(1 + log₂ r_a − log₂ r), Ω_0 = (128β/3) μ max(μ_1, μ) r(r_a + r_b) ln n and Ω_1 = (8β/3) μ² (r_a r_b + r²) ln n. Assume Ω_1 ≤ q_0 Ω_0. With a probability at least 1 − 4(q_0 + 1)n^{−β+1} − 2q_0 n^{−β+2}, Z_0 is the unique optimizer to the problem in (2) provided

    |Ω| ≥ (64β/3) μ max(μ_1, μ) (1 + log₂ r_a − log₂ r) r(r_a + r_b) ln n.
Compared to the standard matrix completion theory [34], the side information matrices reduce the sample complexity from O(r(n + m) ln²(n + m)) to O(r(r_a + r_b) ln(r_a + r_b) ln n). When r_a ≪ n and r_b ≪ m, the side information allows us to significantly reduce the number of observed entries required for perfectly recovering matrix M. We defer the technical proof of Theorem 1 to the supplementary material due to the page limit. Note that although we follow the framework of [34] for analysis, namely first proving the result under deterministic conditions and then showing that the deterministic conditions hold with a high probability, our technical proof is quite different due to the involvement of the side information matrices A and B.
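Ignoring constants, the gap between the two sample complexities is easy to quantify; for instance (the numbers below are purely illustrative):

```python
import math

def standard_entries(n, m, r):
    """O(r (n + m) ln^2(n + m)) observed entries, constants dropped."""
    return r * (n + m) * math.log(n + m) ** 2

def side_info_entries(n, r, ra, rb):
    """O(r (ra + rb) ln(ra + rb) ln n) observed entries, constants dropped."""
    return r * (ra + rb) * math.log(ra + rb) * math.log(n)

n = m = 10_000
r, ra, rb = 10, 20, 20
ratio = standard_entries(n, m, r) / side_info_entries(n, r, ra, rb)
print(f"standard / side-info sample complexity ratio: {ratio:.0f}x")
```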
3.3 Application to Multi-Label Learning
Similar to the Singular Value Thresholding (SVT) method [10], we approximate the problem in (2) by an unconstrained optimization problem, i.e.,

    min_{Z ∈ R^{r_a×r_b}} L(Z) = λ‖Z‖_tr + ½ ‖R_Ω(AZB^T − M)‖²_F ,    (3)

where λ > 0 is introduced to weight the trace norm regularization term against the regression error.
We develop an algorithm that exploits the smoothness of the loss function and therefore achieves
O(1/T²) convergence, where T is the number of iterations. Details of the algorithm can be found
in the supplementary material. We refer to the proposed algorithm as Maxide.
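Maxide itself is described in the supplement; as a rough stand-in, (3) can also be minimized by plain proximal gradient descent, where the proximal step is singular value shrinkage of the small r_a × r_b variable. The generic sketch below is our own illustration, not the authors' accelerated O(1/T²) method:

```python
import numpy as np

def svt_shrink(Z, tau):
    """Prox of tau * trace norm: shrink the singular values of Z by tau."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def solve_obj3(A, B, M, omega, lam=1e-3, step=1.0, iters=500):
    """Proximal gradient on L(Z) = lam*||Z||_tr + 0.5*||R_omega(A Z B^T - M)||_F^2.

    For orthonormal A and B the gradient of the smooth part is 1-Lipschitz,
    so a unit step size is safe.
    """
    Z = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(iters):
        resid = np.where(omega, A @ Z @ B.T - M, 0.0)    # R_omega(A Z B^T - M)
        Z = svt_shrink(Z - step * (A.T @ resid @ B), step * lam)
    return Z
```

Each iteration touches only the observed residual and an SVD of an r_a × r_b matrix, which is the computational point of working with Z instead of the full n × m variable.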
For transductive incomplete multi-label learning, we abuse our notation by defining n as the number
of instances, m as the number of labels, and d as the dimensionality of input patterns. Our goal is
to complete the instance-label matrix M ∈ R^{n×m} by using (i) the feature matrix X ∈ R^{n×d} and (ii) the observed entries Ω in M (i.e., the given label assignments). We thus set the side information
matrix A to include the top left singular vectors of X, and B = I to indicate that no side information
is available for the dependence among labels. We note that the low rank assumption of instance-label
matrix M implies a linear dependence among the label prediction functions. This assumption has
been explored extensively in the previous studies of multi-label learning [17, 21, 38].
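Constructing A this way amounts to a thin SVD of the feature matrix; a short sketch (sizes illustrative):

```python
import numpy as np

def feature_side_info(X, ra):
    """Top-ra left singular vectors of the feature matrix X, used as A (with B = I)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :ra]

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 40))        # n = 500 instances, d = 40 features
A = feature_side_info(X, ra=15)

# A has orthonormal columns, as the framework in Section 3 requires.
assert np.allclose(A.T @ A, np.eye(15))
```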
4 Experiments
We evaluate the proposed algorithm for matrix completion with side information on both synthetic and real data sets. Our implementation is in Matlab except that the operation R_Ω(LR) is
implemented in C. All the results were obtained on a Linux server with CPU 2.53GHz and 48GB
memory.
4.1 Experiments on Synthetic Data
To create the side information matrices A and B, we first generate a random matrix F ∈ R^{n×m}, with each entry F_{i,j} drawn independently from N(0, 1). Side information matrix A includes the first r_a left singular vectors of F, and B includes the first r_b right singular vectors. To create Z_0, we generate two Gaussian random matrices Z_A ∈ R^{r_a×r} and Z_B ∈ R^{r_b×r}, where each entry is sampled independently from N(0, 1). The singular value decompositions of AZ_A and BZ_B are given by AZ_A = UΣ_1V_1^T and BZ_B = VΣ_2V_2^T, respectively. We create a diagonal matrix Σ ∈ R^{r×r} whose diagonal entries are drawn independently from N(0, 10⁴). Z_0 is then given by Z_0 = (Z_A(V_1^T)^†Σ_1^{−1}) Σ (Z_B(V_2^T)^†Σ_2^{−1})^T, where † denotes the pseudo-inverse of a matrix. Finally, the target matrix M is given by M = AZ_0B^T.
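The generation above can be reproduced up to the exact Z_0 recipe; the simplified sketch below builds the side matrices from a random F and a rank-r target in their span with the same N(0, 10⁴) singular-value scale, replacing the pseudo-inverse construction with a direct product (our simplification):

```python
import numpy as np

rng = np.random.default_rng(6)
n = m = 300
r, ra, rb = 10, 20, 20                     # ra = rb = 2r, as in the experiments

# Side information: leading singular vectors of a Gaussian random matrix F.
F = rng.standard_normal((n, m))
Uf, _, Vft = np.linalg.svd(F, full_matrices=False)
A, B = Uf[:, :ra], Vft[:rb, :].T

# Rank-r target whose column/row spaces lie in span(A)/span(B).
sigma = rng.normal(0.0, 100.0, size=r)     # scale matching N(0, 10^4) variance
Z0 = rng.standard_normal((ra, r)) @ np.diag(sigma) @ rng.standard_normal((r, rb))
M = A @ Z0 @ B.T

# Observed entries: |Omega| = r(2n - r), sampled uniformly without replacement.
num_obs = r * (2 * n - r)
flat = rng.choice(n * m, size=num_obs, replace=False)
omega = np.zeros(n * m, dtype=bool)
omega[flat] = True
omega = omega.reshape(n, m)
```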
Settings and Baselines Our goal is to show that the proposed algorithm is able to accurately recover the target matrix with a significantly smaller number of entries and less computational time. In this study, we only consider square matrices (i.e., m = n), with n = 1,000; 5,000; 10,000; 20,000; 30,000 and rank r = 10; 50; 100. Both r_a and r_b of the side information matrices are set to 2r, and |Ω|, the number of observed entries, is set to r(2n − r), which is significantly smaller than the number of observed entries used in previous studies [10, 25, 27].
We repeat each experiment 10 times, and report the result averaged over 10 runs. We compare the
proposed Maxide algorithm with three state-of-the-art matrix completion algorithms: Singular Value Thresholding (SVT) [10], the Fixed Point Bregman Iterative Method (FPCA) [27] and the Augmented Lagrangian Method (ALM) [25]. In addition to these matrix completion methods, we also compare with a trace norm minimizing method (TraceMin) [6]. For all the baselines, we use the code provided by their original authors with default parameter settings.
Results We measure the performance of matrix completion by the relative error ‖AZB^T − M‖_F/‖M‖_F and report both the relative error and the running time in Table 1. For TraceMin, we observe that for n = 1,000 and r = 10, it achieves a relative error of 1.75 × 10⁻⁷ within 2.94 × 10⁴ seconds, which is really slow compared to our proposal. For n = 1,000 and r = 50, it gives no result within one week. In Table 1, we first observe that in all cases, the relative error achieved
Table 1: Results on synthesized data sets. n is the size of a square matrix and r is its rank. Rate is the number of observed entries divided by the size of the matrix, that is, |Ω|/(nm). Time measures the running time in seconds and Relative error measures ‖AZB^T − M‖_F/‖M‖_F. The best performance for each setting is bolded. We do not report results for FPCA and SVT when n ≥ 5,000 because they were unable to finish the computation after 50 hours.
n r
Rate
Alg.
Time
1, 000 10 1.99 ? 10?2 Maxide 1.89 ? 101
SVT 3.23 ? 103
50 9.75 ? 10?2 Maxide 6.44 ? 101
SVT 3.51 ? 103
100 1.900 ? 10?1 Maxide 1.94 ? 102
SVT 3.82 ? 103
5, 000 10 3.96 ? 10?3 Maxide 3.50 ? 101
50 1.99 ? 10?2 Maxide 4.56 ? 102
100 3.96 ? 10?2 Maxide 1.29 ? 103
10, 000 10 2.00 ? 10?3 Maxide 6.18 ? 101
50 9.98 ? 10?3 Maxide 8.39 ? 102
100 1.99 ? 10?2 Maxide 4.47 ? 103
20, 000 10 1.00 ? 10?3 Maxide 1.22 ? 102
50 4.99 ? 10?3 Maxide 2.16 ? 103
30, 000 10 6.67 ? 10?4 Maxide 4.37 ? 102
Relative error
6.42 ? 10?7
8.76 ? 104
5.28 ? 10?8
2.77 ? 105
1.91 ? 10?8
7.45 ? 104
6.38 ? 10?4
1.43 ? 10?7
2.44 ? 10?8
1.63 ? 10?3
9.97 ? 10?2
1.67 ? 10?7
3.54 ? 10?3
4.51 ? 10?4
3.25 ? 10?3
Algo.
Time
FPCA 5.55 ? 103
ALM 2.92 ? 101
FPCA 7.60 ? 103
ALM 7.72 ? 101
FPCA 1.71 ? 104
ALM 8.57 ? 101
ALM 1.24 ? 103
ALM 1.79 ? 103
ALM 2.14 ? 103
ALM 7.16 ? 103
ALM 7.87 ? 103
ALM 9.50 ? 103
ALM 3.62 ? 104
ALM 4.09 ? 104
ALM 8.69 ? 104
Relative error
8.79 ? 10?1
8.46 ? 10?1
5.53 ? 10?1
5.58 ? 10?1
4.63 ? 10?1
3.59 ? 10?1
9.07 ? 10?1
7.26 ? 10?1
5.51 ? 10?1
9.10 ? 10?1
7.19 ? 10?1
6.41 ? 10?1
9.49 ? 10?1
8.51 ? 10?1
9.53 ? 10?1
by the baseline methods is Ω(1), implying that none of them is able to accurately recover the
target matrix given the small number of observed entries. In contrast, our proposed algorithm
recovers the target matrix with small relative error. In addition, our proposed algorithm is
computationally more efficient than the baseline methods, and the improvement in computational
efficiency becomes more significant for large matrices.
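For reference, the relative-error metric used throughout this section is simple to compute; the sketch below is illustrative only, with hypothetical variable names and a toy 3 × 3 example.

```python
import numpy as np

def relative_error(A, Z, B, M):
    """Relative recovery error ||A Z B^T - M||_F / ||M||_F (the Table 1 metric)."""
    return np.linalg.norm(A @ Z @ B.T - M, "fro") / np.linalg.norm(M, "fro")

A = B = np.eye(3)
M = np.arange(1.0, 10.0).reshape(3, 3)
print(relative_error(A, M, B, M))                 # 0.0  (exact recovery)
print(relative_error(A, np.zeros((3, 3)), B, M))  # 1.0  (all-zero estimate)
```

An error of Ω(1), as reported for the baselines, thus means the estimate is no closer to M than the trivial all-zero matrix.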
4.2 Application to Transductive Incomplete Multi-Label Learning
We evaluate the proposed algorithm for transductive incomplete multi-label learning on thirteen
benchmark data sets, including eleven data sets for web page classification from "yahoo.com" [40],
and two image classification data sets, NUS-WIDE [14] and Flickr [45]. For the eleven "yahoo.com"
data sets, the number of instances is n = 5,000, the number of dimensions varies from 438 to
1,047, and the number of labels varies from 21 to 40. Detailed information on these eleven data sets
can be found in [40]. For the NUS-WIDE data set, we have n = 209,347 images, each represented by
a bag-of-words model with d = 500 visual words, and 81 labels. For the Flickr data set, we keep
only the 1,000 most popular keywords as labels, leaving us with n = 565,444 images, each
represented by a d = 297-dimensional vector.
Settings and Baselines For each data set, we randomly sample 10% of the instances for testing
(unlabeled data) and use the remaining 90% for training. No label assignment is provided for any test
instance. To create partial label assignments for the training data, for each label Cj we expose the
assignment of Cj for ω% randomly sampled positive and negative training instances and keep the
assignment of Cj unknown for the rest of the training instances. To examine the performance of
the proposed algorithm, we vary ω% over the range {10%, 20%, 40%}. We repeat each experiment
10 times, and report the result averaged over the 10 trials. The regularization parameter λ is selected
from 2^{−10,−9,...,9,10} by cross validation on the training data for the smaller data sets, and set to 1
for the larger ones. The other two parameters of the proposed algorithm are set to 2 and 10^-5,
respectively, and the maximum number of iterations is set to 100. Average Precision [44], which
measures the average number of relevant labels ranked before a particular relevant label, is computed
over the test data (the metric on all the data is provided in the supplementary material) and used as
our evaluation metric.
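The Average Precision criterion can be sketched for a single test instance as follows; this is our own minimal implementation of the usual multi-label definition, not code from the paper.

```python
import numpy as np

def average_precision(scores, labels):
    """Multi-label Average Precision for one instance: for each relevant
    label, take the fraction of labels ranked at or above it that are also
    relevant, then average over the relevant labels."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # best rank first
    rel = np.asarray(labels, dtype=bool)[order]
    hits = np.cumsum(rel)                  # relevant labels seen so far
    ranks = np.arange(1, rel.size + 1)
    return float(np.mean(hits[rel] / ranks[rel]))

# Relevant labels at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6
print(average_precision([0.9, 0.5, 0.4, 0.1], [1, 0, 1, 0]))  # 0.8333...
```

The per-data-set numbers reported below would then be this quantity averaged over all test instances.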
We compare the proposed Maxide method with MC-1 and MC-b, the state-of-the-art methods for
transductive incomplete multi-label learning developed in [17]. In addition, we also compare with
two reference methods for multi-label learning that train one binary classifier per label: the
Binary Relevance method [7] with a linear kernel (BR-L) and with an RBF kernel (BR-R), where the
kernel width is set to 1. For the eleven data sets from "yahoo.com", LIBSVM [13] is used by BR-L
and BR-R to learn linear and nonlinear SVM classifiers, respectively. For the two image data sets,
due to their large size, only the BR-L method is included in the comparison, and LIBLINEAR is used
to implement BR-L because of its high efficiency on large data sets. A strategy similar to that for
our proposal is used to determine the optimal regularization parameter.
Results Table 2 summarizes the results on transductive incomplete multi-label learning. We observe
that the proposed Maxide algorithm outperforms the baseline methods in most settings on several
data sets (e.g., Business, Education, and Recreation), and the improvements are significant.
More impressively, for most data sets, the proposed algorithm is three orders of magnitude faster
than MC-1 and MC-b. On the NUS-WIDE data set, neither MC-1 nor MC-b, the two existing matrix
completion based algorithms for transductive incomplete multi-label learning, is able to finish
within one week. On the Flickr data set, MC-1 and MC-b cannot run at all due to out-of-memory
errors. On the NUS-WIDE and Flickr data sets, our proposed Maxide method achieves an average
improvement of more than 50% in Average Precision over BR-L, the only runnable baseline.
5 Conclusion
In this paper, we develop the theory of matrix completion with side information. We show
theoretically that, with side information matrices A ∈ R^{n×ra} and B ∈ R^{m×rb}, we can perfectly
recover an n × m rank-r matrix with only O(r(ra + rb) ln(ra + rb) ln(n + m)) observed entries, a
significant improvement over the sample complexity O(r(n + m) ln^2(n + m)) of the standard theory
of matrix completion. We present the Maxide algorithm, which efficiently solves the optimization
problem for matrix completion with side information. Empirical studies with synthesized data sets
and transductive incomplete multi-label learning show the promising performance of the proposed
algorithm.
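To make the gap between the two sample-complexity bounds concrete, one can evaluate them (ignoring constants) at the scale of the experiments; the helper functions below are purely illustrative.

```python
import math

def entries_side_info(n, m, r, ra, rb):
    """O(r (ra + rb) ln(ra + rb) ln(n + m)), constants dropped."""
    return r * (ra + rb) * math.log(ra + rb) * math.log(n + m)

def entries_standard(n, m, r):
    """O(r (n + m) ln^2(n + m)), constants dropped."""
    return r * (n + m) * math.log(n + m) ** 2

n = m = 10_000
r, ra, rb = 10, 20, 20          # side-information rank 2r, as in Section 4.1
print(entries_side_info(n, m, r, ra, rb))  # ~1.5e4
print(entries_standard(n, m, r))           # ~2.0e7
```

At n = m = 10,000 and r = 10 the side-information bound is roughly three orders of magnitude below the standard one, mirroring the empirical speedups reported above.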
Acknowledgement This research was partially supported by 973 Program (2010CB327903), NSFC (61073097, 61273301), and ONR Award (N000141210431).
References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. A new approach to collaborative filtering: Operator
estimation with spectral regularization. JMLR, 10:803-826, 2009.
[2] R. Adams, G. Dahl, and I. Murray. Incorporating side information in probabilistic matrix factorization
with Gaussian processes. In UAI, 2010.
[3] D. Agarwal and B.-C. Chen. Regression-based latent factor models. In KDD, 2009.
[4] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. MLJ, 73(3):243-272, 2008.
[5] H. Avron, S. Kale, S. Kasiviswanathan, and V. Sindhwani. Efficient and practical stochastic subgradient
descent for nuclear norm regularization. In ICML, 2012.
[6] F. Bach. Consistency of trace norm minimization. JMLR, 9:1019-1048, 2008.
[7] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern
Recognition, 37(9):1757-1771, 2004.
[8] S. Bucak, R. Jin, and A. Jain. Multi-label learning with incomplete class assignments. In CVPR, 2011.
[9] R. Cabral, F. Torre, J. Costeira, and A. Bernardino. Matrix completion for multi-label image classification.
In NIPS, 2011.
[10] J.-F. Cai, E. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM
Journal on Optimization, 20(4):1956-1982, 2010.
[11] E. Candès and B. Recht. Exact matrix completion via convex optimization. CACM, 55(6):111-119, 2012.
[12] E. Candès and T. Tao. The power of convex relaxation: near-optimal matrix completion. IEEE TIT,
56(5):2053-2080, 2010.
[13] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM TIST, 2(3):27, 2011.
[14] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y.-T. Zheng. NUS-WIDE: A real-world web image database
from National University of Singapore. In CIVR, 2009.
[15] B. Eriksson, L. Balzano, and R. Nowak. High-rank matrix completion and subspace clustering with
missing data. CoRR, 2011.
Table 2: Results on transductive incomplete multi-label learning. Algo. specifies the name of the algorithm.
Time is the CPU time measured in seconds. AP is Average Precision measured on the test data; the higher the
AP, the better the performance. ω% represents the percentage of training instances with observed label
assignments for each label. The best result and its comparable ones (pairwise single-tailed t-tests at 95%
confidence level) are bolded.

                              ω% = 10%              ω% = 20%              ω% = 40%
Data           Algo.     Time        AP        Time        AP        Time        AP
Arts           Maxide    3.09×10^0   0.548     3.60×10^0   0.572     4.42×10^0   0.596
               MC-b      2.47×10^4   0.428     1.59×10^4   0.444     9.54×10^3   0.434
               MC-1      2.39×10^4   0.430     2.05×10^4   0.494     1.27×10^4   0.473
               BR-R      1.63×10^1   0.540     2.98×10^1   0.563     5.71×10^1   0.574
               BR-L      1.77×10^1   0.540     3.07×10^1   0.563     7.10×10^1   0.575
Business       Maxide    3.24×10^0   0.868     3.89×10^0   0.860     5.04×10^0   0.872
               MC-b      2.94×10^4   0.865     1.83×10^4   0.851     1.08×10^4   0.858
               MC-1      3.25×10^4   0.865     2.18×10^4   0.855     1.21×10^4   0.862
               BR-R      1.02×10^1   0.846     1.78×10^1   0.841     3.32×10^1   0.854
               BR-L      1.19×10^1   0.846     1.96×10^1   0.841     4.30×10^1   0.854
Computers      Maxide    4.67×10^0   0.635     5.81×10^0   0.660     7.79×10^0   0.675
               MC-b      5.58×10^4   0.597     3.38×10^4   0.599     1.87×10^4   0.604
               MC-1      6.56×10^4   0.600     4.40×10^4   0.608     2.30×10^4   0.618
               BR-R      2.34×10^1   0.622     4.13×10^1   0.649     7.68×10^1   0.662
               BR-L      2.70×10^1   0.621     4.50×10^1   0.648     8.25×10^1   0.661
Education      Maxide    4.40×10^0   0.566     5.41×10^0   0.604     6.73×10^0   0.618
               MC-b      3.82×10^4   0.472     2.40×10^4   0.478     1.32×10^4   0.474
               MC-1      4.68×10^4   0.484     3.02×10^4   0.536     1.55×10^4   0.564
               BR-R      1.77×10^1   0.535     3.16×10^1   0.568     6.01×10^1   0.583
               BR-L      1.94×10^1   0.535     3.28×10^1   0.568     6.94×10^1   0.583
Entertainment  Maxide    2.77×10^0   0.631     3.41×10^0   0.650     4.56×10^0   0.679
               MC-b      4.86×10^4   0.474     3.13×10^4   0.467     1.73×10^4   0.468
               MC-1      4.40×10^4   0.489     4.15×10^4   0.492     2.27×10^4   0.578
               BR-R      1.89×10^1   0.628     3.38×10^1   0.638     6.47×10^1   0.668
               BR-L      2.04×10^1   0.627     3.44×10^1   0.640     6.41×10^1   0.667
Health         Maxide    4.31×10^0   0.725     5.36×10^0   0.746     7.11×10^0   0.769
               MC-b      4.98×10^4   0.609     2.99×10^4   0.607     1.71×10^4   0.610
               MC-1      5.82×10^4   0.626     3.82×10^4   0.632     2.03×10^4   0.645
               BR-R      2.03×10^1   0.725     3.61×10^1   0.742     6.83×10^1   0.757
               BR-L      2.16×10^1   0.725     3.59×10^1   0.741     7.05×10^1   0.757
Recreation     Maxide    2.75×10^0   0.559     3.38×10^0   0.592     4.44×10^0   0.614
               MC-b      3.56×10^4   0.381     2.41×10^4   0.381     1.30×10^4   0.378
               MC-1      3.48×10^4   0.381     3.25×10^4   0.430     1.90×10^4   0.421
               BR-R      1.97×10^1   0.548     3.48×10^1   0.574     6.53×10^1   0.596
               BR-L      2.24×10^1   0.547     3.74×10^1   0.573     6.86×10^1   0.596
Reference      Maxide    5.11×10^0   0.635     6.47×10^0   0.666     8.49×10^0   0.696
               MC-b      9.38×10^4   0.565     5.38×10^4   0.561     2.75×10^4   0.575
               MC-1      1.11×10^5   0.576     6.53×10^4   0.576     3.22×10^4   0.575
               BR-R      2.28×10^1   0.644     3.89×10^1   0.670     7.08×10^1   0.693
               BR-L      2.71×10^1   0.644     4.34×10^1   0.669     7.48×10^1   0.692
Science        Maxide    6.21×10^0   0.513     7.67×10^0   0.543     1.02×10^1   0.568
               MC-b      6.80×10^4   0.395     3.94×10^4   0.403     2.06×10^4   0.394
               MC-1      8.50×10^4   0.411     4.97×10^4   0.470     2.52×10^4   0.414
               BR-R      2.93×10^1   0.506     5.06×10^1   0.535     9.30×10^1   0.557
               BR-L      3.60×10^1   0.506     5.91×10^1   0.535     1.04×10^2   0.557
Social         Maxide    7.18×10^0   0.721     9.09×10^0   0.748     1.21×10^1   0.754
               MC-b      1.71×10^5   0.582     9.65×10^4   0.595     4.56×10^4   0.594
               MC-1      2.22×10^5   0.602     1.17×10^5   0.625     5.41×10^4   0.604
               BR-R      3.09×10^1   0.717     5.35×10^1   0.746     9.74×10^1   0.751
               BR-L      3.71×10^1   0.717     6.00×10^1   0.746     1.02×10^2   0.751
Society        Maxide    3.69×10^0   0.580     4.54×10^0   0.594     5.80×10^0   0.616
               MC-b      4.75×10^4   0.550     2.93×10^4   0.545     1.62×10^4   0.552
               MC-1      4.14×10^4   0.550     3.65×10^4   0.561     2.04×10^4   0.590
               BR-R      2.50×10^1   0.571     4.54×10^1   0.590     8.59×10^1   0.600
               BR-L      2.84×10^1   0.572     4.92×10^1   0.590     9.58×10^1   0.601
NUS-WIDE       Maxide    1.47×10^3   0.513     2.10×10^3   0.519     3.53×10^3   0.522
               BR-L      1.24×10^2   0.329     2.38×10^2   0.398     4.81×10^2   0.466
Flickr         Maxide    1.33×10^4   0.124     1.89×10^4   0.124     2.67×10^4   0.124
               BR-L      2.48×10^4   0.064     4.74×10^4   0.074     1.11×10^5   0.077
[16] Y. Fang and L. Si. Matrix co-factorization for recommendation with rich side information and implicit
feedback. In Proceedings of the 2nd International Workshop on Information Heterogeneity and Fusion in
Recommender Systems, 2011.
[17] A. Goldberg, X. Zhu, B. Recht, J.-M. Xu, and R. Nowak. Transduction with matrix completion: Three
birds with one stone. In NIPS, 2010.
[18] Y. Guo and D. Schuurmans. Semi-supervised multi-label classification - a simultaneous large-margin,
subspace learning approach. In ECML, 2012.
[19] P. Jain, P. Netrapalli, and S. Sanghavi. Provable matrix sensing using alternating minimization. In NIPS
Workshop on Optimization for Machine Learning, 2012.
[20] A. Jalali, Y. Chen, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization.
In ICML, 2011.
[21] S. Ji, L. Tang, S. Yu, and J. Ye. Extracting shared subspace for multi-label classification. In KDD, 2008.
[22] S. Ji and J. Ye. An accelerated gradient method for trace norm minimization. In ICML, 2009.
[23] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE TIT,
56(6):2980-2998, 2010.
[24] X. Kong, M. Ng, and Z.-H. Zhou. Transductive multi-label learning via label set propagation. IEEE
TKDE, 25(3):704-719, 2013.
[25] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of
corrupted low-rank matrices. Technical report, UIUC, 2009.
[26] Y. Liu, R. Jin, and L. Yang. Semi-supervised multi-label learning by constrained non-negative matrix
factorization. In AAAI, 2006.
[27] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank
minimization. Mathematical Programming, 128(1-2):321-353, 2011.
[28] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large
incomplete matrices. JMLR, 11:2287-2322, 2010.
[29] A. Menon, K. Chitrapura, S. Garg, D. Agarwal, and N. Kota. Response prediction using collaborative
filtering with hierarchies and side-information. In KDD, 2011.
[30] S. Negahban and M. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional
scaling. Annals of Statistics, 39(2):1069-1097, 2011.
[31] G. Obozinski, B. Taskar, and M. Jordan. Joint covariate selection and joint subspace selection for multiple
classification problems. Statistics and Computing, 20(2):231-252, 2010.
[32] W. Pan, E. Xiang, N. Liu, and Q. Yang. Transfer learning in collaborative filtering for sparsity reduction.
In AAAI, 2010.
[33] I. Porteous, A. Asuncion, and M. Welling. Bayesian matrix factorization with side information and
Dirichlet process mixtures. In AAAI, 2010.
[34] B. Recht. A simpler approach to matrix completion. JMLR, 12:3413-3430, 2011.
[35] J. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In
ICML, 2005.
[36] A. Rhode and A. Tsybakov. Estimation of high dimensional low rank matrices. Annals of Statistics,
39(2):887-930, 2011.
[37] N. Srebro, J. D. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In NIPS, 2005.
[38] Y.-Y. Sun, Y. Zhang, and Z.-H. Zhou. Multi-label learning with weak label. In AAAI, 2010.
[39] K.-C. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized
linear least squares problems. Pacific Journal of Optimization, 2010.
[40] N. Ueda and K. Saito. Parametric mixture models for multi-labeled text. In NIPS, 2002.
[41] K. Weinberger and L. Saul. Unsupervised learning of image manifolds by semidefinite programming.
IJCV, 70(1):77-90, 2006.
[42] J. Yi, T. Yang, R. Jin, A. Jain, and M. Mahdavi. Robust ensemble clustering by matrix completion. In
ICDM, 2012.
[43] G. Yu, C. Domeniconi, H. Rangwala, G. Zhang, and Z. Yu. Transductive multi-label ensemble
classification for protein function prediction. In KDD, 2012.
[44] M.-L. Zhang and Z.-H. Zhou. A review on multi-label learning algorithms. IEEE TKDE, in press.
[45] J. Zhuang and S. Hoi. A two-view learning approach for image tag ranking. In WSDM, 2011.
| 4999 |@word trial:1 kong:1 briefly:1 norm:7 nd:1 decomposition:1 tr:4 liblinear:1 reduction:2 liu:2 tist:1 outperforms:1 existing:2 current:1 recovered:1 com:3 luo:2 si:1 yet:1 toh:1 kdd:4 eleven:4 update:2 implying:1 prohibitive:1 selected:1 item:1 ith:2 chua:1 cse:1 kasiviswanathan:1 simpler:1 zhang:3 mathematical:1 ijcv:1 combine:1 lansing:1 theoretically:1 pairwise:1 alm:13 ra:26 cand:3 examine:1 uiuc:1 multi:39 v1t:2 ara:1 wsdm:1 zhouzh:1 zhi:1 cpu:2 becomes:1 provided:6 project:1 moreover:1 notation:1 cabral:1 cm:1 azb:3 developed:5 differing:1 guarantee:6 pseudo:1 avron:1 ti:2 rm:4 classifier:2 uk:1 fpca:5 positive:3 nju:1 engineering:1 svt:6 before:1 limit:1 aiming:1 nsfc:1 v2t:2 abuse:1 ap:5 rhode:1 bird:1 therein:1 china:1 examined:2 garg:1 challenging:1 co:1 limited:1 factorization:10 bi:2 range:1 averaged:2 unique:1 practical:1 testing:1 n000141210431:1 pontil:1 saito:1 empirical:3 significantly:8 vert:1 word:2 confidence:1 protein:1 nanjing:2 onto:1 unlabeled:1 get:1 operator:2 eriksson:1 storage:3 maxide:31 selection:2 deterministic:2 lagrangian:1 missing:5 kale:1 independently:3 convex:8 formulate:1 boutell:1 shen:2 recovery:13 spanned:4 fill:1 orthonormal:1 nuclear:2 fang:1 oh:1 proving:1 searching:1 limiting:1 target:11 hierarchy:1 user:9 exact:2 losing:1 programming:2 goldberg:1 expensive:1 particularly:1 updating:1 recognition:1 labeled:2 database:1 observed:30 taskar:1 solved:1 verifying:1 sun:1 complexity:8 solving:2 algo:3 tit:2 efficiency:2 joint:2 represented:2 train:2 jain:3 fast:1 effective:1 describe:1 zhou1:1 abernethy:1 cacm:1 quite:1 whose:1 supplementary:3 solve:3 larger:1 cvpr:1 balzano:1 otherwise:1 rennie:2 statistic:3 transductive:19 advantage:1 rr:1 cai:1 interaction:1 relevant:2 description:1 scalability:1 exploiting:2 convergence:3 requirement:2 perfect:6 adam:1 help:1 develop:3 completion:49 measured:2 keywords:1 netrapalli:1 recovering:6 predicted:1 implemented:1 implies:2 indicate:1 bzb:2 attribute:1 torre:1 stochastic:1 
material:3 hoi:1 education:2 require:2 assign:1 civr:1 really:1 preliminary:1 rong:1 exploring:1 hold:1 predict:3 bj:2 week:2 optimizer:1 early:1 achieves:1 vary:1 estimation:3 bag:1 label:78 expose:1 largest:1 create:4 minimization:4 gaussian:2 aim:2 lamda:1 zhou:3 ej:1 jaakkola:1 focus:2 vk:1 improvement:4 rank:17 contrast:5 baseline:7 tao:1 issue:3 among:4 classification:8 yahoo:3 art:3 special:1 constrained:1 equal:2 evgeniou:2 ng:1 represents:1 broad:1 yu:3 icml:4 unsupervised:1 future:1 others:1 report:5 sanghavi:2 few:2 randomly:2 simultaneously:1 national:2 individual:2 ab:2 zheng:1 evaluation:1 recreation:2 mixture:2 semidefinite:1 implication:1 accurate:1 bregman:2 nowak:2 partial:2 incomplete:17 theoretical:5 instance:22 column:6 assignment:16 cost:3 entry:33 subset:2 varies:2 corrupted:1 proximal:1 synthetic:2 recht:3 international:1 siam:1 negahban:1 probabilistic:1 linux:1 squared:1 aaai:4 nm:1 leading:2 li:1 mahdavi:1 includes:2 xu1:1 explicitly:4 ranking:1 try:1 jason:1 view:1 recover:7 asuncion:1 defer:1 collaborative:7 contribution:1 square:2 bolded:2 efficiently:4 ensemble:2 correspond:1 weak:2 bayesian:1 accurately:2 none:3 mc:35 za:2 simultaneous:1 flickr:5 against:2 involved:1 associated:2 mi:2 proof:2 sampled:4 treatment:1 popular:1 dimensionality:2 improves:1 organized:1 cj:5 mlj:1 miao:1 higher:1 supervised:3 follow:1 costeira:1 response:1 generality:2 implicit:1 correlation:2 hand:1 receives:1 web:2 ei:2 keshavan:1 nonlinear:1 propagation:1 chitrapura:1 aj:1 menon:1 facilitate:1 name:1 ye:2 brown:1 multiplier:1 regularization:5 assigned:1 alternating:1 q0:4 laboratory:1 goldfarb:1 deal:4 assistance:2 width:1 hong:1 ln2:5 stone:1 complete:5 demonstrate:2 cb327903:1 delivers:1 image:8 wise:2 novel:2 fi:1 common:1 ji:2 synthesized:2 measurement:1 refer:1 significant:3 ai:2 smoothness:1 rd:1 unconstrained:1 consistency:1 etc:1 pu:2 rra:3 recent:4 involvement:1 store:1 browse:1 server:1 binary:4 onr:1 yi:1 exploited:1 determine:1 semi:3 ii:1 
full:2 multiple:3 reduces:2 technical:3 faster:1 af:1 cross:1 bach:2 lin:2 divided:1 icdm:1 award:1 a1:1 prediction:6 regression:3 essentially:1 metric:2 iteration:3 kernel:3 agarwal:2 achieved:1 c1:1 proposal:2 addition:4 whereas:1 addressed:1 singular:9 leaving:1 rest:2 unlike:4 sangwoon:1 effectiveness:2 jordan:1 extracting:1 aza:2 near:3 yang:3 finish:2 hastie:1 perfectly:7 reduce:4 idea:1 cn:1 br:32 assist:1 gb:1 effort:1 matlab:1 dramatically:1 detailed:1 involve:1 tsybakov:1 extensively:1 simplest:1 reduced:2 generate:2 specifies:1 exist:1 percentage:1 singapore:1 delta:1 tibshirani:1 rb:26 tkde:2 write:1 promise:1 key:2 drawn:2 libsvm:2 dahl:1 v1:1 graph:1 subgradient:1 relaxation:1 run:1 inverse:1 wu:1 ueda:1 coherence:2 summarizes:1 scaling:1 comparable:1 completing:2 guaranteed:2 annual:2 kronecker:1 scene:1 software:1 kota:1 tag:1 u1:1 min:3 speedup:2 department:1 pacific:1 combination:1 smaller:3 pan:1 ur:1 making:2 ln:10 computationally:5 needed:1 available:5 operation:1 observe:2 appropriate:2 spectral:2 weinberger:1 original:1 assumes:1 dirichlet:1 clustering:4 include:1 top:1 denotes:1 graphical:1 log2:4 running:2 remaining:1 entertainment:1 porteous:1 exploit:4 murray:1 society:1 objective:4 already:1 strategy:1 parametric:1 dependence:4 diagonal:2 jalali:1 gradient:2 kth:1 subspace:8 link:3 unable:1 manifold:1 provable:1 assuming:1 besides:4 code:1 index:1 minimizing:1 thirteen:1 potentially:1 trace:5 negative:3 implementation:2 unknown:1 bucak:1 recommender:1 observation:2 benchmark:1 descent:1 jin:3 ecml:1 defining:1 heterogeneity:1 rn:14 rating:4 introduced:1 cast:1 required:3 namely:1 textual:1 hour:1 nu:6 nip:5 address:1 able:4 below:1 pattern:3 usually:1 sparsity:1 challenge:1 jin2:1 program:1 including:2 max:10 memory:2 brb:1 power:1 wainwright:1 ranked:1 business:2 regularized:1 advanced:1 mn:1 zhu:1 improve:1 zhuang:1 technology:1 library:1 concludes:1 health:1 text:1 review:2 acknowledgement:1 relative:7 xiang:1 lacking:1 loss:3 fully:1 
sublinear:1 impressively:1 limitation:2 filtering:6 srebro:2 validation:1 sufficient:1 thresholding:3 share:1 row:3 repeat:2 supported:1 side:45 bias:1 understand:1 wide:6 saul:1 benefit:1 ghz:1 feedback:1 dimension:3 xn:1 world:2 stand:1 default:1 rich:1 author:2 social:2 welling:1 approximate:1 keep:2 uai:1 b1:1 xi:3 msu:1 latent:3 iterative:2 tailed:1 table:5 promising:1 learn:2 transfer:1 robust:1 rongjin:1 mazumder:1 schuurmans:1 alg:1 diag:1 main:5 montanari:1 big:1 noise:1 x1:1 augmented:2 xu:2 referred:3 transduction:1 slow:1 vr:1 precision:3 pv:2 lie:7 jmlr:4 rangwala:1 tang:2 z0:5 theorem:2 covariate:1 showing:1 sensing:1 list:1 explored:1 svm:1 concern:1 fusion:1 incorporating:1 workshop:2 effectively:1 corr:1 margin:4 chen:4 michigan:1 logarithmic:1 explore:2 visual:1 bernardino:1 lagrange:1 partially:2 recommendation:1 sindhwani:1 hua:1 chang:1 corresponds:1 acm:1 ma:2 obozinski:1 viewed:1 identity:1 goal:3 rbf:1 price:1 shared:1 included:1 infinite:1 except:1 uniformly:2 zb:2 domeniconi:1 e:3 east:1 support:1 guo:1 relevance:2 accelerated:2 evaluate:3 argyriou:1 |
4,418 | 5 | 485
TOWARDS AN ORGANIZING PRINCIPLE FOR
A LAYERED PERCEPTUAL NETWORK
Ralph Linsker
IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598
Abstract
An information-theoretic optimization principle is proposed for the development
of each processing stage of a multilayered perceptual network. This principle of
"maximum information preservation" states that the signal transformation that is to be
realized at each stage is one that maximizes the information that the output signal values
(from that stage) convey about the input signals values (to that stage), subject to certain
constraints and in the presence of processing noise. The quantity being maximized is a
Shannon information rate. I provide motivation for this principle and -- for some simple
model cases -- derive some of its consequences, discuss an algorithmic implementation,
and show how the principle may lead to biologically relevant neural architectural
features such as topographic maps, map distortions, orientation selectivity, and
extraction of spatial and temporal signal correlations. A possible connection between
this information-theoretic principle and a principle of minimum entropy production in
nonequilibrium thermodynamics is suggested.
Introduction
This paper describes some properties of a proposed information-theoretic
organizing principle for the development of a layered perceptual network. The purpose
of this paper is to provide an intuitive and qualitative understanding of how the principle
leads to specific feature-analyzing properties and signal transformations in some simple
model cases. More detailed analysis is required in order to apply the principle to cases
involving more realistic patterns of signaling activity as well as specific constraints on
network connectivity.
This section gives a brief summary of the results that motivated the formulation
of the organizing principle, which I call the principle of "maximum information
preservation." In later sections the principle is stated and its consequences studied.
In previous work [1] I analyzed the development of a layered network of model cells
with feedforward connections whose strengths change in accordance with a Hebb-type
synaptic modification rule. I found that this development process can produce cells that
are selectively responsive to certain input features, and that these feature-analyzing
properties become progressively more sophisticated as one proceeds to deeper cell
layers. These properties include the analysis of contrast and of edge orientation, and
are qualitatively similar to properties observed in the first several layers of the
mammalian visual pathway [2].
Why does this happen? Does a Hebb-type algorithm (which adjusts synaptic
strengths depending upon correlations among signaling activities [3]) cause a developing
perceptual network to optimize some property that is deeply connected with the mature
network's functioning as an information processing system?
© American Institute of Physics 1988

Further analysis [4,5] has shown that a suitable Hebb-type rule causes a
linear-response cell in a layered feedforward network (without lateral connections) to
develop so that the statistical variance of its output activity (in response to an ensemble
of inputs from the previous layer) is maximized, subject to certain constraints. The
mature cell thus performs an operation similar to principal component analysis (PCA),
an approach used in statistics to expose regularities (e.g., clustering) present in
high-dimensional input data. (Oja [6] had earlier demonstrated a particular form of
Hebb-type rule that produces a model cell that implements PCA exactly.)
Furthermore, given a linear device that transforms inputs into an output, and given
any particular output value, one can use optimal estimation theory to make a "best
estimate" of the input values that gave rise to that output. Of all such devices, I have
found that an appropriate Hebb-type rule generates that device for which this "best
estimate" comes closest to matching the input values [4,5]. Under certain conditions, such
a cell has the property that its output preserves the maximum amount of information
about its input values [5].
Maximum Information Preservation
The above results have suggested a possible organizing principle for the
development of each layer of a multilayered perceptual network [5]. The principle can be
applied even if the cells of the network respond to their inputs in a nonlinear fashion,
and even if lateral as well as feedforward connections are present. (Feedback from later
to earlier layers, however, is absent from this formulation.) This principle of "maximum
information preservation" states that for a layer of cells L that is connected to and
provides input to another layer M, the connections should develop so that the
transformation of signals from L to M (in the presence of processing noise) has the
property that the set of output values M conveys the maximum amount of information
about the input values L, subject to various constraints on, e.g., the range of lateral
connections and the processing power of each cell. The statistical properties of the
ensemble of inputs L are assumed stationary, and the particular L-to-M transformation
that achieves this maximization depends on those statistical properties. The quantity
being maximized is a Shannon information rate [7].
An equivalent statement of this principle is: The L-to-M transformation is chosen
so as to minimize the amount of information that would be conveyed by the input values
L to someone who already knows the output values M.
We shall regard the set of input signal values L (at a given time) as an input
"message"; the message is processed to give an output message M. Each message is in
general a set of real-valued signal activities. Because noise is introduced during the
processing, a given input message may generate any of a range of different output
messages when processed by the same set of connections.
The Shannon information rate (i.e., the average information transmitted from L
to M per message) is [7]

    R = Σ_L Σ_M P(L,M) log [P(L,M) / (P(L) P(M))].    (1)
For a discrete message space, peL) [resp. P(M)] is the probability of the input (resp.
output) message being L (resp. M), and P(L,M) is the joint probability of the input
being L and the output being M. [For a continuous message space, probabilities are
replaced by probability densities, and sums (over states) by integrals.] This rate can be
written as

    R = I_L − I_{L|M},    (2)

where

    I_L ≡ − Σ_L P(L) log P(L)    (3)

is the average information conveyed by message L, and

    I_{L|M} ≡ − Σ_M P(M) Σ_L P(L|M) log P(L|M)    (4)

is the average information conveyed by message L to someone who already knows M.
Since I_L is fixed by the properties of the input ensemble, maximizing R means
minimizing I_{L|M}, as stated above.
The information rate R can also be written as
    R = I_M − I_{M|L},    (5)

where I_M and I_{M|L} are defined by interchanging L and M in Eqns. 3 and 4. This form is
heuristically useful, since it suggests that one can attempt to make R large by (if
possible) simultaneously making I_M large and I_{M|L} small. The term I_M is largest when
each message M occurs with equal probability. The term I_{M|L} is smallest when each L
is transformed into a unique M, and more generally is made small by "sharpening" the
P(M|L) distribution, so that for each L, P(M|L) is near zero except for a small set of
messages M.
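These mutual-information identities can be checked numerically on a small discrete channel. The sketch below is our own (natural logarithms assumed) and confirms that the rate R of Eqn. 1 equals both I_L − I_{L|M} and I_M − I_{M|L}.

```python
import numpy as np

def rate(P):
    """Shannon rate R = sum_{L,M} P(L,M) log[P(L,M) / (P(L) P(M))] (Eqn. 1)."""
    PL = P.sum(axis=1, keepdims=True)   # marginal P(L), column vector
    PM = P.sum(axis=0, keepdims=True)   # marginal P(M), row vector
    nz = P > 0
    return float((P[nz] * np.log(P[nz] / (PL @ PM)[nz])).sum())

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

P = np.array([[0.4, 0.1],               # a toy joint distribution P(L, M)
              [0.1, 0.4]])
I_L = entropy(P.sum(axis=1))            # average information in L (Eqn. 3)
I_M = entropy(P.sum(axis=0))
I_L_given_M = entropy(P.ravel()) - I_M  # H(L,M) - H(M), i.e. Eqn. 4
I_M_given_L = entropy(P.ravel()) - I_L
print(np.isclose(rate(P), I_L - I_L_given_M))  # True
print(np.isclose(rate(P), I_M - I_M_given_L))  # True
```

Both differences reduce to the symmetric mutual information between L and M, which is why Eqns. 2 and 5 describe the same quantity.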
How can one gain insight into biologically relevant properties of the L-to-M
transformation that may follow from the principle of maximum information preservation
(which we also call the "infomax" principle)? In a network, this L-to-M transformation
may be a function of the values of one or a few variables (such as a connection strength)
for each of the allowed connections between and within layers, and for each cell. The
search space is quite large, particularly from the standpoint of gaining an intuitive or
qualitative understanding of network behavior. We shall therefore consider a simple
model in which the dimensionalities of the Land M signal spaces are greatly reduced,
yet one for which the infomax analysis exhibits features that may also be important
under more general conditions relevant to biological and synthetic network
development.
The next four sections are organized as follows. (i) A model is introduced in
which the Land M messages, and the L-to-M transformation, have simple forms. The
infomax principle is found to be satisfied when some simple geometric conditions (on
the transformation) are met. (ii) I relate this model to the analysis of signal processing
and noise in an interconnection network. The formation of topographic maps is
discussed. (iii) The model is applied to simplified versions of biologically relevant
problems, such as the emergence of orientation selectivity. (iv) I show that the main
properties of the infomax principle for this model can be realized by certain local
algorithms that have been proposed to generate topographic maps using lateral
interactions.
488
A Simple Geometric Model
In this model, each input message L is described by a point in a low-dimensional
vector space, and the output message M is one of a number of discrete states. For
definiteness, we will take the L space to be two-dimensional (the extension to higher
dimensionality is straightforward). The L → M transformation consists of two steps.
(i) A noise process alters L to a message L' lying within a neighborhood of radius v
centered on L. (ii) The altered message L' is mapped deterministically onto one of the
output messages M.
A given L' - M mapping corresponds to a partitioning of the L space into regions
labeled by the output states M. (We do not exclude a priori the possibility that multiple
disjoint regions may be labeled by the same M.) Let A denote the total area of the L
state space. For each M, let A (M) denote the area of L space that is labeled by M. Let
sCM) denote the total border length that the region(s) labeled M share with regions of
unlike M -label. A point L lying within distance v of a border can be mapped onto either
M-value (because of the noise process L → L'). Call this a "borderline" L. A point L
that is more than a distance v from every border can only be mapped onto the M-value
of the region containing it.
Suppose v is sufficiently small that (for the partitionings of interest) the area
occupied by borderline L states is small compared to the total area of the L space.
Consider first the case in which P(L) is uniform over L. Then the information rate R
(using Eqn. 5) is given approximately (through terms of order v) by

R = -Σ_M [A(M)/A] log[A(M)/A] - (γv/A) Σ_M s(M).    (6)
To see this, note that P(M) = A(M)/A and that P(M|L) log P(M|L) is zero except for
borderline L (since 0 log 0 = 1 log 1 = 0). Here γ is a positive number whose value
depends upon the details of the noise process, which determines P(M|L) for borderline
L as a function of distance from the border.
For small v (low noise) the first term (I_M) on the RHS of Eqn. 6 dominates. It is
maximized when the A(M) [and hence the P(M)] values are equal for all M. The second
term (with its minus sign), which equals -I_{M|L}, is maximized when the sum of the
border lengths of all M regions is minimized. This corresponds to "sharpening" the
P(M|L) distribution in our earlier, more general, discussion. This suggests that the
infomax solution is obtained by partitioning the L space into M-regions (one for each
M value) that are of substantially equal area, with each M-region tending to have
near-minimum border length.
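A minimal numerical check of Eqn. 6 (the value γv = 0.1 and the example partitions are illustrative choices, not from the text): for four regions tiling a unit square, equal areas beat unequal ones through the entropy term, and compact quadrants beat elongated strips through the border term:

```python
import numpy as np

def eq6_rate(areas, borders, gamma_v):
    """Low-noise approximation of Eqn. 6:
    R ≈ -Σ_M (A(M)/A) log(A(M)/A) - (γv/A) Σ_M s(M)."""
    areas = np.asarray(areas, float)
    A = areas.sum()
    p = areas / A
    return -np.sum(p * np.log(p)) - (gamma_v / A) * np.sum(borders)

# Unit square split into four regions; borders list s(M) per region.
quadrants = eq6_rate([0.25] * 4, [1, 1, 1, 1], 0.1)   # 2x2 tiling: each region shares border 1
strips = eq6_rate([0.25] * 4, [1, 2, 2, 1], 0.1)      # four thin vertical strips
unequal = eq6_rate([0.4, 0.3, 0.2, 0.1], [1, 1, 1, 1], 0.1)

assert quadrants > strips and quadrants > unequal
```

The first comparison isolates the border-length term (areas equal), the second isolates the entropy term (borders equal), matching the equal-area, minimum-border conclusion above.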
Although this simple analysis applies to the low-noise case, it is plausible that even
when v is comparable to the spatial scale of the M regions, infomax will favor making
the M regions have approximately the same extent in all directions (rather than be
elongated), in order to "sharpen" P(M|L) and reduce the probability of the noise
process mapping L onto many different M states.
What if P(L) is nonuniform? Then the same result (equal areas, minimum border)
is obtained except that both the area and border-length elements must now be weighted
by the local value of P(L). Therefore the infomax principle tends to produce maps in
which greater representation in the output space is given to regions of the input signal
space that are activated more frequently.
To see how lateral interactions within the M layer can affect these results, let us
suppose that the L → M mapping has three, not two, process steps:
L → L' → M' → M, where the first two steps are as above, and the third step changes the
output M' into any of a number of states M (which by definition comprise the
"M-neighborhood" of M'). We consider the case in which this M-neighborhood relation
is symmetric.
This type of "lateral interaction" between M states causes the infomax principle
to favor solutions for which M regions sharing a border in L space are M-neighbors in
the sense defined. For a simple example in which each state M has n M-neighbors
(including itself), and each M-neighbor has an equal chance of being the final state
(given M), infomax tends to favor each M-neighborhood having similar extent in all
directions (in L space).
Relation Between the Geometric Model and Network Properties
The previous section dealt with certain classes of transformations from one
message space to another, and made no specific reference to the implementation of these
transformations by an interconnected network of processor cells. Here we show how
some of the features discussed in the previous section are related to network properties.
For simplicity suppose that we have a two-dimensional layer of uniformly
distributed cells, and that the signal activity of each cell at any given time is either 1
(active) or 0 (quiet). We need to specify the ensemble of input patterns. Let us first
consider a simple case in which each pattern consists of a disk of activity of fixed radius,
but arbitrary center position, against a quiet background. In this case the pattern is fully
defined by specifying the coordinates of the disk center. In a two-dimensional L state
space (previous section), each pattern would be represented by a point having those
coordinates.
Now suppose that each input pattern consists not of a sharply defined disk of
activity, but of a "fuzzy" disk whose boundary (and center position) are not sharply
defined. [Such a pattern could be generated by choosing (from a specified distribution)
a position Xc as the nominal disk center, then setting the activity of the cell at position
X to 1 with a probability that decreases with distance I x - Xc I . ] Any such pattern can
be described by giving the coordinates of the "center of activity" along with many other
values describing (for example) various moments of the activity pattern relative to the
center.
For the noise process L → L' we suppose that the activity of an L cell can be
"misread" (by the cells of the M layer) with some probability. This set of distorted
activity values is the "message" L'. We then suppose that the set of output activities M
is a deterministic function of L'.
We have constructed a situation in which (for an appropriate choice of noise level)
two of the dimensions of the L state space -- namely, those defined by the disk center
coordinates -- have large variance compared to the variance induced by the noise
process, while the other dimensions have variance comparable to that induced by noise.
In other words, the center position of a pattern is changed only a small amount by the
noise process (compared to the typical difference between the center positions of two
patterns), whereas the values of the other attributes of an input pattern differ as much
from their noise-altered values as two typical input patterns differ from each other.
(Those attributes are "lost in the noise. ")
Since the distance between L states in our geometric model (previous section)
corresponds to the likelihood of one L state being changed into the other by the noise
process, we can heuristically regard the L state space (for the present example) as a
"slab" that is elongated in two dimensions and very thin in all other dimensions. (In
general this space could have a much more complicated topology, and the noise process
which we here treat as defining a simple metric structure on the L state space need not
do so. These complications are beyond the scope of the present discussion.)
This example, while simple. illustrates a feature that is key to understanding the
operation of the infomax principle: The character of the ensemble statistics and of the
noise process jointly determine which attributes of the input pattern are statistically
most significant; that is, have largest variance relative to the variance induced by noise.
We shall see that the infomax principle selects a number of these most significant
attributes to be encoded by the L → M transformation.
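As an illustrative aside (using a Gaussian-channel stand-in rather than the discrete model of the text, with made-up variance values), an attribute with signal variance σ² against noise variance σ_n² can carry about ½·log2(1 + σ²/σ_n²) bits, so the two high-variance "center" coordinates dominate the information budget while the remaining attributes are effectively lost in the noise:

```python
import numpy as np

noise_var = 1.0
# Two "long" (high-variance) dimensions of the slab, then three attributes
# whose variance is comparable to that induced by noise.
signal_vars = np.array([100.0, 90.0, 1.2, 0.9, 1.1])

# Bits per attribute for a Gaussian channel (an illustrative stand-in).
bits = 0.5 * np.log2(1.0 + signal_vars / noise_var)
print(bits.round(2))
```

The first two entries come out above 3 bits each; the rest stay below 1 bit, mirroring the "statistically most significant attributes" argument.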
We turn now to a description of the output state space M. We shall assume that
this space is also of low dimensionality. For example, each M pattern may also be a disk
of activity having a center defined within some tolerance. A discrete set of discriminable
center-coordinate values can then be used as the M-region "labels" in our geometric
model.
Restricting the form of the output activity in this particular way restricts us to
considering positional encodings L → M, rather than encodings that make use of the
shape of the output pattern, its detailed activity values, etc. However, this restriction
on the form of the output does not determine which features of the input patterns are
to be encoded, nor whether or not a topographic (neighbor-preserving) mapping is to
be used. These properties will be seen to emerge from the operation of the infomax
principle.
In the previous section we saw that the infomax principle will tend to lead to a
partitioning of the L space into M regions having equal areas [if P(L) is uniform in the
coordinates of the L disk center] and minimum border length. For the present case this
means that the M regions will tend to "tile" the two long dimensions of the L state space
"slab," and that a single M value will represent all points in L space that differ only in
their low-variance coordinates. If P(L) is nonuniform, then the area of the M region
at L will tend to be inversely proportional to P(L). Furthermore, if there are local lateral
connections between M cells, then (depending upon the particular form of such
interaction) M states corresponding to nearby localized regions of layer-M activity can
be M-neighbors in the sense of the previous section. In this case the mapping from the
two high-variance coordinates of L space to M space will tend to be topographic.
Examples: Orientation Selectivity and Temporal Feature Maps
The simple example in the previous section illustrates how infomax can lead to
topographic maps, and to map distortions [which provide greater M-space
representation for regions of L having large P(L)]. Let us now consider a case in which
information about input features is positionally encoded in the output layer as a result
of the infomax principle.
Consider a model case in which an ensemble of patterns is presented to the input
layer L. Each pattern consists of a rectangular bar of activity (of fixed length and width)
against a quiet background. The bar's center position and orientation are chosen for
each pattern from uniform distributions over some spatial interval for the position, and
over all orientation angles (i.e., from 0° to 180°). The bar need not be sharply defined,
but can be "fuzzy" in the sense described above. We assume, however, that all
properties that distinguish different patterns of the ensemble -- except for center
position and orientation -- are "lost in the noise" in the sense we discussed.
To simplify the representation of the solution, we further assume that only one
coordinate is needed to describe the center position of the bar for the given ensemble.
For example, the ensemble could consist of bar patterns all of which have the same y
coordinate of center position, but differ in their x coordinate and in orientation θ.
We can then represent each input state by a point in a rectangle (the L state space
defined in a previous section) whose abscissa is the center-position coordinate x and
whose ordinate is the angle θ. The horizontal sides of this rectangle are identified with
each other, since orientations of 0° and 180° are identical. (The interior of the
rectangle can thus be thought of as the surface of a horizontal cylinder.)
The number N_x of different x positions that are discriminable is given by the range
of x values in the input ensemble divided by the tolerance with which x can be measured
(given the noise process L → L'); similarly for N_θ. The relative lengths Δx and Δθ of the
sides of the L state space rectangle are given by Δx/Δθ = N_x/N_θ. We discuss below the
case in which N_x >> N_θ; if N_θ were >> N_x, the roles of x and θ in the resulting mappings
would be reversed.
There is one complicating feature that should be noted, although in the interest
of clarity we will not include it in the present analysis. Two horizontal bar patterns that
are displaced by a horizontal distance that is small compared with the bar length, are
more likely to be rendered indiscriminable by the noise process than are two vertical bar
patterns that are displaced by the same horizontal distance (which may be large
compared with the bar's width). The Hamming distance, or number of binary activity
values that need to be altered to change one such pattern into the other, is greater in the
latter case than in the former. Therefore, the distance in L state space between the two
states should be greater in the latter case.

Figure 1. Orientation Selectivity in a Simple Model: As the input domain size (see
text) is reduced [from (a) upper left, to (b) upper right, to (c) lower left figure],
infomax favors the emergence of an orientation-selective L → M mapping. (d) Lower
right figure shows a solution obtained by applying Kohonen's relaxation algorithm with
50 M-points (shown as dots) to this mapping problem. [Panel heading: UNORIENTED
RECEPTIVE FIELDS.]

This leads to a "warped" rather than simple
rectangular state space. We ignore this effect here, but it must be taken into account in
a fuller treatment of the emergence of orientation selectivity.
Consider now an L → M transformation that consists of the three-step process
(discussed above): (i) noise-induced L → L'; (ii) deterministic L' → M'; (iii)
lateral-interaction-induced M' → M. Step (ii) maps the two-dimensional L state space
of points (x, θ) onto a one-dimensional M state space. For the present discussion, we
consider L' → M' maps satisfying the following Ansatz: Points corresponding to the
M states are spaced uniformly, and in topographic order, along a helical line in L state
space (which we recall is represented by the surface of a horizontal cylinder). The pitch
of the helix (or the slope dθ/dx) remains to be determined by the infomax principle.
Each M-neighborhood of M states (previous section) then corresponds to an interval
on such a helix. A state L' is mapped onto a state in a particular M-neighborhood if L'
is closer (in L space) to the corresponding interval of the helix than to any other portion
of the helix. We call this set of L states (for an M-neighborhood centered on M ) the
"input domain" of M. It has rectangular shape and lies on the cylindrical surface of the
L space.
We have seen (previous sections) that infomax tends to produce maps having (i)
equal M-region areas, (ii) topographic organization, and (iii) an input domain (for each
M-neighborhood) that has similar extent in all directions (in L space). Our choice of
Ansatz enforces (i) and (ii) explicitly. Criterion (iii) is satisfied by choosing dθ/dx such
that the input domain is square (for a given M-neighborhood size).
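The square-domain condition can be made concrete with a small geometric sketch (this is my own reading of the Ansatz, with assumed unit conventions; w and d_theta are hypothetical quantities standing for the M-neighborhood's arc length along the helix and the θ circumference). Unrolled, the helix becomes a family of parallel lines with vertical spacing Δθ; adjacent turns then lie Δθ/√(1+s²) apart for slope s, so a neighborhood of arc length w yields a square domain when s = √((Δθ/w)² − 1):

```python
import numpy as np

def square_domain_slope(w, d_theta=1.0):
    """Helix slope s = dθ/dx making the input domain square (a sketch
    of the Ansatz geometry; w is the M-neighborhood arc length along the
    helix, d_theta the θ circumference -- both in assumed units).

    Unrolled, adjacent helix turns are parallel lines a distance
    d_theta / sqrt(1 + s²) apart, so the w × gap domain is square when
    s = sqrt((d_theta / w)² - 1)."""
    if w >= d_theta:          # large neighborhoods: Fig. 1a regime, s = 0
        return 0.0
    return float(np.sqrt((d_theta / w) ** 2 - 1.0))

for w in (2.0, 0.5, 0.25):    # shrinking M-neighborhood size
    print(w, square_domain_slope(w))   # slope grows as the neighborhood shrinks
```

The slope is zero for large neighborhoods and increases as the neighborhood shrinks, tracking the progression from Fig. 1a to 1b to 1c.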
Figure 1a (having dθ/dx = 0) shows a map in which the output M encodes only
information about bar center position x, and is independent of bar orientation θ. The
size of the M-neighborhood is relatively large in this case. The input domain of the state
M denoted by the 'x' is shown enclosed by dotted lines. (The particular θ value at which
we chose to draw the M line in Fig. 1a is irrelevant.) For this M-neighborhood size, the
length of the border of the input domain is as small as it can be.
As the M-neighborhood size is reduced, the dotted lines move closer together. A
vertically oblong input domain (which would result if we kept dθ/dx = 0) would not
satisfy the infomax criterion. The helix for which the input domain is square (for this
smaller choice of M-neighborhood size) is shown in Fig. 1b. The M states for this
solution encode information about bar orientation as well as center position. If each M
state corresponds to a localized output activity pattern centered at some position in a
one-dimensional array of M cells, then this solution corresponds to orientation-selective
cells organized in "orientation columns" (really "orientation intervals" in this
one-dimensional model). A "labeling" of the linear array of cells according to whether
their orientation preferences lie between 0 and 60, 60 and 120, or 120 and 180 degrees
is indicated by the bold, light, and dotted line segments beneath the rectangle in Fig. 1b
(and 1c).
As the M-neighborhood size is decreased still further, the mapping shown in Fig.
1c becomes favored over that of either Fig. 1a or 1b. The "orientation columns" shown
in the lower portion of Fig. 1c are narrower than in Fig. 1b.
A more detailed analysis of the information rate function for various mappings
confirms the main features we have here obtained by a simple geometric argument.
The same type of analysis can be applied to different types of input pattern
ensembles. To give just one other example, consider a network that receives an
ensemble of simple patterns of acoustic input. Each such pattern consists of a tone of
some frequency that is sensed by two "ears" with some interaural time delay. Suppose
that the initial network layers organize the information from each ear (separately) into
tonotopic maps, and that (by means of connections having a range of different time
delays) the signals received by both ears over some time interval appear as patterns of
cell activity at some intermediate layer L. We can then apply the infomax principle to
the signal transformation from layer L to the next layer M. The L state space can (as
before) be represented as a rectangle, whose axes are now frequency and interaural
delay (rather than spatial position and bar orientation). Apart from certain differences
(the density of L states may be nonuniform, and states at the top and bottom of the
rectangle are no longer identical), the infomax analysis can be carried out as it was for
the simplified case of orientation selectivity.
Local Algorithms
The information rate (Eqn. I), which the infomax principle states is to be
maximized subject to constraints (and possibly as part of an optimization function
containing other cost terms not discussed here), has a very complicated mathematical
form. How might this optimization process, or an approximation to it, be implemented
by a network of cells and connections each of which has limited computational power?
The geometric form in which we have cast the infomax principle for some very simple
model cases, suggests how this might be accomplished.
An algorithm due to Kohonen 8 demonstrates how topographic maps can emerge
as a result of lateral interactions within the output layer. I applied this algorithm to a
one-dimensional M layer and a two-dimensional L layer, using a Euclidean metric and
imposing periodic boundary conditions on the short dimension of the L layer. A
resulting map is shown in Fig. 1d. This map is very similar to those of Figs. 1b and 1c,
except for one reversal of direction. The reversal is not surprising, since the algorithm
involves only local moves (of the M-points) while the infomax principle calls for a
globally optimal solution.
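A minimal sketch of the setup just described (parameter values and schedules are my own choices, not those used to produce Fig. 1d): a chain of M-points adapted in a 2-D L space whose short dimension is periodic, using Kohonen's winner-plus-neighbors update:

```python
import numpy as np

rng = np.random.default_rng(0)
n_m = 50                                   # number of M-points, as in Fig. 1d
steps = 20000

# L space: [0, 4] x [0, 1], with the short dimension periodic (a cylinder).
W = np.column_stack([np.linspace(0, 4, n_m), np.full(n_m, 0.5)])

def wrapped_diff(L, W):
    """L - W with the short coordinate wrapped to (-0.5, 0.5]."""
    d = L - W
    d[:, 1] = (d[:, 1] + 0.5) % 1.0 - 0.5
    return d

for t in range(steps):
    L = np.array([4 * rng.random(), rng.random()])   # sample from uniform P(L)
    d = wrapped_diff(L, W)
    win = np.argmin((d ** 2).sum(axis=1))            # best-matching M-point
    eps = 0.1 * (1 - t / steps)                      # decaying step size
    sigma = max(0.5, 5.0 * (1 - t / steps))          # shrinking chain neighborhood
    h = np.exp(-0.5 * ((np.arange(n_m) - win) / sigma) ** 2)
    W += eps * h[:, None] * d                        # pull winner and its neighbors
```

Because each update moves only the winner and its chain neighbors, the algorithm is purely local, yet the resulting chain tends to cover the long dimension of the L space in topographic order.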
More generally, Kohonen's algorithm tends empirically8 to produce maps having
the property that if one constructs the Voronoi diagram corresponding to the positions
of the M-points (that is, assigns each point L to an M region based on which M-point
L is closest to), one obtains a set of M regions that tend to have areas inversely
proportional to P(L) , and neighborhoods (corresponding to our input domains) that
tend to have similar extent in all directions rather than being elongated.
The Kohonen algorithm makes no reference to noise, to information content, or
even to an optimization principle. Nevertheless, it appears to implement, at least in a
qualitative way, the geometric conditions that infomax imposes in some simple cases.
This suggests that local algorithms along similar lines may be capable of implementing
the infomax principle in more general situations.
Our geometric formulation of the infomax principle also suggests a connection
with an algorithm proposed by von der Malsburg and Willshaw9 to generate topographic
maps. In their "tea trade" model, neighborhood relationships are postulated within the
source and the target spaces, and the algorithm's operation leads to the establishment
of a neighborhood-preserving mapping from source to target space. Such neighborhood
relationships arise naturally in our analysis when the infomax principle is applied to our
three-step L → L' → M' → M transformation. The noise process induces a
neighborhood relation on the L space, and lateral connections in the M cell layer can
induce a neighborhood relation on the M space.
More recently, Durbin and Willshaw10 have devised an approach to solving certain
geometric optimization problems (such as the traveling salesman problem) by a gradient
descent method bearing some similarity to Kohonen's algorithm.
There is a complementary relationship between the infomax principle and a local
algorithm that may be found to implement it. On the one hand, the principle may
explain what the algorithm is "for" -- that is, how the algorithm may contribute to the
generation of a useful perceptual system. This in turn can shed light on the system-level
role of lateral connections and synaptic modification mechanisms in biological networks.
On the other hand, the existence of such a local algorithm is important for demonstrating
that a network of relatively simple processors -- biological or synthetic -- can in fact find
global near-maxima of the Shannon information rate.
A Possible Connection Between Infomax and a Thermodynamic Principle
The principle of "maximum preservation of information" can be viewed
equivalently as a principle of "minimum dissipation of information." When the principle
is satisfied, the loss of information from layer to layer is minimized, and the flow of
information is in this sense as "nearly reversible" as the constraints allow. There is a
resemblance between this principle and the principle of "minimum entropy production"11
in nonequilibrium thermodynamics. It has been suggested by Prigogine and others
that the latter principle is important for understanding self-organization in complex
systems. There is also a resemblance, at the algorithmic level, between a Hebb-type
modification rule and the autocatalytic processes12 considered in certain models of
evolution and natural selection. This raises the possibility that the connection I have
drawn between synaptic modification rules and an information-theoretic optimization
principle may be an example of a more general relationship that is important for the
emergence of complex and apparently "goal-oriented" structures and behaviors from
relatively simple local interactions, in both neural and non-neural systems.
References
[1] R. Linsker, Proc. Natl. Acad. Sci. USA 83, 7508, 8390, 8779 (1986).
[2] D. H. Hubel and T. N. Wiesel, Proc. Roy. Soc. London B198, 1 (1977).
[3] D. O. Hebb, The Organization of Behavior (Wiley, N.Y., 1949).
[4] R. Linsker, in: R. Cotterill (ed.), Computer Simulation in Brain Science (Copenhagen, 20-22 August 1986; Cambridge Univ. Press, in press), p. 416.
[5] R. Linsker, Computer (March 1988, in press).
[6] E. Oja, J. Math. Biol. 15, 267 (1982).
[7] C. E. Shannon, Bell Syst. Tech. J. 27, 623 (1948).
[8] T. Kohonen, Self-Organization and Associative Memory (Springer-Verlag, N.Y., 1984).
[9] C. von der Malsburg and D. J. Willshaw, Proc. Natl. Acad. Sci. USA 74, 5176 (1977).
[10] R. Durbin and D. J. Willshaw, Nature 326, 689 (1987).
[11] P. Glansdorff and I. Prigogine, Thermodynamic Theory of Structure, Stability, and Fluctuations (Wiley-Interscience, N.Y., 1971).
[12] M. Eigen and P. Schuster, Die Naturwissenschaften 64, 541 (1977).
AN ADAPTIVE AND HETERODYNE FILTERING PROCEDURE
FOR THE IMAGING OF MOVING OBJECTS
F. H. Schuling, H. A. K. Mastebroek and W. H. Zaagman
Biophysics Department, Laboratory for General Physics
Westersingel 34, 9718 CM Groningen, The Netherlands
ABSTRACT
Recent experimental work on the stimulus velocity dependent time resolving
power of the neural units, situated in the highest order optic ganglion of the
blowfly, revealed the at first sight amazing phenomenon that at this high level of
the fly visual system, the time constants of these units which are involved in the
processing of neural activity evoked by moving objects, are, roughly speaking, inversely proportional to the velocity of those objects over an extremely wide range.
In this paper we will discuss the implementation of a two dimensional heterodyne
adaptive filter construction into a computer simulation model. The features of this
simulation model include the ability to account for the experimentally observed
stimulus-tuned adaptive temporal behaviour of time constants in the fly visual
system. The simulation results obtained, clearly show that the application of such
an adaptive processing procedure delivers an improved imaging technique of
moving patterns in the high velocity range.
A FEW REMARKS ON THE FLY VISUAL SYSTEM
The visual system of the diptera, including the blowfly Calliphora
erythrocephala (Mg.) is very regularly organized and therefore allows very precise
optical stimulation techniques. Also, long-term electrophysiological recordings can
be made relatively easily in this visual system. For these reasons the blowfly (which
is well-known as a very rapid and 'clever' pilot) turns out to be an extremely
suitable animal for a systematic study of basic principles that may underlie the
detection and further processing of movement information at the neural level.
In the fly visual system the input retinal mosaic structure is precisely
mapped onto the higher order optic ganglia (lamina, medulla, lobula). This means
that each neural column in each ganglion in this visual system corresponds to a
certain optical axis in the visual field of the compound eye. In the lobula complex
a set of wide-field movement sensitive neurons is found, each of which integrates
the input signals over the whole visual field of the entire eye. One of these wide
field neurons, which has been classified as H1 by Hausen1, has been extensively
studied both anatomically2,3,4 and electrophysiologically5,6,7. The
obtained results generally agree very well with those found in behavioral
optomotor experiments on movement detection 8 and can be understood in terms of
Reichardt's correlation model9,10.
The H1 neuron is sensitive to horizontal movement and directionally
selective: very high rates of action potentials (spikes) up to 300 per second can be
recorded from this element in the case of visual stimuli which move horizontally
inward, i.e. from back to front in the visual field (preferred direction), whereas
movement horizontally outward, i.e. from front to back (null direction), suppresses
its activity.
© American Institute of Physics 1988
EXPERIMENTAL RESULTS AS A MODELLING BASE
When the H1 neuron is stimulated in its preferred direction with a stepwise
pattern displacement, it will respond with an increase of neural activity. By
repeating this stimulus step over and over one can obtain the averaged response:
after a 20 ms latency period the response manifests itself as a sharp increase in
average firing rate followed by a much slower decay to the spontaneous activity
level. Two examples of such averaged responses are shown in the Post Stimulus
Time Histograms (PSTH's) of figure 1. Time to peak and peak height are related
and depend on modulation depth, stimulus step size and spatial extent of the
stimulus. The tail of the responses can be described adequately by an exponential
decay toward a constant spontaneous firing rate:
R(t) = c + a · e^(−t/τ)    (1)
For each setting of the stimulus parameters, the response parameters,
defined by equation (1), can be estimated by a least-squares fit to the tail of the
PSTH. The smooth lines in figure 1 are the results of two such fits.
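As a concrete illustration of such a fit, the sketch below (synthetic, noise-free data, not the fly recordings; all numbers are illustrative) recovers the decay time constant of equation (1) by a log-linear least-squares fit to the response tail.

```python
import numpy as np

# Hedged sketch (synthetic data, not the fly recordings): fit the tail model
# R(t) = c + a*exp(-t/tau) of eq. (1). With the baseline c taken as the
# known spontaneous rate, log(R - c) is linear in t, so a degree-1
# least-squares fit recovers tau from the slope.
t = np.linspace(0.0, 800.0, 201)       # time after the stimulus step (ms)
c, a, tau = 10.0, 120.0, 331.0         # baseline, amplitude, decay (ms)
rate = c + a * np.exp(-t / tau)

slope, intercept = np.polyfit(t, np.log(rate - c), 1)
tau_hat = -1.0 / slope
print(round(tau_hat, 1))
```

With noisy data, the same fit would be done by nonlinear least squares on all three parameters, as described for the PSTHs above.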
[Figure 1 plots omitted]
Fig. 1: Averaged responses (PSTHs) obtained from the H1 neuron, being
adapted to smooth stimulus motion with velocities 0.36°/s (top) and
11°/s (bottom) respectively. The smooth lines represent least-squares
fits to the PSTHs of the form R(t) = c + a·e^(−t/τ). Values of τ for the
two PSTHs are 331 and 24 ms respectively (de Ruyter van Steveninck et
al.7).
Fig. 2: Fitted values of τ as a function of adaptation velocity for three
modulation depths M. The straight line is a least-squares fit to represent
the data for M = 0.40 in the region w = 0.3–100°/s. It has the form
τ = α·w^(−β) with α = 150 ms and β = 0.7 (de Ruyter van Steveninck et al.7).
[Figure 2 plot omitted]
Figure 2 shows fitted values of the response time constant τ as a function of
the angular velocity of a moving stimulus (a square wave grating in most
experiments) which was presented to the animal during a period long enough to let
its visual system adapt to this moving pattern and before the stepwise pattern
displacement (which reveals τ) was given. The straight line, described by
τ = α · w^(−β)    (2)
(with w in °/s and τ in ms), represents a least-squares fit to the data over the
velocity range from 0.36 to 125°/s. For this range, τ varies from 320 to roughly
10 ms, with α = 150 ± 10 ms and β = 0.7 ± 0.05. Defining the adaptation range of τ as
that interval of velocities for which τ decreases with increasing velocity, we may
conclude from figure 2 that within the adaptation range, τ is not very sensitive to
the modulation depth.
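The tuning law (2) can be written as a one-line function; the sketch below (with the fitted values α = 150 ms and β = 0.7 reported for M = 0.40; purely illustrative) evaluates it across the adaptation range.

```python
# Sketch of the tuning law (2), tau = alpha * w**(-beta), with the fitted
# values alpha = 150 ms and beta = 0.7 reported for M = 0.40.
def tau_ms(w, alpha=150.0, beta=0.7):
    """Time constant (ms) as a function of adaptation velocity w (deg/s)."""
    return alpha * w ** (-beta)

for w in (0.36, 11.0, 125.0):
    print(w, round(tau_ms(w)))
```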
The outcome of similar experiments with a constant modulation depth of the
pattern (M=0.40) and a constant pattern velocity but with four different values of
the contrast frequency fc (i.e. the number of spatial periods per second that
traverse an individual visual axis, as determined by the spatial wavelength λs of the
pattern and the pattern velocity v according to fc = v/λs) reveal an almost
complete independence of the behaviour of τ from contrast frequency. Other
experiments in which the stimulus field was subdivided into regions with different
adaptation velocities, made clear that the time constants of the input channels of
the H1 neuron were set locally by the values of the stimulus velocity in each
stimulus sub-region. Finally, it was found that the adaptation of τ is driven by
the stimulus velocity, independent of its direction.
These findings can be summarized qualitatively as follows: in steady state,
the response time constants τ of the neural units at the highest level in the fly
visual system are found to be tuned locally within a large velocity range
exclusively by the magnitude of the velocity of the moving pattern and not by its
direction, despite the directional selectivity of the neuron itself. We will not go
into the question of how this amazing adaptive mechanism may be hard-wired in
the fly visual system. Instead we will take advantage of the results derived thus
far and attempt to fit the experimental observations into an image processing
approach. A large number of theories and several distinct classes of algorithms to
encode velocity and direction of movement in visual systems have been suggested
by, for example, Marr and Ullman I I and van Santen and Sperling12.
We hypothesize that the adaptive mechanism for the setting of the time
constants leads to an optimization for the overall performance of the visual system
by realizing a velocity independent representation of the moving object. In other
words: within the range of velocities for which the time constants are found to be
tuned by the velocity, the representation of that stimulus at a certain level within
the visual circuitry, should remain independent of any variation in stimulus
velocity.
0
OBJECT MOTION DEGRADATION: MODELLING
Given the physical description of motion and a linear space invariant model,
the motion degradation process can be represented by the following convolution
integral:
g(x,y) = ∫∫_{−∞}^{+∞} h(x−u, y−v) · f(u,v) du dv    (3)
where f(u,v) is the object intensity at position (u,v) in the object coordinate
frame, h(x-u,y-v) is the Point Spread Function (PSF) of the imaging system,
which is the response at (x,y) to a unit pulse at (u,v) and g(x,y) is the image
intensity at the spatial position (x,y) as blurred by the imaging system. Any
possible additive white noise degradation of the already motion blurred image is
neglected in the present considerations.
For a review of principles and techniques in the field of digital image
degradation and restoration, the reader is referred to Harris 13, Sawchuk 14,
Sondhi 15, Nahi 16, A boutalib et al. 17, 18, Hildebrand 19, Rajala de Figueiredo20 .
It has been demonstrated first by Aboutalib et al.17 that for situations in which
the motion blur occurs in a straight line along one spatial coordinate, say along the
horizontal axis, it is correct to look at the blurred image as a collection of
degraded line scans through the entire image. The dependence on the vertical
coordinate may then be dropped and eq. (3) reduces to:
g(x) = ∫ h(x−u) · f(u) du    (4)
Given the mathematical description of the relative movement, the
corresponding PSF can be derived exactly and equation (4) becomes:
g(x) = ∫_0^R b(x−u) · f(u) du    (5)
where R is the extent of the motion blur. Typically, a discrete version of (5),
applicable for digital image processing purposes, is described by:
g(k) = Σ_l h(k−l) · f(l),    k = 1, …, N    (6)
where k and l take on integer values and L is related to the motion blur extent.
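The discrete blur model (6) is a one-sided convolution; the sketch below (with a hypothetical normalized box PSF standing in for h, purely for illustration) makes the index bookkeeping explicit and checks it against a direct convolution.

```python
import numpy as np

# Sketch of eq. (6): g(k) = sum_l h(k-l) * f(l). The normalized box PSF h
# (uniform smear over L+1 samples) is hypothetical, for illustration only.
def motion_blur(f, L):
    h = np.ones(L + 1) / (L + 1)
    g = np.zeros(len(f))
    for k in range(len(f)):
        for l in range(max(0, k - L), k + 1):   # only past samples enter g(k)
            g[k] += h[k - l] * f[l]
    return g

f = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0], dtype=float)
g = motion_blur(f, L=2)
print(g)
```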
According to Aboutalib et al. 18 a scalar difference equation model (M,a,b,c)
can then be derived to model the motion degradation process:
x(k+1) = M · x(k) + a · f(k)
g(k) = b · x(k) + c · f(k),    k = 1, …, N    (7)
h(i) = c₀Δ(i) + c₁Δ(i−1) + … + c_m Δ(i−m)
where x(k) is the m-dimensional state vector at position k along a scan line, f(k) is
the input intensity at position k, g(k) is the output intensity, m is the blur extent,
N is the number of elements in a line, c is a scalar, M, a and b are constant
matrices of order (m×m), (m×1) and (1×m) respectively, containing the discrete
values c_j of the blurring PSF h(j) for j = 0, …, m, and Δ(·) is the Kronecker delta
function.
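A minimal sketch of the state-space form (7): the state x acts as an m-stage delay line of past inputs, M shifts it, a injects the current input, and (b, c) read out the PSF weights, so the model output matches direct convolution with h. The PSF values below are illustrative.

```python
import numpy as np

# Sketch of the state-space model (7) for an m-tap blur PSF h = (c0,...,cm):
# x is an m-stage delay line of past inputs, M shifts it, a injects f(k),
# and (b, c0) read out the PSF weights. Values here are illustrative.
def state_space_blur(f, h):
    c0, tail = h[0], np.asarray(h[1:], dtype=float)
    m = len(tail)
    M = np.eye(m, k=-1)              # shift register
    a = np.zeros(m); a[0] = 1.0
    b = tail                         # weights c1..cm on delayed inputs
    x = np.zeros(m)
    g = []
    for fk in f:
        g.append(b @ x + c0 * fk)    # g(k) = b.x(k) + c*f(k)
        x = M @ x + a * fk           # x(k+1) = M.x(k) + a*f(k)
    return np.array(g)

f = np.array([0, 0, 1, 1, 1, 0, 0], dtype=float)
h = np.array([0.5, 0.3, 0.2])
print(state_space_blur(f, h))
```

The output agrees sample-for-sample with the direct convolution g(k) = Σᵢ h(i)·f(k−i).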
INFLUENCE OF BOTH TIME CONSTANT AND VELOCITY
ON THE AMOUNT OF MOTION BLUR IN AN ARTIFICIAL
RECEPTOR ARRAY
To start with, we incorporate in our simulation model a PSF, derived from
equation (1), to model the performance of all neural columnar arranged filters in
the lobula complex, with the restriction that the time constants f remain fixed
throughout the whole range of stimulus velocities. Realization of this PSF can
easily be achieved via the just mentioned state space model.
Fig. 3 [plots omitted; amplitude vs. position in artificial receptor array]
Upper part: Demonstration of the effect that an increase in magnitude of
the time constants of a one-dimensional array of filters will result in an
increase in motion blur (while the pattern velocity remains constant).
The original pattern, shown in solid lines, is a square-wave grating with a
spatial wavelength equal to 8 artificial receptor distances. The three
other wave forms drawn show that for a gradual increase in
magnitude of the time constants, the representation of the original
square wave will consequently degrade. Lower part: A gradual increase in
velocity of the moving square wave (while the filter time constants are
kept fixed) also results in a clear increase of degradation.
First we demonstrate the effect that an increase in time constant (while the
pattern velocity remains the same) will result in an increase in blur. Therefore we
introduce a one-dimensional array of filters, all being equipped with the same
time constant in their impulse response. The original pattern shown in square and
solid lines in the upper part of figure 3 consists of a square wave grating with a
spatial period overlapping 8 artificial receptive filters. The 3 other patterns drawn
there show that for the same constant velocity of the moving grating, an increase
in the magnitude of the time constants of the filters results in an increased blur in
the representation of that grating. On the other hand, an increase in velocity
(while the time constants of the artificial receptive units remain the same) also
results in a clear increase in motion blur, as demonstrated in the lower part of
figure 3.
Inspection of the two wave forms drawn by means of the dashed lines in
both upper and lower halves of the figure, yields the conclusion that (apart from
rounding errors introduced by the rather small number of artificial filters
available), equal amounts of smear will be produced when the product of time
constant and pattern velocity is equal. For the upper dashed wave form the
velocity was four times smaller but the time constant four times larger than for its
equivalent in the lower part of the figure.
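The equal-smear observation can be checked directly: for an exponential impulse response, a point moving at velocity v is smeared over a spatial extent set by the product v·τ, so equal products give identical spatial PSFs. The sketch below (illustrative numbers) verifies this.

```python
import numpy as np

# Sketch: for an exponential impulse response, the spatial smear of a point
# moving at velocity v has extent set by the product v*tau, so equal
# products give identical spatial PSFs. Numbers are illustrative.
def spatial_psf(v, tau, x):
    h = np.exp(-x / (v * tau))
    return h / h.sum()

x = np.arange(0.0, 50.0)                     # receptor positions
h_fast = spatial_psf(v=4.0, tau=25.0, x=x)   # fast pattern, short tau
h_slow = spatial_psf(v=1.0, tau=100.0, x=x)  # 4x slower, 4x larger tau
print(np.allclose(h_fast, h_slow))
```

Both calls use the same product v·τ = 100, so the two kernels coincide and the printed comparison succeeds.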
ADAPTIVE SCHEME
In designing a proper image processing procedure our next step is to
incorporate the experimentally observed flexibility property of the time constants
in the imaging elements of our device. In figure 4a a scheme is shown, which
filters the information with fixed time constants, not influenced by the pattern
velocity. In figure 4b a network is shown where the time constants also remain
fixed no matter what pattern movement is presented, but now at the next level of
information processing, a spatially differential network is incorporated in order to
enhance blurred contrasts.
In the filtering network in figure 4c, first a measurement of the magnitude
of the velocity of the moving objects is made by thus far hypothetical
movement processing algorithms, modelled here as a set of receptive elements
sampling the environment in such a manner that proper estimation of local pattern
velocities can be done. Then the time constants of the artificial receptive elements
will be tuned according to the estimated velocities and finally the same
differential network as in scheme 4b , is used.
The actual tuning mechanism used for our simulations is outlined in figure
5: once given the range of velocities for which the model is supposed to be
operational, and given a lower limit for the time constant τ_min (τ_min can be the
smallest value which physically can be realized), the time constant will be tuned to
a new value according to the experimentally observed reciprocal relationship, and
will, for all velocities within the adaptive range, be larger than the fixed minimum
value. As demonstrated in the previous section the corresponding blur in the
representation of the moving stimulus will thus always be larger than for the
situation in which the filtering is done with the fixed and smallest time constants
τ_min. More important, however, is the fact that due to this tuning mechanism the
blur will be constant since the product of velocity and time constant is kept
constant. So, once the information has been processed by such a system, a velocity
independent representation of the image will be the result, which can serve as the
input for the spatially differentiating network as outlined in figure 4c .
The most elementary form for this differential filtering procedure is the one
in which the gradient of the two filters K−1 and K+1, which are the nearest neighbors
of filter K, is taken and then added with a constant weighing factor to the central
output K, as drawn in figures 4b and 4c, where the sign of the gradient depends on
the direction of the estimated movement. Essential for our model is that we claim
that this weighing factor should be constant throughout the whole set of filters
and for the whole high velocity range in which the heterodyne imaging has to be
performed. Important to notice is the existence of a so-called settling time, i.e. the
minimal time needed for our movement processing device to be able to accurately
measure the object velocity. [Note: this time can be set equal to zero in the case
that the relative stimulus velocity is known a priori, as demonstrated in figure 3].
Since, without doubt, within this settling period estimated velocity values will
come out erroneously and thus no optimal performance of our imaging device can
be expected, in all further examples, results after this initial settling procedure
will be shown.
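The elementary differential stage can be sketched in a few lines: each central output K receives the weighed gradient of its neighbours K−1 and K+1, with the sign set by the movement direction. The data and weight below are illustrative.

```python
import numpy as np

# Sketch of the spatial stage of figures 4b/4c: each central output K gets
# the weighed gradient of its nearest neighbours K-1 and K+1 added to it.
# The weight, direction and data below are illustrative values.
def sharpen(g, weight, direction=+1):
    out = g.astype(float)                              # copy of the outputs
    out[1:-1] += weight * direction * (g[2:] - g[:-2]) # K + w*(K+1 - K-1)
    return out

g = np.array([0.0, 0.0, 1/3, 2/3, 1.0, 1.0, 2/3, 1/3, 0.0, 0.0])
out = sharpen(g, weight=0.5)
print(out)
```

On this blurred ramp the stage steepens both edges and produces the characteristic over- and undershoots discussed for figure 6.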
Fig. 4 [diagrams omitted; three network schemes A, B and C]
Pattern movement in this figure is to the right.
A: Network consisting of a set of filters with a fixed, pattern velocity
independent, time constant in their impulse response.
B: Identical network as in figure 4A now followed by a spatially
differentiating circuitry which adds the weighed gradients of two
neighboring filter outputs K-l and K+I to the central filter output
K.
C: The time constants of the filtering network are tuned by a
hypothetical movement estimating mechanism, visualized here as a
number of receptive elements, of which the combined output tunes
the filters. A detailed description of this mechanism is shown in
figure 5. This tuned network is followed by an identical spatially
differentiating circuit as described in figure 4B.
Fig. 5 [diagram omitted; time constant τ decreasing with increasing velocity v (°/s), down to τ_min]
Detailed description of the mechanism used to tune the time constants.
The time constant τ of a specific neural channel is set by the pattern
velocity according to the relationship shown in the insert, which is
derived from eq. (2) with α = 1 and β = 1.
[Figure 6 plots omitted; amplitude vs. position in artificial receptor array for velocities v, 2v, 4v, 8v, 12v and 16v]
Fig.6
Thick lines: square-wave stimulus pattern with a spatial wavelength
overlapping 32 artificial receptive elements. Thin lines: responses for 6
different pattern velocities in a system consisting of parallel neural
filters equipped with time constants tuned by this velocity, and followed
by a spatially differentiating network as described.
Dashed lines: responses to the 6 different pattern velocities in a filtering
system with fixed time constants, followed by the same spatial
differentiating circuitry as before. Note the sharp over- and under
shoots for this case.
Results obtained with an imaging procedure as drawn in figure 4 b and 4c
are shown in figure 6. The pattern consists of a square wave, overlapping 32
picture elements. The pattern moves (to the left) with 6 different velocities v, 2v,
4v, 8v, 12v, 16v. At each velocity only one wavelength is shown. Thick lines:
square wave pattern. Dashed lines: the outputs of an imaging device as depicted in
figure 4 b: constant time constants and a constant weighing factor in the spatial
processing stage. Note the large differences between the several outputs. Thin
continuous lines: the outputs of an imaging device as drawn in figure 4c: tuned
time constants according to the reciprocal relationship between pattern velocity
and time constant and a constant weighing factor in the spatial processing stage.
For further simulation details the reader is referred to Zaagman et al. 21 . Now the
outputs are almost completely the same and in good agreement with the original
stimulus throughout the whole velocity range.
Figure 7 shows the effect of the gradient weighing factor on the overall
filter performance, estimated as the improvement of the deblurred images as
compared with the blurred image, measured in dB. This quantitative measure has
been determined for the case of a moving square wave pattern with motion blur
Fig. 7 [plot omitted; improvement (dB) vs. weighing factor]
Effect of the weighing factor on the overall filter performance. Curve
measured for the case of a moving square-wave grating. Filter
performance is estimated as the improvement in signal to noise ratio:
I = 10 · log₁₀ [ ΣᵢΣⱼ (v(i,j) − u(i,j))² / ΣᵢΣⱼ (o(i,j) − u(i,j))² ]
where u(i,j) is the original intensity at position (i,j) in the image, v(i,j)
is the intensity at the same position (i,j) in the motion blurred image and
o(i,j) is the intensity at (i,j) in the image generated with the adaptive
tuning procedure.
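The improvement measure can be computed directly from the three images; the sketch below (tiny illustrative arrays, not the simulation images) implements the dB formula of the caption.

```python
import numpy as np

# Sketch of the Fig. 7 performance measure: improvement (dB) of the restored
# image o over the blurred image v, both relative to the original u.
# The tiny arrays are illustrative, not the simulation images.
def improvement_db(u, v, o):
    return 10.0 * np.log10(np.sum((v - u) ** 2) / np.sum((o - u) ** 2))

u = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])      # original
v = np.array([0.0, 0.25, 0.75, 0.75, 0.25, 0.0])  # blurred
o = np.array([0.0, 0.10, 0.90, 0.90, 0.10, 0.0])  # restored, closer to u
print(round(improvement_db(u, v, o), 2))
```

A positive value means the restored image is closer to the original than the blurred one.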
extents comparable to those used for the simulations to be discussed in section IV.
From this curve it is apparent that for this situation there is an optimum value for
this weighing factor. Keeping the weight close to this optimum value will result in
a constant output of our adaptive scheme, thus enabling an optimal deblurring of
the smeared image of the moving object.
On the other hand, starting from the point of view that the time constants
should remain fixed throughout the filtering process, we would have had to tune
the gradient weights to the velocity in order to produce a constant output as
demonstrated in figure 6 where the dashed lines show strongly differing outputs of
a fixed time constant system with spatial processing with constant weight (figure
4b ). In other words, tuning of the time constants as proposed in this section results
in: 1) the realization of the blur-constancy criterion as formulated previously, and
2) as a consequence, the possibility to deblur the obtained image optimally with
one and the same weighing factor of the gradient in the final spatial processing
layer over the whole heterodyne velocity range.
COMPUTER SIMULATION RESULTS AND
CONCLUSIONS
The image quality improvement algorithm developed in the present
contribution has been implemented on a general purpose DG Eclipse S/140 minicomputer for our two-dimensional simulations. Figure 8a shows an undisturbed
image, consisting of 256 lines of 256 pixels each, with 8 bit intensity resolution.
Figure 8b shows what happens to the original image if the PSF is modelled
according to the exponential decay (2). In this case the time constants of all
spatial information processing channels have been kept fixed. Again, information
content in the higher spatial frequencies has been reduced largely. The
implementation of the heterodyne filtering procedure was now done as follows:
first the adaptation range was defined by setting the range of velocities. This
means that our adaptive heterodyne algorithm is supposed to operate adequately
only within the thus defined velocity range and that -in that range- the time
constants are tuned according to relationship (2) and will always come out larger
than the minimum value τ_min. For demonstration purposes we set α = 1 and β = 1 in
eq. (2), thus introducing the phenomenon that for any velocity, the two
dimensional set of spatial filters with time constants tuned by that velocity, will
always produce a constant output, independent of this velocity which introduces
the motion blur. Figure Sc shows this representation. It is important to note here
that this constant output has far worse quality than any set of filters with the
smallest and fixed time constants τ_min would produce for velocities within the
operational range. The advantage of a velocity independent output at this level in
our simulation model, is that in the next stage a differential scheme can be
implemented as discussed in detail in the preceding paragraph. Constancy of the
weighing factor which is used in this differential processing scheme is guaranteed
by the velocity independence of the obtained image representation.
Figure Sd shows the result of the differential operation with an optimized
gradient weighing factor. This weighing factor has been optimized based on an
almost identical performance curve as described previously in figure 7. A clear
and good restoration is apparent from this figure, though close inspection reveals
fine structure (especially for areas with high intensities) which is unrelated to
the original intensity distribution. These artifacts are caused by the phenomenon
that for these high intensity areas possible tuning errors will show up much more
pronounced than for low intensities.
Fig. 8 [images omitted; panels a–d]
Original 256x256x8 bit picture.
Motion degraded image with a PSF derived from R(t) = c + a·e^(−t/τ),
where τ is kept fixed at 12 pixels and the motion blur extent is 32
pixels.
Worst case, i.e. the result of motion degradation of the original image
with a PSF as in figure 8b , but with tuning of the time constants based
on the velocity.
Restored version of the degraded image using the heterodyne adaptive
processing scheme.
In conclusion: a heterodyne adaptive image processing technique, inspired by
the fly visual system, has been presented as an imaging device for moving objects.
A scalar difference equation model has been used to represent the motion blur
degradation process. Based on the experimental results described and on this state
space model, we developed an adaptive filtering scheme, which produces at a
certain level within the system a constant output, permitting further differential
operations in order to produce an optimally deblurred representation of the
moving object.
ACKNOWLEDGEMENTS
The authors wish to thank mr. Eric Bosman for his expert programming
assistance, mr. Franco Tommasi for many inspiring discussions and advice during
the implementation of the simulation model and dr. Rob de Ruyter van Steveninck
for experimental help. This research was partly supported by the Netherlands
Organization for the Advancement of Pure Research (Z.W.O.) through the
foundation Stichting voor Biofysica.
REFERENCES
1. K. Hausen, Z. Naturforschung 31c, 629-633 (1976).
2. N. J. Strausfeld, Atlas of an insect brain (Springer Verlag, Berlin, Heidelberg,
New York, 1976).
3. K. Hausen, BioI. Cybern. 45, 143-156 (1982).
4. R. Hengstenberg, J. Compo Physiol. 149, 179-193 (1982).
5. W. H. Zaagman, H. A. K. Mastebroek, J. W. Kuiper, BioI. Cybern. 31, 163-168
(1978).
6. H. A. K. Mastebroek, W. H. Zaagman, B. P. M. Lenting, Vision Res. 20, 467-474 (1980).
7. R. R. de Ruyter van Steveninck, W. H. Zaagman, H. A. K. Mastebroek, BioI.
Cybern., 54, 223-236 (1986).
8. W. Reichardt, T. Poggio, Q. Rev. Biophys. 9, 311-377 (1976).
9. W. Reichardt, in Reichardt, W. (Ed.) Processing of optical Data by Organisms
and Machines (Academic Press, New York, 1969), pp. 465-493.
10. T. Poggio, W. Reichardt, Q. Rev. Bioph. 9, 377-439 (1976).
11. D. Marr, S. Ullman, Proc. R. Soc. Lond. 211, 151-180 (1981).
12. J. P. van Santen, G. Sperling, J. Opt. Soc. Am. A I, 451-473 (1984).
13. J. L. Harris SR., J. Opt. Soc. Am. 56, 569-574 (1966).
14. A. A. Sawchuk, Proc. IEEE, Vol. 60, No.7, 854-861 (1972).
15. M. M.Sondhi, Proc. IEEE, Vol. 60, No.7, 842-853 (1972).
16. N. E. Nahi, Proc. IEEE, Vol. 60, No.7, 872-877 (1972).
17. A. O. Aboutalib, L. M. Silverman, IEEE Trans. On Circuits And Systems TCAS 75, 278-286 (1975).
18. A. O. Aboutalib, M. S. Murphy, L.M. Silverman, IEEE Trans. Automat. Contr.
AC 22, 294-302 (1977).
19. Th. Hildebrand, BioI. Cybern. 36, 229-234 (1980).
20. S. A. Rajala, R. J. P. de Figueiredo, IEEE Trans. On Acoustics, Speech and
Signal Processing, Vol. ASSSP-29, No.5, 1033-1042 (1981).
21. W. H. Zaagman, H. A. K. Mastebroek, R. R. de Ruyter van Steveninck, IEEE
Trans, Syst. Man Cybern. SMC 13, 900-906 (1983).
4,420 | 500 | Segmentation Circuits Using Constrained
Optimization
John G. Harris'"
MIT AI Lab
545 Technology Sq., Rm 767
Cambridge, MA 02139
Abstract
A novel segmentation algorithm has been developed utilizing an absolute-value smoothness penalty instead of the more common quadratic regularizer. This functional imposes a piece-wise constant constraint on the
segmented data. Since the minimized energy is guaranteed to be convex,
there are no problems with local minima and no complex continuation
methods are necessary to find the unique global minimum. By interpreting the minimized energy as the generalized power of a nonlinear resistive
network, a continuous-time analog segmentation circuit was constructed.
1  INTRODUCTION
Analog hardware has obvious advantages in terms of its size, speed, cost, and power
consumption. Analog chip designers, however, should not feel constrained to mapping existing digital algorithms to silicon. Many times, new algorithms must be
adapted or invented to ensure efficient implementation in analog hardware. Novel
analog algorithms embedded in the hardware must be simple and obey the natural
constraints of physics. Much algorithm intuition can be gained from experimenting
with these continuous-time nonlinear systems. For example, the algorithm described
in this paper arose from experimentation with existing analog segmentation hardware. Surprisingly, many of these "analog" algorithms may prove useful even if a
computer vision researcher is limited to simulating the analog hardware on a digital
computer [7] .
* A portion of this work is part of a Ph.D. dissertation at Caltech [7].
797
798
Harris
2  ABSOLUTE-VALUE SMOOTHNESS TERM
Rather than deal with systems that have many possible stable states, a network that has a unique stable state will be studied. Consider a network that minimizes:

E(u) = \frac{1}{2} \sum_i (d_i - u_i)^2 + \lambda \sum_i |u_{i+1} - u_i|    (2)
I
Thf' absolute-vahIf.' function is used for the smoothness penalty instead of the more
familiar quadratic term. There are two intuitive reasons why the absolut.e-value
pena1t.y is an improvement over the quadratic penalty for piece-wise const.ant. segnwntation. First, for large values of Illi - 1Ii+11, the penalty is not. as severE" which
means that edges will be smoothed less. Second, small values of Illi - lIi+11 are
penalized more than they are in t.he quadratic case, resulting in a flat.ter surface
bet.ween edges. Since no complex continuation or annealing methods are necessary
t.o avoid local minima. this computat.ional model is of interest to vision researchers
independent of any hardware implicat.ions.
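As a concrete illustration (this code is ours, not from the paper; the noisy step signal and λ are arbitrary choices), Equation 2 is a sum of convex terms, so midpoint convexity can be checked numerically:

```python
import numpy as np

def energy(u, d, lam):
    """Eq. 2: quadratic data-fidelity term plus absolute-value smoothness penalty."""
    return 0.5 * np.sum((d - u) ** 2) + lam * np.sum(np.abs(np.diff(u)))

rng = np.random.default_rng(0)
d = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)

# Midpoint convexity, E((u+v)/2) <= (E(u)+E(v))/2, holds for every pair u, v,
# so there are no spurious local minima to escape.
u, v = rng.standard_normal(40), rng.standard_normal(40)
lhs = energy((u + v) / 2, d, lam=1.0)
rhs = 0.5 * (energy(u, d, lam=1.0) + energy(v, d, lam=1.0))
assert lhs <= rhs + 1e-12
```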
This method is very similar to constrained optimization methods discussed by Platt [14] and Gill [4]. Under this interpretation, the problem is to minimize \sum_i (d_i - u_i)^2 with the constraint that u_i = u_{i+1} for all i. Equation 1 is an instance of the penalty method; as \lambda \to \infty, the constraint u_i = u_{i+1} is fulfilled exactly. The absolute-value penalty function given in Equation 2 is an example of a nondifferentiable penalty: the constraint u_i = u_{i+1} is fulfilled exactly for a finite value of \lambda. However, unlike typical constrained optimization methods, this application requires some of these "exact" constraints to fail (at discontinuities) and others to be fulfilled.
This algorithm also resembles techniques in robust statistics, a field pioneered and formalized by Huber [9]. The need for robust estimation techniques in visual processing is clear, since a single outlier may cause wild variations in standard regularization networks which rely on quadratic data constraints [17]. Rather than use the quadratic data constraints, robust regression techniques tend to limit the influence of outlier data points.² The absolute-value function is one method commonly used to reduce outlier susceptibility. In fact, the absolute-value network developed in this paper is a robust method if discontinuities in the data are interpreted as outliers. The line process or resistive fuse networks can also be interpreted as robust methods using more complex influence functions.
3  ANALOG MODELS

As pointed out by Poggio and Koch [15], the notion of minimizing power in linear networks implementing quadratic "regularized" algorithms must be replaced by the more general notion of minimizing the total resistor co-content [13] for nonlinear networks. For a voltage-controlled resistor characterized by I = f(V), the co-content is defined as

J(V) = \int_0^V f(V') \, dV'    (3)

²Outlier detection techniques have been mapped to analog hardware [8].
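To make Equation 3 concrete (illustrative code, not from the paper; the tanh characteristic and δ value are borrowed from the tiny-tanh resistor discussed later), the co-content of a resistor with I = λ tanh(V/δ) can be integrated numerically and compared with its closed form λδ log cosh(V/δ):

```python
import numpy as np

lam, delta = 1.0, 0.1

def current(v):
    """I-V characteristic of a saturating (tanh) resistor."""
    return lam * np.tanh(v / delta)

# Co-content J(V): integrate the I-V curve from 0 to V (trapezoid rule).
V = 0.5
grid = np.linspace(0.0, V, 100001)
vals = current(grid)
numeric = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))

closed_form = lam * delta * np.log(np.cosh(V / delta))
assert abs(numeric - closed_form) < 1e-6
```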
Figure 1: Nonlinear resistive network for piece-wise constant segmentation.

One-dimensional surface interpolation from dense data will be used as the model problem in this paper, but these techniques generalize to sparse data in multiple dimensions. A standard technique for smoothing or interpolating noisy inputs d_i is to minimize an energy¹ of the form:

E(u) = \frac{1}{2} \sum_i (d_i - u_i)^2 + \lambda \sum_i (u_{i+1} - u_i)^2    (1)

The first term ensures that the solution u_i will be close to the data while the second term implements a smoothness constraint. The parameter \lambda controls the tradeoff between the degree of smoothness and the fidelity to the data. Equation 1 can be interpreted as a regularization method [1] or as the power dissipated in the linear version of the resistive network shown in Figure 1 [16].
Since the energy given by Equation 1 oversmooths discontinuities, numerous researchers (starting with Geman and Geman [3]) have modified Equation 1 with line processes and successfully demonstrated piece-wise smooth segmentation. In these methods, the resultant energy is nonconvex and complex annealing or continuation methods are required to converge to a good local minimum of the energy space. This problem is solved using probabilistic [11] or deterministic annealing techniques [2, 10]. Line-process discontinuities have been successfully demonstrated in analog hardware using resistive fuse networks [5], but continuation methods are still required to find a good solution [6].

¹The term energy is used throughout this paper as a cost functional to be minimized. It does not necessarily relate to any true energy dissipated in the real world.
Figure 2: Various examples of tiny-tanh network simulation for varying δ. The I-V characteristic of the saturating resistors is I = λ tanh(V/δ). (a) shows a synthetic 1.0V tower image with additive Gaussian noise of σ = 0.3V which is input to the network. The network outputs are shown in (b) δ = 100mV, (c) δ = 10mV, and (d) δ = 1mV. For all simulations λ = 1.
Figure 3: Tiny-tanh circuit. The saturating tanh characteristic is measured between nodes V1 and V2. Controls VR and VG set the conductance and saturation voltage for the device.
For a linear resistor, I = GV, the co-content is given by \frac{1}{2} G V^2, which is half the dissipated power P = G V^2.

The absolute-value functional in Equation 2 is not strictly convex. Also, since the absolute-value function is nondifferentiable at the origin, hardware and software methods of solution will be plagued with instabilities and oscillations. We approximate Equation 2 with the following well-behaved convex co-content:

E(u) = \frac{1}{2} \sum_i (d_i - u_i)^2 + \lambda \delta \sum_i \log\cosh\!\left(\frac{u_{i+1} - u_i}{\delta}\right)    (4)

The co-content becomes the absolute-value cost function in Equation 2 in the limiting case as \delta \to 0. The derivative of Equation 4 yields Kirchhoff's current equation at each node of the resistive network in Figure 1:

(u_i - d_i) + \lambda \tanh\!\left(\frac{u_i - u_{i+1}}{\delta}\right) + \lambda \tanh\!\left(\frac{u_i - u_{i-1}}{\delta}\right) = 0    (5)
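A minimal software sketch of the network (ours, not the paper's hardware; grid size, λ, δ, and the off-the-shelf quasi-Newton solver are illustrative choices) minimizes the convex co-content of Equation 4, whose gradient is exactly the node-current balance of Equation 5:

```python
import numpy as np
from scipy.optimize import minimize

def co_content(u, d, lam, delta):
    """Convex energy of Eq. 4; log(cosh(x)) written stably as logaddexp(x,-x)-log(2)."""
    x = np.diff(u) / delta
    smooth = lam * delta * np.sum(np.logaddexp(x, -x) - np.log(2.0))
    return 0.5 * np.sum((u - d) ** 2) + smooth

def grad(u, d, lam, delta):
    """Node currents of Eq. 5 (zero at the network's stable state)."""
    g = u - d
    t = lam * np.tanh(np.diff(u) / delta)
    g[:-1] -= t
    g[1:] += t
    return g

rng = np.random.default_rng(1)
d = np.concatenate([np.zeros(30), np.ones(30)]) + 0.05 * rng.standard_normal(60)
res = minimize(co_content, d, args=(d, 1.0, 0.001), jac=grad, method="L-BFGS-B")
u = res.x

# Each side settles to a near-constant plateau; the 1V step survives,
# reduced in height by roughly lambda * (perimeter/area) on each side.
assert u[:30].std() < 0.02 and u[30:].std() < 0.02
assert u[30:].mean() - u[:30].mean() > 0.8
```

Because the energy is convex, the solver needs no annealing or continuation schedule, mirroring the behavior of the analog network.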
Therefore, construction of this network requires a nonlinear resistor with a hyperbolic tangent I-V characteristic with an extremely narrow linear region. For this reason, this element is called the tiny-tanh resistor. This saturating resistor is used as the nonlinear element in the resistive network shown in Figure 1. Its I-V characteristic is I = λ tanh(V/δ). It is well known that any circuit made of independent voltage sources and two-terminal resistors with strictly increasing I-V characteristics has a unique stable state.
4  COMPUTER SIMULATIONS

Figure 2a shows a synthetic 1.0V tower image with additive Gaussian noise of σ = 0.3V. Figure 2b shows the simulated result for δ = 100mV and λ = 1. As Mead has observed, a network of saturating resistors has a limited segmentation effect [12]. Unfortunately, as seen in the figure, noise is still evident in the output, and the curves on either side of the step have started to slope toward one another. As λ is increased to further smooth the noise, the two sides of the step will blend together into one homogeneous region. However, as the width of the linear region of the saturating resistor is reduced, network segmentation properties are greatly enhanced. Segmentation performance improves for δ = 10mV shown in Figure 2c and further improves for δ = 1mV in Figure 2d. The best segmentation occurs when the I-V curve resembles a step function, and co-content, therefore, approximates an absolute value. Decreasing δ below 1mV shows no discernible change in the output.³
One drawback of this network is that it does not recover the exact heights of input steps. Rather, it subtracts a constant from the height of each input. It is straightforward to show that the amount each uniform region is pulled towards the background is given by λ(perimeter/area) [7]. Significant features with large area/perimeter ratios will retain their original height. Noise points have small area/perimeter ratios and therefore will be pulled towards the background. Typically, the exact values of the heights are less important than the locations of the discontinuities. Furthermore, it would not be difficult to construct a two-stage network to recover the exact values of the step heights if desired. In this scheme a tiny-tanh network would control the switches on a second fuse network.
5  ANALOG IMPLEMENTATION

Mead has constructed a CMOS saturating resistor with an I-V characteristic of the form I = λ tanh(V/δ), where δ must be larger than 50mV because of fundamental physical limitations [12]. Simulation results from Section 4 suggest that for a tower of height h to be segmented, h/δ must be at least on the order of 1000. Therefore a network using Mead's saturating resistor (δ = 50mV) could segment a tower on the order of 50V, which is much too large a voltage to input to these chips. Furthermore, since we are typically interested in segmenting images into more than two levels, even higher voltages would be required. The tiny-tanh circuit (shown in Figure 3) builds upon an older version of Mead's saturating resistor [18], using a gain stage to decrease the linear region of the device. This device can be made to saturate at voltages as low as 5mV.

³These simulations were also used to smooth and segment noisy depth data from a correlation-based stereo algorithm run on real images [7].
Figure 4: Measured segmentation performance of the tiny-tanh network for a step. The input shown on the left (Chip Input) is about a 1V step. The output shown on the right (Segmented Step) is a segmented step about 0.5V in height.
By implementing the nonlinear resistors in Figure 1 with the tiny-tanh circuit, a 1D segmentation network was successfully fabricated and tested. Figure 4 shows the segmentation which resulted when a step (about 1V) was scanned into the chip. The segmented step has been reduced to about 0.5V. No special annealing methods were necessary because a convex energy is being minimized.
6  CONCLUSION

A novel energy functional was developed for piece-wise constant segmentation.⁴ This computational model is of interest to vision researchers independent of any hardware implications, because a convex energy is minimized. In sharp contrast to previous solutions of this problem, no complex continuation or annealing methods are necessary to avoid local minima. By interpreting this Lyapunov energy as the co-content of a nonlinear circuit, we have built and demonstrated the tiny-tanh network, a continuous-time segmentation network in analog VLSI.
Acknowledgements

Much of this work was performed at Caltech with the support of Christof Koch and Carver Mead. A Hughes Aircraft graduate student fellowship and an NSF postdoctoral fellowship are gratefully acknowledged.

⁴This work has also been extended to segment piece-wise linear regions, instead of the purely piece-wise constant processing discussed in this paper [7].
References

[1] M. Bertero, T. Poggio, and V. Torre. Ill-posed problems in early vision. Proc. IEEE, 76:869-889, 1988.
[2] A. Blake and A. Zisserman. Visual Reconstruction. MIT Press, Cambridge, MA, 1987.
[3] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell., 6:721-741, 1984.
[4] P. E. Gill, W. Murray, and M. H. Wright. Practical Optimization. Academic Press, 1981.
[5] J. G. Harris, C. Koch, and J. Luo. A two-dimensional analog VLSI circuit for detecting discontinuities in early vision. Science, 248:1209-1211, 1990.
[6] J. G. Harris, C. Koch, J. Luo, and J. Wyatt. Resistive fuses: analog hardware for detecting discontinuities in early vision. In C. Mead and M. Ismail, editors, Analog VLSI Implementations of Neural Systems. Kluwer, Norwell, MA, 1989.
[7] J. G. Harris. Analog models for early vision. PhD thesis, California Institute of Technology, Pasadena, CA, 1991. Dept. of Computation and Neural Systems.
[8] J. G. Harris, S. C. Liu, and B. Mathur. Discarding outliers in a nonlinear resistive network. In International Joint Conference on Neural Networks, pages 501-506, Seattle, WA, July 1991.
[9] P. J. Huber. Robust Statistics. J. Wiley & Sons, 1981.
[10] C. Koch, J. Marroquin, and A. Yuille. Analog "neuronal" networks in early vision. Proc. Natl. Acad. Sci. USA, 83:4263-4267, 1987.
[11] J. Marroquin, S. Mitter, and T. Poggio. Probabilistic solution of ill-posed problems in computational vision. J. Am. Statist. Assoc., 82:76-89, 1987.
[12] C. Mead. Analog VLSI and Neural Systems. Addison-Wesley, 1989.
[13] W. Millar. Some general theorems for non-linear systems possessing resistance. Phil. Mag., 42:1150-1160, 1951.
[14] J. Platt. Constraint methods for neural networks and computer graphics. Dept. of Computer Science Technical Report Caltech-CS-TR-89-07, California Institute of Technology, Pasadena, CA, 1990.
[15] T. Poggio and C. Koch. An analog model of computation for the ill-posed problems of early vision. Technical report, MIT Artificial Intelligence Laboratory, Cambridge, MA, 1984. AI Memo No. 783.
[16] T. Poggio and C. Koch. Ill-posed problems in early vision: from computational theory to analogue networks. Proc. R. Soc. Lond. B, 226:303-323, 1985.
[17] B. G. Schunck. Robust computational vision. In Robust Methods in Computer Vision Workshop, 1989.
[18] M. A. Sivilotti, M. A. Mahowald, and C. A. Mead. Real-time visual computations using analog CMOS processing arrays. In 1987 Stanford Conference on Very Large Scale Integration, Cambridge, MA, 1987. MIT Press.
4,421 | 5,000 | Correlated random features for
fast semi-supervised learning
Brian McWilliams
ETH Z?urich, Switzerland
[email protected]
David Balduzzi
ETH Z?urich, Switzerland
[email protected]
Joachim M. Buhmann
ETH Z?urich, Switzerland
[email protected]
Abstract
This paper presents Correlated Nyström Views (XNV), a fast semi-supervised algorithm for regression and classification. The algorithm draws on two main ideas. First, it generates two views consisting of computationally inexpensive random features. Second, multiview regression, using Canonical Correlation Analysis (CCA) on unlabeled data, biases the regression towards useful features. It has been shown that CCA regression can substantially reduce variance with a minimal increase in bias if the views contain accurate estimators. Recent theoretical and empirical work shows that regression with random features closely approximates kernel regression, implying that the accuracy requirement holds for random views. We show that XNV consistently outperforms a state-of-the-art algorithm for semi-supervised learning: substantially improving predictive performance and reducing the variability of performance on a wide variety of real-world datasets, whilst also reducing runtime by orders of magnitude.
1  Introduction
As the volume of data collected in the social and natural sciences increases, the computational cost of learning from large datasets has become an important consideration. For learning non-linear relationships, kernel methods achieve excellent performance but naïvely require operations cubic in the number of training points.

Randomization has recently been considered as an alternative to optimization that, surprisingly, can yield comparable generalization performance at a fraction of the computational cost [1, 2]. Random features have been introduced to approximate kernel machines when the number of training examples is very large, rendering exact kernel computation intractable. Among several different approaches, the Nyström method for low-rank kernel approximation [1] exhibits good theoretical properties and empirical performance [3-5].
A second problem arising with large datasets concerns obtaining labels, which often requires a domain expert to manually assign a label to each instance. This can be very expensive, requiring significant investments of both time and money, as the size of the dataset increases. Semi-supervised learning aims to improve prediction by extracting useful structure from the unlabeled data points and using this in conjunction with a function learned on a small number of labeled points.

Contribution. This paper proposes a new semi-supervised algorithm for regression and classification, Correlated Nyström Views (XNV), that addresses both problems simultaneously. The method
consists in essentially two steps. First, we construct two "views" using random features. We investigate two ways of doing so: one based on the Nyström method and another based on random Fourier features (so-called kitchen sinks) [2, 6]. It turns out that the Nyström method almost always outperforms Fourier features by a quite large margin, so we only report these results in the main text.
The second step, following [7], uses Canonical Correlation Analysis (CCA, [8, 9]) to bias the optimization procedure towards features that are correlated across the views. Intuitively, if both views contain accurate estimators, then penalizing uncorrelated features reduces variance without increasing the bias by much. Recent theoretical work by Bach [5] shows that Nyström views can be expected to contain accurate estimators.

We perform an extensive evaluation of XNV on 18 real-world datasets, comparing against a modified version of the SSSL (simple semi-supervised learning) algorithm introduced in [10]. We find that XNV outperforms SSSL by around 10-15% on average, depending on the number of labeled points available; see Section 3. We also find that the performance of XNV exhibits dramatically less variability than SSSL, with a typical reduction of 30%.

We chose SSSL since it was shown in [10] to outperform a state-of-the-art algorithm, Laplacian Regularized Least Squares [11]. However, since SSSL does not scale up to large sets of unlabeled data, we modify SSSL by introducing a Nyström approximation to improve runtime performance. This reduces runtime by a factor of about 1000 on N = 10,000 points, with further improvements as N increases. Our approximate version of SSSL outperforms kernel ridge regression (KRR) by more than 50% on the 18 datasets on average, in line with the results reported in [10], suggesting that we lose little by replacing the exact SSSL with our approximate implementation.
Related work. Multiple view learning was first introduced in the co-training method of [12] and has also recently been extended to unsupervised settings [13, 14]. Our algorithm builds on an elegant proposal for multi-view regression introduced in [7]. Surprisingly, despite guaranteeing improved prediction performance under a relatively weak assumption on the views, CCA regression has not been widely used since its proposal; to the best of our knowledge this is the first empirical evaluation of multi-view regression's performance. A possible reason for this is the difficulty in obtaining naturally occurring data equipped with multiple views that can be shown to satisfy the multi-view assumption. We overcome this problem by constructing random views that satisfy the assumption by design.
2  Method

This section introduces XNV, our semi-supervised learning method. The method builds on two main ideas. First, given two equally useful but sufficiently different views on a dataset, penalizing regression using the canonical norm (computed via CCA) can substantially improve performance [7]. The second is the Nyström method for constructing random features [1], which we use to construct the views.
2.1  Multi-view regression

Suppose we have data T = (x_1, y_1), \dots, (x_n, y_n) for x_i \in \mathbb{R}^D and y_i \in \mathbb{R}, sampled according to a joint distribution P(x, y). Further suppose we have two views on the data:

z^{(\nu)} : \mathbb{R}^D \to \mathcal{H}^{(\nu)} = \mathbb{R}^M : x \mapsto z^{(\nu)}(x) =: z^{(\nu)} \quad \text{for } \nu \in \{1, 2\}.

We make the following assumption about linear regressors which can be learned on these views.

Assumption 1 (Multi-view assumption [7]). Define the mean-squared error loss function \ell(g, x, y) = (g(x) - y)^2 and let \mathrm{loss}(g) := \mathbb{E}_P[\ell(g(x), y)]. Further let \mathcal{L}(\mathcal{Z}) denote the space of linear maps from a linear space \mathcal{Z} to the reals, and define:

f^{(\nu)} := \operatorname{argmin}_{g \in \mathcal{L}(\mathcal{H}^{(\nu)})} \mathrm{loss}(g) \quad \text{for } \nu \in \{1, 2\} \quad \text{and} \quad f := \operatorname{argmin}_{g \in \mathcal{L}(\mathcal{H}^{(1)} \oplus \mathcal{H}^{(2)})} \mathrm{loss}(g).

The multi-view assumption is that

\mathrm{loss}\left(f^{(\nu)}\right) - \mathrm{loss}(f) \le \epsilon \quad \text{for } \nu \in \{1, 2\}.    (1)
In short, the best predictor in each view is within ε of the best overall predictor.
Canonical correlation analysis. Canonical correlation analysis [8, 9] extends principal component analysis (PCA) from one to two sets of variables. CCA finds bases for the two sets of variables such that the correlations between projections onto the bases are maximized.

The first pair of canonical basis vectors, (b_1^{(1)}, b_1^{(2)}), is found by solving:

\operatorname{argmax}_{b^{(1)}, b^{(2)} \in \mathbb{R}^M} \operatorname{corr}\left( b^{(1)\top} z^{(1)}, \; b^{(2)\top} z^{(2)} \right).    (2)

Subsequent pairs are found by maximizing correlations subject to being orthogonal to previously found pairs. The result of performing CCA is two sets of bases, B^{(\nu)} = [b_1^{(\nu)}, \dots, b_M^{(\nu)}] for \nu \in \{1, 2\}, such that the projection of z^{(\nu)} onto B^{(\nu)}, which we denote \bar{z}^{(\nu)}, satisfies:

1. Orthogonality: \mathbb{E}_T[\bar{z}_j^{(\nu)} \bar{z}_k^{(\nu)}] = \delta_{jk}, where \delta_{jk} is the Kronecker delta, and

2. Correlation: \mathbb{E}_T[\bar{z}_j^{(1)} \bar{z}_k^{(2)}] = \lambda_j \delta_{jk}, where w.l.o.g. we assume 1 \ge \lambda_1 \ge \lambda_2 \ge \cdots \ge 0.

Here \lambda_j is referred to as the j-th canonical correlation coefficient.
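A small numerical sketch of CCA (illustrative, not the authors' implementation): whiten each view and take the SVD of the cross-covariance; the singular values are the canonical correlation coefficients. The two synthetic views below share two latent dimensions by construction:

```python
import numpy as np

def cca(Z1, Z2):
    """CCA via whitening + SVD. Rows are samples; returns bases and correlations."""
    Z1 = Z1 - Z1.mean(0)
    Z2 = Z2 - Z2.mean(0)
    n = Z1.shape[0]
    def inv_sqrt(C):
        # Inverse square root of a covariance matrix via its eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    W1, W2 = inv_sqrt(Z1.T @ Z1 / n), inv_sqrt(Z2.T @ Z2 / n)
    U, s, Vt = np.linalg.svd(W1 @ (Z1.T @ Z2 / n) @ W2)
    return W1 @ U, W2 @ Vt.T, s  # B1, B2, canonical correlations

rng = np.random.default_rng(0)
shared = rng.standard_normal((2000, 2))  # latent signal present in both views
Z1 = np.c_[shared, rng.standard_normal((2000, 3))] @ rng.standard_normal((5, 5))
Z2 = np.c_[shared, rng.standard_normal((2000, 3))] @ rng.standard_normal((5, 5))
B1, B2, lam = cca(Z1, Z2)

# Projections are orthonormal within each view and correlated lambda_j across views.
Zb1, Zb2 = (Z1 - Z1.mean(0)) @ B1, (Z2 - Z2.mean(0)) @ B2
assert np.allclose(Zb1.T @ Zb1 / 2000, np.eye(5), atol=1e-8)
assert np.allclose(Zb1.T @ Zb2 / 2000, np.diag(lam), atol=1e-8)
assert lam[1] > 0.9 and lam[2] < 0.3  # two shared dimensions, three noise dimensions
```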
Definition 1 (canonical norm). Given a vector \bar{z}^{(\nu)} in the canonical basis, define its canonical norm as

\|\bar{z}^{(\nu)}\|_{CCA} := \sqrt{ \sum_{j=1}^{D} \frac{1 - \lambda_j}{\lambda_j} \left( \bar{z}_j^{(\nu)} \right)^2 }.
Canonical ridge regression. Assume we observe n pairs of views coupled with real-valued labels, \{(\bar{z}_i^{(1)}, \bar{z}_i^{(2)}, y_i)\}_{i=1}^n. Canonical ridge regression finds coefficients \hat{\beta}^{(\nu)} = (\hat{\beta}_1^{(\nu)}, \dots, \hat{\beta}_M^{(\nu)})^\top such that

\hat{\beta}^{(\nu)} := \operatorname{argmin}_{\beta} \; \frac{1}{n} \sum_{i=1}^n \left( y_i - \beta^\top \bar{z}_i^{(\nu)} \right)^2 + \|\beta\|_{CCA}^2.    (3)

The resulting estimator, referred to as the canonical shrinkage estimator, is

\hat{\beta}_j^{(\nu)} = \frac{\lambda_j}{n} \sum_{i=1}^n \bar{z}_{i,j}^{(\nu)} y_i.    (4)
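Given canonical projections with (approximately) identity within-view covariance, the shrinkage estimator of Equation 4 is a one-liner. The sketch below (synthetic data and dimensions are illustrative) checks that it matches the explicit solution of the penalized problem in Equation 3:

```python
import numpy as np

rng = np.random.default_rng(2)
n, D = 50000, 4
lam = np.array([0.9, 0.7, 0.4, 0.1])   # canonical correlation coefficients
Zbar = rng.standard_normal((n, D))      # canonical projections (approx. orthonormal)
y = Zbar @ np.array([1.0, -0.5, 0.2, 0.0]) + 0.1 * rng.standard_normal(n)

# Canonical shrinkage estimator, Eq. 4: shrink each coordinate by lambda_j.
beta_shrink = lam * (Zbar.T @ y) / n

# Explicit solution of Eq. 3 with penalty sum_j (1 - lam_j)/lam_j * beta_j^2.
penalty = np.diag((1.0 - lam) / lam)
beta_ridge = np.linalg.solve(Zbar.T @ Zbar / n + penalty, Zbar.T @ y / n)

# The two coincide when the empirical covariance of Zbar is the identity.
assert np.max(np.abs(beta_shrink - beta_ridge)) < 0.02
```

The shrinkage factor λ_j drives weakly correlated (low λ_j) directions towards zero, which is exactly the variance-reduction mechanism described above.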
Penalizing with the canonical norm biases the optimization towards features that are highly correlated across the views. Good regressors exist in both views by Assumption 1. Thus, intuitively,
penalizing uncorrelated features significantly reduces variance, without increasing the bias by much.
More formally:
Theorem 1 (canonical ridge regression, [7]). Assume \mathbb{E}[y^2 | x] \le 1 and that Assumption 1 holds. Let f_{\hat{\beta}}^{(\nu)} denote the estimator constructed with the canonical shrinkage estimator, Eq. (4), on training set T, and let f denote the best linear predictor across both views. For \nu \in \{1, 2\} we have

\mathbb{E}_T\left[\mathrm{loss}\left(f_{\hat{\beta}}^{(\nu)}\right)\right] - \mathrm{loss}(f) \le 5\epsilon + \frac{\sum_{j=1}^M \lambda_j^2}{n},

where the expectation is with respect to training sets T sampled from P(x, y).

The first term, 5\epsilon, bounds the bias of the canonical estimator, whereas the second, \frac{1}{n}\sum_j \lambda_j^2, bounds the variance. The quantity \sum_j \lambda_j^2 can be thought of as a measure of the "intrinsic dimensionality" of the unlabeled data, which controls the rate of convergence. If the canonical correlation coefficients decay sufficiently rapidly, then the increase in bias is more than made up for by the decrease in variance.
2.2
Constructing random views
We construct two views satisfying Assumption 1 in expectation, see Theorem 3 below. To ensure our
method scales to large sets of unlabeled data, we use random features generated using the Nystr?om
method [1].
Suppose we have data {xi }N
i=1 . When N is very large, constructing and manipulating the N ? N
Gram matrix [K]ii0 = h (xi ), (xi0 )i = ?(xi , xi0 ) is computationally expensive. Where here, (x)
defines a mapping from RD to a high dimensional feature space and ?(?, ?) is a positive semi-definite
kernel function.
The idea behind random features is to instead define a lower-dimensional mapping, z(xi ) : RD !
RM through a random sampling scheme such that [K]ii0 ? z(xi )> z(xi0 ) [6, 15]. Thus, using
random features, non-linear functions in x can be learned as linear functions in z(x) leading to
significant computational speed-ups. Here we give a brief overview of the Nystr?om method, which
uses random subsampling to approximate the Gram matrix.
The Nyström method. Fix an M ≤ N and randomly (uniformly) sample a subset M = {x̃_i}_{i=1}^M of M points from the data {x_i}_{i=1}^N. Let K̂ denote the Gram matrix [K̂]_{ii′} = κ(x̃_i, x̃_{i′}) for i, i′ ∈ M. The Nyström method [1, 3] constructs a low-rank approximation to the Gram matrix as

    K ≈ K̃, with [K̃]_{ii′} := [κ(x_i, x̃_1), …, κ(x_i, x̃_M)] K̂⁺ [κ(x_{i′}, x̃_1), …, κ(x_{i′}, x̃_M)]^⊤,    (5)

where K̂⁺ ∈ R^{M×M} is the pseudo-inverse of K̂. Vectors of random features can be constructed as

    z(x_i) = D̂^{−1/2} V̂^⊤ [κ(x_i, x̃_1), …, κ(x_i, x̃_M)]^⊤,

where the columns of V̂ are the eigenvectors of K̂, with D̂ the diagonal matrix whose entries are the corresponding eigenvalues. Constructing features in this way reduces the time complexity of learning a non-linear prediction function from O(N³) to O(N) [15].
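As a concrete illustration, this construction can be sketched in a few lines of NumPy. This is our own sketch, not the authors' code; the Gaussian kernel, the landmark count, and the eigenvalue cutoff are illustrative choices.

```python
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    # kappa(a, b) = exp(-||a - b||^2 / (2 * width^2)), computed pairwise.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * width ** 2))

def nystrom_features(X, M, width=1.0, seed=0):
    # Sample M landmarks uniformly, eigendecompose the small Gram matrix
    # K_hat, and map each point to z(x) = D^{-1/2} V^T [kappa(x, xt_1), ...].
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(len(X), size=M, replace=False)]
    K_hat = gaussian_kernel(landmarks, landmarks, width)
    evals, evecs = np.linalg.eigh(K_hat)
    keep = evals > 1e-10                       # drop numerically zero directions
    C = gaussian_kernel(X, landmarks, width)   # N x M cross-kernel block
    return C @ evecs[:, keep] / np.sqrt(evals[keep])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
Z = nystrom_features(X, M=50)                  # N x r feature matrix, r <= 50
K_full = gaussian_kernel(X, X)
approx_gap = np.abs(Z @ Z.T - K_full).max()    # quality of K_tilde = Z Z^T
```

With M = N (all points used as landmarks) the approximation is exact up to the eigenvalue cutoff, which makes a convenient sanity check.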
An alternative perspective on the Nyström approximation, which will be useful below, is as follows. Consider the integral operators

    L_N[f](·) := (1/N) Σ_{i=1}^N κ(x_i, ·) f(x_i)   and   L_M[f](·) := (1/M) Σ_{i=1}^M κ(x̃_i, ·) f(x̃_i),    (6)

and introduce the Hilbert space H̃ = span{φ̂_1, …, φ̂_r}, where r is the rank of K̂ and the φ̂_i are the first r eigenfunctions of L_M. Then the following proposition shows that using the Nyström approximation is equivalent to performing linear regression in the feature space ("view") z : X → H̃ spanned by the eigenfunctions of the linear operator L_M in Eq. (6):
Proposition 2 (random Nyström view, [3]). Solving

    min_{w ∈ R^r} (1/N) Σ_{i=1}^N ℓ(w^⊤ z(x_i), y_i) + (γ/2) ‖w‖₂²    (7)

is equivalent to solving

    min_{f ∈ H̃} (1/N) Σ_{i=1}^N ℓ(f(x_i), y_i) + (γ/2) ‖f‖²_{H̃}.    (8)
2.3 The proposed algorithm: Correlated Nyström Views (XNV)

Algorithm 1 details our approach to semi-supervised learning, based on generating two views consisting of Nyström random features and penalizing features which are weakly correlated across views. The setting is that we have labeled data {x_i, y_i}_{i=1}^n and a large amount of unlabeled data {x_i}_{i=n+1}^N.

Algorithm 1 Correlated Nyström Views (XNV).
  Input: labeled data {x_i, y_i}_{i=1}^n and unlabeled data {x_i}_{i=n+1}^N
  1: Generate features. Sample x̃_1, …, x̃_{2M} uniformly from the dataset, compute the eigendecompositions of the sub-sampled kernel matrices K̂^(1) and K̂^(2), which are constructed from the samples 1, …, M and M+1, …, 2M respectively, and featurize the input:
         z^(ν)(x_i) ← (D̂^(ν))^{−1/2} (V̂^(ν))^⊤ [κ(x_i, x̃_1), …, κ(x_i, x̃_M)]^⊤ for ν ∈ {1, 2}.
  2: Unlabeled data. Compute CCA bases B^(1), B^(2) and canonical correlations λ_1, …, λ_M for the two views, and set z̄_i ← B^(1) z^(1)(x_i).
  3: Labeled data. Solve
         β̂ = argmin_β (1/n) Σ_{i=1}^n ℓ(⟨β, z̄_i⟩, y_i) + ‖β‖²_CCA + γ‖β‖₂².    (9)
  Output: β̂

Step 1 generates a set of random features. The next two steps implement multi-view regression using the randomly generated views z^(1)(x) and z^(2)(x). Eq. (9) yields a solution for which unimportant
features are heavily downweighted in the CCA basis without introducing an additional tuning parameter. The further penalty on the ℓ₂ norm (in the CCA basis) is introduced as a practical measure to control the variance of the estimator β̂, which can become large if there are many highly correlated features (i.e. the ratio (1 − λ_j)/λ_j ≈ 0 even for large j). In practice most of the shrinkage is due to the CCA norm: cross-validation obtains optimal values of γ in the range [0.00001, 0.1].
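The CCA step and the penalized regression of Eq. (9) can be sketched as follows on synthetic views. This is an illustration rather than the paper's implementation: the views are toy Gaussian features instead of Nyström features, the CCA is computed by whitening plus an SVD, and the CCA norm is taken to be the canonical ridge penalty Σ_j ((1 − λ_j)/λ_j) β_j² in the spirit of [7]; the constants are arbitrary.

```python
import numpy as np

def cca(Z1, Z2, reg=1e-8):
    # CCA via whitening each view and an SVD of the cross-covariance.
    def whiten(Z):
        Zc = Z - Z.mean(0)
        C = Zc.T @ Zc / len(Zc) + reg * np.eye(Z.shape[1])
        ev, V = np.linalg.eigh(C)
        T = V / np.sqrt(ev)                    # whitening transform
        return Zc @ T, T
    W1, T1 = whiten(Z1)
    W2, _ = whiten(Z2)
    U, s, _ = np.linalg.svd(W1.T @ W2 / len(Z1), full_matrices=False)
    return T1 @ U, np.clip(s, 1e-6, 1 - 1e-6)  # basis B1, correlations lam

rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 3))                       # signal in both views
Z1 = np.hstack([shared, rng.normal(size=(500, 5))])      # view 1: signal + noise
Z2 = np.hstack([shared, rng.normal(size=(500, 5))])      # view 2: same signal
y = shared[:, 0] + 0.1 * rng.normal(size=500)

B1, lam = cca(Z1, Z2)
Zbar = (Z1 - Z1.mean(0)) @ B1                            # data in the CCA basis

# Eq. (9) with squared loss: CCA-norm weights (1 - lam_j)/lam_j plus an l2 term.
w_cca, gamma = (1 - lam) / lam, 1e-3
A = Zbar.T @ Zbar / 500 + np.diag(w_cca) + gamma * np.eye(len(lam))
beta = np.linalg.solve(A, Zbar.T @ y / 500)
```

Directions shared across views get λ_j ≈ 1 and are barely penalized, while view-specific noise directions get small λ_j and are shrunk heavily, which is the mechanism described above.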
Computational complexity. XNV is extremely fast. Nyström sampling, step 1, reduces the O(N³) operations required for kernel learning to O(N). Computing the CCA basis, step 2, using standard algorithms is in O(N M²). However, we reduce the runtime to O(N M) by applying the recently proposed randomized CCA algorithm of [16]. Finally, step 3 is a computationally cheap linear program on n samples and M features.
Performance guarantees. The quality of the kernel approximation in (5) has been the subject of detailed study in recent years, leading to a number of strong empirical and theoretical results [3-5, 15]. Recent work of Bach [5] provides theoretical guarantees on the quality of Nyström estimates in the fixed design setting that are relevant to our approach.¹
Theorem 3 (Nyström generalization bound, [5]). Let ξ ∈ R^N be a random vector with finite variance and zero mean, let y = [y_1, …, y_N]^⊤, and define the smoothed estimate ŷ_kernel := (K + NγI)^{−1} K(y + ξ) and the smoothed Nyström estimate ŷ_Nyström := (K̃ + NγI)^{−1} K̃(y + ξ), both computed by minimizing the MSE with ridge penalty γ. Let δ ∈ (0, 1). For sufficiently large M (depending on δ, see [5]), we have

    E_M E_ξ ‖y − ŷ_Nyström‖₂² ≤ (1 + 4δ) · E_ξ ‖y − ŷ_kernel‖₂²,

where E_M refers to the expectation over the subsampled columns used to construct K̃.
In short, the best smoothed estimators in the Nyström views are close to the optimal smoothed estimator. Since the kernel estimate is consistent, loss(f) → 0 as n → ∞. Thus, Assumption 1 holds in expectation, and the generalization performance of XNV is controlled by Theorem 1.
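The two smoothed estimates in Theorem 3 are straightforward to reproduce. In this toy sketch (ours; the kernel, problem sizes, and ridge constant are arbitrary), the Nyström smoother built from all N points recovers the exact kernel smoother, and a modest number of landmarks already gives a comparable error.

```python
import numpy as np

rng = np.random.default_rng(0)
N, gam = 150, 0.05
X = rng.uniform(-2, 2, size=(N, 1))
y = np.sin(2 * X[:, 0])
xi = 0.3 * rng.normal(size=N)                  # zero-mean noise vector

def kern(A, B):
    return np.exp(-((A[:, None] - B[None]) ** 2).sum(-1))

K = kern(X, X)

def smoothed(G):
    # (G + N*gam*I)^{-1} G (y + xi), the smoothed estimate of Theorem 3.
    return G @ np.linalg.solve(G + N * gam * np.eye(N), y + xi)

def nystrom_gram(M):
    # K_tilde from M uniformly sampled landmarks, as in Eq. (5).
    lm = X[rng.choice(N, size=M, replace=False)]
    C = kern(X, lm)
    return C @ np.linalg.pinv(kern(lm, lm), rcond=1e-12) @ C.T

y_kernel = smoothed(K)
y_nystrom = smoothed(nystrom_gram(40))
ratio = np.sum((y - y_nystrom) ** 2) / np.sum((y - y_kernel) ** 2)
```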
Random Fourier Features. An alternative approach to constructing random views is to use Fourier features instead of Nyström features in Step 1. We refer to this approach as Correlated Kitchen Sinks (XKS) after [2]. It turns out that the performance of XKS is consistently worse than that of XNV, in line with the detailed comparison presented in [3]. We therefore do not discuss Fourier features in the main text; see §SI.3 for details on implementation and experimental results.

¹ Extending to a random design requires techniques from [17].
Table 1: Datasets used for evaluation.

Set | Name         | Task | N      | D   || Set | Name       | Task | N      | D
1   | abalone²     | C    | 2,089  | 6   || 10  | elevators⁴ | R    | 8,752  | 18
2   | adult²       | C    | 32,561 | 14  || 11  | HIVa³      | C    | 21,339 | 1,617
3   | ailerons⁴    | R    | 7,154  | 40  || 12  | house⁴     | R    | 11,392 | 16
4   | bank8⁴       | C    | 4,096  | 8   || 13  | ibn Sina³  | C    | 10,361 | 92
5   | bank32⁴      | C    | 4,096  | 32  || 14  | orange³    | C    | 25,000 | 230
6   | cal housing⁴ | R    | 10,320 | 8   || 15  | sarcos 1⁵  | R    | 44,484 | 21
7   | census²      | R    | 18,186 | 119 || 16  | sarcos 5⁵  | R    | 44,484 | 21
8   | CPU²         | R    | 6,554  | 21  || 17  | sarcos 7⁵  | R    | 44,484 | 21
9   | CT²          | R    | 30,000 | 385 || 18  | sylva³     | C    | 72,626 | 216

2.4 A fast approximation to SSSL
The SSSL (simple semi-supervised learning) algorithm proposed in [10] finds the first s eigenfunctions φ_i of the integral operator L_N in Eq. (6) and then solves

    argmin_{w ∈ R^s} Σ_{i=1}^n ( Σ_{j=1}^s w_j φ_j(x_i) − y_i )²,    (10)
where s is set by the user. SSSL outperforms Laplacian Regularized Least Squares [11], a state-of-the-art semi-supervised learning method; see [10]. It also has good generalization guarantees under reasonable assumptions on the distribution of the eigenvalues of L_N. However, since SSSL requires computing the full N × N Gram matrix, it is extremely computationally intensive for large N. Moreover, tuning s is difficult since it is discrete.
We therefore propose SSSL_M, an approximation to SSSL. First, instead of constructing the full Gram matrix, we construct a Nyström approximation by sampling M points from the labeled and unlabeled training set. Second, instead of thresholding eigenfunctions, we use the easier-to-tune ridge penalty, which penalizes directions proportionally to the inverse square of their eigenvalues [18]. As justification, note that Proposition 2 states that the Nyström approximation to kernel regression actually solves a ridge regression problem in the span of the eigenfunctions of L_M. As M increases, the span of L_M tends towards that of L_N [15]. We will also refer to the Nyström approximation to SSSL using 2M features as SSSL_{2M}. See the experiments below for further discussion of the quality of the approximation.
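A minimal version of SSSL_M along these lines: build Nyström features from the labeled and unlabeled points together, then fit a ridge regression on the labeled points alone. This is our own sketch of the idea; the data, kernel width, and regularization are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, M = 2000, 100, 50                        # total, labeled, landmarks
X = rng.uniform(-3, 3, size=(N, 1))
y_all = np.sign(np.sin(2 * X[:, 0]))           # two-class target
labeled = rng.choice(N, size=n, replace=False)

def kern(A, B):
    return np.exp(-2.0 * ((A[:, None] - B[None]) ** 2).sum(-1))

# Nystrom features use labeled AND unlabeled points: the semi-supervised step.
lm = X[rng.choice(N, size=M, replace=False)]
ev, V = np.linalg.eigh(kern(lm, lm))
keep = ev > 1e-10
Z = kern(X, lm) @ V[:, keep] / np.sqrt(ev[keep])

# Ridge regression with squared loss on the n labeled points only.
lam = 1e-3
Zl, yl = Z[labeled], y_all[labeled]
w = np.linalg.solve(Zl.T @ Zl + n * lam * np.eye(Zl.shape[1]), Zl.T @ yl)
err = (np.sign(Z @ w) != y_all).mean()         # error over all N points
```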
3 Experiments
Setup. We evaluate the performance of XNV on 18 real-world datasets, see Table 1. The datasets
cover a variety of regression (denoted by R) and two-class classification (C) problems. The sarcos
dataset involves predicting the joint position of a robot arm; following convention we report results
on the 1st, 5th and 7th joint positions.
The SSSL algorithm was shown to exhibit state-of-the-art performance over fully and semi-supervised methods in scenarios where few labeled training examples are available [10]. However, as discussed in §2.2, due to its computational cost we compare the performance of XNV to the Nyström approximations SSSL_M and SSSL_{2M}.
We used a Gaussian kernel for all datasets. We set the kernel width and the ℓ₂ regularization strength γ for each method using 5-fold cross-validation with 1000 labeled training examples. We trained all methods using a squared-error loss function, ℓ(f(x_i), y_i) = (f(x_i) − y_i)², with M = 200 random features and n = 100, 150, 200, …, 1000 randomly selected training examples.
² Taken from the UCI repository http://archive.ics.uci.edu/ml/datasets.html
³ Taken from http://www.causality.inf.ethz.ch/activelearning.php
⁴ Taken from http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html
⁵ Taken from http://www.gaussianprocess.org/gpml/data/
Runtime performance. The SSSL algorithm of [10] is not computationally feasible on large datasets, since it has time complexity O(N³). For illustrative purposes, we report run times⁶ in seconds of the SSSL algorithm against SSSL_{2M} and XNV on three datasets of different sizes.

Runtimes   | bank8 | cal housing | sylva
SSSL       | 72s   | 2300s       | (intractable)
SSSL_{2M}  | 0.3s  | 0.6s        | 24s
XNV        | 0.9s  | 1.3s        | 26s

For the cal housing dataset, XNV exhibits an almost 1800× speed-up over SSSL. For the largest dataset, sylva, exact SSSL is computationally intractable. Importantly, the computational overhead of XNV over SSSL_{2M} is small.
Generalization performance. We report on the prediction performance averaged over 100 experiments. For regression tasks we report on the mean squared error (MSE) on the testing set normalized
by the variance of the test output. For classification tasks we report the percentage of the test set that
was misclassified.
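Written out explicitly, the two evaluation metrics used here are simply (our paraphrase of the protocol above):

```python
import numpy as np

def normalized_mse(y_true, y_pred):
    # Test-set MSE divided by the variance of the test outputs, so that
    # predicting the test mean scores exactly 1.
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def misclassification_rate(y_true, y_pred):
    # Fraction of the test set assigned the wrong class.
    return np.mean(y_true != y_pred)

y = np.array([0.0, 1.0, 2.0, 3.0])
baseline = normalized_mse(y, np.full(4, y.mean()))   # constant predictor
```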
The table below shows the improvement in performance of XNV over SSSL_M and SSSL_{2M} (taking whichever performs better out of M or 2M on each dataset), averaged over all 18 datasets. Observe that XNV is considerably more accurate and more robust than SSSL_M.

XNV vs SSSL_{M/2M}       | n = 100 | n = 200 | n = 300 | n = 400 | n = 500
Avg reduction in error   | 11%     | 16%     | 15%     | 12%     | 9%
Avg reduction in std err | 15%     | 30%     | 31%     | 33%     | 30%
The reduced variability is to be expected from Theorem 1.
[Figure 1: Comparison of mean prediction error and standard deviation on a selection of datasets: (a) adult, (b) cal housing, (c) census, (d) elevators, (e) ibn Sina, (f) sarcos 5. Each panel plots prediction error (with one-standard-deviation error bars) against the number of labeled training points (100-1000) for SSSL_M, SSSL_{2M} and XNV.]
Table 2 presents a more detailed comparison of performance on the individual datasets for n = 200 and n = 400. The plots in Figure 1 show a representative comparison of mean prediction errors for several datasets for n = 100, …, 1000. Error bars represent one standard deviation. Observe that XNV almost always improves prediction accuracy and reduces variance compared with SSSL_M and SSSL_{2M} when the labeled training set contains between 100 and 500 labeled points. A complete set of results is provided in §SI.1.
Discussion of SSSL_M. Our experiments show that going from M to 2M does not improve generalization performance in practice. This suggests that when there are few labeled points, obtaining a more accurate estimate of the eigenfunctions of the kernel does not necessarily improve predictive performance. Indeed, when more random features are added, stronger regularization is required to reduce the influence of uninformative features; this also has the effect of downweighting informative features. This suggests that the low-rank approximation SSSL_M to SSSL suffices.

⁶ Computed in Matlab 7.14 on a Core i5 with 4GB memory.
Finally, §SI.2 compares the performance of SSSL_M and XNV to fully supervised kernel ridge regression (KRR). We observe dramatic improvements, between 48% and 63%, consistent with the results observed in [10] for the exact SSSL algorithm.
Random Fourier features. Nyström features significantly outperform Fourier features, in line with observations in [3]. The table below shows the relative improvement of XNV over XKS:

XNV vs XKS               | n = 100 | n = 200 | n = 300 | n = 400 | n = 500
Avg reduction in error   | 30%     | 28%     | 26%     | 25%     | 24%
Avg reduction in std err | 36%     | 44%     | 34%     | 37%     | 36%
Further results and discussion for XKS are included in the supplementary material.
Table 2: Performance (normalized MSE/classification error rate). Standard errors in parentheses.

n = 200
Set | SSSL_M        | SSSL_{2M}     | XNV           || Set | SSSL_M        | SSSL_{2M}     | XNV
1   | 0.054 (0.005) | 0.055 (0.006) | 0.053 (0.004) || 10  | 0.309 (0.059) | 0.358 (0.077) | 0.226 (0.020)
2   | 0.198 (0.014) | 0.184 (0.010) | 0.175 (0.010) || 11  | 0.146 (0.048) | 0.072 (0.024) | 0.036 (0.001)
3   | 0.218 (0.016) | 0.231 (0.020) | 0.213 (0.016) || 12  | 0.761 (0.075) | 0.787 (0.091) | 0.792 (0.100)
4   | 0.558 (0.027) | 0.567 (0.029) | 0.561 (0.030) || 13  | 0.109 (0.017) | 0.109 (0.017) | 0.068 (0.010)
5   | 0.058 (0.004) | 0.060 (0.005) | 0.055 (0.003) || 14  | 0.019 (0.001) | 0.019 (0.001) | 0.019 (0.000)
6   | 0.567 (0.081) | 0.634 (0.103) | 0.459 (0.045) || 15  | 0.076 (0.008) | 0.078 (0.009) | 0.071 (0.006)
7   | 0.020 (0.012) | 0.022 (0.014) | 0.019 (0.005) || 16  | 0.172 (0.032) | 0.192 (0.036) | 0.119 (0.014)
8   | 0.395 (0.395) | 0.463 (0.414) | 0.263 (0.352) || 17  | 0.041 (0.004) | 0.043 (0.005) | 0.040 (0.004)
9   | 0.437 (0.096) | 0.367 (0.060) | 0.222 (0.015) || 18  | 0.036 (0.007) | 0.039 (0.007) | 0.028 (0.009)

n = 400
Set | SSSL_M        | SSSL_{2M}     | XNV           || Set | SSSL_M        | SSSL_{2M}     | XNV
1   | 0.051 (0.003) | 0.052 (0.003) | 0.050 (0.002) || 10  | 0.218 (0.022) | 0.233 (0.027) | 0.192 (0.010)
2   | 0.177 (0.008) | 0.172 (0.006) | 0.167 (0.005) || 11  | 0.051 (0.009) | 0.122 (0.031) | 0.036 (0.001)
3   | 0.199 (0.011) | 0.209 (0.013) | 0.193 (0.010) || 12  | 0.691 (0.040) | 0.701 (0.051) | 0.709 (0.058)
4   | 0.517 (0.018) | 0.527 (0.019) | 0.510 (0.016) || 13  | 0.070 (0.009) | 0.072 (0.008) | 0.054 (0.004)
5   | 0.050 (0.003) | 0.051 (0.003) | 0.050 (0.002) || 14  | 0.019 (0.001) | 0.019 (0.001) | 0.019 (0.000)
6   | 0.513 (0.055) | 0.555 (0.063) | 0.432 (0.036) || 15  | 0.059 (0.004) | 0.060 (0.005) | 0.057 (0.003)
7   | 0.019 (0.010) | 0.021 (0.012) | 0.014 (0.003) || 16  | 0.105 (0.014) | 0.106 (0.014) | 0.090 (0.007)
8   | 0.209 (0.171) | 0.286 (0.248) | 0.110 (0.107) || 17  | 0.032 (0.002) | 0.033 (0.003) | 0.032 (0.002)
9   | 0.249 (0.024) | 0.304 (0.037) | 0.201 (0.013) || 18  | 0.029 (0.006) | 0.032 (0.005) | 0.023 (0.006)
4 Conclusion
We have introduced the XNV algorithm for semi-supervised learning. By combining two randomly generated views of Nyström features via an efficient implementation of CCA, XNV outperforms the prior state-of-the-art, SSSL, by 10-15% (depending on the number of labeled points) on average over 18 datasets. Furthermore, XNV is over 3 orders of magnitude faster than SSSL on medium-sized datasets (N = 10,000), with further gains as N increases. An interesting research direction is to investigate using the recently developed deep CCA algorithm, which extracts higher-order correlations between views [19], as a preprocessing step.

In this work we use a uniform sampling scheme for the Nyström method for computational reasons, since it has been shown to perform well empirically relative to more expensive schemes [20]. Since CCA gives us a criterion by which to measure the importance of random features, in the future we aim to investigate active sampling schemes based on canonical correlations, which may yield better performance by selecting the most informative indices to sample.

Acknowledgements. We thank Haim Avron for help with implementing randomized CCA and Patrick Pletscher for drawing our attention to the Nyström method.
References
[1] Williams C, Seeger M: Using the Nyström method to speed up kernel machines. In NIPS 2001.
[2] Rahimi A, Recht B: Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Adv in Neural Information Processing Systems (NIPS) 2008.
[3] Yang T, Li YF, Mahdavi M, Jin R, Zhou ZH: Nyström method vs random Fourier features: A theoretical and empirical comparison. In NIPS 2012.
[4] Gittens A, Mahoney MW: Revisiting the Nyström method for improved large-scale machine learning. In ICML 2013.
[5] Bach F: Sharp analysis of low-rank kernel approximations. In COLT 2013.
[6] Rahimi A, Recht B: Random features for large-scale kernel machines. In Adv in Neural Information Processing Systems 2007.
[7] Kakade S, Foster DP: Multi-view regression via canonical correlation analysis. In Computational Learning Theory (COLT) 2007.
[8] Hotelling H: Relations between two sets of variates. Biometrika 1936, 28:312-377.
[9] Hardoon DR, Szedmak S, Shawe-Taylor J: Canonical correlation analysis: An overview with application to learning methods. Neural Comp 2004, 16(12):2639-2664.
[10] Ji M, Yang T, Lin B, Jin R, Han J: A simple algorithm for semi-supervised learning with improved generalization error bound. In ICML 2012.
[11] Belkin M, Niyogi P, Sindhwani V: Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR 2006, 7:2399-2434.
[12] Blum A, Mitchell T: Combining labeled and unlabeled data with co-training. In COLT 1998.
[13] Chaudhuri K, Kakade SM, Livescu K, Sridharan K: Multiview clustering via canonical correlation analysis. In ICML 2009.
[14] McWilliams B, Montana G: Multi-view predictive partitioning in high dimensions. Statistical Analysis and Data Mining 2012, 5:304-321.
[15] Drineas P, Mahoney MW: On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR 2005, 6:2153-2175.
[16] Avron H, Boutsidis C, Toledo S, Zouzias A: Efficient dimensionality reduction for canonical correlation analysis. In ICML 2013.
[17] Hsu D, Kakade S, Zhang T: An analysis of random design linear regression. In COLT 2012.
[18] Dhillon PS, Foster DP, Kakade SM, Ungar LH: A risk comparison of ordinary least squares vs ridge regression. Journal of Machine Learning Research 2013, 14:1505-1511.
[19] Andrew G, Arora R, Bilmes J, Livescu K: Deep canonical correlation analysis. In ICML 2013.
[20] Kumar S, Mohri M, Talwalkar A: Sampling methods for the Nyström method. JMLR 2012, 13:981-1006.
Manifold-based Similarity Adaptation
for Label Propagation
Masayuki Karasuyama and Hiroshi Mamitsuka
Bioinformatics Center, Institute for Chemical Research, Kyoto University, Japan
{karasuyama,mami}@kuicr.kyoto-u.ac.jp
Abstract
Label propagation is one of the state-of-the-art methods for semi-supervised learning, which estimates labels by propagating label information through a graph.
Label propagation assumes that data points (nodes) connected in a graph should
have similar labels. Consequently, the label estimation heavily depends on edge
weights in a graph which represent similarity of each node pair. We propose a
method for a graph to capture the manifold structure of input features using edge
weights parameterized by a similarity function. In this approach, edge weights
represent both similarity and local reconstruction weight simultaneously, both being reasonable for label propagation. For further justification, we provide analytical considerations including an interpretation as a cross-validation of a propagation model in the feature space, and an error analysis based on a low dimensional
manifold model. Experimental results demonstrated the effectiveness of our approach both in synthetic and real datasets.
1 Introduction
Graph-based learning algorithms have received considerable attention in machine learning community. For example, label propagation (e.g., [1, 2]) is widely accepted as a state-of-the-art approach
for semi-supervised learning, in which node labels are estimated through the input graph structure.
A common important property of these graph-based approaches is that the manifold structure of the
input data can be captured by the graph. Their practical performance advantage has been demonstrated in various application areas.
On the other hand, it is well-known that the accuracy of the graph-based methods highly depends on
the quality of the input graph (e.g., [1, 3-5]), which is typically generated from a set of numerical
input vectors (i.e., feature vectors). A general framework of graph-based learning can be represented
as the following three-step procedure:
Step 1: Generating graph edges from given data, where nodes of the generated graph correspond to
the instances of input data.
Step 2: Giving weights to the graph edges.
Step 3: Estimating node labels based on the generated graph, which is often represented as an
adjacency matrix.
In this paper, we focus on the second step in the three-step procedure; estimating edge weights for
the subsequent label estimation. Optimizing edge weights is difficult in semi-supervised learning,
because there are only a small number of labeled instances. This problem is also important because edge weights heavily affect the final prediction accuracy of graph-based methods, while in practice rather simple heuristic strategies have been employed.
There are two standard approaches for estimating edge weights: similarity function based- and
locally linear embedding (LLE) [6] based-approaches. Each of these two approaches has its own
disadvantage. The similarity based approaches use similarity functions, such as Gaussian kernel,
while most similarity functions have tuning parameters (such as the width parameter of Gaussian
kernel) that are in general difficult to be tuned. On the other hand, in LLE, the true underlying
manifold can be approximated by a graph which minimizes a local reconstruction error. LLE is
more sophisticated than the similarity-based approach, and LLE based graphs have been applied to
semi-supervised learning [5, 7-9]. However, LLE is noise-sensitive [10]. In addition, to avoid a kind
of degeneracy problem [11], LLE has to have additional tuning parameters.
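For reference, the LLE-style local reconstruction weights discussed here can be computed as follows. This is a generic sketch rather than the code of any cited paper; the `reg` term is the usual regularizer added to the local Gram matrix to avoid the degeneracy problem mentioned above.

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    # For each x_i, weights over its k nearest neighbours minimizing
    # ||x_i - sum_j W_ij x_j||^2 subject to sum_j W_ij = 1.
    n = len(X)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]              # exclude the point itself
        G = (X[nbrs] - X[i]) @ (X[nbrs] - X[i]).T      # local Gram matrix
        G = G + reg * np.trace(G) * np.eye(k)          # regularization
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                       # enforce sum-to-one
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
W = lle_weights(X, k=5)
recon_err = np.linalg.norm(W @ X - X) / np.linalg.norm(X)
```

Note that, unlike similarity weights, these W_ij can be negative, which is one reason they may not directly represent node similarity.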
Our approach is a similarity-based method, yet also captures the manifold structure of the input data;
we refer to our approach as adaptive edge weighting (AEW). In AEW, graph edges are determined
by a data adaptive manner in terms of both similarity and manifold structure. The objective function
in AEW is based on local reconstruction, by which estimated weights capture the manifold structure,
where each edge is parameterized as a similarity function of each node pair. Consequently, in spite
of its simplicity, AEW has the following three advantages:
? Compared to LLE based approaches, our formulation alleviates the problem of over-fitting
due to the parameterization of weights. In our experiments, we observed that AEW is robust
against noise of input data using synthetic data set, and we also show the performance
advantage of AEW in eight real-world datasets.
? Similarity based representation of edge weights is reasonable for label propagation because
transitions of labels are determined by those weights, and edge weights obtained by LLE
approaches may not represent node similarity.
? AEW does not have additional tuning parameters such as regularization parameters. Although the number of edges in a graph cannot be determined by AEW, we show that performance of AEW is robust against the number of edges compared to standard heuristics
and an LLE-based approach.
We provide further justifications for our approach based on the ideas of feature propagation and local
linear approximation. Our objective function can be seen as a cross validation error of a propagation
model for feature vectors, which we call feature propagation. This allows us to interpret that AEW
optimizes graph weights through cross validation (for prediction) in the feature vector space instead
of label space, assuming that input feature vectors and given labels share the same local structure.
Another interpretation is provided through local linear approximation, by which we can analyze the
error of local reconstruction in the output (label) space under the assumption of low dimensional
manifold model.
2 Graph-based Semi-supervised Learning
In this paper we use label propagation, which is one of the state-of-the-art graph-based learning algorithms, as the method for the third step of the three-step procedure. Suppose that we have n feature vectors X = {x_1, …, x_n}, where x_i ∈ R^p. An undirected graph G is generated from X, where each node (or vertex) corresponds to a data point x_i. The graph G can be represented by the adjacency matrix W ∈ R^{n×n}, whose (i, j)-element W_ij is the weight of the edge between x_i and x_j. The key idea of graph-based algorithms is the so-called manifold assumption, in which instances connected by large weights W_ij on a graph have similar labels (meaning that labels change smoothly on the graph).
For the adjacency matrix W_ij, the following weighted k-nearest-neighbor (k-NN) graph is commonly used in graph-based learning algorithms [1]:

    W_ij = exp( − Σ_{d=1}^p (x_id − x_jd)² / σ_d² )  if j ∈ N_i or i ∈ N_j,  and  W_ij = 0 otherwise,    (1)

where x_id is the d-th element of x_i, N_i is the set of indices of the k-NN of x_i, and {σ_d}_{d=1}^p is a set of parameters. [1] shows this weighting can also be interpreted as the solution of the heat equation on the graph.
From this adjacency matrix, the graph Laplacian can be defined by

    L = D − W,

where D is a diagonal matrix with diagonal entries D_ii = Σ_j W_ij. Instead of L, normalized variants of the Laplacian, such as L = I − D^{−1}W or L = I − D^{−1/2} W D^{−1/2}, are also used, where I ∈ R^{n×n} is the identity matrix.
Among several label propagation algorithms, we mainly use the formulation by [1], which is the standard formulation of graph-based semi-supervised learning. Suppose that the first ℓ data points in X are labeled by Y = {y_1, ..., y_ℓ}, where y_i ∈ {1, ..., c} and c is the number of classes. The goal of label propagation is to predict the labels of the unlabeled nodes {x_{ℓ+1}, ..., x_n}. The scoring matrix F gives an estimate of the label of x_i by argmax_j F_ij. Label propagation can be defined as estimating F in such a way that the score F changes smoothly on the given graph while matching the given labels. The following is the standard formulation, called the harmonic Gaussian field (HGF) model, of label propagation [1]:

$$\min_F \; \mathrm{trace}(F^\top L F) \quad \text{subject to} \quad F_{ij} = Y_{ij}, \; \text{for } i = 1, \ldots, \ell,$$

where Y_ij is the label matrix with Y_ij = 1 if x_i is labeled as y_i = j, and Y_ij = 0 otherwise. In this formulation, the scores for labeled nodes are fixed as constants. This formulation reduces to a linear system, which can be solved efficiently, especially when the Laplacian L has some sparse structure.
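The reduction to a linear system can be made explicit: if the labeled nodes are ordered first and L is partitioned into labeled/unlabeled blocks, setting the gradient of trace(FᵀLF) to zero in the free block gives L_UU F_U = −L_UL Y_L. A minimal sketch of this solve (our own code, assuming a dense Laplacian; in practice a sparse solver would be used):

```python
import numpy as np

def harmonic_gaussian_field(L, Y_l, n_labeled):
    """Solve min_F trace(F^T L F) s.t. F_ij = Y_ij on the labeled nodes.

    L         : (n, n) graph Laplacian, labeled nodes assumed ordered first.
    Y_l       : (n_labeled, c) one-hot label matrix of the labeled nodes.
    Returns the (n - n_labeled, c) score matrix for the unlabeled nodes.
    """
    l = n_labeled
    L_uu = L[l:, l:]
    L_ul = L[l:, :l]
    # Stationarity in the free block: L_uu F_u + L_ul Y_l = 0
    F_u = np.linalg.solve(L_uu, -L_ul @ Y_l)
    return F_u

# predicted label of unlabeled node i: argmax_j F_u[i, j]
```

On a three-node chain a–b–c with the endpoints labeled with different classes, the middle node receives the score (0.5, 0.5), reflecting the harmonic (local-averaging) character of the solution.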
3 Basic Framework of Proposed Approach
The performance of label propagation heavily depends on the quality of the input graph. Our proposed approach, adaptive edge weighting (AEW), optimizes edge weights for graph-based learning algorithms. We note that AEW addresses the second step of the three-step procedure and is independent of the first and third steps, meaning that any methods for the first and third steps can be combined with AEW. In this paper we assume the input graph is a k-NN graph (i.e., the first step is based on k-NN), but AEW can be applied to any type of graph.
First of all, graph edges should satisfy the following conditions:
• Capturing the manifold structure of the input space.
• Representing similarity between two nodes.
These two conditions are closely related to the manifold assumption of graph-based learning algorithms,
in which labels vary smoothly along the input manifold. Since the manifold structure of the input
data is unknown beforehand, the graph is used to approximate the manifold (the first condition).
Subsequent predictions are performed in such a way that labels smoothly change according to the
similarity structure provided by the graph (the second condition). Our algorithm simultaneously
pursues these two important aspects of the graph for the graph-based learning algorithms.
We define W_ij as a similarity function of the two nodes, as in (1), using the Gaussian kernel in this paper (other similarity functions can also be used). We estimate σ_d so that the graph represents the manifold structure of the input data, via the following optimization problem:
$$\min_{\{\sigma_d\}_{d=1}^p} \; \sum_{i=1}^{n} \left\| x_i - \frac{1}{D_{ii}} \sum_{j \sim i} W_{ij} x_j \right\|_2^2, \quad (2)$$
where j ∼ i means that j is connected to i. This minimizes the error of reconstructing each point by a local linear combination of its neighbors, which captures the input manifold structure, in terms of the parameters of the similarity function. We describe the motivation and analytical properties of this objective function in Section 4, and its advantages over existing approaches, including well-known locally linear embedding (LLE) [6] based methods, in Section 5.
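For reference, objective (2) is cheap to evaluate once W has been built from the current σ_d; a minimal NumPy sketch (the helper name is our own):

```python
import numpy as np

def local_reconstruction_error(X, W):
    """Objective (2): sum_i || x_i - (1/D_ii) sum_{j~i} W_ij x_j ||_2^2."""
    D = W.sum(axis=1)
    X_hat = (W @ X) / D[:, None]       # locally weighted averages of neighbors
    return float(((X - X_hat) ** 2).sum())
```

The same quantity appears again in Section 4 as the leave-one-out error of the feature propagation model, so this function doubles as a monitoring criterion during the gradient updates of σ_d.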
To optimize (2), we can use any gradient-based algorithm, such as steepest descent or conjugate gradient (in the experiments reported later, we used the steepest descent method). Due to the non-convexity of the objective function, we cannot guarantee convergence to the global optimum, which means that the solutions depend on the initial σ_d. In our experiments, we employed the well-known median heuristic (e.g., [12]) for setting the initial values of σ_d (Section 6). Another possible strategy is to try a number of different initial values for σ_d, at a higher computational cost. The gradient can be computed efficiently, owing to the sparsity of the adjacency matrix. Since the number of edges of a k-NN graph is O(nk), the derivative of the adjacency matrix W can be calculated in O(nkp). The entire gradient of the objective function can then be calculated in O(nkp²). Note that k often takes a small value, such as k = 10.
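The median-heuristic initialization mentioned above can be sketched as follows, assuming the common variant that sets a single shared bandwidth to the median Euclidean distance over connected pairs (the exact variant used in the paper may differ in detail):

```python
import numpy as np

def median_heuristic_sigma(X, W):
    """Initialize a common bandwidth as the median distance over graph edges."""
    ii, jj = np.nonzero(np.triu(W, k=1))          # connected pairs (i < j)
    dists = np.linalg.norm(X[ii] - X[jj], axis=1)
    sigma = np.median(dists)
    return np.full(X.shape[1], sigma)             # same sigma_d for every d
```

Restricting to the upper triangle of W counts each undirected edge once, so duplicated symmetric entries do not bias the median.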
4 Analytical Considerations
In Section 3, we defined our approach as the minimization of the local reconstruction error of input
features. We describe several interesting properties and interpretations of this definition.
4.1 Derivation from Feature Propagation
First, we show that our objective function can be interpreted as a cross-validation error of the HGF
model for the feature vector x on the graph. Let us divide a set of node indices {1, . . . , n} into a
training set T and a validation set V. Suppose that we try to predict x in the validation set {x_i}_{i∈V} from the given training set {x_i}_{i∈T} and the adjacency matrix W. For this prediction problem, we consider the HGF model for x:

$$\min_{\hat{X}} \; \mathrm{trace}(\hat{X}^\top L \hat{X}) \quad \text{subject to} \quad \hat{x}_{ij} = x_{ij}, \; \text{for } i \in \mathcal{T},$$

where X = (x_1, x_2, ..., x_n)^⊤, X̂ = (x̂_1, x̂_2, ..., x̂_n)^⊤, and x_ij and x̂_ij indicate the (i, j)-th entries of X and X̂, respectively. In this formulation, x̂_i corresponds to a prediction for x_i. Note that only the x̂_i in the validation set V are regarded as free variables in the optimization problem, because the other {x̂_i}_{i∈T} are fixed at the observed values by the constraint. This can be interpreted as propagating {x_i}_{i∈T} to predict {x_i}_{i∈V}. We call this process feature propagation.
When we employ leave-one-out as the cross-validation scheme for the feature propagation model, we obtain

$$\sum_{i=1}^{n} \left\| x_i - \hat{x}_{-i} \right\|_2^2, \quad (3)$$

where x̂_{-i} is the prediction for x_i with T = {1, ..., i − 1, i + 1, ..., n} and V = {i}. Due to the local averaging property of HGF [1], x̂_{-i} = Σ_j W_ij x_j / D_ii, and hence (3) is equivalent to our objective function (2). From this equivalence, AEW can be interpreted as optimizing the parameters of the graph weights of the HGF model for feature vectors through leave-one-out cross-validation.
This also means that our framework estimates labels using the adjacency matrix W optimized in the
feature space instead of the output (label) space. Thus, if input features and labels share the same
adjacency matrix (i.e., sharing the same local structure), the minimization of the objective function
(2) should estimate the adjacency matrix which accurately propagates the labels of graph nodes.
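The local averaging property used here is easy to verify numerically: fixing every coordinate except x̂_i and minimizing the quadratic x̂ᵀLx̂ over x̂_i gives exactly Σ_j W_ij x_j / D_ii. A small self-contained check (our own code, with a random symmetric W):

```python
import numpy as np

# Minimizing f(t) = x̂ᵀ L x̂ with x̂ equal to x except x̂_i = t gives
#   2 L_ii t + 2 Σ_{j≠i} L_ij x_j = 0  →  t* = Σ_j W_ij x_j / D_ii,
# since L_ii = D_ii and L_ij = -W_ij for j ≠ i.
def loo_hgf_prediction(L, x, i):
    """One-dimensional HGF solve for the single free coordinate i."""
    off = np.delete(L[i], i) @ np.delete(x, i)
    return -off / L[i, i]

rng = np.random.default_rng(0)
W = rng.random((5, 5))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
D = W.sum(axis=1)
L = np.diag(D) - W
x = rng.random(5)
for i in range(5):
    assert np.isclose(loo_hgf_prediction(L, x, i), W[i] @ x / D[i])
```

The assertions confirm that each leave-one-out prediction coincides with the degree-normalized weighted average of the neighbors, which is exactly why (3) collapses to objective (2).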
4.2 Local Linear Approximation
The feature propagation model provides the interpretation of our approach as the optimization of the
adjacency matrix under the assumption that x and y can be reconstructed by the same adjacency matrix. We here justify this assumption in a more formal way from a viewpoint of local reconstruction
with a lower dimensional manifold model.
As shown in [1], HGF can be regarded as a local reconstruction method, meaning that the prediction can be represented as weighted local averages:

$$F_{ik} = \frac{\sum_j W_{ij} F_{jk}}{D_{ii}} \quad \text{for } i = \ell + 1, \ldots, n.$$
We show the relationship between the local reconstruction error in the feature space described by
our objective function (2) and the output space. For simplicity, we consider the vector form of the score function f ∈ R^n, which can be regarded as a special case of the score matrix F; the discussion here carries over to F. The same analysis can be approximately applied to other graph learning methods, such as local and global consistency [2], because they have a similar local averaging form, though we omit the details here.
We assume the following manifold model for the input feature space, in which x is generated from a corresponding lower dimensional variable τ ∈ R^q: x = g(τ) + ε_x, where g: R^q → R^p is a smooth function and ε_x ∈ R^p represents noise. In this model, y is also represented by some function of τ: y = h(τ) + ε_y, where h: R^q → R is a smooth function and ε_y ∈ R represents noise (for simplicity, we consider a continuous output rather than discrete labels). For this model, the following theorem shows the relationship between the reconstruction error of the feature vector x and the output y:
Theorem 1. Suppose x_i can be approximated by its neighbors as follows:

$$x_i = \frac{1}{D_{ii}} \sum_{j \sim i} W_{ij} x_j + e_i, \quad (4)$$

where e_i ∈ R^p represents an approximation error. Then, the same adjacency matrix reconstructs the output y_i ∈ R with the following error:

$$y_i - \frac{1}{D_{ii}} \sum_{j \sim i} W_{ij} y_j = J e_i + O(\Delta \tau_i) + O(\varepsilon_x + \varepsilon_y), \quad (5)$$

where J = (∂h(τ_i)/∂τ^⊤)(∂g(τ_i)/∂τ^⊤)^+, with the superscript + indicating the pseudoinverse, and Δτ_i = max_j(‖τ_i − τ_j‖₂²).
See our supplementary material for the proof of this theorem. From (5), we can see that the reconstruction error of y_i consists of three terms. The first term involves the reconstruction error for x_i, represented by e_i, and the second term involves the distance between τ_i and {τ_j}_{j∼i}. These two terms exhibit a trade-off: we can reduce e_i by using more data points x_j, but then Δτ_i would increase. The third term is the intrinsic noise, which we cannot directly control. In spite of its importance, this simple relationship has received little attention in the context of graph estimation for semi-supervised learning, in which LLE based objective functions have been used without clear justification [5, 7–9].
A simple approach to exploit this theorem would be a regularization formulation, which can be a
minimization of a combination of the reconstruction error for x and a penalization term for distances
between data points connected by edges. Regularized LLE [5, 8, 13, 14] can be interpreted as one
realization of such an approach. However, in semi-supervised learning, selecting appropriate values
of the regularization parameter is difficult. We therefore optimize edge weights through the parameters of the similarity function, specifically the bandwidth parameter σ of the Gaussian similarity function. In this approach, a very large bandwidth (giving large weights to distant data points) may cause a large reconstruction error, while an extremely small bandwidth may not give enough weight to the neighbors needed for reconstruction.
For the symmetric normalized graph Laplacian, Theorem 1 does not directly apply to our algorithm; see the supplementary material for a modified version of Theorem 1 for the normalized Laplacian. In the experiments, we also report results for the normalized Laplacian and show that our approach improves prediction accuracy as in the unnormalized case.
5 Related Topics
LLE [6] can also estimate graph edges based on a similar objective function, in which W is directly optimized as a real valued matrix. This approach has been used in many methods for graph-based semi-supervised learning and clustering [5, 7–9], but LLE is very noise-sensitive [10], and the resulting weights W_ij do not necessarily represent the similarity between the corresponding nodes (i, j). For example, for two nearly identical points x_{j1} and x_{j2}, both connected to x_i, it is not guaranteed that W_{ij1} and W_{ij2} have similar values. To mitigate this problem, a regularization term can be introduced [11], but it is not easy to optimize the regularization parameter for this term. In contrast, we optimize the parameters of the similarity (kernel) function. This parameterized form of edge weights can alleviate the over-fitting problem. Moreover, the optimized weights still represent node similarity by construction.
Although several model selection approaches (such as cross-validation and marginal likelihood maximization) have been applied to optimizing graph edge weights by regarding them as usual hyperparameters in supervised learning [3, 4, 15], most of them need labeled instances and become unreliable when few labels are available. Another approach is to optimize some criterion designed specifically for graph-based algorithms (e.g., [1, 16]). These criteria often have degenerate (trivial) solutions, which heuristics are used to prevent, but the validity of those heuristics is not clear. Compared to these approaches, our approach is more general and flexible with respect to problem settings, because AEW is independent of the number of classes, the number of labels, and the subsequent label estimation algorithm. In addition, model selection based approaches basically target the third step of the three-step procedure, so AEW can be combined with such methods; e.g., the graph optimized by AEW can be used as their input graph.
Besides k-NN, there have been several methods generating a graph (edges) from the feature vectors
(e.g., [9, 17]). Our approach can also be applied to those graphs because AEW only optimizes
weights of edges. In our experiments, we used the edges of the k-NN graph as the initial graph of
AEW. We then observed that AEW is not sensitive to the choice of k, compared with usual k-NN graphs. This is because, in minimizing the reconstruction error (2), the Gaussian similarity value becomes small when x_i and x_j are not close to each other. In other words, redundant weights can be reduced drastically, because in the Gaussian kernel, weights decay exponentially with the squared distance.
Metric learning is another approach to adapting similarity, although metric learning is not designed for graphs. A standard method for incorporating graph information into metric learning is to use some graph-based regularization, in which the graph weights must be determined beforehand. For example, in [18], a graph is generated by LLE, whose disadvantages we have already described. Another approach is [19], which estimates a distance metric so that the k-NN graph in terms of the obtained metric reproduces a given graph. This approach is, however, not designed for semi-supervised learning, and it is unclear whether it works in semi-supervised settings. Overall, metric learning was developed in a different context from our setting, and it has not been established that metric learning can be applied to label propagation.
6 Experiments
We evaluated the performance of our approach using synthetic and real-world datasets. We investigated the performance of AEW using the harmonic Gaussian field (HGF) model. For comparison, we used linear neighborhood propagation (LNP) [5], which generates a graph using an LLE based objective function. LNP can have two regularization parameters, one for the LLE process (the first and second steps of the three-step procedure) and the other for the label estimation process (the third step). For the parameter in the LLE process, we used the heuristic suggested by [11], and for the label propagation process, we chose the best parameter value in terms of test accuracy. HGF does not have such hyper-parameters. All results were averaged over 30 runs with randomly sampled data points.
6.1 Synthetic datasets
We here use the two datasets in Figure 1, which have the same form, except that Figure 1 (b) has several noisy data points that may become bridge points (points connecting different classes [5]). In both cases, the number of classes is 4 and each class has 100 data points (thus, n = 400).

Table 1 shows the error rates for the unlabeled nodes of HGF and LNP under 0-1 loss. For HGF, we used the median heuristic to choose the parameter σ_d in the similarity function (1), meaning that a common σ (= σ_1 = ... = σ_p) is set to the median distance between all connected pairs of x_i. The symmetric normalized version of the graph Laplacian was used. The optimization of AEW started from the median σ_d. The results of AEW are shown in the column "AEW + HGF" of Table 1. The number of labeled nodes was 10 in each class (ℓ = 40, i.e., 10% of the entire dataset), and the number of neighbors in the graphs was set to k = 10 or 20.

In Table 1, we see that HGF with AEW achieved better prediction accuracy than the median heuristic and LNP in all cases. Moreover, for both datasets (a) and (b), AEW was the most robust against changes in the number of neighbors k. This is because σ_d is automatically adjusted in such a way that the local reconstruction error is minimized and then weights for connections between
different manifolds are reduced.

[Figure 1: Synthetic datasets (a) and (b).]

Table 1: Test error comparison for synthetic datasets. The best methods according to a t-test at the 5% significance level are highlighted in boldface.

data  k   HGF          AEW + HGF    LNP
(a)   10  .057 (.039)  .020 (.027)  .039 (.026)
(a)   20  .261 (.048)  .020 (.028)  .103 (.042)
(b)   10  .119 (.054)  .073 (.035)  .103 (.038)
(b)   20  .280 (.051)  .077 (.035)  .148 (.047)

[Figure 2: Resulting graphs for the synthetic dataset of Figure 1 (a) (k = 20): (a) k-NN, (b) AEW, (c) LNP.]

Although LNP also minimizes the local reconstruction error, LNP
may connect data points far from each other if it reduces the reconstruction error.
Figure 2 shows the graphs generated by (a) k-NN, (b) AEW, and (c) LNP, under k = 20 for the
dataset of Figure 1 (a). In Figure 2, the k-NN graph connects a lot of nodes in different classes,
while AEW favorably eliminates those undesirable edges. LNP also has fewer edges between different classes compared to k-NN, but it still connects different classes. AEW reveals the class structure more clearly, which can lead to better prediction performance of subsequent learning algorithms.
6.2 Real-world datasets
We examined the performance of our approach on the
eight popular datasets shown in Table 2, namely COIL
(COIL-20) [20], USPS (a preprocessed version from
[21]), MNIST [22], ORL [23], Vowel [24], Yale (Yale
Face Database B) [25], optdigit [24], and UMIST [26].
Table 2: List of datasets.

dataset   n     p     # classes
COIL      500   256   10
USPS      1000  256   10
MNIST     1000  784   10
ORL       360   644   40
Vowel     792   10    11
Yale      250   1200  5
optdigit  1000  256   10
UMIST     518   644   20
We evaluated two variants of the HGF model. In what follows, "HGF" indicates HGF using the unnormalized graph Laplacian L = D − W, and "N-HGF" indicates HGF using the symmetric normalized Laplacian L = I − D^{−1/2} W D^{−1/2}. For both variants, the median heuristic was used to set σ_d. To adapt to differences in local scale, we here use the local scaling kernel [27] as the similarity function. Figure 3 shows the test error for unlabeled nodes. In this figure, the two dashed lines with different markers are HGF and N-HGF, while the two solid lines with the same markers are HGF with AEW. The performance difference between the variants of HGF was small compared to the effect of AEW, particularly on COIL, ORL, Vowel, Yale, and UMIST.
We can see that AEW substantially improved the prediction accuracy of HGF in most cases. LNP is shown by the solid line without markers. LNP outperformed HGF (without AEW, shown as dashed lines) on COIL, ORL, Vowel, Yale, and UMIST, while HGF with AEW (at least one of its variants) achieved better performance than LNP on all these datasets except Yale (on Yale, LNP and HGF with AEW attained similar accuracy).
Overall, AEW-N-HGF had the best prediction accuracy; typical examples are USPS and MNIST. Although Theorem 1 holds exactly only for AEW-HGF, we can see that AEW-N-HGF, in which the degrees of the graph nodes are scaled by the normalized Laplacian, had highly stable performance.
We further examined the effect of k. Figure 4 shows the test error for k = 20 and 10, using N-HGF, AEW-N-HGF, and LNP on the COIL dataset. The number of labeled instances is the middle value in
[Figure 3: Performance comparison on real-world datasets: (a) COIL, (b) USPS, (c) MNIST, (d) ORL, (e) Vowel, (f) Yale, (g) optdigit, (h) UMIST. Each panel plots the test error rate against the number of labeled instances in each class. HGFs with AEW are shown by solid lines with markers, HGFs with the median heuristic by dashed lines with the same markers, and LNP by a solid line without markers. For N-HGF and AEW-N-HGF, "N" indicates the normalized Laplacian.]
the horizontal axis of Figure 3 (a) (5 in each class). We can see that the test error of AEW is not
sensitive to k. Performance of N-HGF with k = 20 was worse than that with k = 10. On the other
hand, AEW-N-HGF with k = 20 had a similar performance to that with k = 10.
7 Conclusions

[Figure 4: Comparison in test error rates for k = 10 and 20 (COIL, ℓ = 50), using N-HGF, AEW-N-HGF, and LNP. The two boxplots of each method correspond to k = 10 on the left (smaller width) and k = 20 on the right (larger width).]

We have proposed the adaptive edge weighting (AEW) method for graph-based semi-supervised learning. AEW is based on local reconstruction, with the constraint that each edge represents the similarity of a pair of nodes. Due to this constraint, AEW has numerous advantages over LLE based approaches. For example, the noise sensitivity of LLE can be alleviated by the parameterized form of the edge weights, and the similarity form of the edge weights is natural for graph-based methods. We also provide several interesting properties of AEW, by which our objective function can be motivated analytically. We examined the performance of AEW using two synthetic and eight real benchmark datasets. Experimental results demonstrated that AEW can substantially improve the performance of the harmonic Gaussian field (HGF) model, and that AEW outperformed LLE based approaches in all real datasets except one.
References
[1] X. Zhu, Z. Ghahramani, and J. D. Lafferty, "Semi-supervised learning using Gaussian fields and harmonic functions," in Proc. of the 20th ICML (T. Fawcett and N. Mishra, eds.), pp. 912–919, AAAI Press, 2003.
[2] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf, "Learning with local and global consistency," in Advances in NIPS 16 (S. Thrun, L. Saul, and B. Schölkopf, eds.), MIT Press, 2004.
[3] A. Kapoor, Y. A. Qi, H. Ahn, and R. Picard, "Hyperparameter and kernel learning for graph based semi-supervised classification," in Advances in NIPS 18 (Y. Weiss, B. Schölkopf, and J. Platt, eds.), pp. 627–634, MIT Press, 2006.
[4] X. Zhang and W. S. Lee, "Hyperparameter learning for graph based semi-supervised learning algorithms," in Advances in NIPS 19 (B. Schölkopf, J. Platt, and T. Hoffman, eds.), pp. 1585–1592, MIT Press, 2007.
[5] F. Wang and C. Zhang, "Label propagation through linear neighborhoods," IEEE TKDE, vol. 20, pp. 55–67, 2008.
[6] S. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
[7] S. I. Daitch, J. A. Kelner, and D. A. Spielman, "Fitting a graph to vector data," in Proc. of the 26th ICML, (New York, NY, USA), pp. 201–208, ACM, 2009.
[8] H. Cheng, Z. Liu, and J. Yang, "Sparsity induced similarity measure for label propagation," in IEEE 12th ICCV, pp. 317–324, IEEE, 2009.
[9] W. Liu, J. He, and S.-F. Chang, "Large graph construction for scalable semi-supervised learning," in Proc. of the 27th ICML, pp. 679–686, Omnipress, 2010.
[10] J. Chen and Y. Liu, "Locally linear embedding: a survey," Artificial Intelligence Review, vol. 36, pp. 29–48, 2011.
[11] L. K. Saul and S. T. Roweis, "Think globally, fit locally: unsupervised learning of low dimensional manifolds," JMLR, vol. 4, pp. 119–155, Dec. 2003.
[12] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola, "A kernel method for the two-sample-problem," in Advances in NIPS 19 (B. Schölkopf, J. C. Platt, and T. Hoffman, eds.), pp. 513–520, MIT Press, 2007.
[13] E. Elhamifar and R. Vidal, "Sparse manifold clustering and embedding," in Advances in NIPS 24 (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), pp. 55–63, 2011.
[14] D. Kong, C. H. Ding, H. Huang, and F. Nie, "An iterative locally linear embedding algorithm," in Proc. of the 29th ICML (J. Langford and J. Pineau, eds.), pp. 1647–1654, Omnipress, 2012.
[15] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty, "Nonparametric transforms of graph kernels for semi-supervised learning," in Advances in NIPS 17 (L. K. Saul, Y. Weiss, and L. Bottou, eds.), pp. 1641–1648, MIT Press, 2005.
[16] F. R. Bach and M. I. Jordan, "Learning spectral clustering," in Advances in NIPS 16 (S. Thrun, L. K. Saul, and B. Schölkopf, eds.), 2004.
[17] T. Jebara, J. Wang, and S.-F. Chang, "Graph construction and b-matching for semi-supervised learning," in Proc. of the 26th ICML (A. P. Danyluk, L. Bottou, and M. L. Littman, eds.), pp. 441–448, ACM, 2009.
[18] M. S. Baghshah and S. B. Shouraki, "Metric learning for semi-supervised clustering using pairwise constraints and the geometrical structure of data," Intelligent Data Analysis, vol. 13, no. 6, pp. 887–899, 2009.
[19] B. Shaw, B. Huang, and T. Jebara, "Learning a distance metric from a network," in Advances in NIPS 24 (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), pp. 1899–1907, 2011.
[20] S. A. Nene, S. K. Nayar, and H. Murase, "Columbia object image library," tech. rep., CUCS-005-96, 1996.
[21] T. Hastie, R. Tibshirani, and J. H. Friedman, The elements of statistical learning: data mining, inference, and prediction. New York: Springer-Verlag, 2001.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[23] F. Samaria and A. Harter, "Parameterisation of a stochastic model for human face identification," in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, pp. 138–142, 1994.
[24] A. Asuncion and D. J. Newman, "UCI machine learning repository." http://www.ics.uci.edu/~mlearn/MLRepository.html, 2007.
[25] A. Georghiades, P. Belhumeur, and D. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE TPAMI, vol. 23, no. 6, pp. 643–660, 2001.
[26] D. B. Graham and N. M. Allinson, "Characterizing virtual eigensignatures for general purpose face recognition," in Face Recognition: From Theory to Applications; NATO ASI Series F, Computer and Systems Sciences (H. Wechsler, P. J. Phillips, V. Bruce, F. Fogelman-Soulie, and T. S. Huang, eds.), vol. 163, pp. 446–456, 1998.
[27] L. Zelnik-Manor and P. Perona, "Self-tuning spectral clustering," in Advances in NIPS 17, pp. 1601–1608, MIT Press, 2004.
Efficient Supervised Sparse Analysis and Synthesis
Operators
Pablo Sprechmann
Duke University
[email protected]
Roee Litman
Tel Aviv University
[email protected]
Tal Ben Yakar
Tel Aviv University
[email protected]
Alex Bronstein
Tel Aviv University
[email protected]
Guillermo Sapiro
Duke University
[email protected]
Abstract
In this paper, we propose a new computationally efficient framework for learning sparse models. We formulate a unified approach that contains as particular
cases models promoting sparse synthesis and analysis type of priors, and mixtures
thereof. The supervised training of the proposed model is formulated as a bilevel
optimization problem, in which the operators are optimized to achieve the best
possible performance on a specific task, e.g., reconstruction or classification. By
restricting the operators to be shift invariant, our approach can be thought as a
way of learning sparsity-promoting convolutional operators. Leveraging recent
ideas on fast trainable regressors designed to approximate exact sparse codes, we
propose a way of constructing feed-forward networks capable of approximating
the learned models at a fraction of the computational cost of exact solvers. In the
shift-invariant case, this leads to a principled way of constructing a form of task-specific convolutional networks. We illustrate the proposed models on several
experiments in music analysis and image processing applications.
1 Introduction
Parsimony, preferring a simple explanation to a more complex one, is probably one of the most intuitive principles widely adopted in the modeling of nature. The past two decades of research have
shown the power of parsimonious representation in a vast variety of applications from diverse domains of science. Parsimony in the form of sparsity has been shown particularly useful in the fields
of signal and image processing and machine learning. Sparse models impose sparsity-promoting
priors on the signal, which can be roughly categorized as synthesis or analysis. Synthesis priors are
generative, asserting that the signal is approximated well as a superposition of a small number of
vectors from a (possibly redundant) synthesis dictionary. Analysis priors, on the other hand, assume
that the signal admits a sparse projection onto an analysis dictionary. Many classes of signals, in
particular, speech, music, and natural images, have been shown to be sparsely representable in overcomplete wavelet and Gabor frames, which have been successfully adopted as synthesis dictionaries
in numerous applications [14]. Analysis priors involving differential operators, of which total variation is a popular instance, have also been shown very successful in regularizing ill-posed image
restoration problems [19].
* Work partially supported by ARO, BSF, NGA, ONR, NSF, NSSEFF, and Israel-US Binational.
Despite the spectacular success of these axiomatically constructed synthesis and analysis operators,
significant empirical evidence suggests that better performance is achieved when a data- or problemspecific dictionary is used instead of a predefined one. Works [1, 16], followed by many others,
demonstrated that synthesis dictionaries can be constructed to best represent training data by solving
essentially a matrix factorization problem. Despite the lack of convexity, many efficient dictionary
learning procedures have been proposed.
This unsupervised or data-driven approach to synthesis dictionary learning is well-suited for reconstruction tasks such as image restoration. For example, synthesis models with learned dictionaries,
have achieved excellent results in denoising [9, 13]. However, in discriminative tasks such as classification, good data reconstruction is not necessarily required or even desirable. Attempts to replicate
the success of sparse models in discriminative tasks led to the recent interest in supervised or a
task- rather than data-driven dictionary learning, which appeared to be a significantly more difficult
modeling and computational problem compared to its unsupervised counterpart [6].
Supervised learning also seems to be the only practical option for learning unstructured nongenerative analysis operators, for which no simple unsupervised alternatives exist. While the supervised analysis operator learning has been mainly used as regularization on inverse problems,
e.g., denoising [5], we argue that it is often better suited for classification tasks than its synthesis
counterpart, since the feature learning and the reconstruction are separated. Recent works proposed
to address the supervised learning of `1 norm synthesis [12] and analysis [5, 17] priors via bilevel optimization [8], in which the minimization of a task-specific loss with respect to a dictionary depends
in turn on the minimizer of a representation pursuit problem using that dictionary.
For the synthesis case, the task-oriented bilevel optimization problem is smooth and can be efficiently solved using stochastic gradient descent (SGD) [12]. However, [12] heavily relies on the
separability of the proximal operator of the `1 norm, and thus cannot be extended to the analysis
case, where the `1 term is not separable. The approach proposed in [17] formulates an analysis
model with a smoothed `1 -type prior and uses implicit differentiation to obtain its gradients with respect to the dictionary required for the solution of the bilevel problem. However, such approximate
priors are known to produce inferior results compared to their exact counterparts.
Main contributions. This paper focuses on supervised learning of synthesis and analysis priors,
making three main contributions:
First, we consider a more general sparse model encompassing analysis and synthesis priors as particular cases, and formulate its supervised learning as a bilevel optimization problem. We propose
a new analysis technique, for which the (almost everywhere) smoothness of the proposed bilevel
problem is shown, and its exact subgradients are derived. We also show that the model can be extended to include a sensing matrix and a non-Euclidean metric in the data term, both of which can
be learned as well. We relate the learning of the latter metric matrix to task-driven metric learning
techniques.
Second, we show a systematic way of constructing fast fixed-complexity approximations to the
solution of the proposed exact pursuit problem by unrolling few iterations of the exact iterative
solver into a feed-forward network, whose parameters are learned in the supervised regime. The
idea of deriving a fast approximation of sparse codes from an iterative algorithm has been recently
successfully advocated in [11] for the synthesis model. We present an extension of this line of
research to the various settings of analysis-flavored sparse models.
Third, we dedicate special attention to the shift-invariant particular case of our model. The fast
approximation in this case assumes the form of a convolutional neural network.
2 Analysis, synthesis, and mixed sparse models
We consider a generalization of the Lasso-type [21, 22] pursuit problem
$$\min_y \; \frac{1}{2}\|M_1 x - M_2 y\|_2^2 + \lambda_1 \|\Omega y\|_1 + \frac{\lambda_2}{2}\|y\|_2^2, \qquad (1)$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^k$, $M_1$, $M_2$ are $m \times n$ and $m \times k$, respectively, $\Omega$ is $r \times k$, and $\lambda_1, \lambda_2 > 0$
are parameters. Pursuit problem (1) encompasses many important particular cases that have been
extensively studied in the literature: by setting $M_1 = I$, $\Omega = I$, and $M_2 = D$ to be a column-overcomplete dictionary ($k > m$), the standard sparse synthesis model is obtained, which attempts to
input : Data $x$, matrices $M_1$, $M_2$, $\Omega$, weights $\lambda_1$, $\lambda_2$, parameter $\rho > 0$.
output: Sparse code $y$.
Initialize $\beta^0 = 0$, $z^0 = 0$
for $j = 1, 2, \dots$ until convergence do
    $y^{j+1} = (M_2^T M_2 + \rho\,\Omega^T\Omega + \lambda_2 I)^{-1}(M_2^T M_1 x + \rho\,\Omega^T(z^j - \beta^j))$
    $z^{j+1} = \tau_{\lambda_1/\rho}(\Omega y^{j+1} + \beta^j)$
    $\beta^{j+1} = \beta^j + \Omega y^{j+1} - z^{j+1}$
end
Algorithm 1: Alternating direction method of multipliers (ADMM). Here, $\tau_t(z) = \mathrm{sign}(z) \odot \max\{|z| - t, 0\}$ denotes the element-wise soft thresholding (the proximal operator of $\ell_1$).
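As an illustration, the iteration of Algorithm 1 can be sketched in a few lines of NumPy. The function names are ours, and the direct matrix inversion is used only for clarity (the $y$-update matrix is fixed, so a cached factorization would normally be used):

```python
import numpy as np

def soft_threshold(z, t):
    """Element-wise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_pursuit(x, M1, M2, Omega, lam1, lam2, rho=1.0, n_iter=100):
    """Sketch of Algorithm 1 (ADMM) for the generalized pursuit problem (1)."""
    k = M2.shape[1]
    r = Omega.shape[0]
    z = np.zeros(r)
    beta = np.zeros(r)
    # The y-update matrix does not change across iterations; invert it once.
    A_inv = np.linalg.inv(M2.T @ M2 + rho * Omega.T @ Omega + lam2 * np.eye(k))
    b = M2.T @ (M1 @ x)
    for _ in range(n_iter):
        y = A_inv @ (b + rho * Omega.T @ (z - beta))
        z = soft_threshold(Omega @ y + beta, lam1 / rho)
        beta = beta + Omega @ y - z
    return y
```

A simple sanity check: for $M_1 = M_2 = \Omega = I$, (1) reduces to an elastic-net-like problem whose closed-form solution is the soft thresholding of $x$ scaled by $1/(1+\lambda_2)$.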
represent the data vector x as a sparse linear combination of the atoms of D. The case where the data
are unavailable directly, but rather through a set of (usually fewer, m < n) linear measurements, is
handled by supplying $x \in \mathbb{R}^m$ and setting $M_2 = \Phi D$, with $\Phi$ being an $m \times n$ sensing matrix. Such
a case arises frequently in compressed sensing applications as well as in general inverse problems.
On the other hand, by setting $M_1 = M_2 = I$ and $\Omega$ a row-overcomplete dictionary ($r > k$), the standard sparse analysis model is obtained, which attempts to approximate the data vector $x$ by another vector $y$ in the same space admitting a sparse projection on $\Omega$. For example, setting $\Omega$ to be the matrix of discrete derivatives leads to total variation regularization, which has been shown extremely successful in numerous signal processing applications. The analysis model can also be extended by adding an $m \times k$ sensing operator $M_2 = \Phi$, assuming that $x$ is given in the $m$-dimensional measurement space. This leads to popular analysis formulations of image deblurring,
super-resolution, and other inverse problems.
Keeping both the analysis and the synthesis dictionaries and setting $M_2 = D$, $\Omega = [\Omega_0 D; I]$ leads to the mixed model. Note that the reconstructed data vector is now obtained by $\hat{x} = Dy$ with sparse $y$; as a result, the $\ell_1$ term is extended to make sparse the projection of $\hat{x}$ on the analysis dictionary $\Omega_0$, as well as to impose sparsity of $y$. A sensing matrix can be incorporated in this setting as well, by setting $M_1 = \Phi$ and $M_2 = \Phi D$. Alternatively, we can interpret $\Phi$ as the projection matrix parametrizing a $\Phi^T\Phi$ Mahalanobis metric, thus generalizing the traditional Euclidean data term.
A particularly important family of analysis operators is obtained when the operator is restricted to
be shift-invariant. In this case, the operator can be expressed as a convolution with a filter, $\omega * y$, whose impulse response $\omega \in \mathbb{R}^f$ is generally of a much smaller dimension than $y$. A straightforward generalization would be to consider an analysis operator consisting of $q$ filters,
$$\Omega(\omega^1, \dots, \omega^q) = [\Omega_1(\omega^1); \cdots; \Omega_q(\omega^q)] \quad \text{with} \quad \Omega_i y = \omega^i * y, \quad 1 \le i \le q. \qquad (2)$$
This model includes as a particular case the isotropic total variation priors. In this case, q = 2 and
the filters correspond to the discrete horizontal and vertical derivatives. In general, the exact form of
the operator depends on the dimension of the convolution, and the type of boundary conditions.
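To make (2) concrete, here is a minimal NumPy sketch of a one-dimensional multi-filter shift-invariant analysis operator. The function name is ours, and circular (periodic) boundary conditions are an illustrative assumption; the text notes that the exact form depends on the chosen boundary handling:

```python
import numpy as np

def shift_invariant_analysis(y, filters):
    """Apply Omega(w^1, ..., w^q) of eq. (2): stack the circular convolutions
    of the signal y with each filter w^i."""
    n = len(y)
    responses = []
    for w in filters:
        # circular convolution computed in the Fourier domain
        responses.append(np.real(np.fft.ifft(np.fft.fft(y) * np.fft.fft(w, n))))
    return np.concatenate(responses)

# A single discrete-derivative filter yields a 1-D TV-like operator:
y = np.array([1.0, 1.0, 2.0, 2.0])
z = shift_invariant_analysis(y, [np.array([1.0, -1.0])])  # circular differences y[i] - y[i-1]
```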
One of the most attractive properties of pursuit problem (1) is convexity, which becomes strict for $\lambda_2 > 0$. While for $\Omega = I$, (1) can be solved efficiently using popular proximal methods [15] (such as FISTA [2]), this is no longer an option in the case of a non-trivial $\Omega$, as $\|\Omega y\|_1$ no longer has a closed-form proximal operator. A way to circumvent this difficulty is by introducing an auxiliary
variable $z = \Omega y$ and solving the constrained convex program
$$\min_{y,z} \; \frac{1}{2}\|M_1 x - M_2 y\|_2^2 + \lambda_1\|z\|_1 + \frac{\lambda_2}{2}\|y\|_2^2 \quad \text{s.t.} \quad z = \Omega y, \qquad (3)$$
with an unscaled $\ell_1$ term. This leads to the family of so-called split-Bregman methods; the application of augmented Lagrangian techniques to solve (3) is known in the literature as alternating
direction method of multipliers (ADMM) [4], summarized in Algorithm 1. Particular instances
might be solved more efficiently with alternative algorithms (i.e. proximal splitting methods).
3 Bilevel sparse models
A central focus of this paper is to develop a framework for supervised learning of the parameters in (1), collectively denoted by $\Theta = \{M_1, M_2, D, \Omega\}$, to achieve the best possible performance in a
specific task such as reconstruction or classification. Supervised schemes arise very naturally when
dealing with analysis operators. In sharp contrast to the generative synthesis models, where data
reconstruction can be enforced unsupervisedly, there is no trivial way for unsupervised training of
analysis operators without restricting them to satisfy some external, frequently arbitrary, constraints.
Clearly, unconstrained minimization of (1) over $\Omega$ would lead to the trivial solution $\Omega = 0$. The ideas
proposed in [12] fit very well here, and were in fact used in [5, 17] for learning of unstructured
analysis operators. However, in both cases the authors used a smoothed version of the `1 penalty,
which is known to produce inferior results. In this work we extend these ideas, without smoothing
the penalty. Formally, given an observed variable $x \in \mathbb{R}^n$ coming from a certain distribution $P_X$, we aim at predicting a corresponding latent variable $y \in \mathbb{R}^k$. The latter can be discrete, representing
a label in a classification task, or continuous like in regression or reconstruction problems. As noted
before, when $\lambda_2 > 0$, problem (1) is strictly convex and, consequently, has a unique minimizer. The
solution of the pursuit problem defines, therefore, an unambiguous deterministic map from the space
of the observations to the space of the latent variables, which we denote by $y^\star_\Theta(x)$. The map depends on the model parameters $\Theta$. The goal of supervised learning is to select $\Theta$ that minimizes the expectation over $P_X$ of some problem-specific loss function $\ell$. In practice, the distribution $P_X$ is
usually unknown, and the expected loss is substituted by an empirical loss computed on a training
set of pairs $(x, y) \in (\mathcal{X}, \mathcal{Y})$. The task-driven model learning problem becomes [12]
$$\min_\Theta \; \frac{1}{|\mathcal{X}|} \sum_{(x,y)\in(\mathcal{X},\mathcal{Y})} \ell(y, x, y^\star_\Theta(x)) + \psi(\Theta), \qquad (4)$$
where $\psi(\Theta)$ denotes a regularizer on the model parameters added to stabilize the solution. Problem
(4) is a bilevel optimization problem [8], as we need to optimize the loss function $\ell$, which in turn
depends on the minimizer of (1).
As an example, let us examine the generic class of signal reconstruction problems, in which, as
explained in Section 2, the matrix $M_2 = \Phi$ plays the role of a linear degradation (e.g., blur and sub-sampling in case of image super-resolution problems), producing the degraded and, possibly, noisy observation $x = \Phi y + n$ from the latent clean signal $y$. The goal of the model learning problem is to select the model parameters $\Theta$ yielding the most accurate inverse operator, $y^\star_\Theta(\Phi y) \approx y$. Assuming a simple white Gaussian noise model, this can be achieved through the following loss
$$\ell(y, x, y^\star) = \frac{1}{2}\|y - y^\star\|_2^2. \qquad (5)$$
While the supervised learning of analysis operators has been considered for solving denoising problems [5, 17], here we address more general scenarios. In particular, we argue that, when used along
with metric learning, it is often better suited for classification tasks than its synthesis counterpart,
because the non-generative nature of analysis models is more suitable for feature learning. For simplicity, we consider the case of a linear binary classifier of the form $\mathrm{sign}(w^T z + b)$ operating on the "feature vector" $z = \Omega y^\star_\Theta(x)$. Using a loss of the form $\ell(y, x, z) = f(y(w^T z + b))$ with $f$ being, e.g., the logistic regression function $f(t) = \log(1 + e^{-t})$, we train the model parameters $\Theta$ simultaneously with the classifier parameters $w$ and $b$. In this context, the learning of $\Theta$ can be interpreted as feature learning.
The generalization to multi-class classification problems is straightforward, by using a matrix W
and a vector b instead of w and b. It is worthwhile noting that more stable classifiers are obtained
by adding a regularization of the form $\psi = \|W\|_F^2$ to the learning problem (4).
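The margin-based loss above can be written out explicitly. A minimal sketch (function name ours; we use the standard convention in which the logistic loss decreases with the margin $y(w^Tz+b)$):

```python
import numpy as np

def logistic_classification_loss(y, z, w, b):
    """Logistic loss for a binary label y in {-1, +1} and feature vector z:
    log(1 + exp(-y * (w^T z + b))). Small for confident correct predictions,
    large for confident wrong ones."""
    margin = y * (w @ z + b)
    return np.log1p(np.exp(-margin))
```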
Optimization. A local minimizer of the non-convex model learning problem (4) can be found via
stochastic optimization [8, 12, 17], by performing gradient descent steps on each of the variables in
$\Theta$ with the pair $(x, y)$ each time drawn at random from the training set. Specifically, the parameters at iteration $i + 1$ are obtained by
$$\Theta^{i+1} \leftarrow \Theta^i - \eta_i \nabla_\Theta \ell(x, y, y^\star_{\Theta^i}(x)), \qquad (6)$$
where $0 \le \eta_i \le \eta$ is a decreasing sequence of step sizes. Following [12], we use a step size of the form $\eta_i = \min(\eta, \eta\, i_0/i)$ in all our experiments, which means that a fixed step size is used during the first $i_0$ iterations, after which it decays according to the $1/i$ annealing strategy. Note that the learning requires the gradient $\nabla_\Theta \ell$, which in turn relies on the gradient of $y^\star_\Theta(x)$ with respect to $\Theta$. Even though $y^\star_\Theta(x)$ is obtained by solving a non-smooth optimization problem, we will
show that it is almost everywhere differentiable, and one can compute its gradient with respect to
$\Theta = \{M_1, M_2, D, \Omega\}$ explicitly and in closed form. In the next section, we briefly summarize the derivation of the gradients $\nabla_{M_2}\ell$ and $\nabla_\Omega\ell$, as these two are the most interesting cases. The gradients needed for the remaining model settings described in Section 2 can be obtained straightforwardly from $\nabla_{M_2}\ell$ and $\nabla_\Omega\ell$.
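The annealing schedule used with (6) is simple enough to state in code; the default values here are illustrative, not those of the paper:

```python
def sgd_step_size(i, eta=1e-3, i0=100):
    """Annealed SGD step size eta_i = min(eta, eta * i0 / i): constant for
    the first i0 iterations (i <= i0), then decaying as 1/i."""
    return min(eta, eta * i0 / i)
```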
Gradient computation. To obtain the gradients of the cost function with respect to the matrices
$M_2$ and $\Omega$, we consider a version of (3) in which the equality constraint is relaxed by a penalty,
$$\min_{z,y} \; \frac{1}{2}\|M_1 x - M_2 y\|_2^2 + \frac{t}{2}\|\Omega y - z\|_2^2 + \lambda_1\|z\|_1 + \frac{\lambda_2}{2}\|y\|_2^2, \qquad (7)$$
with $t > 0$ being the penalty parameter. We denote by $y^\star_t$ and $z^\star_t$ the unique minimizers of this strongly convex optimization problem with $t$, $x$, $M_1$, $M_2$ and $\Omega$ fixed. Naturally, $y^\star_t$ and $z^\star_t$ are functions of $x$ and $\Theta$, the same way as $y^\star_\Theta(x)$. Throughout this section, we will omit this dependence
to simplify notation. The first-order optimality conditions of (7) lead to the equalities
$$M_2^T(M_2 y^\star_t - M_1 x) + t\,\Omega^T(\Omega y^\star_t - z^\star_t) + \lambda_2 y^\star_t = 0, \qquad (8)$$
$$t(z^\star_t - \Omega y^\star_t) + \lambda_1(\mathrm{sign}(z^\star_t) + \gamma) = 0, \qquad (9)$$
where the sign of zero is defined as zero and $\gamma$ is a vector in $\mathbb{R}^r$ such that $\gamma_\Lambda = 0$ and $|\gamma_{\Lambda^c}| \le 1$. Here, $\gamma_\Lambda$ denotes the sub-vector of $\gamma$ whose rows are reduced to $\Lambda$, the set of non-zero coefficients (active set) of $z^\star_t$.
It has been shown that the solution of the synthesis [12], analysis [23], and generalized Lasso [22]
regularization problems are all piecewise affine functions of the observations and the regularization
parameter. This means that the active set of the solution is constant on intervals of the regularization
parameter $\lambda_1$. Moreover, the number of transition points (values of $\lambda_1$ at which, for a given observation $x$, the active set of the solution changes) is finite and thus negligible. It can be shown that if $\lambda_1$ is not a transition point of $x$, then a small perturbation in $\Omega$, $M_1$, or $M_2$ leaves $\Lambda$ and the sign of the coefficients in the solution unchanged [12]. Applying this result to (9), we can state that $\mathrm{sign}(z^\star_t) = \mathrm{sign}(\Omega y^\star_t)$.
Let $I_\Lambda$ be the projection onto $\Lambda$, and let $P_\Lambda = I_\Lambda^T I_\Lambda = \mathrm{diag}\{|\mathrm{sign}(z^\star)|\}$ denote the matrix setting to zero the rows corresponding to $\Lambda^c$. Multiplying the second optimality condition by $P_\Lambda$, we have $z^\star_t = P_\Lambda z^\star_t = P_\Lambda \Omega y^\star_t - \frac{\lambda_1}{t}\mathrm{sign}(z^\star_t)$, where we used the fact that $P_\Lambda \mathrm{sign}(z^\star_t) = \mathrm{sign}(z^\star_t)$. We can plug the latter result into (8), obtaining
$$y^\star_t = Q_t(M_2^T M_1 x - \lambda_1 \Omega^T \mathrm{sign}(z^\star_t)), \qquad (10)$$
where $Q_t = (t\,\Omega^T P_{\Lambda^c}\Omega + B)^{-1}$ and $B = M_2^T M_2 + \lambda_2 I$. By using the first-order Taylor expansion of (10), we can obtain an expression for the gradients of $\ell(y^\star_t)$ with respect to $M_2$ and $\Omega$,
$$\nabla_\Omega \ell(y^\star_t) = -\lambda_1 \mathrm{sign}(z^\star_t)\,\xi_t^T - P_{\Lambda^c}\Omega\,(t\, y^\star_t \xi_t^T + t\, \xi_t {y^\star_t}^T), \qquad (11)$$
$$\nabla_{M_2} \ell(y^\star_t) = M_2(y^\star_t \xi_t^T + \xi_t {y^\star_t}^T), \qquad (12)$$
where $\xi_t = Q_t \nabla_{y^\star}\ell(y^\star_t)$.
Note that since the (unique) solution of (7) can be made arbitrarily close to the (unique) solution of (1) by increasing $t$, we can obtain the exact gradients of $y^\star$ by taking the limit $t \to \infty$ in the above expressions. First, observe that
$$Q_t = (t\Omega^T P_{\Lambda^c}\Omega + B)^{-1} = (B(tB^{-1}\Omega^T P_{\Lambda^c}\Omega + I))^{-1} = (tC + I)^{-1}B^{-1},$$
where $C = B^{-1}\Omega^T P_{\Lambda^c}\Omega$. Note that $B$ is invertible if $M_2$ is full-rank or if $\lambda_2 > 0$. Let $C = UHU^{-1}$ be the eigen-decomposition of $C$, with $H$ a diagonal matrix with elements $h_i$, $1 \le i \le k$. Then, $Q_t = UH_tU^{-1}B^{-1}$, where $H_t$ is diagonal with $1/(th_i + 1)$ on the diagonal. In the limit, $th_i \to 0$ if $h_i = 0$, and $th_i \to \infty$ otherwise, yielding
$$Q = \lim_{t\to\infty} Q_t = UH^0U^{-1}B^{-1} \quad \text{with} \quad H^0 = \mathrm{diag}\{h^0_i\}, \quad h^0_i = \begin{cases} 0 & h_i \ne 0, \\ 1 & h_i = 0. \end{cases} \qquad (13)$$
The optimum of (1) is given by $y^\star = Q(M_2^T M_1 x - \lambda_1 \Omega^T \mathrm{sign}(z^\star))$. Analogously, we take the limit in the expressions describing the gradients in (11) and (12). We summarize our main result in
Proposition 1 below, for which we define
$$\hat{Q} = \lim_{t\to\infty} tQ_t = UH''U^{-1}B^{-1} \quad \text{with} \quad H'' = \mathrm{diag}\{h''_i\}, \quad h''_i = \begin{cases} \frac{1}{h_i} & h_i \ne 0, \\ 0 & h_i = 0. \end{cases} \qquad (14)$$
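Numerically, the limit operators in (13) and (14) can be assembled from the eigendecomposition of $C$. The following NumPy sketch (function name and tolerance ours) illustrates the construction:

```python
import numpy as np

def limit_operators(B, Omega, active_mask, tol=1e-10):
    """Compute Q and Q_hat of eqs. (13)-(14) from B = M2^T M2 + lam2*I, the
    analysis operator Omega, and the active set of z* (True where z*_i != 0)."""
    P_comp = np.diag((~active_mask).astype(float))    # P_{Lambda^c}
    C = np.linalg.solve(B, Omega.T @ P_comp @ Omega)  # C = B^{-1} Omega^T P Omega
    h, U = np.linalg.eig(C)
    U_inv = np.linalg.inv(U)
    B_inv = np.linalg.inv(B)
    zero = np.abs(h) < tol
    H0 = np.diag(np.where(zero, 1.0, 0.0))                            # h^0_i of (13)
    H2 = np.diag(np.where(zero, 0.0, 1.0 / np.where(zero, 1.0, h)))   # h''_i of (14)
    Q = np.real(U @ H0 @ U_inv @ B_inv)
    Q_hat = np.real(U @ H2 @ U_inv @ B_inv)
    return Q, Q_hat
```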
[Figure 1 (network diagram): the input $x$ is mapped to $b^0 = Ax$ and passed through $K$ identical layers, each computing $z_{\mathrm{out}} = \tau_t(b_{\mathrm{in}})$ and $b_{\mathrm{out}} = H(z_{\mathrm{out}} - z_{\mathrm{in}}) + Gb_{\mathrm{in}} + Fb_{\mathrm{prev}}$, followed by a reconstruction layer $y_{\mathrm{out}} = Ux + V(2z_{\mathrm{out}} - b_{\mathrm{out}})$.]
Figure 1: ADMM neural network encoder. The network comprises $K$ identical layers parameterized by the matrices $A$ and $B$ and the threshold vector $t$, and one output layer parameterized by the matrices $U$ and $V$. The initial values of the learned parameters are given by ADMM (see Algorithm 1) according to $U = (M_2^TM_2 + \rho\Omega^T\Omega + \lambda_2 I)^{-1}M_2^TM_1$, $V = \rho(M_2^TM_2 + \rho\Omega^T\Omega + \lambda_2 I)^{-1}\Omega^T$, $A = \Omega U$, $H = 2\Omega V - I$, $G = 2I - \Omega V$, $F = \Omega V - I$, and $t = \frac{\lambda_1}{\rho}\mathbf{1}$.
Proposition 1. The functional $y^\star = y^\star_\Theta(x)$ in (1) is almost everywhere differentiable for $\lambda_2 > 0$, and its gradients satisfy
$$\nabla_\Omega \ell(y^\star) = -\lambda_1 \mathrm{sign}(\Omega y^\star)\,\xi^T - P_{\Lambda^c}\Omega\,(\hat{y}\,\xi^T + \hat{\xi}\,{y^\star}^T),$$
$$\nabla_{M_2} \ell(y^\star) = M_2(y^\star \xi^T + \xi\, {y^\star}^T),$$
where the vectors $\xi$, $\hat{\xi}$, and $\hat{y}$ in $\mathbb{R}^k$ are defined as $\xi = Q\nabla_{y^\star}\ell(x, \Theta)$, $\hat{\xi} = \hat{Q}\nabla_{y^\star}\ell(x, \Theta)$, and $\hat{y} = \hat{Q}(M_2^T M_1 x - \lambda_1 \Omega^T \mathrm{sign}(z^\star))$, with $Q$ and $\hat{Q}$ given by (13) and (14), respectively.
In addition to being a useful analytic tool, the relationship between (1) and its relaxed version (7) also has practical implications. Obtaining the exact gradients given in Proposition 1 requires computing the eigendecomposition of $C$, which is in general computationally expensive. In practice, we approximate the gradients using the expressions in (11) and (12) with a fixed, sufficiently large value of $t$. The supervised model learning framework can be straightforwardly specialized to the shift-invariant case, in which the filters $\omega^i$ in (2) are learned instead of a full matrix $\Omega$. The gradients of $\ell$ with respect to the filter coefficients are obtained using Proposition 1 and the chain rule.
4 Fast approximation
The discussed sparse models rely on an iterative optimization scheme such as ADMM, required to
solve the pursuit problem (1). This has relatively high computational complexity and latency, which
is furthermore data-dependent. ADMM typically requires hundreds or thousands of iterations to
converge, greatly depending on the problem and the input. While the classical optimization theory provides worst-case (data-independent) convergence rate bounds for many families of iterative
algorithms, very little is known about their behavior on specific data, coming, e.g., from a distribution supported on a low-dimensional manifold, a characteristic often exhibited by real data. The
common practice of sparse modeling concentrates on creating sophisticated data models, and then
relies on computational and analytic techniques that are totally agnostic of the data structure. Such
a discrepancy hides a (possibly dramatic) potential of computational improvement [11].
From the perspective of the pursuit process, the minimization of (1) is merely a proxy to obtaining
a highly non-linear map between the data vector x and the representation vector y (which can also
be the "feature" vector $\Omega Dy$ or the reconstructed data vector $Dy$, depending on the application).
Adopting ADMM, such a map can be expressed by unrolling a sufficient number K of iterations into
a feed-forward network comprising K (identical) layers depicted in Figure 1, where the parameters
$A$, $B$, $U$, $V$, and $t$, collectively denoted as $\Psi$, are prescribed by the ADMM iteration. Fixing $K$, we obtain a fixed-complexity, fixed-latency encoder $\hat{y}_{K,\Psi}(x)$, parameterized by $\Psi$.
Note that for a sufficiently large $K$, $\hat{y}_{K,\Psi}(x) \approx y^\star(x)$, with the latter denoting the exact minimizer of (1) given the input $x$. However, when complexity budget constraints require $K$ to be truncated at a small fixed number, the output of $\hat{y}_{K,\Psi}$ is usually unsatisfactory, and the worst-case analysis provided by the classical optimization theory is of little use. However, within the family of functions $\{\hat{y}_{K,\Psi} : \Psi\}$, there might exist better parameters for which $\hat{y}$ performs better on relevant input data.
Such parameters can be obtained via learning, as described in the sequel.
Similar ideas were first advocated by [11], who considered Lasso sparse synthesis models, and
showed that by unrolling iterative shrinkage thresholding algorithms (ISTA) into a neural network,
and learning a new set of parameters, approximate solutions to the pursuit problem could be obtained
at a fraction of the cost of the exact solution, if the inputs were restricted to data coming from a
distribution similar to that used at training. This approach was later extended to more elaborated
structured sparse and low-rank models, with applications in audio separation and denoising [20].
Here we make the first attempt to extend it to sparse analysis and mixed analysis-synthesis models.
The learning of the fast encoder is performed by plugging it into the training problem (4) in place
of the exact encoder. The minimization of a loss function $\ell(\Psi)$ with respect to $\Psi$ requires the computation of the (sub)gradients $d\ell(y)/d\Psi$, which is achieved by the back-propagation procedure (essentially, an iterated application of the chain rule). Back-propagation starts with differentiating $\ell(\Psi)$ with respect to the output of the last network layer, and propagating the (sub)gradients down to
the input layer, multiplying them by the Jacobian matrices of the traversed layers. For completeness,
we summarize the procedure in the supplementary materials. There is no principled way of choosing
the number of layers K and in practice this is done via cross-validation. In Section 5 we discuss the
selection of K for a particular example.
In the particular setting of a shift-invariant analysis model, the described neural network encoder
assumes a structure resembling that of a convolutional network. The matrices A, B, U, and V
parameterizing the network in Figure 1 are replaced by a set of filter coefficients. The initial inverse
kernels of the form $(\rho\Omega^T\Omega + (1+\lambda_2)I)^{-1}$ prescribed by ADMM are approximated by finite-support
filters, which are computed using a standard least squares procedure.
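A forward pass through the unrolled encoder of Figure 1 can be sketched as follows. Parameter names follow the figure; the function itself is our illustrative reconstruction, not the authors' code, and the parameters would be initialized from ADMM and then trained by back-propagation:

```python
import numpy as np

def soft(b, t):
    """Element-wise soft thresholding tau_t."""
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def admm_net_forward(x, A, G, F, H, U, V, t, K):
    """K-layer unrolled-ADMM encoder (Figure 1): each layer soft-thresholds
    the state b and updates it from the current and previous states; a final
    reconstruction layer produces y."""
    b_prev = np.zeros(A.shape[0])
    b = A @ x
    z = np.zeros_like(b)
    for _ in range(K):
        z_new = soft(b, t)
        b_new = H @ (z_new - z) + G @ b + F @ b_prev
        b_prev, b, z = b, b_new, z_new
    # reconstruction layer: y_out = U x + V (2 z_out - b_out)
    return U @ x + V @ (2 * z - b)
```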
5 Experimental results and discussion
In what follows, we illustrate the proposed approaches on two experiments: single-image super-resolution (demonstrating a reconstruction problem) and polyphonic music transcription (demonstrating a classification problem). Additional figures are provided in the supplementary materials.
Single-image super-resolution. Single-image super-resolution is an inverse problem in which
a high-resolution image is reconstructed from its blurred and down-sampled version lacking the
high-frequency details. Low-resolution images were created by blurring the original ones with an
anti-aliasing filter, followed by a down-sampling operator. In [25], it has been demonstrated that pre-filtering a high-resolution image with a Gaussian kernel with $\sigma = 0.8s$ guarantees that the following $s \times s$ sub-sampling generates an almost aliasing-free low-resolution image. This models very well practical image decimation schemes, since allowing a certain amount of aliasing improves the visual
practical image decimation schemes, since allowing a certain amount of aliasing improves the visual
perception. Super-resolution consists in inverting both the blurring and sub-sampling together as a
compound operator. Since the amount of aliasing is limited, a bi-cubic spline interpolation is more
accurate than lower-order interpolations for restoring the images to their original size. As shown
in [26], up-sampling the low-resolution image in this way produces an image that is very close
to the pre-filtered high resolution counterpart. Then, the problem reduces to deconvolution with a
Gaussian kernel. In all our experiments we used the scaling factor s = 2. A shift-invariant analysis
model was tested in three configurations: a TV prior created using horizontal and vertical derivative
filters; a bank of 48 non-constant $7 \times 7$ DCT filters (referred to henceforth as A-DCT); and a combination of the former two settings tuned using the proposed supervised scheme with the loss function
(5). The training set consisted of random image patches from [24]. We also tested a convolutional
neural network approximation of the third model, trained under similar conditions. The pursuit problem was solved using ADMM with $\rho = 1$, requiring about 100 iterations to converge. Table 1 reports
the obtained PSNR results on seven standard images used in super-resolution experiments. Visual
results are shown in the supplementary materials. We observe that on the average, the supervised
model outperforms A-DCT and TV by 1 to 3 dB PSNR. While performing slightly inferior to the
exact supervised model, the neural network approximation is about ten times faster.
Automatic polyphonic music transcription. The goal of automatic music transcription is to obtain a musical score from an input audio signal. This task is particularly difficult when the audio
signal is polyphonic, i.e., contains multiple pitches present simultaneously. Like the majority of music and speech analysis techniques, music transcription typically operates on the magnitude of the
audio time-frequency representation such as the short-time Fourier transform or constant-Q transform (CQT) [7] (adopted here). Given a spectral frame x at some time, the transcription problem
consists of producing a binary label vector $p \in \{-1, +1\}^k$, whose $i$-th element indicates the pres-
method          mean ± std.dev.   man     woman   barbara  boats   lena    house   peppers
Bicubic         29.51 ± 4.39      28.52   38.22   24.02    27.38   30.77   29.75   27.95
TV              29.04 ± 3.51      30.23   33.39   24.25    29.44   31.75   29.91   24.31
A-DCT           31.06 ± 4.84      29.85   40.23   24.32    28.89   32.72   31.68   29.71
SI-ADMM         32.03 ± 4.84      31.05   40.62   24.55    30.06   34.06   32.91   30.93
SI-NN (K = 10)  31.53 ± 5.03      30.42   40.99   24.53    29.12   33.58   31.82   30.21

Table 1: PSNR in dB of different image super-resolution methods: bicubic interpolation (Bicubic), shift-invariant analysis models with TV and DCT priors (TV and A-DCT), supervised shift-invariant analysis model (SI-ADMM), and its fast approximation with K = 10 layers (SI-NN).
[Figure 2 (plots): the left panel shows accuracy (%) versus the number of iterations/layers $K$ for Analysis-ADMM, Analysis-NN, non-negative synthesis, Benetos & Dixon, and Poliner & Ellis; the right panel shows recall (%) versus precision (%) for Analysis-ADMM, Analysis-NN ($K = 1$ and $K = 10$), and non-negative synthesis.]
Figure 2: Left: Accuracy of the proposed analysis model (Analysis-ADMM) and its fast approximation (Analysis-NN) as a function of the number of iterations or layers K. For reference, the accuracy of a non-negative synthesis model as well as two leading methods [3, 18] is shown. Right: Precision-recall curve.
ence ($+1$) or absence ($-1$) of the $i$-th pitch at that time. We use $k = 88$ corresponding to the span of the standard piano keyboard (MIDI pitches 21 to 108).
We used an analysis model with a square dictionary $\Omega$ and a square metric matrix $M_1 = M_2$ to produce the feature vector $z = \Omega y$, which was then fed to a classifier of the form $p = \mathrm{sign}(Wz + b)$. The parameters $\Omega$, $M_2$, $W$, and $b$ were trained using the logistic loss on the MAPS Disklavier
dataset [10], containing examples of polyphonic piano recordings with time-aligned ground truth.
The testing was performed on another annotated real piano dataset from [18]. Transcription was
performed frame-by-frame, and the output of the classifier was temporally filtered using a hidden
Markov model proposed in [3]. For comparison, we show the performance of a supervised nonnegative synthesis model and two leading methods [3, 18] evaluated in the same settings.
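As a concrete sketch of the per-frame pipeline just described, the feature extraction and classification steps can be written as follows. This is an illustrative helper with assumed shapes; the trained analysis operator (denoted Omega below), W, and b from the paper are not reproduced here.

```python
import numpy as np

def transcribe_frame(y, Omega, W, b):
    """Per-frame pitch detection: analysis features z = Omega @ y, followed by
    per-pitch linear decisions p = sign(W @ z + b) in {-1, +1}.

    Assumed shapes: y is an m-dim spectral frame, Omega is m x m (square
    dictionary), W is 88 x m and b is 88-dim for the 88 piano pitches."""
    z = Omega @ y          # analysis feature vector
    scores = W @ z + b     # one score per pitch
    return np.where(scores >= 0, 1, -1)
```

In the full system, the resulting per-frame label sequence is then smoothed by the HMM post-processing described above.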
Performance was measured using the standard precision-recall curve depicted in Figure 2 (right);
in addition, we used the accuracy measure Acc = TP/(FP + FN + TP), where TP (true positives)
is the number of correctly predicted pitches, and FP (false positives) and FN (false negatives) are
the number of pitches incorrectly transcribed as ON or OFF, respectively. This measure is frequently
used in the music analysis literature [3, 18]. The supervised analysis model outperforms leading
pitch transcription methods. Figure 2 (left) shows that replacing the exact ADMM solver by a fast
approximation described in Section 4 achieves comparable performance, with significantly lower
complexity. In this example, ten layers are enough to obtain a good representation, and the improvement from adding further layers becomes marginal around this point.
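The accuracy measure above is simple to compute from binary ON/OFF pitch labels; the helper below is an illustrative sketch, not the evaluation code used in the experiments.

```python
def transcription_accuracy(pred, truth):
    """Frame-level accuracy Acc = TP / (TP + FP + FN) for binary pitch labels.

    `pred` and `truth` are sequences over {-1, +1}, where +1 means the pitch
    is ON. True negatives are ignored, as in the measure used in the text."""
    tp = sum(1 for p, t in zip(pred, truth) if p == +1 and t == +1)
    fp = sum(1 for p, t in zip(pred, truth) if p == +1 and t == -1)
    fn = sum(1 for p, t in zip(pred, truth) if p == -1 and t == +1)
    denom = tp + fp + fn
    return tp / denom if denom > 0 else 1.0
```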
Conclusion. We presented a bilevel optimization framework for the supervised learning of a superset of sparse analysis and synthesis models. We also showed that in applications requiring low
complexity or latency, a fast approximation to the exact solution of the pursuit problem can be
achieved by a feed-forward architecture derived from truncated ADMM. The obtained fast regressor
can be initialized with the model parameters trained through the supervised bilevel framework, and
tuned similarly to the training and adaptation of neural networks. We observed that the structure
of the network becomes essentially a convolutional network in the case of shift-invariant models.
The generative setting of the proposed approaches was demonstrated on an image restoration experiment, while the discriminative setting was tested in a polyphonic piano transcription experiment.
In the former we obtained a very good and fast solution, while in the latter the results were comparable or superior to the state of the art.
References
[1] M. Aharon, M. Elad, and A. Bruckstein. k-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Sig. Proc., 54(11):4311–4322, 2006.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Img. Sci., 2:183–202, March 2009.
[3] E. Benetos and S. Dixon. Multiple-instrument polyphonic music transcription using a convolutive probabilistic model. In Sound and Music Computing Conference, pages 19–24, 2011.
[4] D. P. Bertsekas. Nonlinear Programming. 1999.
[5] H. Bischof, Y. Chen, and T. Pock. Learning l1-based analysis and synthesis sparsity priors using bi-level optimization. NIPS Workshop, 2012.
[6] M. M. Bronstein, A. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi. Blind deconvolution of images using optimal sparse representations. IEEE Trans. Im. Proc., 14(6):726–736, 2005.
[7] J. C. Brown. Calculation of a constant Q spectral transform. The Journal of the Acoustical Society of America, 89:425, 1991.
[8] B. Colson, P. Marcotte, and G. Savard. An overview of bilevel optimization. Annals of Operations Research, 153(1):235–256, 2007.
[9] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. on Im. Proc., 54(12):3736–3745, 2006.
[10] V. Emiya, R. Badeau, and B. David. Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. IEEE Trans. Audio, Speech, and Language Proc., 18(6):1643–1654, 2010.
[11] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, pages 399–406, 2010.
[12] J. Mairal, F. Bach, and J. Ponce. Task-driven dictionary learning. IEEE Trans. PAMI, 34(4):791–804, 2012.
[13] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. on Im. Proc., 17(1):53–69, 2008.
[14] S. Mallat. A Wavelet Tour of Signal Processing, Second Edition. Academic Press, 1999.
[15] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Paper, Catholic University of Louvain, Louvain-la-Neuve, Belgium, 2007.
[16] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[17] G. Peyré and J. Fadili. Learning analysis sparsity priors. SAMPTA'11, 2011.
[18] G. E. Poliner and D. Ellis. A discriminative model for polyphonic piano transcription. EURASIP J. Adv. in Sig. Proc., 2007, 2006.
[19] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60(1-4):259–268, 1992.
[20] P. Sprechmann, A. M. Bronstein, and G. Sapiro. Learning efficient sparse and low rank models. arXiv preprint arXiv:1212.3631, 2012.
[21] R. Tibshirani. Regression shrinkage and selection via the LASSO. J. Royal Stat. Society: Series B, 58(1):267–288, 1996.
[22] R. J. Tibshirani. The solution path of the generalized lasso. Stanford University, 2011.
[23] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. Robust sparse analysis regularization. IEEE Transactions on Information Theory, 59(4):2001–2016, 2013.
[24] J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution as sparse representation of raw image patches. In Proc. CVPR, pages 1–8. IEEE, 2008.
[25] G. Yu and J.-M. Morel. On the consistency of the SIFT method. Inverse Problems and Imaging, 2009.
[26] G. Yu, G. Sapiro, and S. Mallat. Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity. IEEE Trans. Im. Proc., 21(5):2481–2499, 2012.
Sparse Recovery from Correlated Measurements
Divyanshu Vats
Rice University
Houston, TX 77251
[email protected]
Richard Baraniuk
Rice University
Houston, TX 77251
[email protected]
Abstract
We consider the problem of accurately estimating a high-dimensional sparse vector using a small number of linear measurements that are contaminated by noise. It
is well known that standard computationally tractable sparse recovery algorithms,
such as the Lasso, OMP, and their various extensions, perform poorly when the
measurement matrix contains highly correlated columns. We develop a simple
greedy algorithm, called SWAP, that iteratively swaps variables until a desired
loss function cannot be decreased any further. SWAP is surprisingly effective in
handling measurement matrices with high correlations. We prove that SWAP can
easily be used as a wrapper around standard sparse recovery algorithms for improved performance. We theoretically quantify the statistical guarantees of SWAP
and complement our analysis with numerical results on synthetic and real data.
1 Introduction
An important problem that arises in many applications is that of recovering a high-dimensional
sparse (or approximately sparse) vector given a small number of linear measurements. Depending
on the problem of interest, the unknown sparse vector can encode relationships between genes [1],
power line failures in massive power grid networks [2], sparse representations of signals [3, 4], or
edges in a graphical model [5,6], to name just a few applications. The simplest, but still very useful,
setting is when the observations can be approximated as a sparse linear combination of the columns
in a measurement matrix X weighted by the non-zero entries of the unknown sparse vector. In
this paper, we study the problem of recovering the location of the non-zero entries, say S*, in
the unknown vector, which is equivalent to recovering the columns of X that y depends on. In the
literature, this problem is often referred to as the sparse recovery or the support recovery problem.
Although several tractable sparse recovery algorithms have been proposed in the literature, statistical guarantees for accurately estimating S* can only be provided under conditions that limit how
correlated the columns of X can be. For example, if there exists a column, say X_i, that is nearly linearly dependent on the columns indexed by S*, some sparse recovery algorithms may falsely select
X_i. In certain applications, where X can be specified a priori, correlations can easily be avoided
by appropriately choosing X. However, in many applications, X cannot be specified by a practitioner, and correlated measurement matrices are inevitable. For example, when the columns in X
correspond to gene expression values, it has been observed that genes in the same pathway produce
correlated values [1]. Additionally, it has been observed that regions in the brain that are in close
proximity produce correlated signals as measured using an MRI [7].
In this paper, we develop new sparse recovery algorithms that can accurately recover S* for measurement matrices that exhibit strong correlations. We propose a greedy algorithm, called SWAP,
that iteratively swaps variables starting from an initial estimate of S* until a desired loss function
cannot be decreased any further. We prove that SWAP can accurately identify the true signal support
under relatively mild conditions on the restricted eigenvalues of the matrix X^T X and under certain
conditions on the correlations between the columns of X. A novel aspect of our theory is that the
conditions we derive are only needed when conventional sparse recovery algorithms fail to recover
S*. This motivates the use of SWAP as a wrapper around sparse recovery algorithms for improved
performance. Finally, using numerical simulations, we show that SWAP consistently outperforms
many state-of-the-art algorithms on both synthetic and real data corresponding to gene expression
values.
As alluded to earlier, several algorithms now exist in the literature for accurately estimating S*. The
theoretical properties of such algorithms either depend on the irrepresentability condition [5, 8–10]
or various forms of the restricted eigenvalue conditions [11,12]. See [13] for a comprehensive review
of such algorithms and the related conditions. SWAP is a greedy algorithm with novel guarantees
for sparse recovery and we make appropriate comparisons in the text. Another line of research when
dealing with correlated measurements is to estimate a superset of S*; see [14–18] for examples.
The rest of the paper is organized as follows. Section 2 formally defines the sparse recovery problem.
Section 3 introduces SWAP. Section 4 presents theoretical results on the conditions needed for
provably correct sparse recovery. Section 5 discusses numerical simulations. Section 6 summarizes
the paper and discusses future work.
2 Problem Setup
Throughout this paper, we assume that y ∈ R^n and X ∈ R^{n×p} are known and related to each other by the linear model

y = Xβ* + w ,                (1)

where β* ∈ R^p is the unknown sparse vector that we seek to estimate. We assume that the columns of X are normalized, i.e., ‖X_i‖_2^2 / n = 1 for all i ∈ [p], where we use the notation [p] = {1, 2, . . . , p} throughout the paper. In practice, normalization can easily be done by scaling X and β* accordingly. We assume that the entries of w are i.i.d. zero-mean sub-Gaussian random variables with parameter σ, so that E[exp(t w_i)] ≤ exp(t^2 σ^2 / 2). The sub-Gaussian condition on w is common in the literature and allows for a wide class of noise models, including Gaussian, symmetric Bernoulli, and bounded random variables. We let k be the number of non-zero entries in β*, and let S* denote the location of the non-zero entries. It is common to refer to S* as the support of β*, and we adopt this notation throughout the paper.
Once S* has been estimated, it is relatively straightforward to estimate β*. Thus, we mainly focus on the sparse recovery problem of estimating S*. A classical strategy for sparse recovery is to search for a support of size k that minimizes a suitable loss function. For a support S, we assume the least-squares loss, which is defined as follows:

L(S; y, X) := min_{β ∈ R^{|S|}} ‖y − X_S β‖_2^2 = ‖Π^⊥[S] y‖_2^2 ,                (2)

where X_S refers to an n × |S| matrix that only includes the columns indexed by S, and Π^⊥[S] = I − X_S (X_S^T X_S)^{−1} X_S^T is the orthogonal projection onto the null space of the linear operator X_S. In this paper, we design a sparse recovery algorithm that provably, and efficiently, finds the true support for a broad class of measurement matrices that includes matrices with high correlations.
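As a numerical sanity check on this definition, the loss can be evaluated by regressing y onto the selected columns. The following is a minimal sketch that uses least squares rather than explicitly forming the projection matrix:

```python
import numpy as np

def support_loss(S, y, X):
    """Least-squares loss L(S; y, X): the squared residual after projecting y
    onto the span of the columns of X indexed by S (cf. eq. (2))."""
    S = sorted(S)
    if not S:                      # empty support: nothing is explained
        return float(y @ y)
    XS = X[:, S]
    beta, *_ = np.linalg.lstsq(XS, y, rcond=None)
    r = y - XS @ beta              # residual orthogonal to span(X_S)
    return float(r @ r)
```

For instance, with an identity design, the loss for a support S is simply the squared norm of the coordinates of y outside S.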
3 Overview of SWAP
We now describe our proposed greedy algorithm SWAP. Recall that our main goal is to find a support S that minimizes the loss defined in (2). Suppose that we are given an estimate, say S^(1), of the true support and let L^(1) be the corresponding least-squares loss (see (2)). We want to transition to another estimate S^(2) that is closer (in terms of the number of true variables), or equal, to S*. Our main idea to transition from S^(1) to an appropriate S^(2) is to swap variables as follows:

Swap every i ∈ S^(1) with i′ ∈ (S^(1))^c and compute the resulting loss L^(1)_{i,i′} = L({S^(1)\i} ∪ i′; y, X). If min_{i,i′} L^(1)_{i,i′} < L^(1), there exists a support that has a lower loss than the original one. Subsequently, we find {i, i′} = arg min_{i,i′} L^(1)_{i,i′} and let S^(2) = {S^(1)\i} ∪ {i′}. We repeat the
[Figure 1 plot area: panels (a)-(d); legend: TLasso, S-TLasso, FoBa, S-FoBa, CoSaMP, S-CoSaMP, MaR, S-MaR; axes include True Positive Rate, Mean # of Iterations, and Sparsity Level.]
Figure 1: Example of using SWAP on pseudo-real data, where the design matrix X corresponds to gene expression values and y is simulated. The notation S-Alg refers to the SWAP-based algorithms. (a) Histogram of sparse eigenvalues of X over 10,000 random sets of size 10; (b) legend; (c) mean true positive rate vs. sparsity; (d) mean number of iterations vs. sparsity.
Algorithm 1: SWAP(y, X, S)
Inputs: Measurements y, design matrix X, and initial support S.
1: Let r = 1, S^(1) = S, and L^(1) = L(S^(1); y, X)
2: Swap i ∈ S^(r) with i′ ∈ (S^(r))^c and compute the loss L^(r)_{i,i′} = L({S^(r)\i} ∪ i′; y, X)
3: if min_{i,i′} L^(r)_{i,i′} < L^(r) then
4:   {i, i′} = argmin_{i,i′} L^(r)_{i,i′} (in case of a tie, choose a pair arbitrarily)
5:   Let S^(r+1) = {S^(r)\i} ∪ i′ and let L^(r+1) be the corresponding loss
6:   Let r = r + 1 and repeat steps 2-4
7: else
   Return Ŝ = S^(r)
above steps to find a sequence of supports S^(1), S^(2), . . . , S^(r), where S^(r) has the property that min_{i,i′} L^(r)_{i,i′} ≥ L^(r). In other words, we stop SWAP when perturbing S^(r) by one variable increases or does not change the resulting loss. These steps are summarized in Algorithm 1.
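These steps translate directly into code. The following is an illustrative sketch of Algorithm 1 that naively rescans all k(p − k) candidate swaps at every iteration; it is not an optimized implementation (see Remark 3.3 for the computational tricks that avoid recomputing full least-squares fits).

```python
import numpy as np

def swap(y, X, S, max_iter=100):
    """Sketch of Algorithm 1 (SWAP): starting from an initial support S of
    size k, repeatedly exchange one selected column for one unselected column
    whenever doing so strictly decreases the least-squares loss, and stop
    when no single swap helps."""
    def loss(supp):
        XS = X[:, sorted(supp)]
        beta, *_ = np.linalg.lstsq(XS, y, rcond=None)
        r = y - XS @ beta
        return float(r @ r)

    S = set(S)
    p = X.shape[1]
    current = loss(S)
    for _ in range(max_iter):
        best = (current, None, None)
        for i in S:                          # column to remove
            for j in set(range(p)) - S:      # column to add
                l = loss((S - {i}) | {j})
                if l < best[0]:              # strict decrease only
                    best = (l, i, j)
        if best[1] is None:                  # no swap decreases the loss
            return S
        current, i, j = best
        S = (S - {i}) | {j}
    return S
```

For example, with an identity design and a signal supported on two coordinates, the sketch recovers the true support from a completely wrong initialization in two swaps.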
Figure 1 illustrates the performance of SWAP for a matrix X that corresponds to 83 samples of
2308 gene expression values for patients with small round blue cell tumors [19]. Since there is no
ground truth available, we simulate the observations y using Gaussian w with σ = 0.5 and randomly chosen sparse vectors with non-zero entries between 1 and 2. Figure 1(a) shows the histogram of the eigenvalues of 10,000 randomly chosen matrices X_A^T X_A / n, where |A| = 10. We clearly see that
these eigenvalues are very small. This means that the columns of X are highly correlated with each
other. Figure 1(c) shows the mean fraction of variables estimated to be in the true support over 100
different trials. Figure 1(d) shows the mean number of iterations required for SWAP to converge.
Remark 3.1. The main input to SWAP is the initial support S. This parameter implicitly specifies the
desired sparsity level. Although SWAP can be used with a random initialization S, we recommend
using SWAP in combination with another sparse recovery algorithm. For example, in Figure 1(c),
we run SWAP using four different types of initializations. The dashed lines represent standard
sparse recovery algorithms, while the solid lines with markers represent SWAP algorithms. We
clearly see that all SWAP based algorithms outperform standard algorithms. Intuitively, since many
sparse recovery algorithms can perform partial support recovery, using such an initialization results
in a smaller search space when searching for the true support.
Remark 3.2. Since each iteration of SWAP necessarily produces a unique loss, the supports
S (1) , . . . , S (r) are all unique. Thus, SWAP clearly converges in a finite number of iterations. The
exact convergence rate depends on the correlations in the matrix X. Although we do not theoretically quantify the convergence rate, in all numerical simulations, and over a broad range of design
matrices, we observed that SWAP converged in roughly O(k) iterations. See Figure 1(d) for an
example.
Remark 3.3. Using the properties of orthogonal projections, we can write Line 2 of SWAP as a difference of two rank-one projection matrices. The main computational complexity is in computing this quantity k(p − k) times for all i ∈ S^(r) and i′ ∈ (S^(r))^c. If the computational complexity of computing a rank-k orthogonal projection is I_k, then Line 2 can be implemented in time O(k(I_k + p − k)). When k ≪ p, I_k = O(k^3). When k is large, several computational tricks can be used to significantly reduce the computational time.
Remark 3.4. SWAP differs significantly from other greedy algorithms in the literature. When k is known, the main distinctive feature of SWAP is that it always maintains a k-sparse estimate of the support. Note that the same is true for the computationally intractable exhaustive search algorithm [10]. Other competitive algorithms, such as forward-backward (FoBa) [20] or CoSaMP [21], usually estimate a signal with a higher sparsity level and iteratively remove variables until k variables are selected. The same is true for multi-stage algorithms [22–25]. Intuitively, as we shall see in Section 4, by maintaining a support of size k, the performance of SWAP only depends on correlations among the columns of the matrix X_A, where A is of size at most 2k and includes the true support. In contrast, for other sparse recovery algorithms, |A| ≥ 2k. In Figure 1, we compare SWAP to several state-of-the-art algorithms (see Section 5 for a description of the algorithms). In all cases, SWAP results in superior performance.
4 Theoretical Analysis of SWAP
4.1 Some Important Parameters
In this Section, we collect some important parameters that determine the performance of SWAP.
First, we define the restricted eigenvalue as
ρ_{k+ℓ} := inf { ‖Xβ‖_2^2 / (n ‖β‖_2^2) : ‖β‖_0 ≤ k + ℓ, |S* ∪ supp(β)| = k + ℓ } .                (3)

The parameter ρ_{k+ℓ} is the minimum eigenvalue of certain blocks of the matrix X^T X / n of size 2k that include the blocks X_{S*}^T X_{S*} / n. Smaller values of ρ_{k+ℓ} correspond to correlated columns in
the matrix X. Next, we define the minimum absolute value of the non-zero entries in β* as

β_min := min_{i ∈ S*} |β_i*| .                (4)

A smaller β_min will evidently require more observations for exact recovery of the support.
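The restricted eigenvalue involves an extremum over supports and is combinatorial to compute exactly. The brute-force sketch below, which is only an illustrative proxy feasible for tiny p, makes the quantity concrete (in practice one samples random supports instead, as in Figure 1(a)):

```python
import numpy as np
from itertools import combinations

def min_sparse_eigenvalue(X, size):
    """Minimum, over all supports A with |A| = size, of the smallest
    eigenvalue of X_A^T X_A / n -- a brute-force proxy for the restricted
    eigenvalue; small values indicate highly correlated columns."""
    n, p = X.shape
    best = np.inf
    for A in combinations(range(p), size):
        G = X[:, A].T @ X[:, A] / n
        best = min(best, float(np.linalg.eigvalsh(G)[0]))  # ascending order
    return best
```

As a sanity check, for an orthogonal design with normalized columns every submatrix X_A^T X_A / n is the identity, so the minimum sparse eigenvalue equals 1.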
Finally, we define a parameter that characterizes the correlations between the columns of the matrix X_{S*} and the columns of the matrix X_{(S*)^c}, where we recall that S* is the true support of the unknown sparse vector β*. For a set Ω_{k,d} that contains all supports of size k with at least k − d active variables from S*, define γ_d as

γ_d := max_{S ∈ Ω_{k,d}\S*}  min_{i ∈ S ∩ (S*)^c}  ‖Σ^{S\i}_{i,S̄} (Σ^{S\i}_{S̄,S̄})^{−1}‖_1^2 / Σ^{S\i}_{i,i} ,  with S̄ = S*\S ,                (5)

where Σ^B = X^T Π^⊥[B] X / n. Popular sparse regression algorithms, such as the Lasso and the OMP, can perform accurate support recovery when μ^2 = max_{i ∈ (S*)^c} ‖Σ_{i,S*} Σ_{S*,S*}^{−1}‖_1^2 < 1. We will show in Section 4.2 that SWAP can perform accurate support recovery when γ_d < 1. Although the form of γ_d is similar to μ, there are several key differences, which we highlight as follows:
• Since Ω_{k,d} contains all supports such that |S*\S| ≤ d, it is clear that γ_d is the ℓ_1-norm of a d × 1 vector, where d ≤ k. In contrast, μ is the ℓ_1-norm of a k × 1 vector. If indeed μ < 1, i.e., accurate support recovery is possible using the Lasso, then SWAP can be initialized by the output of the Lasso. In this case, |S*\S^(1)| = 0, and SWAP also outputs the true support as long as S* minimizes the loss function. We make this statement precise in Theorem 4.1. Thus, it is only when μ ≥ 1 that the parameter γ_d plays a role in the performance of SWAP.
• The parameter μ directly computes correlations between the columns of X. In contrast, γ_d computes correlations between the columns of X when projected onto the null space of a matrix X_B, where |B| = k − 1.
• Notice that γ_d is computed by taking a maximum over supports in the set Ω_{k,d}\S* and a minimum over inactive variables in each support. The reason that the minimum appears in γ_d is because we choose to swap variables that result in the smallest loss. In contrast, μ is computed by taking a maximum over all inactive variables.
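For a given design matrix, the quantity μ^2 discussed above is straightforward to evaluate. The following illustrative sketch computes it with Σ = X^T X / n:

```python
import numpy as np

def mu_squared(X, S):
    """mu^2 = max over i outside S of || Sigma_{i,S} Sigma_{S,S}^{-1} ||_1^2,
    with Sigma = X^T X / n. Support-recovery conditions of this
    irrepresentability type require the value to be below 1."""
    n, p = X.shape
    Sigma = X.T @ X / n
    S = sorted(S)
    Sc = [i for i in range(p) if i not in S]
    inv = np.linalg.inv(Sigma[np.ix_(S, S)])
    return max(float(np.abs(Sigma[i, S] @ inv).sum() ** 2) for i in Sc)
```

For an orthogonal design the off-support correlations vanish and μ^2 = 0, so the condition μ^2 < 1 holds trivially.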
4.2 Statement of Main Results
In this Section, we state the main results that characterize the performance of SWAP. Throughout
this Section, we assume the following:
(A1) The observations y and the measurement matrix X follow the linear model in (1), where the noise is sub-Gaussian with parameter σ, and the columns of X have been normalized.
(A2) SWAP is initialized with a support S^(1) of size k, and Ŝ is the output of SWAP. Since k is typically unknown, a suitable value can be selected using standard model selection algorithms such as cross-validation or stability selection [26].
Our first result for SWAP is as follows.
Theorem 4.1. Suppose (A1)-(A2) hold and |S*\S^(1)| ≤ 1. If

n > (4 + log(k^2 (p − k))) / (c_2 β_min^2 ρ_{2k} / 2) ,

where 0 < c_2 ≤ 1/(18σ^2), then P(Ŝ = S*) → 1 as (n, p, k) → ∞.
The proof of Theorem 4.1 can be found in the extended version of our paper [27]. Informally, Theorem 4.1 states that if the input to SWAP falsely detects at most one variable, then SWAP is high-dimensional consistent when given a sufficient number of observations n. The condition on n is mainly enforced to guarantee that the true support S* minimizes the loss function. This condition is weaker than the sufficient conditions required for other computationally tractable sparse recovery algorithms. For example, the method FoBa is known to be superior to other methods such as the Lasso and the OMP. As shown in [20], FoBa requires that n = Ω(log(p)/(ρ_{3k+ℓ} β_min^2)) for high-dimensional consistent support recovery, where the choice of ℓ, which is greater than k, depends on the correlations in the matrix X. In contrast, the condition in Theorem 4.1, which reduces to n = Ω(log(p − k)/(ρ_{2k} β_min^2)), is weaker, since 1/ρ_{3k+ℓ} ≥ 1/ρ_{2k} for ℓ > k and p − k < p. This shows that if a sparse recovery algorithm can accurately estimate the true support, then SWAP does not introduce any false positives and also outputs the true support. Furthermore, if a sparse regression algorithm falsely detects one variable, then SWAP can potentially recover the correct support. Thus, using SWAP with other algorithms does not harm the sparse recovery performance of other algorithms.
We now consider the more interesting case when SWAP is initialized by a support S(1) that falsely detects more than one variable. In this case, SWAP clearly needs more than one iteration to recover the true support. Furthermore, to ensure that the true support can be recovered, we need to impose some additional assumptions on the measurement matrix X. The particular assumption we enforce will depend on the parameter γ_k defined in (5). As mentioned in Section 4.1, γ_k captures the correlations between the columns of X_{S*} and the columns of X_{(S*)^c}. To simplify the statement of the next theorem, define g(γ, ρ, c) = (γ − 1) + 2c(√ρ + 1/√ρ) + 2c².
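As a sanity check, the function g can be transcribed directly. The sketch below assumes the form g(γ, ρ, c) = (γ − 1) + 2c(√ρ + 1/√ρ) + 2c² stated above; the Python function name is ours.

```python
import math

def g(gamma, rho, c):
    """Evaluate g(gamma, rho, c) = (gamma - 1) + 2c(sqrt(rho) + 1/sqrt(rho)) + 2c^2."""
    return (gamma - 1.0) + 2.0 * c * (math.sqrt(rho) + 1.0 / math.sqrt(rho)) + 2.0 * c * c
```

When c = 0 (no noise), g(γ, ρ, 0) = γ − 1, so the condition g < 0 reduces to γ < 1, matching the noiseless case discussed after Theorem 4.2.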
Theorem 4.2. Suppose (A1)-(A2) hold and |S*\S(1)| > 1. If, for a constant c such that 0 < c² < 1/(18σ²), we have g(γ_k, ρ_{k,1}, cσ) < 0, log C(p, k) > 4 + log(k²(p − k)), and n > 2 log C(p, k) / (c² β_min² ρ_{2k}), where C(p, k) denotes the binomial coefficient, then P(S = S*) → 1 as (n, p, k) → ∞.
Theorem 4.2 says that if SWAP is initialized with any support of size k, and γ_k satisfies the condition stated in the theorem, then SWAP will output the true support when given a sufficient number of observations. In the noiseless case, i.e., when σ = 0, the condition required for accurate support recovery reduces to γ_k < 1. The proof of Theorem 4.2, outlined in [27], relies on imposing conditions on each support of size k such that there exists a swap that necessarily decreases the loss. Clearly, if such a property holds for each support except S*, then SWAP will output the true support, since (i) there are only a finite number of possible supports, and (ii) each iteration of SWAP results in a different support. The dependence on C(p, k) in the expression for the number of observations n arises from applying the union bound over all supports of size k.
The condition in Theorem 4.2 is independent of the initialization S(1). This is why the sample complexity, i.e., the number of observations n required for consistent support recovery, scales as log C(p, k). To reduce the sample complexity, we can impose additional conditions on the support S(1) that is used to initialize SWAP. Under such assumptions, assuming that |S*\S(1)| > d, the performance of SWAP will depend on γ_d, which is less than γ_k, and n will scale as log C(p, d). We refer to [27] for more details.
5 Numerical Simulations
In this section, we show how SWAP compares to other sparse recovery algorithms. Section 5.1
presents results for synthetic data and Section 5.2 presents results for real data.
5.1 Synthetic Data
To illustrate the advantages of SWAP, we use the following examples:
(A1) We sample the rows of X from a Gaussian distribution with mean zero and covariance Σ. The covariance Σ is block-diagonal with blocks of size 10. The entries in each block Σ̃ are specified as follows: Σ̃_ii = 1 for i ∈ [10] and Σ̃_ij = a for i ≠ j. This construction of the design matrix is motivated by [18]. The true support is chosen so that each variable in the support is assigned to a different block. The non-zero entries in β* are chosen uniformly between 1 and 2. We let σ = 1, p = 500, n = 100, 200, k = 20, and a = 0.5, 0.55, . . . , 0.9, 0.95.
(A2) We sample X from the same distribution as described in (A1). The only difference is that the
true support is chosen so that five different blocks contain active variables and each chosen
block contains four active variables. The rest of the parameters are also the same.
In both (A1) and (A2), as a increases, the strength of correlations between the columns increases.
Further, the restricted eigenvalue parameter for (A1) is greater than the restricted eigenvalue parameter of (A2).
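For reproducibility, the block-correlated design of example (A1) can be generated with a few lines of NumPy. The helper below is our own sketch (the function and variable names are ours, not from the evaluated implementation), and column normalization is left out for brevity.

```python
import numpy as np

def make_block_design(n=100, p=500, block_size=10, a=0.5, k=20, seed=0):
    """Sample X with a block-diagonal covariance: within each block of
    `block_size` variables the diagonal entries are 1 and the off-diagonals
    are `a`, as in example (A1). Returns X, the true coefficient vector,
    its support, and noisy observations y with unit-variance noise."""
    rng = np.random.default_rng(seed)
    block = np.full((block_size, block_size), a)
    np.fill_diagonal(block, 1.0)
    # The covariance is block-diagonal with p // block_size identical blocks,
    # so we can sample each block independently via its Cholesky factor.
    L = np.linalg.cholesky(block)
    X = np.hstack([rng.standard_normal((n, block_size)) @ L.T
                   for _ in range(p // block_size)])
    # One active variable per block for the first k blocks, as in (A1).
    support = np.arange(k) * block_size
    beta = np.zeros(p)
    beta[support] = rng.uniform(1.0, 2.0, size=k)
    y = X @ beta + rng.standard_normal(n)  # noise level sigma = 1
    return X, beta, support, y
```

Increasing `a` toward 0.95 strengthens the within-block correlations, which is exactly the regime where Figure 2 shows SWAP helping the most.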
We use the following sparse recovery algorithms to initialize SWAP: (i) Lasso, (ii) Thresholded Lasso (TLasso) [25], (iii) Forward-Backward (FoBa) [20], (iv) CoSaMP [21], (v) Marginal Regression (MaR), and (vi) Random. TLasso first applies Lasso to select a superset of the support and then selects the k variables with largest magnitude as the estimated support. In our implementation, we used Lasso to select 2k variables and then selected the largest k variables after a least-squares refit. This algorithm is known to have better performance than the Lasso. FoBa uses a combination of a forward and a backward algorithm. CoSaMP is an iterative greedy algorithm. MaR selects the support by choosing the largest k variables in |X^T y|. Finally, Random selects a random subset of size k. We use the notation S-TLasso to refer to the algorithm that uses TLasso as an initialization for SWAP. A similar notation follows for other algorithms.
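To make the two-stage TLasso recipe concrete, the sketch below implements the generic pattern (keep the 2k largest first-stage coefficients, refit by least squares, then keep the k largest refit coefficients) together with the MaR baseline. The function names are ours, and any first stage (Lasso, marginal regression, etc.) can supply `coef`.

```python
import numpy as np

def threshold_and_refit(X, y, coef, k):
    """Two-stage selection in the spirit of TLasso [25]: keep the 2k variables
    with largest first-stage |coef|, refit by least squares on that superset,
    then keep the k refit coefficients with largest magnitude."""
    superset = np.argsort(np.abs(coef))[-2 * k:]
    refit = np.linalg.lstsq(X[:, superset], y, rcond=None)[0]
    keep = superset[np.argsort(np.abs(refit))[-k:]]
    return np.sort(keep)

def marginal_regression(X, y, k):
    """MaR baseline: the support is the index set of the k largest |X^T y|."""
    return np.sort(np.argsort(np.abs(X.T @ y))[-k:])
```

On an orthogonal design both stages agree, but under correlated columns the refit step can demote spurious variables that the first stage ranked highly.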
Our results are shown in Figure 2. We use two metrics to assess the performance of SWAP. The first metric is the true positive rate (TPR), i.e., the number of active variables in the estimate divided by the total number of active variables. The second metric is the number of iterations needed for SWAP to converge. Since all the results are over supports of size k, the false positive rate (FPR) is simply 1 − TPR. All results for SWAP-based algorithms are plotted with markers, while all results for non-SWAP-based algorithms are represented by dashed lines.
From the TPR performance, we clearly see the advantages of using SWAP in practice. For different choices of the algorithm Alg, when n = 100, the performance of S-Alg is always better than the performance of Alg. When the number of observations increases to n = 200, we observe that all SWAP-based algorithms perform better than standard sparse recovery algorithms. For (A1), we have exact support recovery for SWAP when a ≤ 0.9. For (A2), we have exact support recovery when a < 0.8. The reason for this difference is the difference in the placement of the non-zero entries.
Figures 2(a) and 2(b) show the mean number of iterations required by SWAP-based algorithms as the correlations in the matrix X increase. We clearly see that the number of iterations increases with the degree of correlation. For algorithms that estimate a large fraction of the true support (TLasso, FoBa, and CoSaMP), the number of iterations is generally very small. For MaR and Random, the number of iterations is larger, but still comparable to the sparsity level of k = 20.
[Figure 2: nine panels plotting Mean TPR or Mean # of Iterations against Degree of Correlation, with a legend panel listing Lasso, S-Lasso, TLasso, S-TLasso, FoBa, S-FoBa, CoSaMP, S-CoSaMP, MaR, S-MaR, and S-Random. The panels cover Examples (A1) and (A2) for n = 100 and n = 200.]
Figure 2: Empirical true positive rate (TPR) and number of iterations required by SWAP.
5.2 Gene Expression Data
We now present results on two gene expression cancer datasets. The first dataset¹ contains expression values from patients with two different types of cancer related to leukemia. The second dataset² contains expression levels from patients with and without prostate cancer. The matrix X contains the gene expression values, and the vector y is an indicator of the type of cancer a patient has. Although this is a classification problem, we treat it as a recovery problem. For the leukemia data, p = 5147 and n = 72. For the prostate cancer data, p = 12533 and n = 102. These are clearly high-dimensional datasets, and the goal is to identify a small set of genes that are predictive of the cancer type.
Figure 3 shows the performance of standard algorithms vs. SWAP. We use leave-one-out cross-validation and apply the sparse recovery algorithms described in Section 5.1 using multiple different choices of the sparsity level. For each level of sparsity, we choose the sparse recovery algorithm (labeled as standard) and the SWAP-based algorithm that results in the minimum least-squares loss over the training data. This allows us to compare the performance of using SWAP vs. not using SWAP. For both datasets, we clearly see that the training and testing errors are lower for SWAP-based algorithms. This means that SWAP is able to choose a subset of genes that has better predictive performance than that of standard algorithms for each level of sparsity.
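A minimal sketch of the leave-one-out protocol used here is shown below; `select_support` stands in for any of the selection algorithms of Section 5.1, and the helper names are ours.

```python
import numpy as np

def loo_cv_error(X, y, select_support, k):
    """Leave-one-out CV: for each held-out sample, select a support of size k
    on the remaining data, refit by least squares on that support, and record
    the squared prediction error on the held-out sample."""
    n = X.shape[0]
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        Xtr, ytr = X[mask], y[mask]
        S = select_support(Xtr, ytr, k)
        beta = np.linalg.lstsq(Xtr[:, S], ytr, rcond=None)[0]
        errs.append(float((y[i] - X[i, S] @ beta) ** 2))
    return float(np.mean(errs))
```

Running this once per sparsity level for a standard algorithm and once for its SWAP-initialized counterpart reproduces the kind of comparison plotted in Figure 3.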
¹ See http://www.biolab.si/supp/bi-cancer/projections/info/leukemia.htm
² See http://www.biolab.si/supp/bi-cancer/projections/info/prostata.htm
[Figure 3: four panels plotting CV-Train Error and CV-Test Error against Sparsity Level, comparing SWAP with the standard algorithms. Panels: (a) training error and (b) testing error on the leukemia data; (c) training error and (d) testing error on the prostate cancer data.]
Figure 3: (a)-(b) Leukemia dataset with p = 5147 and n = 72. (c)-(d) Prostate cancer dataset with
p = 12533 and n = 102.
6 Summary and Future Work
We studied the sparse recovery problem of estimating the support of a high-dimensional sparse
vector when given a measurement matrix that contains correlated columns. We presented a simple
algorithm, called SWAP, that iteratively swaps variables starting from an initial estimate of the
support until an appropriate loss function can no longer be decreased. We showed that SWAP is surprisingly effective in situations where the measurement matrix contains correlated columns. We
theoretically quantified the conditions on the measurement matrix that guarantee accurate support
recovery. Our theoretical results show that if SWAP is initialized with a support that contains some
active variables, then SWAP can tolerate even higher correlations in the measurement matrix. Using
numerical simulations on synthetic and real data, we showed how SWAP outperformed several
sparse recovery algorithms.
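For reference, the swapping procedure summarized above can be sketched in a few lines of Python. This is a simplified illustration of the idea (exhaustive best-swap search with a least-squares loss), not the exact implementation used in the experiments; function names are ours.

```python
import numpy as np

def ls_loss(X, y, support):
    """Least-squares loss of y after regressing onto the columns X[:, support]."""
    beta = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    r = y - X[:, support] @ beta
    return float(r @ r)

def swap(X, y, support):
    """Greedy SWAP: repeatedly exchange one in-support variable for one
    out-of-support variable whenever the exchange strictly lowers the loss;
    stop when no single swap improves the loss. Each accepted swap strictly
    decreases the loss, so the procedure terminates in finitely many steps."""
    support = list(support)
    p = X.shape[1]
    loss = ls_loss(X, y, support)
    improved = True
    while improved:
        improved = False
        best = (loss, None)
        for i in range(len(support)):
            for j in range(p):
                if j in support:
                    continue
                cand = support[:i] + [j] + support[i + 1:]
                l = ls_loss(X, y, cand)
                if l < best[0]:
                    best = (l, cand)
        if best[1] is not None:
            loss, support = best
            improved = True
    return sorted(support)
```

Each outer iteration scans all k(p − k) candidate swaps and takes the best one, mirroring the "swap for the smallest loss" rule analyzed in Section 4.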
Our work in this paper sets up a platform to study the following interesting extensions of SWAP.
The first is a generalization of SWAP so that a group of variables can be swapped in a sequential
manner. The second is a detailed analysis of SWAP when used with other sparse recovery algorithms. The third is an extension of SWAP to high-dimensional vectors that admit structured sparse
representations.
Acknowledgement
The authors would like to thank Aswin Sankaranarayanan and Christoph Studer for feedback and
discussions. The work of D. Vats was partly supported by an Institute for Mathematics and Applications (IMA) Postdoctoral Fellowship.
References
[1] M. Segal, K. Dahlquist, and B. Conklin, "Regression approaches for microarray data analysis," Journal of Computational Biology, vol. 10, no. 6, pp. 961–980, 2003.
[2] H. Zhu and G. Giannakis, "Sparse overcomplete representations for efficient identification of power line outages," IEEE Transactions on Power Systems, vol. 27, no. 4, pp. 2215–2224, Nov. 2012.
[3] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[4] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, Mar. 2008.
[5] N. Meinshausen and P. Bühlmann, "High-dimensional graphs and variable selection with the Lasso," Annals of Statistics, vol. 34, no. 3, pp. 1436, 2006.
[6] P. Ravikumar, M. Wainwright, and J. Lafferty, "High-dimensional Ising model selection using ℓ1-regularized logistic regression," Annals of Statistics, vol. 38, no. 3, pp. 1287–1319, 2010.
[7] G. Varoquaux, A. Gramfort, and B. Thirion, "Small-sample brain mapping: sparse recovery on spatially correlated designs with randomization and clustering," in Proceedings of the 29th International Conference on Machine Learning (ICML-12), 2012, pp. 1375–1382.
[8] P. Zhao and B. Yu, "On model selection consistency of Lasso," Journal of Machine Learning Research, vol. 7, pp. 2541–2563, 2006.
[9] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[10] M. J. Wainwright, "Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ1-constrained quadratic programming (Lasso)," IEEE Transactions on Information Theory, vol. 55, no. 5, May 2009.
[11] N. Meinshausen and B. Yu, "Lasso-type recovery of sparse representations for high-dimensional data," Annals of Statistics, vol. 37, no. 1, pp. 246–270, 2009.
[12] P. J. Bickel, Y. Ritov, and A. B. Tsybakov, "Simultaneous analysis of Lasso and Dantzig selector," Annals of Statistics, vol. 37, no. 4, pp. 1705–1732, 2009.
[13] P. Bühlmann and S. van de Geer, Statistics for High-Dimensional Data: Methods, Theory and Applications, Springer-Verlag New York Inc., 2011.
[14] H. Zou and T. Hastie, "Regularization and variable selection via the elastic net," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005.
[15] Y. She, "Sparse regression with exact clustering," Electronic Journal of Statistics, vol. 4, pp. 1055–1096, 2010.
[16] E. Grave, G. R. Obozinski, and F. R. Bach, "Trace Lasso: A trace norm regularization for correlated designs," in Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, Eds., 2011, pp. 2187–2195.
[17] J. Huang, S. Ma, H. Li, and C. Zhang, "The sparse Laplacian shrinkage estimator for high-dimensional regression," Annals of Statistics, vol. 39, no. 4, pp. 2021, 2011.
[18] P. Bühlmann, P. Rütimann, S. van de Geer, and C.-H. Zhang, "Correlated variables in regression: clustering and sparse estimation," Journal of Statistical Planning and Inference, vol. 143, pp. 1835–1858, Nov. 2013.
[19] J. Khan, J. S. Wei, M. Ringner, L. H. Saal, M. Ladanyi, F. Westermann, F. Berthold, M. Schwab, C. R. Antonescu, C. Peterson, et al., "Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks," Nature Medicine, vol. 7, no. 6, pp. 673–679, 2001.
[20] T. Zhang, "Adaptive forward-backward greedy algorithm for learning sparse representations," IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4689–4708, 2011.
[21] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2009.
[22] T. Zhang, "Some sharp performance bounds for least squares regression with L1 regularization," The Annals of Statistics, vol. 37, no. 5A, pp. 2109–2144, 2009.
[23] L. Wasserman and K. Roeder, "High dimensional variable selection," Annals of Statistics, vol. 37, no. 5A, pp. 2178, 2009.
[24] T. Zhang, "Analysis of multi-stage convex relaxation for sparse regularization," Journal of Machine Learning Research, vol. 11, pp. 1081–1107, Mar. 2010.
[25] S. van de Geer, P. Bühlmann, and S. Zhou, "The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)," Electronic Journal of Statistics, vol. 5, pp. 688–749, 2011.
[26] N. Meinshausen and P. Bühlmann, "Stability selection," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 4, pp. 417–473, 2010.
[27] D. Vats and R. G. Baraniuk, "Swapping variables for high-dimensional sparse regression with correlated measurements," arXiv:1312.1706, 2013.
Deep content-based music recommendation
Aäron van den Oord, Sander Dieleman, Benjamin Schrauwen
Electronics and Information Systems department (ELIS), Ghent University
{aaron.vandenoord, sander.dieleman, benjamin.schrauwen}@ugent.be
Abstract
Automatic music recommendation has become an increasingly relevant problem
in recent years, since a lot of music is now sold and consumed digitally. Most
recommender systems rely on collaborative filtering. However, this approach suffers from the cold start problem: it fails when no usage data is available, so it is not
effective for recommending new and unpopular songs. In this paper, we propose
to use a latent factor model for recommendation, and predict the latent factors
from music audio when they cannot be obtained from usage data. We compare
a traditional approach using a bag-of-words representation of the audio signals
with deep convolutional neural networks, and evaluate the predictions quantitatively and qualitatively on the Million Song Dataset. We show that using predicted
latent factors produces sensible recommendations, despite the fact that there is a
large semantic gap between the characteristics of a song that affect user preference
and the corresponding audio signal. We also show that recent advances in deep
learning translate very well to the music recommendation setting, with deep convolutional neural networks significantly outperforming the traditional approach.
1 Introduction
In recent years, the music industry has shifted more and more towards digital distribution through
online music stores and streaming services such as iTunes, Spotify, Grooveshark and Google Play.
As a result, automatic music recommendation has become an increasingly relevant problem: it allows listeners to discover new music that matches their tastes, and enables online music stores to
target their wares to the right audience.
Although recommender systems have been studied extensively, the problem of music recommendation in particular is complicated by the sheer variety of different styles and genres, as well as social
and geographic factors that influence listener preferences. The number of different items that can
be recommended is very large, especially when recommending individual songs. This number can
be reduced by recommending albums or artists instead, but this is not always compatible with the
intended use of the system (e.g. automatic playlist generation), and it disregards the fact that the
repertoire of an artist is rarely homogeneous: listeners may enjoy particular songs more than others.
Many recommender systems rely on usage patterns: the combinations of items that users have consumed or rated provide information about the users? preferences, and how the items relate to each
other. This is the collaborative filtering approach. Another approach is to predict user preferences
from item content and metadata.
The consensus is that collaborative filtering will generally outperform content-based recommendation [1]. However, it is only applicable when usage data is available. Collaborative filtering suffers
from the cold start problem: new items that have not been consumed before cannot be recommended.
Additionally, items that are only of interest to a niche audience are more difficult to recommend because usage data is scarce. In many domains, and especially in music, they comprise the majority of
Table 1: Artists whose tracks have very positive and very negative values for three latent factors. The factors seem to discriminate between different styles, such as indie rock, electronic music and classic rock.

Factor 1. Positive: Justin Bieber, Alicia Keys, Maroon 5, John Mayer, Michael Bublé. Negative: The Kills, Interpol, Man Man, Beirut, the bird and the bee.
Factor 2. Positive: Bonobo, Flying Lotus, Cut Copy, Chromeo, Boys Noize. Negative: Shinedown, Rise Against, Avenged Sevenfold, Nickelback, Flyleaf.
Factor 3. Positive: Phoenix, Crystal Castles, Muse, Röyksopp, Paramore. Negative: Traveling Wilburys, Cat Stevens, Creedence Clearwater Revival, Van Halen, The Police.
the available items, because the users' consumption patterns follow a power law [2]. Content-based
recommendation is not affected by these issues.
1.1 Content-based music recommendation
Music can be recommended based on available metadata: information such as the artist, album and
year of release is usually known. Unfortunately this will lead to predictable recommendations. For
example, recommending songs by artists that the user is known to enjoy is not particularly useful.
One can also attempt to recommend music that is perceptually similar to what the user has previously
listened to, by measuring the similarity between audio signals [3, 4]. This approach requires the
definition of a suitable similarity metric. Such metrics are often defined ad hoc, based on prior
knowledge about music audio, and as a result they are not necessarily optimal for the task of music
recommendation. Because of this, some researchers have used user preference data to tune similarity
metrics [5, 6].
1.2 Collaborative filtering
Collaborative filtering methods can be neighborhood-based or model-based [7]. The former methods
rely on a similarity measure between users or items: they recommend items consumed by other users
with similar preferences, or similar items to the ones that the user has already consumed. Model-based methods, on the other hand, attempt to model latent characteristics of the users and items, which
are usually represented as vectors of latent factors. Latent factor models have been very popular ever
since their effectiveness was demonstrated for movie recommendation in the Netflix Prize [8].
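As an illustration of the neighborhood-based flavor, the sketch below scores unseen items by their cosine similarity to the items a user has already consumed; the function name is ours and the setup is deliberately minimal.

```python
import numpy as np

def item_item_scores(R, user):
    """Neighborhood-based scoring sketch: compute cosine similarity between
    the item columns of the user-item matrix R, then score each unseen item
    by its summed similarity to the items the given user has consumed."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0  # guard against empty item columns
    sim = (R.T @ R) / np.outer(norms, norms)  # item-item cosine similarity
    consumed = R[user] > 0
    scores = sim[:, consumed].sum(axis=1)
    scores[consumed] = -np.inf  # do not re-recommend consumed items
    return scores
```

Model-based methods such as the latent factor models discussed next replace the explicit similarity matrix with learned low-dimensional user and item vectors.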
1.3 The semantic gap in music
Latent factor vectors form a compact description of the different facets of users' tastes, and the
corresponding characteristics of the items. To demonstrate this, we computed latent factors for a
small set of usage data, and listed some artists whose songs have very positive and very negative
values for each factor in Table 1. This representation is quite versatile and can be used for other
applications besides recommendation, as we will show later (see Section 5.1). Since usage data is
scarce for many songs, it is often impossible to reliably estimate these factor vectors. Therefore it
would be useful to be able to predict them from music audio content.
There is a large semantic gap between the characteristics of a song that affect user preference, and the
corresponding audio signal. Extracting high-level properties such as genre, mood, instrumentation
and lyrical themes from audio signals requires powerful models that are capable of capturing the
complex hierarchical structure of music. Additionally, some properties are impossible to obtain
from audio signals alone, such as the popularity of the artist, their reputation and their location.
Researchers in the domain of music information retrieval (MIR) concern themselves with extracting
these high-level properties from music. They have grown to rely on a particular set of engineered
audio features, such as mel-frequency cepstral coefficients (MFCCs), which are used as input to
simple classifiers or regressors, such as SVMs and linear regression [9]. Recently this traditional
approach has been challenged by some authors who have applied deep neural networks to MIR
problems [10, 11, 12].
In this paper, we strive to bridge the semantic gap in music by training deep convolutional neural
networks to predict latent factors from music audio. We evaluate our approach on an industrial-scale dataset with audio excerpts of over 380,000 songs, and compare it with a more conventional
approach using a bag-of-words feature representation for each song. We assess to what extent it is
possible to extract characteristics that affect user preference directly from audio signals, and evaluate
the predictions from our models in a music recommendation setting.
2 The dataset
The Million Song Dataset (MSD) [13] is a collection of metadata and precomputed audio features
for one million contemporary songs. Several other datasets linked to the MSD are also available,
featuring lyrics, cover songs, tags and user listening data. This makes the dataset suitable for a
wide range of different music information retrieval tasks. Two linked datasets are of interest for our
experiments:
• The Echo Nest Taste Profile Subset provides play counts for over 380,000 songs in the MSD, gathered from 1 million users. The dataset was used in the Million Song Dataset challenge [14] last year.
• The Last.fm dataset provides tags for over 500,000 songs.
Traditionally, research in music information retrieval (MIR) on large-scale datasets was limited to
industry, because large collections of music audio cannot be published easily due to licensing issues.
The authors of the MSD circumvented these issues by providing precomputed features instead of raw
audio. Unfortunately, the audio features provided with the MSD are of limited use, and the process
by which they were obtained is not very well documented. The feature set was extended by Rauber
et al. [15], but the absence of raw audio data, or at least a mid-level representation, is still an issue.
However, we were able to attain 29 second audio clips for over 99% of the dataset from 7digital.com.
Due to its size, the MSD allows for the music recommendation problem to be studied in a more
realistic setting than was previously possible. It is also worth noting that the Taste Profile Subset is
one of the largest collaborative filtering datasets that are publicly available today.
3 Weighted matrix factorization
The Taste Profile Subset contains play counts per song and per user, which is a form of implicit
feedback. We know how many times the users have listened to each of the songs in the dataset, but
they have not explicitly rated them. However, we can assume that users will probably listen to songs
more often if they enjoy them. If a user has never listened to a song, this can have many causes:
for example, they might not be aware of it, or they might expect not to enjoy it. This setting is not
compatible with traditional matrix factorization algorithms, which are aimed at predicting ratings.
We used the weighted matrix factorization (WMF) algorithm, proposed by Hu et al. [16], to learn
latent factor representations of all users and items in the Taste Profile Subset. This is a modified
matrix factorization algorithm aimed at implicit feedback datasets. Let r_{ui} be the play count for
user u and song i. For each user-item pair, we define a preference variable p_{ui} and a confidence
variable c_{ui} (I(x) is the indicator function; \alpha and \epsilon are hyperparameters):

p_{ui} = I(r_{ui} > 0),                                     (1)

c_{ui} = 1 + \alpha \log(1 + \epsilon^{-1} r_{ui}).         (2)
The preference variable indicates whether user u has ever listened to song i. If it is 1, we will assume
the user enjoys the song. The confidence variable measures how certain we are about this particular
preference. It is a function of the play count, because songs with higher play counts are more likely
to be preferred. If the song has never been played, the confidence variable will have a low value,
because this is the least informative case.
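Equations (1) and (2) map raw play counts to the preference/confidence pair directly; a minimal numpy sketch (the function name and the alpha, eps values below are ours, chosen only for illustration):

```python
import numpy as np

def preference_confidence(R, alpha=2.0, eps=1.0):
    """WMF preference and confidence matrices from a play-count matrix R.

    p_ui = I(r_ui > 0)                       -- equation (1)
    c_ui = 1 + alpha * log(1 + r_ui / eps)   -- equation (2)

    alpha and eps are hyperparameters; the defaults here are illustrative.
    """
    P = (R > 0).astype(float)
    C = 1.0 + alpha * np.log1p(R / eps)
    return P, C

R = np.array([[0.0, 3.0], [1.0, 0.0]])   # play counts, 2 users x 2 songs
P, C = preference_confidence(R)
```

Note that an unplayed song still gets confidence 1 (the minimum), matching the text: zero counts are the least informative case, not evidence of dislike.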
The WMF objective function is given by:
\min_{x_*, y_*} \sum_{u,i} c_{ui} \left( p_{ui} - x_u^T y_i \right)^2
    + \lambda \left( \sum_u \|x_u\|^2 + \sum_i \|y_i\|^2 \right),      (3)
where \lambda is a regularization parameter, x_u is the latent factor vector for user u, and y_i is the
latent factor vector for song i. It consists of a confidence-weighted mean squared error term and an L2
regularization term. Note that the first sum ranges over all users and all songs: contrary to matrix
factorization for rating prediction, where terms corresponding to user-item combinations for which
no rating is available can be discarded, we have to take all possible combinations into account. As
a result, using stochastic gradient descent for optimization is not practical for a dataset of this size.
Hu et al. propose an efficient alternating least squares (ALS) optimization method, which we used
instead.
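The ALS updates solve for each user vector (and then each item vector) in closed form while the other factor matrix is held fixed. A dense, toy-scale sketch of this procedure (Hu et al. use additional algebraic speed-ups to make it efficient at real scale; names and defaults here are ours):

```python
import numpy as np

def wmf_als(P, C, n_factors=5, reg=0.1, n_iters=10, seed=0):
    """Minimal alternating least squares for the WMF objective (3).

    Each row of X (then each row of Y) is the exact minimizer of the
    objective with the other factor matrix held fixed, as in Hu et al. [16].
    This dense version is for illustration only.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = P.shape
    X = rng.normal(scale=0.1, size=(n_users, n_factors))
    Y = rng.normal(scale=0.1, size=(n_items, n_factors))
    L = reg * np.eye(n_factors)
    for _ in range(n_iters):
        for u in range(n_users):                       # user updates
            Cu = np.diag(C[u])
            X[u] = np.linalg.solve(Y.T @ Cu @ Y + L, Y.T @ Cu @ P[u])
        for i in range(n_items):                       # item updates
            Ci = np.diag(C[:, i])
            Y[i] = np.linalg.solve(X.T @ Ci @ X + L, X.T @ Ci @ P[:, i])
    return X, Y

def wmf_loss(P, C, X, Y, reg):
    """Value of objective (3)."""
    E = P - X @ Y.T
    return float(np.sum(C * E ** 2) + reg * (np.sum(X ** 2) + np.sum(Y ** 2)))

# toy example: 3 users x 3 songs
R = np.array([[5.0, 0.0, 2.0], [0.0, 3.0, 0.0], [1.0, 0.0, 4.0]])
P = (R > 0).astype(float)
C = 1.0 + 2.0 * np.log1p(R)
X, Y = wmf_als(P, C, n_factors=2)
```

Because each closed-form solve minimizes the objective with respect to that row, the objective is non-increasing across sweeps, which makes the procedure easy to sanity-check.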
4 Predicting latent factors from music audio
Predicting latent factors for a given song from the corresponding audio signal is a regression problem. It requires learning a function that maps a time series to a vector of real numbers. We evaluate
two methods to achieve this: one follows the conventional approach in MIR by extracting local
features from audio signals and aggregating them into a bag-of-words (BoW) representation. Any
traditional regression technique can then be used to map this feature representation to the factors.
The other method is to use a deep convolutional network.
Latent factor vectors obtained by applying WMF to the available usage data are used as ground truth
to train the prediction models. It should be noted that this approach is compatible with any type
of latent factor model that is suitable for large implicit feedback datasets. We chose to use WMF
because an efficient optimization procedure exists for it.
4.1 Bag-of-words representation
Many MIR systems rely on the following feature extraction pipeline to convert music audio signals
into a fixed-size representation that can be used as input to a classifier or regressor [5, 17, 18, 19, 20]:
• Extract MFCCs from the audio signals. We computed 13 MFCCs from windows of 1024
audio frames, corresponding to 23 ms at a sampling rate of 22050 Hz, and a hop size of 512
samples. We also computed first and second order differences, yielding 39 coefficients in total.
• Vector quantize the MFCCs. We learned a dictionary of 4000 elements with the K-means
algorithm and assigned all MFCC vectors to the closest mean.
• Aggregate them into a bag-of-words representation. For every song, we counted how many
times each mean was selected. The resulting vector of counts is a bag-of-words feature
representation of the song.
We then reduced the size of this representation using PCA (we kept enough components to retain
95% of the variance) and used linear regression and a multilayer perceptron with 1000 hidden units
on top of this to predict latent factors. We also used it as input for the metric learning to rank (MLR)
algorithm [21], to learn a similarity metric for content-based recommendation. This was used as a
baseline for our music recommendation experiments, which are described in Section 5.2.
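The vector-quantization and counting steps of this pipeline can be sketched with plain numpy; here a small random codebook stands in for the 4000 K-means centroids, and random vectors stand in for real 39-dimensional MFCC frames:

```python
import numpy as np

def bag_of_words(frames, codebook):
    """Assign each MFCC frame to its nearest codebook mean and count.

    frames:   (n_frames, n_coeffs) MFCC vectors for one song
    codebook: (k, n_coeffs) cluster means learned with K-means
    returns:  (k,) vector of counts, the song's BoW representation
    """
    # squared Euclidean distance from every frame to every codeword
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)
    return np.bincount(assignments, minlength=len(codebook))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 39))    # stand-in for 4000 K-means centroids
frames = rng.normal(size=(100, 39))    # stand-in for one song's MFCC frames
bow = bag_of_words(frames, codebook)
```

The resulting count vectors would then be stacked over songs and reduced with PCA before the regression step, as described above.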
4.2 Convolutional neural networks
Convolutional neural networks (CNNs) have recently been used to improve on the state of the art in
speech recognition and large-scale image classification by a large margin [22, 23]. Three ingredients
seem to be central to the success of this approach:
• Using rectified linear units (ReLUs) [24] instead of sigmoid nonlinearities leads to faster
convergence and reduces the vanishing gradient problem that plagues traditional neural networks
with many layers.
• Parallelization is used to speed up training, so that larger models can be trained in a reasonable
amount of time. We used the Theano library [25] to take advantage of GPU acceleration.
• A large amount of training data is required to be able to fit large models with many parameters.
The MSD contains enough training data to be able to train large models effectively.
We have also evaluated the use of dropout regularization [26], but this did not yield any significant
improvements.
We first extracted an intermediate time-frequency representation from the audio signals to use as
input to the network. We used log-compressed mel-spectrograms with 128 components and the same
window size and hop size that we used for the MFCCs (1024 and 512 audio frames respectively).
The networks were trained on windows of 3 seconds sampled randomly from the audio clips. This
was done primarily to speed up training. To predict the latent factors for an entire clip, we averaged
over the predictions for consecutive windows.
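Averaging window-level predictions into a clip-level factor vector can be sketched as follows (the window length of 129 frames is our approximation of 3 seconds at the stated hop size of 512 samples and 22050 Hz; the toy model below is only a stand-in for the trained network):

```python
import numpy as np

def predict_clip(spectrogram, predict_window, window_len=129):
    """Average per-window predictions into clip-level latent factors.

    spectrogram:    (n_mel, n_frames) log-mel spectrogram of a clip
    predict_window: model mapping one (n_mel, window_len) window
                    to a latent factor vector
    """
    n_frames = spectrogram.shape[1]
    starts = range(0, n_frames - window_len + 1, window_len)
    preds = np.stack([predict_window(spectrogram[:, s:s + window_len])
                      for s in starts])
    return preds.mean(axis=0)

# toy stand-in for the trained network: mean energy of the first 4 mel bands
fake_model = lambda w: w.mean(axis=1)[:4]
spec = np.ones((128, 645))            # ~15 s of all-ones "spectrogram"
factors = predict_clip(spec, fake_model)
```

At training time, by contrast, single 3-second windows are sampled at random positions, which acts as a cheap form of data augmentation as well as a speed-up.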
Convolutional neural networks are especially suited for predicting latent factors from music audio,
because they allow for intermediate features to be shared between different factors, and because their
hierarchical structure consisting of alternating feature extraction layers and pooling layers allows
them to operate on multiple timescales.
4.3 Objective functions
Latent factor vectors are real-valued, so the most straightforward objective is to minimize the
mean squared error (MSE) of the predictions. Alternatively, we can also continue to minimize
the weighted prediction error (WPE) from the WMF objective function. Let y_i be the latent factor
vector for song i, obtained with WMF, and y_i' the corresponding prediction by the model. The
objective functions are then (\theta represents the model parameters):
\min_\theta \sum_i \|y_i - y_i'\|^2,                                   (4)

\min_\theta \sum_{u,i} c_{ui} \left( p_{ui} - x_u^T y_i' \right)^2.    (5)
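For reference, the two objectives can be written as plain functions; X holds the user factors from WMF, which stay fixed while the content model is trained (function names are ours):

```python
import numpy as np

def mse_objective(Y_true, Y_pred):
    """Equation (4): summed squared error over the song factor vectors."""
    return float(np.sum((Y_true - Y_pred) ** 2))

def wpe_objective(P, C, X, Y_pred):
    """Equation (5): confidence-weighted prediction error; the user
    factors X from WMF are held fixed while the content model is trained."""
    return float(np.sum(C * (P - X @ Y_pred.T) ** 2))
```

The MSE objective treats all songs equally, whereas the WPE objective inherits the play-count weighting of the WMF objective, which matters for the results discussed in Section 5.2.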
5 Experiments
5.1 Versatility of the latent factor representation
To demonstrate the versatility of the latent factor vectors, we compared them with audio features in
a tag prediction task. Tags can describe a wide range of different aspects of the songs, such as genre,
instrumentation, tempo, mood and year of release.
We ran WMF to obtain 50-dimensional latent factor vectors for all 9,330 songs in the subset, and
trained a logistic regression model to predict the 50 most popular tags from the Last.fm dataset
for each song. We also trained a logistic regression model on a bag-of-words representation of the
audio signals, which was first reduced in size using PCA (see Section 4.1). We used 10-fold cross-validation and computed the average area under the ROC curve (AUC) across all tags. This resulted
in an average AUC of 0.69365 for audio-based prediction, and 0.86703 for prediction based on
the latent factor vectors.
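The AUC values above can be computed directly from the ranks of the classifier scores, without sweeping thresholds; a small sketch (this naive version does not correct for tied scores):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum statistic: the probability that a randomly
    chosen positive example is scored above a randomly chosen negative.
    (Naive version: ties in the scores are not corrected for.)"""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
example_auc = auc(labels, scores)
```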
5.2 Latent factor prediction: quantitative evaluation
To assess quantitatively how well we can predict latent factors from music audio, we used the predictions from our models for music recommendation. For every user u and for every song i in the
test set, we computed the score x_u^T y_i, and recommended the songs with the highest scores first. As
mentioned before, we also learned a song similarity metric on the bag-of-words representation using
metric learning to rank. In this case, scores for a given user are computed by averaging similarity
scores across all the songs that the user has listened to.
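Scoring and ranking songs for a user from the factor vectors is a single matrix-vector product; a minimal sketch (masking out already-played songs is our addition, a common convention in recommendation evaluation):

```python
import numpy as np

def recommend(x_u, Y, listened, k=3):
    """Rank unheard songs for user u by the score x_u^T y_i."""
    scores = (Y @ x_u).astype(float)
    scores[listened] = -np.inf        # never re-recommend known songs
    return np.argsort(-scores)[:k]

x_u = np.array([1.0, 0.0])                                       # user factors
item_factors = np.array([[0.9, 0.0], [0.5, 0.0],
                         [0.0, 1.0], [0.7, 0.0]])                # song factors
top = recommend(x_u, item_factors, listened=[0], k=3)
```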
The following models were used to predict latent factor vectors:
• Linear regression trained on the bag-of-words representation described in Section 4.1.
• A multi-layer perceptron (MLP) trained on the same bag-of-words representation.
• A convolutional neural network trained on log-scaled mel-spectrograms to minimize the mean
squared error (MSE) of the predictions.
• The same convolutional neural network, trained to minimize the weighted prediction error
(WPE) from the WMF objective instead.
For our initial experiments, we used a subset of the
dataset containing only the 9,330 most popular songs,
and listening data for only 20,000 users. We used 1,881
songs for testing. For the other experiments, we used
all available data: we used all songs that we have usage
data for and that we were able to download an audio clip
for (382,410 songs and 1 million users in total, 46,728
songs were used for testing).
We report the mean average precision (mAP, cut off at 500 recommendations per user) and the
area under the ROC curve (AUC) of the predictions. We evaluated all models on the subset, using
latent factor vectors with 50 dimensions. We compared the convolutional neural network with
linear regression on the bag-of-words representation on the full dataset as well, using latent
factor vectors with 400 dimensions. Results are shown in Tables 2 and 3 respectively.

Model               mAP      AUC
MLR                 0.01801  0.60608
linear regression   0.02389  0.63518
MLP                 0.02536  0.64611
CNN with MSE        0.05016  0.70987
CNN with WPE        0.04323  0.70101

Table 2: Results for all considered models on a subset of the dataset containing only the 9,330
most popular songs, and listening data for 20,000 users.
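The mAP metric with a cutoff can be sketched as follows (the exact normalization used in the MSD challenge may differ slightly; this is one common convention):

```python
def average_precision(relevant, ranked, k=500):
    """AP@k for one user: precision at each hit, averaged over the
    number of retrievable relevant items (one common convention)."""
    relevant = set(relevant)
    hits, score = 0, 0.0
    for n, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / n
    return score / min(len(relevant), k)

def mean_average_precision(all_relevant, all_ranked, k=500):
    """mAP: average-precision values averaged over users."""
    aps = [average_precision(r, p, k) for r, p in zip(all_relevant, all_ranked)]
    return sum(aps) / len(aps)
```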
On the subset, predicting the latent factors seems to outperform the metric learning approach. Using
an MLP instead of linear regression results in a slight improvement, but the limitation here is clearly
the bag-of-words feature representation. Using a convolutional neural network results in another
large increase in performance. Most likely this is because the bag-of-words representation does not
reflect any kind of temporal structure.
Interestingly, the WPE objective does not result in improved performance. Presumably this is because the weighting causes the importance of the songs to be proportional to their popularity. In
other words, the model will be encouraged to predict latent factor vectors for popular songs from
the training set very well, at the expense of all other songs.
On the full dataset, the difference between the bag-of-words approach and the convolutional neural network is
much more pronounced. Note that we did not train an
MLP on this dataset due to the small difference in performance with linear regression on the subset. We also
included results for when the latent factor vectors are obtained from usage data. This is an upper bound to what
is achievable when predicting them from content. There
is a large gap between our best result and this theoretical
maximum, but this is to be expected: as we mentioned before, many aspects of the songs that influence user preference cannot possibly be extracted from audio signals only.
In particular, we are unable to predict the popularity of
the songs, which considerably affects the AUC and mAP
scores.
Model               mAP      AUC
random              0.00015  0.49935
linear regression   0.00101  0.64522
CNN with MSE        0.00672  0.77192
upper bound         0.23278  0.96070

Table 3: Results for linear regression on a bag-of-words representation of the audio signals, and
a convolutional neural network trained with the MSE objective, on the full dataset (382,410 songs
and 1 million users). Also shown are the scores achieved when the latent factor vectors are
randomized, and when they are learned from usage data using WMF (upper bound).

5.3 Latent factor prediction: qualitative evaluation
Evaluating recommender systems is a complex matter, and
accuracy metrics by themselves do not provide enough insight into whether the recommendations are sound. To establish this, we also performed some
qualitative experiments on the subset. For each song, we searched for similar songs by measuring
the cosine similarity between the predicted usage patterns. We compared the usage patterns predicted using the latent factors obtained with WMF (50 dimensions), with those using latent factors
predicted with a convolutional neural network. A few songs and their closest matches according
to both models are shown in Table 4. When the predicted latent factors are used, the matches are
mostly different, but the results are quite reasonable in the sense that the matched songs are likely
to appeal to the same audience. Furthermore, they seem to be a bit more varied, which is a useful
property for recommender systems.
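The similar-song search described above can be sketched as a cosine-similarity nearest-neighbour lookup over (predicted) usage-pattern vectors, e.g. u_i = X y_i with X the WMF user factors (function and variable names are ours):

```python
import numpy as np

def most_similar(i, U, top_k=5):
    """Nearest songs to song i by cosine similarity of usage patterns.

    U: (n_songs, n_users) matrix whose rows are (predicted) usage patterns.
    """
    norms = np.linalg.norm(U, axis=1)
    sims = (U @ U[i]) / (norms * norms[i] + 1e-12)
    sims[i] = -np.inf                 # exclude the query song itself
    return np.argsort(-sims)[:top_k]

U = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
matches = most_similar(0, U, top_k=3)
```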
Query: Jonas Brothers - Hold On
  Most similar tracks (WMF): Jonas Brothers - Games; Miley Cyrus - G.N.O. (Girl's Night Out);
  Miley Cyrus - Girls Just Wanna Have Fun; Jonas Brothers - Year 3000; Jonas Brothers - BB Good
  Most similar tracks (predicted): Jonas Brothers - Video Girl; Jonas Brothers - Games;
  New Found Glory - My Friends Over You; My Chemical Romance - Thank You For The Venom;
  My Chemical Romance - Teenagers

Query: Beyoncé - Speechless
  Most similar tracks (WMF): Beyoncé - Gift From Virgo; Beyonce - Daddy;
  Rihanna / J-Status - Crazy Little Thing Called Love; Beyoncé - Dangerously In Love;
  Rihanna - Haunted
  Most similar tracks (predicted): Daniel Bedingfield - If You're Not The One; Rihanna - Haunted;
  Alejandro Sanz - Siempre Es De Noche; Madonna - Miles Away; Lil Wayne / Shanell - American Star

Query: Coldplay - I Ran Away
  Most similar tracks (WMF): Coldplay - Careful Where You Stand; Coldplay - The Goldrush;
  Coldplay - X & Y; Coldplay - Square One; Jonas Brothers - BB Good
  Most similar tracks (predicted): Arcade Fire - Keep The Car Running; M83 - You Appearing;
  Angus & Julia Stone - Hollywood; Bon Iver - Creature Fear; Coldplay - The Goldrush

Query: Daft Punk - Rock'n Roll
  Most similar tracks (WMF): Daft Punk - Short Circuit; Daft Punk - Nightvision;
  Daft Punk - Too Long (Gonzales Version); Daft Punk - Aerodynamite;
  Daft Punk - One More Time / Aerodynamic
  Most similar tracks (predicted): Boys Noize - Shine Shine; Boys Noize - Lava Lava;
  Flying Lotus - Pet Monster Shotglass; LCD Soundsystem - One Touch;
  Justice - One Minute To Midnight

Table 4: A few songs and their closest matches in terms of usage patterns, using latent factors
obtained with WMF and using latent factors predicted by a convolutional neural network.
Figure 1: t-SNE visualization of the distribution of predicted usage patterns, using latent factors predicted
from audio. A few close-ups show artists whose songs are projected in specific areas. We can discern hip-hop
(red), rock (green), pop (yellow) and electronic music (blue). This figure is best viewed in color.
Following McFee et al. [5], we also visualized the distribution of predicted usage patterns in two
dimensions using t-SNE [27]. A few close-ups are shown in Figure 1. Clusters of songs that appeal
to the same audience seem to be preserved quite well, even though the latent factor vectors for all
songs were predicted from audio.
6 Related work
Many researchers have attempted to mitigate the cold start problem in collaborative filtering by
incorporating content-based features. We review some recent work in this area of research.
Wang et al. [28] extend probabilistic matrix factorization (PMF) [29] with a topic model prior on
the latent factor vectors of the items, and apply this model to scientific article recommendation.
Topic proportions obtained from the content of the articles are used instead of latent factors when no
usage data is available. The entire system is trained jointly, allowing the topic model and the latent
space learned by matrix factorization to adapt to each other. Our approach is sequential instead: we
first obtain latent factor vectors for songs for which usage data is available, and use these to train
a regression model. Because we reduce the incorporation of content information to a regression
problem, we are able to use a deep convolutional network.
McFee et al. [5] define an artist-level content-based similarity measure for music learned from a
sample of collaborative filter data using metric learning to rank [21]. They use a variation on the
typical bag-of-words approach for audio feature extraction (see section 4.1). Their results corroborate that relying on usage data to train the model improves content-based recommendations. For
audio data they used the CAL10K dataset, which consists of 10,832 songs, so it is comparable in
size to the subset of the MSD that we used for our initial experiments.
Weston et al. [17] investigate the problem of recommending items to a user given another item as
a query, which they call ?collaborative retrieval?. They optimize an item scoring function using a
ranking loss and describe a variant of their method that allows for content features to be incorporated. They also use the bag-of-words approach to extract audio features and evaluate this method
on a large proprietary dataset. They find that combining collaborative filtering and content-based information does not improve the accuracy of the recommendations over collaborative filtering alone.
Both McFee et al. and Weston et al. optimized their models using a ranking loss. We have opted to
use quadratic loss functions instead, because we found their optimization to be more easily scalable.
Using a ranking loss instead is an interesting direction of future research, although we suspect that
this approach may suffer from the same problems as the WPE objective (i.e. popular songs will have
an unfair advantage).
7 Conclusion
In this paper, we have investigated the use of deep convolutional neural networks to predict latent
factors from music audio when they cannot be obtained from usage data. We evaluated the predictions by using them for music recommendation on an industrial-scale dataset. Even though a lot
of characteristics of songs that affect user preference cannot be predicted from audio signals, the
resulting recommendations seem to be sensible. We can conclude that predicting latent factors from
music audio is a viable method for recommending new and unpopular music.
We also showed that recent advances in deep learning translate very well to the music recommendation setting in combination with this approach, with deep convolutional neural networks significantly
outperforming a more traditional approach using bag-of-words representations of audio signals. This
bag-of-words representation is used very often in MIR, and our results indicate that a lot of research
in this domain could benefit significantly from using deep neural networks.
References
[1] M. Slaney. Web-scale multimedia analysis: Does content matter? MultiMedia, IEEE, 18(2):12-15, 2011.
[2] Ò. Celma. Music Recommendation and Discovery in the Long Tail. PhD thesis, Universitat Pompeu Fabra, Barcelona, 2008.
[3] Malcolm Slaney, Kilian Q. Weinberger, and William White. Learning a metric for music similarity. In
Proceedings of the 9th International Conference on Music Information Retrieval (ISMIR), 2008.
[4] Jan Schlüter and Christian Osendorfer. Music Similarity Estimation with the Mean-Covariance Restricted
Boltzmann Machine. In Proceedings of the 10th International Conference on Machine Learning and
Applications (ICMLA), 2011.
[5] Brian McFee, Luke Barrington, and Gert R. G. Lanckriet. Learning content similarity for music recommendation. IEEE Transactions on Audio, Speech & Language Processing, 20(8), 2012.
[6] Richard Stenzel and Thomas Kamps. Improving Content-Based Similarity Measures by Training a Collaborative Model. pages 264-271, London, UK, September 2005. University of London.
[7] Francesco Ricci, Lior Rokach, Bracha Shapira, and Paul B. Kantor, editors. Recommender Systems
Handbook. Springer, 2011.
[8] James Bennett and Stan Lanning. The netflix prize. In Proceedings of KDD cup and workshop, volume
2007, page 35, 2007.
[9] Eric J. Humphrey, Juan P. Bello, and Yann LeCun. Moving beyond feature design: Deep architectures
and automatic feature learning in music informatics. In Proceedings of the 13th International Conference
on Music Information Retrieval (ISMIR), 2012.
[10] Philippe Hamel and Douglas Eck. Learning features from music audio with deep belief networks. In
Proceedings of the 11th International Conference on Music Information Retrieval (ISMIR), 2010.
[11] Honglak Lee, Peter Pham, Yan Largman, and Andrew Ng. Unsupervised feature learning for audio
classification using convolutional deep belief networks. In Advances in Neural Information Processing
Systems 22. 2009.
[12] Sander Dieleman, Philémon Brakel, and Benjamin Schrauwen. Audio-based music classification with a
pretrained convolutional network. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR), 2011.
[13] Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere. The million song dataset.
In Proceedings of the 11th International Conference on Music Information Retrieval (ISMIR), 2011.
[14] Brian McFee, Thierry Bertin-Mahieux, Daniel P.W. Ellis, and Gert R.G. Lanckriet. The million song
dataset challenge. In Proceedings of the 21st international conference companion on World Wide Web,
2012.
[15] Andreas Rauber, Alexander Schindler, and Rudolf Mayer. Facilitating comprehensive benchmarking
experiments on the million song dataset. In Proceedings of the 13th International Conference on Music
Information Retrieval (ISMIR), 2012.
[16] Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In
Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, 2008.
[17] Jason Weston, Chong Wang, Ron Weiss, and Adam Berenzweig. Latent collaborative retrieval. In Proceedings of the 29th international conference on Machine learning, 2012.
[18] Jason Weston, Samy Bengio, and Philippe Hamel. Large-scale music annotation and retrieval: Learning
to rank in joint semantic spaces. Journal of New Music Research, 2011.
[19] Jonathan T Foote. Content-based retrieval of music and audio. In Voice, Video, and Data Communications,
pages 138-147. International Society for Optics and Photonics, 1997.
[20] Matthew Hoffman, David Blei, and Perry Cook. Easy As CBA: A Simple Probabilistic Model for Tagging
Music. In Proceedings of the 10th International Conference on Music Information Retrieval (ISMIR),
2009.
[21] Brian McFee and Gert R. G. Lanckriet. Metric learning to rank. In Proceedings of the 27 th International
Conference on Machine Learning, 2010.
[22] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew
Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic
modeling in speech recognition: the shared views of four research groups. Signal Processing Magazine,
IEEE, 29(6):82-97, 2012.
[23] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In Advances in Neural Information Processing Systems 25, 2012.
[24] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In
Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010.
[25] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010.
[26] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural
networks by preventing co-adaptation of feature detectors. Technical report, University of Toronto, 2012.
[27] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning
Research, 9(2579-2605):85, 2008.
[28] Chong Wang and David M. Blei. Collaborative topic modeling for recommending scientific articles.
In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data
mining, 2011.
[29] Ruslan Salakhutdinov and Andriy Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, volume 20, 2008.
9
4,426 | 5,005 | Probabilistic Low-Rank Matrix Completion with
Adaptive Spectral Regularization Algorithms
François Caron
Univ. Oxford, Dept. of Statistics
Oxford, OX1 3TG, UK
[email protected]
Adrien Todeschini
INRIA - IMB - Univ. Bordeaux
33405 Talence, France
[email protected]
Marie Chavent
Univ. Bordeaux - IMB - INRIA
33000 Bordeaux, France
[email protected]
Abstract
We propose a novel class of algorithms for low rank matrix completion. Our approach builds on novel penalty functions on the singular values of the low rank
matrix. By exploiting a mixture model representation of this penalty, we show
that a suitably chosen set of latent variables enables us to derive an Expectation-Maximization algorithm to obtain a Maximum A Posteriori estimate of the completed low rank matrix. The resulting algorithm is an iterative soft-thresholded
algorithm which iteratively adapts the shrinkage coefficients associated to the singular values. The algorithm is simple to implement and can scale to large matrices.
We provide numerical comparisons between our approach and recent alternatives
showing the interest of the proposed approach for low rank matrix completion.
1 Introduction
Matrix completion has attracted a lot of attention over the past few years. The objective is to "complete" a matrix of potentially large dimension based on a small (and potentially noisy) subset of its
entries [1, 2, 3]. One popular application is to build automatic recommender systems, where the
rows correspond to users, the columns to items and entries may be ratings or binary (like/dislike).
The objective is then to predict user preferences from a subset of the entries.
In many cases, it is reasonable to assume that the unknown $m \times n$ matrix $Z$ can be approximated by a matrix of low rank $Z \simeq AB^T$, where $A$ and $B$ are respectively of size $m \times k$ and $n \times k$, with $k \ll \min(m, n)$. In the recommender system application, the low rank assumption is sensible as it is commonly believed that only a few factors contribute to a user's preferences. The low rank structure thus implies some sort of collaboration between the different users/items [4].
We typically observe a noisy version $X_{ij}$ of some entries $(i, j) \in \Omega$, where $\Omega \subset \{1, \ldots, m\} \times \{1, \ldots, n\}$. For $(i, j) \in \Omega$
$$X_{ij} = Z_{ij} + \varepsilon_{ij}, \qquad \varepsilon_{ij} \overset{iid}{\sim} \mathcal{N}(0, \sigma^2) \qquad (1)$$
where $\sigma^2 > 0$ and $\mathcal{N}(\mu, \sigma^2)$ is the normal distribution of mean $\mu$ and variance $\sigma^2$. Low rank matrix completion can be addressed by solving the following optimization problem
$$\underset{Z}{\text{minimize}}\ \frac{1}{2\sigma^2} \sum_{(i,j)\in\Omega} (X_{ij} - Z_{ij})^2 + \lambda\,\mathrm{rank}(Z) \qquad (2)$$
where $\lambda > 0$ is some regularization parameter. For general subsets $\Omega$, the optimization problem (2) is computationally hard and many authors have advocated the use of a convex relaxation of (2) [5, 6, 4], yielding the following convex optimization problem
$$\underset{Z}{\text{minimize}}\ \frac{1}{2\sigma^2} \sum_{(i,j)\in\Omega} (X_{ij} - Z_{ij})^2 + \lambda \|Z\|_* \qquad (3)$$
where $\|Z\|_*$ is the nuclear norm of $Z$, i.e. the sum of the singular values of $Z$. [4] proposed an iterative algorithm, called Soft-Impute, for solving the nuclear norm regularized minimization (3). In this paper, we show that the solution to the objective function (3) can be interpreted as a Maximum A Posteriori (MAP) estimate when assuming that the singular values of $Z$ are independently and identically distributed (iid) from an exponential distribution with rate $\lambda$. Using this Bayesian interpretation, we propose alternative concave penalties to the nuclear norm, obtained by considering that the singular values are iid from a mixture of exponential distributions. We show that this class of penalties bridges the gap between the nuclear norm and the rank penalty, and that a simple Expectation-Maximization (EM) algorithm can be derived to obtain MAP estimates. The resulting algorithm iteratively adapts the shrinkage coefficients associated to the singular values. It can be seen as the equivalent for matrices of reweighted $\ell_1$ algorithms [6] for multivariate linear regression. Interestingly, we show that the Soft-Impute algorithm of [4] is obtained as a particular case. We also discuss the extension of our algorithms to binary matrices, building on the same set of ideas, in the supplementary material. Finally, we provide some empirical evidence of the interest of the proposed approach on simulated and real data.
2 Complete matrix X
Consider first that we observe the complete matrix $X$ of size $m \times n$. Let $r = \min(m, n)$. We consider the following convex optimization problem
$$\underset{Z}{\text{minimize}}\ \frac{1}{2\sigma^2} \|X - Z\|_F^2 + \lambda \|Z\|_* \qquad (4)$$
where $\|\cdot\|_F$ is the Frobenius norm. The solution to Eq. (4) in the complete case is a soft-thresholded singular value decomposition (SVD) of $X$ [7, 4], i.e.
$$\hat{Z} = S_{\lambda\sigma^2}(X)$$
where $S_\lambda(X) = \widetilde{U}\widetilde{D}_\lambda\widetilde{V}^T$ with $\widetilde{D}_\lambda = \mathrm{diag}((\widetilde{d}_1 - \lambda)_+, \ldots, (\widetilde{d}_r - \lambda)_+)$ and $t_+ = \max(t, 0)$. $X = \widetilde{U}\widetilde{D}\widetilde{V}^T$ is the singular value decomposition of $X$ with $\widetilde{D} = \mathrm{diag}(\widetilde{d}_1, \ldots, \widetilde{d}_r)$.
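For concreteness, the operator $S_\lambda$ takes a few lines of NumPy. This is a minimal sketch of ours (the name `soft_svd` is not from the paper):

```python
import numpy as np

def soft_svd(X, lam):
    """Soft-thresholded SVD S_lam(X): shrink each singular value of X
    by lam, truncating at zero, and rebuild the matrix."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(d - lam, 0.0)) @ Vt

# Toy example: a rank-1 matrix with singular value 6, thresholded at 2,
# keeps the same singular vectors with singular value 4.
X = 2.0 * np.outer([1.0, 0.0, 0.0], [3.0, 0.0, 0.0, 0.0])
Z_hat = soft_svd(X, 2.0)
```

Calling it with threshold $\lambda\sigma^2$ gives exactly the complete-case MAP estimate above.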
The solution $\hat{Z}$ to the optimization problem (4) can be interpreted as the Maximum A Posteriori estimate under the likelihood (1) and prior
$$p(Z) \propto \exp(-\lambda \|Z\|_*)$$
Assuming $Z = UDV^T$, with $D = \mathrm{diag}(d_1, d_2, \ldots, d_r)$, this can be further decomposed as
$$p(Z) = p(U)\,p(V)\,p(D)$$
where we assume a uniform Haar prior distribution on the unitary matrices $U$ and $V$, and exponential priors on the singular values $d_i$, hence
$$p(d_1, \ldots, d_r) = \prod_{i=1}^{r} \mathrm{Exp}(d_i; \lambda) \qquad (5)$$
where $\mathrm{Exp}(x; \lambda) = \lambda \exp(-\lambda x)$ is the probability density function (pdf) of the exponential distribution with parameter $\lambda$ evaluated at $x$. The exponential distribution has a mode at 0, hence favoring sparse solutions.
We propose here alternative penalty/prior distributions, that bridge the gap between the rank and the
nuclear norm penalties. Our penalties are based on hierarchical Bayes constructions and the related
optimization problems to obtain MAP estimates can be solved by using an EM algorithm.
2.1 Hierarchical adaptive spectral penalty
We consider the following hierarchical prior for the low rank matrix $Z$. We still assume that $Z = UDV^T$, where the unitary matrices $U$ and $V$ are assigned uniform priors and $D = \mathrm{diag}(d_1, \ldots, d_r)$. We now assume that each singular value $d_i$ has its own regularization parameter $\gamma_i$.
$$p(d_1, \ldots, d_r \mid \gamma_1, \ldots, \gamma_r) = \prod_{i=1}^{r} p(d_i \mid \gamma_i) = \prod_{i=1}^{r} \mathrm{Exp}(d_i; \gamma_i)$$

Figure 1: Marginal distribution $p(d_i)$ with $a = b = \beta$ for different values of the parameter $\beta$. The distribution becomes more concentrated around zero with heavier tails as $\beta$ decreases. The case $\beta \to \infty$ corresponds to an exponential distribution with unit rate.

Figure 2: Thresholding rules on the singular values $\widetilde{d}_i$ of $X$ for the soft thresholding rule ($\lambda = 1$), and the hierarchical adaptive soft thresholding algorithm with $a = b = \beta$.
We further assume that the regularization parameters are themselves iid from a gamma distribution
$$p(\gamma_1, \ldots, \gamma_r) = \prod_{i=1}^{r} p(\gamma_i) = \prod_{i=1}^{r} \mathrm{Gamma}(\gamma_i; a, b)$$
where $\mathrm{Gamma}(\gamma_i; a, b)$ is the pdf of the gamma distribution with parameters $a > 0$ and $b > 0$ evaluated at $\gamma_i$. The marginal distribution over $d_i$ is thus a continuous mixture of exponential distributions
$$p(d_i) = \int_0^\infty \mathrm{Exp}(d_i; \gamma_i)\,\mathrm{Gamma}(\gamma_i; a, b)\,d\gamma_i = \frac{a\,b^a}{(d_i + b)^{a+1}} \qquad (6)$$
It is a Pareto distribution, which has heavier tails than the exponential distribution. Figure 1 shows the marginal distribution $p(d_i)$ for $a = b = \beta$. The lower $\beta$, the heavier the tails of the distribution. When $\beta \to \infty$, one recovers the exponential distribution with unit rate parameter. Let
$$\mathrm{pen}(Z) = -\log p(Z) = -\sum_{i=1}^{r} \log(p(d_i)) = C_1 + \sum_{i=1}^{r} (a+1)\log(b + d_i) \qquad (7)$$
be the penalty induced by the prior $p(Z)$. We call the penalty (7) the Hierarchical Adaptive Spectral Penalty (HASP). Figure 3 (top) represents the balls of constant penalty for a symmetric $2 \times 2$ matrix, for the HASP, nuclear norm and rank penalties. When the matrix is assumed to be diagonal, one recovers respectively the lasso, hierarchical adaptive lasso (HAL) [6, 8] and $\ell_0$ penalties, as shown in Figure 3 (bottom).
The penalty (7) admits as a special case the nuclear norm penalty $\lambda\|Z\|_*$ when $a = \lambda b$ and $b \to \infty$. Another closely related penalty is the log-det heuristic penalty [5, 9], defined for a square matrix $Z$ by $\log\det(Z + \delta I)$ where $\delta$ is some small regularization constant. Both penalties agree on square matrices when $a = b = 0$ and $\delta = 0$.
2.2 EM algorithm for MAP estimation
Using the exponential mixture representation (6), we now show how to derive an EM algorithm [10] to obtain a MAP estimate
$$\hat{Z} = \arg\max_{Z}\,[\log p(X \mid Z) + \log p(Z)]$$
i.e. to minimize
$$\mathcal{L}(Z) = \frac{1}{2\sigma^2}\|X - Z\|_F^2 + \sum_{i=1}^{r}(a+1)\log(b + d_i) \qquad (8)$$
Figure 3: Top: manifold of constant penalty for a symmetric $2 \times 2$ matrix $Z = [x, y; y, z]$ for (a) the nuclear norm, (b-c) the hierarchical adaptive spectral penalty with $a = b = \beta$, (b) $\beta = 1$ and (c) $\beta = 0.1$, and (d) the rank penalty. Bottom: contours of constant penalty for a diagonal matrix $[x, 0; 0, z]$, where one recovers the classical (e) lasso, (f-g) hierarchical adaptive lasso and (h) $\ell_0$ penalties.
We use the parameters $\gamma = (\gamma_1, \ldots, \gamma_r)$ as latent variables in the EM algorithm. The E step is obtained by
$$Q(Z, Z^*) = \mathbb{E}\left[\log p(X, Z, \gamma) \mid Z^*, X\right] = C_2 - \frac{1}{2\sigma^2}\|X - Z\|_F^2 - \sum_{i=1}^{r}\mathbb{E}[\gamma_i \mid d_i^*]\,d_i$$
Hence at each iteration of the EM algorithm, the M step consists in solving the optimization problem
$$\underset{Z}{\text{minimize}}\ \frac{1}{2\sigma^2}\|X - Z\|_F^2 + \sum_{i=1}^{r}\omega_i d_i \qquad (9)$$
where $\omega_i = \mathbb{E}[\gamma_i \mid d_i^*] = \frac{\partial}{\partial d_i^*}\left[-\log p(d_i^*)\right] = \frac{a+1}{b+d_i^*}$.
(9) is an adaptive nuclear norm regularized optimization problem, with weights $\omega_i$. Without loss of generality, assume that $d_1^* \ge d_2^* \ge \ldots \ge d_r^*$. This implies that
$$0 \le \omega_1 \le \omega_2 \le \ldots \le \omega_r \qquad (10)$$
The above weights will therefore penalize higher singular values less heavily, hence reducing bias.
As shown by [11, 12], a global optimal solution to Eq. (9) under the order constraint (10) is given by a weighted soft-thresholded SVD
$$\hat{Z} = S_{\sigma^2\omega}(X) \qquad (11)$$
where $S_{\omega}(X) = \widetilde{U}\widetilde{D}_{\omega}\widetilde{V}^T$ with $\widetilde{D}_{\omega} = \mathrm{diag}((\widetilde{d}_1 - \omega_1)_+, \ldots, (\widetilde{d}_r - \omega_r)_+)$. $X = \widetilde{U}\widetilde{D}\widetilde{V}^T$ is the SVD of $X$ with $\widetilde{D} = \mathrm{diag}(\widetilde{d}_1, \ldots, \widetilde{d}_r)$ and $\widetilde{d}_1 \ge \widetilde{d}_2 \ge \ldots \ge \widetilde{d}_r$.
Algorithm 1 summarizes the Hierarchical Adaptive Soft Thresholded (HAST) procedure to converge to a local minimum of the objective (8). This algorithm admits the soft-thresholded SVD operator as a special case when $a = \lambda b$ and $b = \beta \to \infty$. Figure 2 shows the thresholding rule applied to the singular values of $X$ for the HAST algorithm ($a = b = \beta$, with $\beta = 2$ and $\beta = 0.1$) and the soft-thresholded SVD for $\lambda = 1$. The bias term, which is equal to $\lambda$ for the nuclear norm, goes to zero as $\widetilde{d}_i$ goes to infinity.
Setting of the hyperparameters and initialization of the EM algorithm In the experiments, we have set $b = \beta$ and $a = \lambda\beta$, where $\lambda$ and $\beta$ are tuning parameters that can be chosen by cross-validation. As $\lambda$ is the mean value of the regularization parameter $\gamma_i$, we initialize the algorithm with the soft-thresholded SVD with parameter $\sigma^2\lambda$. It is possible to estimate the hyperparameter $\sigma^2$ within the EM algorithm as described in the supplementary material. In our experiments, we have found the results not very sensitive to the setting of $\sigma^2$, and set it to 1.
Algorithm 1 Hierarchical Adaptive Soft Thresholded (HAST) algorithm for low rank estimation of complete matrices
Initialize $Z^{(0)}$. At iteration $t \ge 1$
- For $i = 1, \ldots, r$, compute the weights $\omega_i^{(t)} = \frac{a+1}{b + d_i^{(t-1)}}$
- Set $Z^{(t)} = S_{\sigma^2\omega^{(t)}}(X)$
- If $\frac{\mathcal{L}(Z^{(t-1)}) - \mathcal{L}(Z^{(t)})}{\mathcal{L}(Z^{(t-1)})} < \varepsilon$ then return $\hat{Z} = Z^{(t)}$
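Since $X$ and every iterate $Z^{(t)}$ share singular vectors in the complete case, one EM pass of Algorithm 1 only touches the singular values. A minimal NumPy sketch of ours (not the authors' code; following the text, the initialization is the soft-thresholded SVD with threshold $\sigma^2 a/b$):

```python
import numpy as np

def hast(X, a, b, sigma2=1.0, max_iter=200, tol=1e-9):
    """Hierarchical Adaptive Soft Thresholded (HAST) estimation for a
    complete matrix: EM on the singular values of X (sketch of Algorithm 1)."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    d_cur = np.maximum(d - sigma2 * a / b, 0.0)   # soft-thresholded SVD init
    prev = np.inf
    for _ in range(max_iter):
        w = (a + 1.0) / (b + d_cur)               # E step: adaptive weights
        d_cur = np.maximum(d - sigma2 * w, 0.0)   # M step: weighted threshold
        obj = (np.sum((d - d_cur) ** 2) / (2.0 * sigma2)
               + (a + 1.0) * np.sum(np.log(b + d_cur)))
        if prev - obj < tol * max(abs(prev), 1.0):
            break
        prev = obj
    return U @ np.diag(d_cur) @ Vt

# Noisy low rank example: every estimated singular value is shrunk from X's.
rng = np.random.default_rng(0)
X = (rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))
     + 0.1 * rng.standard_normal((30, 20)))
Z_hat = hast(X, a=10.0, b=1.0)
```

Note that because $d_1 \ge \ldots \ge d_r$ and the weights are increasing in $i$, the thresholded values stay in decreasing order, so the order constraint (10) is respected automatically.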
3 Matrix completion
We now show how the EM algorithm derived in the previous section can be adapted to the case where only a subset of the entries is observed. It relies on imputing missing values, similarly to the EM algorithm for SVD with missing data, see e.g. [10, 13].
Consider that only a subset $\Omega \subset \{1, \ldots, m\} \times \{1, \ldots, n\}$ of the entries of the matrix $X$ is observed. Similarly to [7], we introduce the operator $P_\Omega(X)$ and its complementary $P_\Omega^\perp(X)$
$$P_\Omega(X)(i,j) = \begin{cases} X_{ij} & \text{if } (i,j) \in \Omega \\ 0 & \text{otherwise} \end{cases} \quad\text{and}\quad P_\Omega^\perp(X)(i,j) = \begin{cases} 0 & \text{if } (i,j) \in \Omega \\ X_{ij} & \text{otherwise} \end{cases}$$
Assuming the same prior (6), the MAP estimate is obtained by minimizing
$$\mathcal{L}(Z) = \frac{1}{2\sigma^2}\left\|P_\Omega(X) - P_\Omega(Z)\right\|_F^2 + (a+1)\sum_{i=1}^{r}\log(b + d_i) \qquad (12)$$
We will now derive the EM algorithm, using $\gamma$ and $P_\Omega^\perp(X)$ as latent variables. The E step is given by (details in the supplementary material)
$$Q(Z, Z^*) = \mathbb{E}\left[\log p(P_\Omega(X), P_\Omega^\perp(X), Z, \gamma) \mid Z^*, P_\Omega(X)\right] = C_4 - \frac{1}{2\sigma^2}\left\|P_\Omega(X) + P_\Omega^\perp(Z^*) - Z\right\|_F^2 - \sum_{i=1}^{r}\mathbb{E}[\gamma_i \mid d_i^*]\,d_i$$
Hence at each iteration of the algorithm, one needs to minimize
$$\frac{1}{2\sigma^2}\|X^* - Z\|_F^2 + \sum_{i=1}^{r}\omega_i d_i \qquad (13)$$
where $\omega_i = \mathbb{E}[\gamma_i \mid d_i^*]$ and $X^* = P_\Omega(X) + P_\Omega^\perp(Z^*)$ is the observed matrix, completed with entries in $Z^*$. We now have a complete matrix problem. As mentioned in the previous section, the minimum of (13) is obtained with a weighted soft-thresholded SVD. Algorithm 2 provides the resulting iterative procedure for matrix completion with the hierarchical adaptive spectral penalty.
Algorithm 2 Hierarchical Adaptive Soft Impute (HASI) algorithm for matrix completion
Initialize $Z^{(0)}$. At iteration $t \ge 1$
- For $i = 1, \ldots, r$, compute the weights $\omega_i^{(t)} = \frac{a+1}{b + d_i^{(t-1)}}$
- Set $Z^{(t)} = S_{\sigma^2\omega^{(t)}}\left(P_\Omega(X) + P_\Omega^\perp(Z^{(t-1)})\right)$
- If $\frac{\mathcal{L}(Z^{(t-1)}) - \mathcal{L}(Z^{(t)})}{\mathcal{L}(Z^{(t-1)})} < \varepsilon$ then return $\hat{Z} = Z^{(t)}$
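A dense-SVD sketch of Algorithm 2 (our own illustrative code, not the authors'; `mask` is the boolean observation pattern, and the first pass with constant weights $a/b$ is a plain Soft-Impute step):

```python
import numpy as np

def weighted_soft_svd(M, w, sigma2):
    # S_{sigma^2 w}(M): SVD with per-singular-value soft thresholds
    U, d, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(d - sigma2 * w, 0.0)) @ Vt

def hasi(X, mask, a, b, sigma2=1.0, n_iter=50):
    """Hierarchical Adaptive Soft Impute (sketch of Algorithm 2).
    mask[i, j] is True iff X[i, j] is observed."""
    Z = np.zeros_like(X)
    w = np.full(min(X.shape), a / b)      # Soft-Impute initialization weights
    for _ in range(n_iter):
        Xc = np.where(mask, X, Z)         # P_Omega(X) + P_Omega-perp(Z)
        Z = weighted_soft_svd(Xc, w, sigma2)
        d = np.linalg.svd(Z, compute_uv=False)
        w = (a + 1.0) / (b + d)           # adapt the weights for the next pass
    return Z

# Recover a noiseless rank-2 matrix from roughly 70% of its entries.
rng = np.random.default_rng(1)
Z_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = rng.random((30, 20)) < 0.7
Z_hat = hasi(Z_true, mask, a=1.0, b=1.0)
```

A fixed iteration count and dense `np.linalg.svd` keep the sketch short; the paper's stopping rule and a truncated SVD would replace both in a real implementation.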
Related algorithms Algorithm 2 admits the Soft-Impute algorithm of [4] as a special case when $a = \lambda b$ and $b = \beta \to \infty$. In this case, one obtains at each iteration $\omega_i^{(t)} = \lambda$ for all $i$. On the contrary, when $\beta < \infty$, our algorithm adaptively updates the weights so as to penalize higher singular values less heavily. Some authors have proposed related one-step adaptive spectral penalty algorithms [14, 11, 12]. However, in these procedures, the weights have to be chosen by some procedure, whereas in our case they are iteratively adapted.
Initialization The objective function (12) is in general not convex and different initializations may lead to different modes. As in the complete case, we suggest to set $a = \lambda b$ and $b = \beta$, and to initialize the algorithm with the Soft-Impute algorithm with regularization parameter $\sigma^2\lambda$.
Scaling Similarly to the Soft-Impute algorithm, the computationally demanding part of Algorithm 2 is $S_{\sigma^2\omega^{(t)}}\left(P_\Omega(X) + P_\Omega^\perp(Z^{(t-1)})\right)$, which requires calculating a low rank truncated SVD. For large matrices, one can resort to the PROPACK algorithm [15, 16] as described in [4]. This sophisticated linear algebra algorithm can efficiently compute the truncated SVD of the "sparse + low rank" matrix
$$P_\Omega(X) + P_\Omega^\perp(Z^{(t-1)}) = \underbrace{P_\Omega(X) - P_\Omega(Z^{(t-1)})}_{\text{sparse}} + \underbrace{Z^{(t-1)}}_{\text{low rank}}$$
and can thus handle large matrices, as shown in [4].
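The practical consequence of this decomposition is that a matrix-vector product with $P_\Omega(X) + P_\Omega^\perp(Z^{(t-1)})$ costs one sparse product plus two skinny dense products, which is all a Lanczos-type SVD solver needs. A small sketch of ours, with the sparse part stored in coordinate form:

```python
import numpy as np

def completed_matvec(rows, cols, vals, A, B, v):
    """Multiply v by S + A B^T, where S = P_Omega(X) - P_Omega(Z) is sparse
    (COO triplets rows, cols, vals) and Z = A B^T is the low rank iterate."""
    out = A @ (B.T @ v)                   # low rank part: A (B^T v)
    np.add.at(out, rows, vals * v[cols])  # sparse part: S v
    return out

# Check against the dense computation on a tiny example.
rng = np.random.default_rng(2)
A, B = rng.standard_normal((5, 2)), rng.standard_normal((4, 2))
rows = np.array([0, 2, 4])
cols = np.array([1, 3, 0])
vals = rng.standard_normal(3)             # residuals X - Z on Omega
S = np.zeros((5, 4))
S[rows, cols] = vals
v = rng.standard_normal(4)
dense = (S + A @ B.T) @ v
fast = completed_matvec(rows, cols, vals, A, B, v)
```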
4 Experiments
4.1 Simulated data
We first evaluate the performance of the proposed approach on simulated data. Our simulation setting is similar to that of [4]. We generate Gaussian matrices $A$ and $B$, respectively of size $m \times q$ and $n \times q$, $q \le r$, so that the matrix $Z = AB^T$ is of low rank $q$. A Gaussian noise of variance $\sigma^2$ is then added to the entries of $Z$ to obtain the matrix $X$. The signal to noise ratio is defined as $\mathrm{SNR} = \sqrt{\mathrm{var}(Z)/\sigma^2}$. We set $m = n = 100$ and $\sigma = 1$. We run all the algorithms with a precision $\varepsilon = 10^{-9}$ and a maximum number of $t_{\max} = 200$ iterations (initialization included for HASI). We compute err, the relative error between the estimated matrix $\hat{Z}$ and the true matrix $Z$ in the complete case, and $\mathrm{err}_{\Omega^\perp}$ in the incomplete case, where
$$\mathrm{err} = \frac{\|\hat{Z} - Z\|_F^2}{\|Z\|_F^2} \quad\text{and}\quad \mathrm{err}_{\Omega^\perp} = \frac{\|P_\Omega^\perp(\hat{Z}) - P_\Omega^\perp(Z)\|_F^2}{\|P_\Omega^\perp(Z)\|_F^2}$$
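Both error measures are simple masked Frobenius ratios; a sketch of ours, with a boolean `mask` marking the observed entries:

```python
import numpy as np

def rel_err(Z_hat, Z):
    # err: squared relative Frobenius error over the full matrix
    return np.sum((Z_hat - Z) ** 2) / np.sum(Z ** 2)

def rel_err_unobserved(Z_hat, Z, mask):
    # err_{Omega-perp}: the same ratio restricted to the unobserved entries
    diff = (Z_hat - Z)[~mask]
    return np.sum(diff ** 2) / np.sum(Z[~mask] ** 2)

# Toy check: one wrong entry out of four ones gives err = 1/4.
Z = np.ones((2, 2))
Z_hat = Z.copy()
Z_hat[0, 0] = 0.0
mask = np.array([[False, True], [True, True]])  # only (0, 0) unobserved
```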
For the HASP penalty, we set $a = \lambda\beta$ and $b = \beta$. We compute the solutions over a grid of 50 values of the regularization parameter $\lambda$ linearly spaced from $\lambda_0$ to 0, where $\lambda_0 = \|P_\Omega(X)\|_2$ is the largest singular value of the input matrix $X$, padded with zeros. This is done for three different values $\beta = 1, 10, 100$. We use the same grid to obtain the regularization path for the other algorithms.
Complete case We first consider that the observed matrix is complete, with SNR = 1 and $q = 10$. The HAST algorithm 1 is compared to the soft thresholded (ST) and hard thresholded (HT) SVD. Results are reported in Figure 4(a). The HASP penalty provides a bridge/tradeoff between the nuclear norm and the rank penalty. For example, the value $\beta = 10$ shows a minimum at the true rank $q = 10$, as HT does, but with a lower error when the rank is overestimated.
Figure 4: Test error w.r.t. the rank obtained by varying the value of the regularization parameter $\lambda$. Results on simulated data are given for (a) complete matrix with SNR=1 and true rank 10 (ST, HT and HAST with $\beta = 1, 10, 100$), (b) 50% missing with SNR=1 and true rank 5, and (c) 80% missing with SNR=10 and true rank 5 (MMMF, SoftImp, SoftImp+, HardImp and HASI with $\beta = 1, 10, 100$).
Incomplete case Then we consider the matrix completion problem, and remove uniformly a given percentage of the entries in $X$. We compare the HASI algorithm to the Soft-Impute, Soft-Impute+ and Hard-Impute algorithms of [4] and to the MMMF algorithm of [17]. Results, averaged over 50 replications, are reported in Figures 4(b-c) for a true rank $q = 5$, (b) 50% of missing data and
Figure 5: Boxplots of the test error and ranks obtained over 50 replications on simulated data, for MMMF, SoftImp, SoftImp+, HardImp and HASI: (a) test error and (b) ranks for SNR=1 with 50% missing; (c) test error and (d) ranks for SNR=10 with 80% missing.
Table 1: Results on the Jester and MovieLens datasets

Dataset     Jester 1      Jester 2      Jester 3      MovieLens 100k  MovieLens 1M
Size        24983 x 100   23500 x 100   24938 x 100   943 x 1682      6040 x 3952
Missing     27.5%         27.3%         75.3%         93.7%           95.8%

Method      NMAE   Rank   NMAE   Rank   NMAE   Rank   NMAE   Rank     NMAE   Rank
MMMF        0.161  95     0.162  96     0.183  58     0.195  50       0.169  30
Soft Imp    0.161  100    0.162  100    0.184  78     0.197  156      0.176  30
Soft Imp+   0.169  14     0.171  11     0.184  33     0.197  108      0.189  30
Hard Imp    0.158  7      0.159  6      0.181  4      0.190  7        0.175  8
HASI        0.153  100    0.153  100    0.174  30     0.187  35       0.172  27
SNR = 1 and (c) 80% of missing data and SNR = 10. Similar behavior is observed, with the HASI algorithm attaining a minimum at the true rank $q = 5$. We then conduct the same experiments, but remove 20% of the observed entries as a validation set to estimate the regularization parameters $(\lambda, \beta)$ for HASI, and $\lambda$ for the other methods. We estimate $Z$ on the whole observed matrix, and use the unobserved entries as a test set. Results on the test error and estimated ranks over 50 replications are reported in Figure 5. For 50% missing data, HASI is shown to outperform the other methods. For 80% missing data, HASI and Hard-Impute provide the best performances. In both cases, HASI is able to recover very accurately the true rank of the matrix.
4.2 Collaborative filtering examples
We now compare the different methods on several benchmark datasets. We first consider the Jester
datasets [18]. The three datasets1 contain one hundred jokes, with user ratings between -10 and +10.
We randomly select two ratings per user as a test set, and two other ratings per user as a validation
set to select the parameters a and b. The results are computed over four values a = 1000, 100, 10, 1.
We compare the results of the different methods with the Normalized Mean Absolute Error (NMAE)
    NMAE = ( (1 / card(Omega_test)) Sum_{(i,j) in Omega_test} |X_ij - Zhat_ij| ) / ( max(X) - min(X) ),

where Omega_test is the test set. The mean numbers of iterations for the Soft-Impute, Hard-Impute and HASI
(initialization included) algorithms are respectively 9, 76 and 76. Computations for the HASI algorithm take approximately 5 hours on a standard computer. The results, averaged over 10 replications
(with almost no variability observed), are presented in Table 1. The HASI algorithm provides very
good performance on the different Jester datasets, with lower NMAE than the other methods.
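For reference, the NMAE criterion used throughout this section can be computed in a few lines; the dense-array representation and names below are illustrative assumptions.

```python
import numpy as np

def nmae(X, Z_hat, test_mask, rating_min, rating_max):
    """Normalized Mean Absolute Error over the test entries.

    X: true ratings, Z_hat: completed matrix, test_mask: True on test entries.
    Normalization uses the rating range, e.g. max(X) - min(X) = 20 for Jester
    (ratings in [-10, +10]).
    """
    abs_err = np.abs(X[test_mask] - Z_hat[test_mask])
    return abs_err.mean() / (rating_max - rating_min)

# Toy example with ratings in [-10, 10]:
X = np.array([[10.0, -10.0], [0.0, 5.0]])
Z_hat = np.array([[8.0, -10.0], [2.0, 5.0]])
mask = np.array([[True, False], [True, False]])
print(nmae(X, Z_hat, mask, -10, 10))  # mean(|2|, |2|) / 20 = 0.1
```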
Figure 6 shows the NMAE as a function of the rank. Low values of a exhibit a bimodal behavior
with two modes at low rank and full rank. The high value a = 1000 is unimodal and outperforms
Soft-Impute at any particular rank.
1 Jester datasets can be downloaded from the URL http://goldberg.berkeley.edu/jester-data/
[Figure 6 plots: test-set NMAE (y-axis, roughly 0.15-0.32) vs. rank 0-100 (x-axis) for panels (a) Jester 1 and (b) Jester 3; curves: MMMF, SoftImp, SoftImp+, HardImp, and HASI with four hyperparameter values 1000, 100, 10, 1.]
Figure 6: NMAE on the test set of the (a) Jester 1 and (b) Jester 3 datasets w.r.t. the rank obtained
by varying the value of the regularization parameter. The curves obtained on the Jester 2 dataset
are hardly distinguishable from (a) and hence are not displayed due to space limitations.
Second, we conducted the same comparison on two MovieLens datasets2 , which contain ratings of
movies by users. We randomly select 20% of the entries as a test set, and the remaining entries
are split between a training set (80%) and a validation set (20%). For all the methods, we stop
the regularization path as soon as the estimated rank exceeds rmax = 100. This is a practical
consideration: given that the computations for high ranks demand more time and memory, we are
interested in restricting ourselves to low rank solutions. Table 1 presents the results, averaged over 5
replications. For the MovieLens 100k dataset, HASI provides better NMAE than the other methods
with a low rank solution. For the larger MovieLens 1M dataset, the precision, maximum number
of iterations and maximum rank are decreased to epsilon = 10^-6, tmax = 100 and rmax = 30. On
this dataset, MMMF provides the best NMAE at maximum rank. HASI provides the second best
performances with a slightly lower rank.
5 Conclusion
The proposed class of methods has been shown to provide good results compared to several alternative
low rank matrix completion methods. It provides a bridge between nuclear norm and rank regularization algorithms. Although the related optimization problem is not convex, experiments show that
initializing the algorithm with the Soft-Impute algorithm of [4] provides very satisfactory results.
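On the nuclear-norm end of that bridge, the basic building block of Soft-Impute [4] is singular-value soft-thresholding (the proximal operator of the nuclear norm); a minimal sketch of that single step, not the full iterative imputation loop:

```python
import numpy as np

def svd_soft_threshold(Z, lam):
    """Soft-threshold the singular values of Z: shrink each by lam, clip at 0.

    This is the proximal operator of the nuclear norm, the elementary step of
    Soft-Impute; rank-type penalties instead keep or kill singular values.
    """
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

Z = np.diag([3.0, 1.0, 0.2])
out = svd_soft_threshold(Z, 0.5)
# Singular values 3, 1, 0.2 become 2.5, 0.5, 0 -- the result has rank 2.
print(np.linalg.matrix_rank(out))  # 2
```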
In this paper, we have focused on a gamma mixture of exponentials, as it leads to a simple and
interpretable expression for the weights. It is, however, possible to generalize the results presented
here by using a three-parameter generalized inverse Gaussian prior distribution (see, e.g., [19]) for
the regularization parameters gamma_i, thus offering an additional degree of freedom. Derivations of
the weights are provided in the supplementary material. Additionally, it is possible to derive an
EM algorithm for low rank matrix completion for binary matrices. Details are also provided in the
supplementary material.
While we focus on point estimation in this paper, it would be of interest to investigate a fully
Bayesian approach and derive a Gibbs sampler or variational algorithm to approximate the posterior
distribution, and compare to other full Bayesian approaches to matrix completion [20, 21].
Acknowledgments
F.C. acknowledges the support of the European Commission under the Marie Curie Intra-European
Fellowship Programme. The contents reflect only the authors' views and not the views of the European Commission.
2 MovieLens datasets can be downloaded from the URL http://www.grouplens.org/node/73.
References
[1] N. Srebro, J.D.M. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems, volume 17, pages 1329-1336. MIT Press, 2005.
[2] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[3] E.J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925-936, 2010.
[4] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. The Journal of Machine Learning Research, 11:2287-2322, 2010.
[5] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
[6] E.J. Candès, M.B. Wakin, and S.P. Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, 14(5):877-905, 2008.
[7] J.F. Cai, E.J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2010.
[8] A. Lee, F. Caron, A. Doucet, and C. Holmes. A hierarchical Bayesian framework for constructing sparsity-inducing priors. arXiv preprint arXiv:1009.1914, 2010.
[9] M. Fazel, H. Hindi, and S.P. Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In Proceedings of the 2003 American Control Conference, volume 3, pages 2156-2162. IEEE, 2003.
[10] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, pages 1-38, 1977.
[11] S. Gaïffas and G. Lecué. Weighted algorithms for compressed sensing and matrix completion. arXiv preprint arXiv:1107.1638, 2011.
[12] K. Chen, H. Dong, and K.-S. Chan. Reduced rank regression via adaptive nuclear norm penalization. Biometrika, 100(4):901-920, 2013.
[13] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In NIPS, volume 20, page 720, 2003.
[14] F. Bach. Consistency of trace norm minimization. The Journal of Machine Learning Research, 9:1019-1048, 2008.
[15] R.M. Larsen. Lanczos bidiagonalization with partial reorthogonalization. Technical report, DAIMI PB-357, 1998.
[16] R.M. Larsen. PROPACK: software for large and sparse SVD calculations. Available online at http://sun.stanford.edu/rmunk/PROPACK, 2004.
[17] J. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, pages 713-719. ACM, 2005.
[18] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2):133-151, 2001.
[19] Z. Zhang, S. Wang, D. Liu, and M.I. Jordan. EP-GIG priors and applications in Bayesian sparse learning. The Journal of Machine Learning Research, 13:2031-2061, 2012.
[20] M. Seeger and G. Bouchard. Fast variational Bayesian inference for non-conjugate matrix factorization models. In Proc. of AISTATS, 2012.
[21] S. Nakajima, M. Sugiyama, S.D. Babacan, and R. Tomioka. Global analytic solution of fully-observed variational Bayesian matrix factorization. Journal of Machine Learning Research, 14:1-37, 2013.
A Gang of Bandits
Nicolò Cesa-Bianchi
Università degli Studi di Milano, Italy
[email protected]

Claudio Gentile
University of Insubria, Italy
[email protected]

Giovanni Zappella
Università degli Studi di Milano, Italy
[email protected]
Abstract
Multi-armed bandit problems formalize the exploration-exploitation trade-offs
arising in several industrially relevant applications, such as online advertisement
and, more generally, recommendation systems. In many cases, however, these
applications have a strong social component, whose integration in the bandit algorithm could lead to a dramatic performance increase. For instance, content may
be served to a group of users by taking advantage of an underlying network of
social relationships among them. In this paper, we introduce novel algorithmic
approaches to the solution of such networked bandit problems. More specifically,
we design and analyze a global recommendation strategy which allocates a bandit
algorithm to each network node (user) and allows it to "share" signals (contexts
and payoffs) with the neighboring nodes. We then derive two more scalable variants of this strategy based on different ways of clustering the graph nodes. We
experimentally compare the algorithm and its variants to state-of-the-art methods
for contextual bandits that do not use the relational information. Our experiments,
carried out on synthetic and real-world datasets, show a consistent increase in
prediction performance obtained by exploiting the network structure.
1 Introduction
The ability of a website to present personalized content recommendations is playing an increasingly
crucial role in achieving user satisfaction. Because of the appearance of new content, and due to
the ever-changing nature of content popularity, modern approaches to content recommendation are
strongly adaptive, and attempt to match users' interests as closely as possible by learning good mappings between available content and users. These mappings are based on "contexts", that is, sets of
features that, typically, are extracted from both contents and users. The need to focus on content
that raises the user's interest and, simultaneously, the need to explore new content in order to globally improve the user experience create an exploration-exploitation dilemma, which is commonly
formalized as a multi-armed bandit problem. Indeed, contextual bandits have become a reference
model for the study of adaptive techniques in recommender systems (e.g, [5, 7, 15] ). In many cases,
however, the users targeted by a recommender system form a social network. The network structure provides an important additional source of information, revealing potential affinities between
pairs of users. The exploitation of such affinities could lead to a dramatic increase in the quality of
the recommendations. This is because the knowledge gathered about the interests of a given user
may be exploited to improve the recommendation to the user?s friends. In this work, an algorithmic
approach to networked contextual bandits is proposed which is provably able to leverage user similarities represented as a graph. Our approach consists in running an instance of a contextual bandit
algorithm at each network node. These instances are allowed to interact during the learning process,
1
sharing contexts and user feedbacks. Under the modeling assumption that user similarities are properly reflected by the network structure, interactions allow to effectively speed up the learning process
that takes place at each node. This mechanism is implemented by running instances of a linear contextual bandit algorithm in a specific reproducing kernel Hilbert space (RKHS). The underlying
kernel, previously used for solving online multitask classification problems (e.g., [8]), is defined in
terms of the Laplacian matrix of the graph. The Laplacian matrix provides the information we rely
upon to share user feedbacks from one node to the others, according to the network structure. Since
the Laplacian kernel is linear, the implementation in kernel space is straightforward. Moreover, the
existing performance guarantees for the specific bandit algorithm we use can be directly lifted to
the RKHS, and expressed in terms of spectral properties of the user network. Despite its crispness,
the principled approach described above has two drawbacks hindering its practical usage. First,
running a network of linear contextual bandit algorithms with a Laplacian-based feedback sharing
mechanism may cause significant scaling problems, even on small to medium sized social networks.
Second, the social information provided by the network structure at hand need not be fully reliable
in accounting for user behavior similarities. Clearly enough, the more such algorithms hinge on
the network to improve learning rates, the more they are penalized if the network information is
noisy and/or misleading. After collecting empirical evidence on the sensitivity of networked bandit
methods to graph noise, we propose two simple modifications to our basic strategy, both aimed at
circumventing the above issues by clustering the graph nodes. The first approach reduces graph
noise simply by deleting edges between pairs of clusters. By doing that, we end up running a scaled
down independent instance of our original strategy on each cluster. The second approach treats each
cluster as a single node of a much smaller cluster network. In both cases, we are able to empirically
improve prediction performance, and simultaneously achieve dramatic savings in running times.
We run experiments on two real-world datasets: one is extracted from the social bookmarking web
service Delicious, and the other one from the music streaming platform Last.fm.
2 Related work
The benefit of using social relationships in order to improve the quality of recommendations is
a recognized fact in the literature on content recommender systems (see, e.g., [5, 13, 18] and the
survey [3]). Linear models for contextual bandits were introduced in [4]. Their application to personalized content recommendation was pioneered in [15], where the LinUCB algorithm was introduced.
An analysis of LinUCB was provided in the subsequent work [9]. To the best of our knowledge,
this is the first work that combines contextual bandits with the social graph information. However,
non-contextual stochastic bandits in social networks were studied in a recent independent work [20].
Other works, such as [2, 19], consider contextual bandits assuming metric or probabilistic dependencies on the product space of contexts and actions. A different viewpoint, where each action reveals
information about other actions' payoffs, is the one studied in [7, 16], though without the context
provided by feature vectors. A non-contextual model of bandit algorithms running on the nodes of
a graph was studied in [14]. In that work, only one node reveals its payoffs, and the statistical information acquired by this node over time is spread across the entire network following the graphical
structure. The main result shows that the information flow rate is sufficient to control regret at each
node of the network. More recently, a new model of distributed non-contextual bandit algorithms
has been presented in [21], where the number of communications among the nodes is limited, and
all the nodes in the network have the same best action.
3 Learning model
We assume the social relationships over users are encoded as a known undirected and connected
graph G = (V, E), where V = {1, . . . , n} represents a set of n users, and the edges in E represent
the social links over pairs of users. Recall that a graph G can be equivalently defined in terms
of its Laplacian matrix L = [L_{i,j}]_{i,j=1}^{n}, where L_{i,i} is the degree of node i (i.e., the number of
incoming/outgoing edges) and, for i != j, L_{i,j} equals -1 if (i, j) in E, and 0 otherwise. Learning
proceeds in a sequential fashion: At each time step t = 1, 2, . . . , the learner receives a user index
i_t in V together with a set of context vectors C_{i_t} = {x_{t,1}, x_{t,2}, . . . , x_{t,c_t}} contained in R^d. The learner then
selects some xbar_t = x_{t,k_t} in C_{i_t} to recommend to user i_t and observes some payoff a_t in [-1, 1], a function
of i_t and xbar_t. No assumptions whatsoever are made on the way index i_t and set C_{i_t} are
generated, in that they can arbitrarily depend on past choices made by the algorithm.1
A standard modeling assumption for bandit problems with contextual information (one that is also
adopted here) is to assume that rewards are generated by noisy versions of unknown linear functions of the context vectors. That is, we assume each node i in V hosts an unknown parameter vector u_i in R^d, and that the reward value a_i(x) associated with node i and context vector
x in R^d is given by the random variable

    a_i(x) = u_i^T x + eps_i(x),

where eps_i(x) is a conditionally zero-mean and bounded-variance noise term. Specifically, denoting by E_t[.] the conditional
expectation E[. | (i_1, C_{i_1}, a_1), . . . , (i_{t-1}, C_{i_{t-1}}, a_{t-1})], we take the general approach of [1], and assume
that for any fixed i in V and x in R^d, the variable eps_i(x) is conditionally sub-Gaussian with variance parameter sigma^2 > 0, namely,

    E_t[exp(gamma eps_i(x))] <= exp(sigma^2 gamma^2 / 2)   for all gamma in R and all x, i.

This implies E_t[eps_i(x)] = 0 and V_t[eps_i(x)] <= sigma^2, where V_t[.] is a shorthand for the conditional
variance V[. | (i_1, C_{i_1}, a_1), . . . , (i_{t-1}, C_{i_{t-1}}, a_{t-1})]. So we clearly have E_t[a_i(x)] = u_i^T x and
V_t[a_i(x)] <= sigma^2. Therefore, u_i^T x is the expected reward observed at node i for context vector x.
In the special case when the noise eps_i(x) is a bounded random variable taking values in the range
[-1, 1], this implies V_t[a_i(x)] <= 4.
The regret r_t of the learner at time t is the amount by which the average reward of the best choice in
hindsight at node i_t exceeds the average reward of the algorithm's choice, i.e.,

    r_t = max_{x in C_{i_t}} u_{i_t}^T x  -  u_{i_t}^T xbar_t .
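As a toy numerical illustration of this definition (all numbers made up), the instantaneous regret compares the best expected reward over the context set with that of the chosen context:

```python
import numpy as np

def instantaneous_regret(u, contexts, chosen):
    """r_t = max over x in C of u^T x, minus u^T x_chosen, for one round at one node."""
    rewards = contexts @ u                 # expected reward of each context
    return rewards.max() - rewards[chosen]

u = np.array([1.0, 0.0])                   # the node's unknown parameter vector
C = np.array([[1.0, 0.0],                  # expected reward 1.0 (best in hindsight)
              [0.0, 1.0]])                 # expected reward 0.0
print(instantaneous_regret(u, C, chosen=1))  # 1.0
print(instantaneous_regret(u, C, chosen=0))  # 0.0
```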
The goal of the algorithm is to bound with high probability (over the noise variables eps_{i_t}) the cumulative regret Sum_{t=1}^T r_t for the given sequence of nodes i_1, . . . , i_T and observed context vector sets
C_{i_1}, . . . , C_{i_T}. We model the similarity among users in V by making the assumption that nearby
users hold similar underlying vectors u_i, so that reward signals received at a given node i_t at time
t are also, to some extent, informative for learning the behavior of other users j connected to i_t within
G. We make this more precise by taking the perspective of known multitask learning settings (e.g.,
[8]), and assume that

    Sum_{(i,j) in E} ||u_i - u_j||^2        (1)

is small compared to Sum_{i in V} ||u_i||^2, where ||.|| denotes the standard Euclidean norm of vectors. That
is, although (1) may possibly contain a quadratic number of terms, the closeness of vectors lying
on adjacent nodes in G makes this sum comparatively smaller than the actual length of such vectors. This will be our working assumption throughout, one that motivates the Laplacian-regularized
algorithm presented in Section 4, and is empirically tested in Section 5.
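The quantity in (1) is exactly the Laplacian quadratic form of the stacked user vectors, which makes it cheap to evaluate; a small self-contained check of this identity on a toy path graph (all numbers illustrative):

```python
import numpy as np

def smoothness(U, edges):
    """Sum of ||u_i - u_j||^2 over the edges; U has one row per node."""
    return sum(np.sum((U[i] - U[j]) ** 2) for i, j in edges)

def laplacian(n, edges):
    """Graph Laplacian L = D - W of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

edges = [(0, 1), (1, 2)]                            # path graph on 3 nodes
U = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2]])  # nearby users, nearby vectors
L = laplacian(3, edges)
# Identity: Sum over edges of ||u_i - u_j||^2 equals trace(U^T L U).
assert np.isclose(smoothness(U, edges), np.trace(U.T @ L @ U))
print(smoothness(U, edges))  # 0.04: small compared to Sum ||u_i||^2 = 2.5
```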
4 Algorithm and regret analysis
Our bandit algorithm maintains at time t an estimate wi,t for vector ui . Vectors wi,t are updated
based on the reward signals as in a standard linear bandit algorithm (e.g., [9]) operating on the
context vectors contained in Cit . Every node i of G hosts a linear bandit algorithm like the one
described in Figure 1. The algorithm in Figure 1 maintains at time t a prototype vector wt which
is the result of a standard linear least-squares approximation to the unknown parameter vector u
associated with the node under consideration. In particular, w_{t-1} is obtained by multiplying the
inverse correlation matrix M_{t-1}^{-1} and the bias vector b_{t-1}. At each time t = 1, 2, . . . , the algorithm
receives context vectors x_{t,1}, . . . , x_{t,c_t} contained in C_t, and must select one among them. The
linear bandit algorithm selects xbar_t = x_{t,k_t} as the vector in C_t that maximizes an upper-confidence-corrected estimation of the expected reward achieved over context vectors x_{t,k}. The estimation
is based on the current w_{t-1}, while the upper confidence level CB_t is suggested by the standard
analysis of linear bandit algorithms (see, e.g., [1, 9, 10]). Once the actual reward a_t associated with
xbar_t is observed, the algorithm uses xbar_t for updating M_{t-1} to M_t via a rank-one adjustment, and b_{t-1}
to b_t via an additive update whose learning rate is precisely a_t. This algorithm can be seen as a
version of LinUCB [9], a linear bandit algorithm derived from LinRel [4].
1 Formally, i_t and C_{i_t} can be arbitrary (measurable) functions of past rewards a_1, . . . , a_{t-1}, indices i_1, . . . , i_{t-1}, and sets C_{i_1}, . . . , C_{i_{t-1}}.
Init: b_0 = 0 in R^d and M_0 = I in R^{d x d};
for t = 1, 2, . . . , T do
    Set w_{t-1} = M_{t-1}^{-1} b_{t-1};
    Get context C_t = {x_{t,1}, . . . , x_{t,c_t}};
    Set k_t = argmax_{k=1,...,c_t} [ w_{t-1}^T x_{t,k} + CB_t(x_{t,k}) ],
        where CB_t(x_{t,k}) = sqrt( x_{t,k}^T M_{t-1}^{-1} x_{t,k} ) ( sigma sqrt( ln(|M_t| / delta) ) + ||u|| );
    Set xbar_t = x_{t,k_t};
    Observe reward a_t in [-1, 1];
    Update M_t = M_{t-1} + xbar_t xbar_t^T  and  b_t = b_{t-1} + a_t xbar_t.
end for

Figure 1: Pseudocode of the linear bandit algorithm sitting at each node i of the given graph.

Init: b_0 = 0 in R^{dn} and M_0 = I in R^{dn x dn};
for t = 1, 2, . . . , T do
    Set w_{t-1} = M_{t-1}^{-1} b_{t-1};
    Get i_t in V and context C_{i_t} = {x_{t,1}, . . . , x_{t,c_t}};
    Construct vectors phi_{i_t}(x_{t,1}), . . . , phi_{i_t}(x_{t,c_t}), and modified vectors
        phitilde_{t,k} = A_kron^{-1/2} phi_{i_t}(x_{t,k}),  k = 1, . . . , c_t;
    Set k_t = argmax_{k=1,...,c_t} [ w_{t-1}^T phitilde_{t,k} + CB_t(phitilde_{t,k}) ],
        where CB_t(phitilde_{t,k}) = sqrt( phitilde_{t,k}^T M_{t-1}^{-1} phitilde_{t,k} ) ( sigma sqrt( ln(|M_t| / delta) ) + ||Utilde|| );
    Observe reward a_t in [-1, 1] at node i_t;
    Update M_t = M_{t-1} + phitilde_{t,k_t} phitilde_{t,k_t}^T  and  b_t = b_{t-1} + a_t phitilde_{t,k_t}.
end for

Figure 2: Pseudocode of the GOB.Lin algorithm.
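A minimal per-node implementation of the Figure 1 loop is sketched below; as a simplification, CB_t is replaced by the expression alpha * sqrt(x^T M_{t-1}^{-1} x log(t+1)) that Section 5 uses in the experiments, so this is an illustrative LinUCB-style variant rather than a verbatim transcription.

```python
import numpy as np

class LinBanditNode:
    """One node's linear bandit: least-squares estimate plus UCB selection."""
    def __init__(self, d, alpha=1.0):
        self.M = np.eye(d)          # correlation matrix M_t
        self.b = np.zeros(d)        # bias vector b_t
        self.alpha = alpha

    def select(self, contexts, t):
        Minv = np.linalg.inv(self.M)
        w = Minv @ self.b                         # w_{t-1} = M^{-1} b
        # Per-context confidence width: alpha * sqrt(x^T M^{-1} x * log(t+1)).
        cb = self.alpha * np.sqrt(
            np.einsum('kd,de,ke->k', contexts, Minv, contexts) * np.log(t + 1))
        return int(np.argmax(contexts @ w + cb))  # k_t

    def update(self, x, a):
        self.M += np.outer(x, x)                  # rank-one adjustment
        self.b += a * x                           # learning rate = payoff a_t

# Deterministic toy run: arm e_1 always pays 1, arm e_2 pays 0.
node = LinBanditNode(d=2, alpha=0.1)
C = np.array([[1.0, 0.0], [0.0, 1.0]])
for t in range(1, 50):
    k = node.select(C, t)
    node.update(C[k], 1.0 if k == 0 else 0.0)
print(node.select(C, 50))  # 0: the estimated reward of arm e_1 dominates
```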
We now turn to describing our GOB.Lin (Gang Of Bandits, Linear version) algorithm. GOB.Lin lets
the algorithm in Figure 1 operate on each node i of G (we should then add subscript i throughout,
replacing w_t by w_{i,t}, M_t by M_{i,t}, and so forth). The updates M_{i,t-1} -> M_{i,t} and b_{i,t-1} -> b_{i,t}
are performed at node i through vector xbar_t both when i = i_t (i.e., when node i is the one which the
context vectors in C_{i_t} refer to) and, to a lesser extent, when i != i_t (i.e., when node i is not the one
which the vectors in C_{i_t} refer to). This is because, as we said, the payoff a_t received for node i_t is
somehow informative also for all other nodes i != i_t. In other words, because we are assuming the
underlying parameter vectors ui are close to each other, we should let the corresponding prototype
vectors wi,t undergo similar updates, so as to also keep the wi,t close to each other over time.
With this in mind, we now describe GOB.Lin in more detail. It is convenient to introduce first some
extra matrix notation. Let A = I_n + L, where L is the Laplacian matrix associated with G, and
I_n is the n x n identity matrix. Set A_kron = A kron I_d, the Kronecker product2 of matrices A and I_d.
Moreover, the "compound" descriptor for the pairing (i, x) is given by the long (and sparse) vector
phi_i(x) in R^{dn} defined as

    phi_i(x)^T = ( 0, . . . , 0, x^T, 0, . . . , 0 ),

where x^T is preceded by (i-1)d zeros and followed by (n-i)d zeros.
With the above notation handy, a compact description of GOB.Lin is presented in Figure 2, where
we deliberately tried to mimic the pseudocode of Figure 1. Notice that in Figure 2 we overloaded
the notation for the confidence bound CBt , which is now defined in terms of the Laplacian L of G.
In particular, ||u|| in Figure 1 is replaced in Figure 2 by ||Utilde||, where Utilde = A_kron^{1/2} U and we define
U = (u_1^T, u_2^T, . . . , u_n^T)^T in R^{dn}. Clearly enough, the potentially unknown quantities ||u|| and ||Utilde||
in the two expressions for CB_t can be replaced by suitable upper bounds.
We now explain how the modified long vectors phitilde_{t,k} = A_kron^{-1/2} phi_{i_t}(x_{t,k}) act in the update of matrix
M_t and vector b_t. First, observe that if A_kron were the identity matrix then, according to how the
long vectors phi_{i_t}(x_{t,k}) are defined, M_t would be a block-diagonal matrix M_t = diag(D_1, . . . , D_n),
whose i-th block D_i is the d x d matrix D_i = I_d + Sum_{t : i_t = i} xbar_t xbar_t^T. Similarly, b_t would be the
dn-long vector whose i-th d-dimensional block contains Sum_{t : i_t = i} a_t xbar_t. This would be equivalent
to running n independent linear bandit algorithms (Figure 1), one per node of G. Now, because
A_kron is not the identity, but contains graph G represented through its Laplacian matrix, the selected
vector x_{t,k_t} in C_{i_t} for node i_t gets spread via A_kron^{-1/2} from the i_t-th block over all other blocks,
thereby making the contextual information contained in x_{t,k_t} available to update the internal status
2 The Kronecker product between two matrices M in R^{m x n} and N in R^{q x r} is the block matrix M kron N of dimension mq x nr whose block on row i and column j is the q x r matrix M_{i,j} N.
of all other nodes. Yet, the only reward signal observed at time t is the one available at node i_t. A
theoretical analysis of GOB.Lin relying on the learning model of Section 3 is sketched in Section 4.1.
GOB.Lin's running time is mainly affected by the inversion of the dn x dn matrix M_t, which can be
performed in time of order (dn)^2 per round by using well-known formulas for incremental matrix
inversions. The same quadratic dependence holds for memory requirements. In our experiments, we
observed that projecting the contexts on the principal components improved performance. Hence,
the quadratic dependence on the context vector dimension d is not really hurting us in practice. On
the other hand, the quadratic dependence on the number of nodes n may be a significant limitation
to GOB.Lin's practical deployment. In the next section, we show that simple graph compression
schemes (like node clustering) can conveniently be applied to both reduce edge noise and bring the
algorithm to reasonable scaling behaviors.
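The "spreading" effect of A_kron^{-1/2} can be seen on a two-node toy example: the compound vector phi_1(x) is supported on the first d-dimensional block only, while A_kron^{-1/2} phi_1(x) is nonzero on both blocks (an illustrative sketch; the inverse matrix square root is computed by eigendecomposition):

```python
import numpy as np

def inv_sqrt(A):
    """A^{-1/2} via eigendecomposition (A symmetric positive definite)."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** -0.5) @ V.T

n, d = 2, 3
L = np.array([[1.0, -1.0], [-1.0, 1.0]])   # Laplacian of a single edge
A = np.eye(n) + L
A_kron = np.kron(A, np.eye(d))             # A_kron = A (x) I_d

x = np.array([1.0, 2.0, 3.0])
phi = np.zeros(n * d)
phi[0:d] = x                               # phi_1(x): x in block 1, zeros elsewhere

phi_tilde = inv_sqrt(A_kron) @ phi
block1, block2 = phi_tilde[:d], phi_tilde[d:]
print(np.linalg.norm(block2))              # > 0: node 2 "sees" node 1's context
assert np.linalg.norm(block2) > 1e-6
```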
4.1 Regret Analysis
We now provide a regret analysis for GOB.Lin that relies on the high probability analysis contained
in [1] (Theorem 2 therein). The analysis can be seen as a combination of the multitask kernel
contained in, e.g., [8, 17, 12] and a version of the linear bandit algorithm described and analyzed
in [1].
Theorem 1. Let the GOB.Lin algorithm of Figure 2 be run on graph G = (V, E), V = {1, . . . , n},
hosting at each node i in V vector u_i in R^d. Define

    L(u_1, . . . , u_n) = Sum_{i in V} ||u_i||^2 + Sum_{(i,j) in E} ||u_i - u_j||^2 .

Let also the sequence of context vectors x_{t,k} be such that ||x_{t,k}|| <= B, for all k = 1, . . . , c_t and
t = 1, . . . , T. Then the cumulative regret satisfies

    Sum_{t=1}^T r_t <= 2 sqrt( T ( 2 sigma^2 ln(|M_T| / delta) + 2 L(u_1, . . . , u_n) ) (1 + B^2) ln |M_T| )

with probability at least 1 - delta.
Compared to running n independent bandit algorithms (which corresponds to $A_\otimes$ being the identity matrix), the bound in the above theorem has an extra term $\sum_{(i,j)\in E} \|u_i - u_j\|^2$, which we assume small according to our working assumption. However, the bound also has a significantly smaller log determinant $\ln|M_T|$ on the resulting matrix $M_T$, due to the construction of $\widetilde{\phi}_{t,k}$ via $A_\otimes^{-1/2}$. In particular, when the graph is very dense, the log determinant in GOB.Lin is a factor n smaller than the corresponding term for the n independent bandit case (see, e.g., [8], Section 4.2 therein). To make things clear, consider two extreme situations. When G has no edges, then $\mathrm{TR}(M_T) = \mathrm{TR}(I) + T = nd + T$, hence $\ln|M_T| \le dn\ln(1 + T/(dn))$. On the other hand, when G is the complete graph, then $\mathrm{TR}(M_T) = \mathrm{TR}(I) + 2T/(n+1) = nd + 2T/(n+1)$, hence $\ln|M_T| \le dn\ln(1 + 2T/(dn(n+1)))$. The exact behavior of $\ln|M_T|$ (one that would ensure a significant advantage in practice) depends on the actual interplay between the data and the graph, so the above linear dependence on dn is really a coarse upper bound.
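The two extreme-case inequalities follow from the AM-GM bound $\ln|M| \le p\,\ln(\mathrm{tr}(M)/p)$ for a $p \times p$ positive definite matrix, applied with $p = dn$. A quick numerical sanity check with toy eigenvalues (the particular split of the trace is ours):

```python
import math

def logdet_trace_bound(eigvals):
    """For a positive definite matrix, ln|M| <= p * ln(tr(M)/p) by AM-GM.

    Returns (log-determinant, trace-based bound) from the eigenvalues.
    """
    p = len(eigvals)
    logdet = sum(math.log(v) for v in eigvals)
    bound = p * math.log(sum(eigvals) / p)
    return logdet, bound

# Identity-plus-updates scenario: dimension p = dn, trace = p + T.
p, T = 6, 30
eigvals = [11.0, 11.0, 11.0, 1.0, 1.0, 1.0]  # any split of the extra trace T
logdet, bound = logdet_trace_bound(eigvals)
assert sum(eigvals) == p + T
assert logdet <= bound
# The bound equals p * ln(1 + T/p), the quantity appearing in the text.
assert abs(bound - p * math.log(1 + T / p)) < 1e-12
```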
5 Experiments
In this section, we present an empirical comparison of GOB.Lin (and its variants) to linear bandit algorithms which do not exploit the relational information provided by the graph. We run our experiments by approximating the $\mathrm{CB}_t$ function in Figure 1 with the simplified expression $\alpha\sqrt{x_{t,k}^\top M_{t-1}^{-1} x_{t,k}\,\log(t+1)}$, and the $\mathrm{CB}_t$ function in Figure 2 with the corresponding expression in which $x_{t,k}$ is replaced by $\widetilde{\phi}_{t,k}$. In both cases, the factor $\alpha$ is used as a tunable parameter. Our preliminary experiments show that this approximation does not affect the predictive performance of the algorithms, while it speeds up computation significantly. We tested our algorithm and its competitors on a synthetic dataset and two freely available real-world datasets extracted from the social bookmarking web service Delicious and from the music streaming service Last.fm. These datasets are structured as follows.
4Cliques. This is an artificial dataset whose graph contains four cliques of 25 nodes each, to which we added graph noise. This noise consists in picking a random pair of nodes and deleting or creating an edge between them. More precisely, we created an $n \times n$ symmetric noise matrix of random numbers in [0, 1], and we selected a threshold value such that the expected number of matrix elements above this value is exactly some chosen noise rate parameter. Then we set to 1 all the entries whose content is above the threshold, and to zero the remaining ones. Finally, we XORed the noise matrix with the graph adjacency matrix, thus obtaining a noisy version of the original graph.
Last.fm. This is a social network containing 1,892 nodes and 12,717 edges. There are 17,632 items (artists), described by 11,946 tags. The dataset contains information about the listened artists, and we used this information to create the payoffs: if a user listened to an artist at least once, the payoff is 1, otherwise the payoff is 0.
Delicious. This is a network with 1,861 nodes and 7,668 edges. There are 69,226 items (URLs) described by 53,388 tags. The payoffs were created using the information about the bookmarked URLs for each user: the payoff is 1 if the user bookmarked the URL, otherwise the payoff is 0.
Last.fm and Delicious were created by the Information Retrieval group at Universidad Autonoma de Madrid for the HetRec 2011 Workshop [6] with the goal of investigating the usage of heterogeneous information in recommendation systems.3 These two networks are structurally different: on Delicious, payoffs depend on users more strongly than on Last.fm. In other words, there are more popular artists, whom everybody listens to, than popular websites, which everybody bookmarks (see Figure 3). This makes a huge difference in practice, and the choice of these two datasets allows us to make a more realistic comparison of recommendation techniques. Since we did not remove any items from these datasets (neither the most frequent nor the least frequent), these differences do influence the behavior of all algorithms (see below).
Some statistics about Last.fm and Delicious are reported in Table 1. In Figure 3 we plotted the distribution of the number of preferences per item, in order to make the differences explained in the previous paragraphs clearly visible.4
Table 1: Main statistics for Last.fm and Delicious. ITEMS counts the overall number of items, across all users, from which $C_t$ is selected. NONZERO PAYOFFS is the number of pairs (user, item) for which we have a nonzero payoff. TAGS is the number of distinct tags that were used to describe the items.

                     Last.fm    Delicious
    NODES            1892       1867
    EDGES            12717      7668
    AVG. DEGREE      13.443     8.21
    ITEMS            17632      69226
    NONZERO PAYOFFS  92834      104799
    TAGS             11946      53388

[Figure 3 panel: NUM PREFERENCES vs. ITEM ID, log-log, for DELICIOUS and LASTFM.]
Figure 3: Plot of the number of preferences per item (users who bookmarked the URL or listened to an artist). Both axes have logarithmic scale.
We preprocessed the datasets by breaking down the tags into smaller tags made up of single words. In fact, many users tend to create tags like "webdesign tutorial css". This tag has been split into three smaller tags corresponding to the three words therein. More generally, we split all compound tags containing underscores, hyphens and apexes. This makes sense because users create tags independently, and we may have both "rock and roll" and "rock n roll". Because of this splitting operation, the number of unique tags decreased from 11,946 to 6,036 on Last.fm and from 53,388 to 9,949 on Delicious. On Delicious, we also removed all tags occurring less than ten times.5
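One way to implement the tag-splitting step (we interpret "apexes" as apostrophes; the exact rule used by the authors may differ):

```python
import re

def split_compound_tags(tags):
    """Split tags on whitespace, underscores, hyphens and apostrophes,
    returning the set of unique lowercased single-word tags."""
    words = set()
    for tag in tags:
        for w in re.split(r"[\s_\-']+", tag.lower()):
            if w:
                words.add(w)
    return words

tags = ["webdesign tutorial css", "rock_and_roll", "rock-n-roll"]
assert split_compound_tags(tags) == {
    "webdesign", "tutorial", "css", "rock", "and", "roll", "n"
}
```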
3 Datasets and their full descriptions are available at www.grouplens.org/node/462.
4 In the context of recommender systems, these two datasets may be seen as representatives of two "markets" whose products have significantly different market shares (the well-known dichotomy of hit vs. niche products). Niche product markets give rise to power laws in user preference statistics (as in the blue plot of Figure 3).
5 We did not repeat the same operation on Last.fm because this dataset was already extremely sparse.
[Table 2 panels: cumulative reward vs. time (10,000 rounds) for GOB.Lin, LinUCB-IND and LinUCB-SIN on 4Cliques, one panel per combination of graph noise in {0%, 8.3%, 20.8%, 41.7%} and payoff noise in {0, 0.25, 0.5}.]
Table 2: Normalized cumulated reward for different levels of graph noise (expected fraction of perturbed edges) and payoff noise (largest absolute value of the noise term $\epsilon$) on the 4Cliques dataset. Graph noise increases from top to bottom, payoff noise increases from left to right. GOB.Lin is clearly more robust to payoff noise than its competitors. On the other hand, GOB.Lin is sensitive to high levels of graph noise. In the last row, graph noise is 41.7%, i.e., the number of perturbed edges is 500 out of the 1200 edges of the original graph.
The algorithms we tested do not use any prior information about which user provided a specific tag. We used all tags associated with a single item to create a TF-IDF context vector that uniquely represents that item, independent of which user the item is proposed to. In both datasets, we only retained the first 25 principal components of the context vectors, so that $x_{t,k} \in \mathbb{R}^{25}$ for all t and k. We generated random context sets $C_{i_t}$ of size 25 for Last.fm and Delicious, and of size 10 for 4Cliques. In practical scenarios, these numbers would vary over time, but we kept them fixed so as to simplify the experimental setting. In 4Cliques we assigned the same unit-norm random vector $u_i$ to every node in the same clique i of the original graph (before adding graph noise). Payoffs were then generated according to the following stochastic model: $a_i(x) = u_i^\top x + \epsilon$, where $\epsilon$ (the payoff noise) is uniformly distributed in a bounded interval centered around zero. For Delicious and Last.fm, we created a set of context vectors for every round t as follows: we first picked $i_t$ uniformly at random in $\{1, \dots, n\}$. Then, we generated context vectors $x_{t,1}, \dots, x_{t,25}$ in $C_{i_t}$ by picking 24 vectors at random from the dataset and one among those vectors with nonzero payoff for user $i_t$. This is necessary in order to avoid a meaningless comparison: with high probability, a purely random selection would result in payoffs equal to zero for all the context vectors in $C_{i_t}$.
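The 4Cliques payoff model can be sketched as follows (the function name and the explicit noise interval $[-\epsilon_{\max}, \epsilon_{\max}]$ are our own notational choices):

```python
import random

def make_payoff(u, x, noise_level, rng):
    """a_i(x) = u_i^T x + eps, with eps uniform in [-noise_level, noise_level].

    u: clique parameter vector; x: context vector of the same length.
    """
    eps = rng.uniform(-noise_level, noise_level)
    return sum(a * b for a, b in zip(u, x)) + eps

rng = random.Random(0)
u = [0.6, 0.8]   # a unit-norm clique vector
x = [1.0, 0.0]
p = make_payoff(u, x, 0.25, rng)
# The payoff stays within noise_level of the clean inner product u.x = 0.6.
assert abs(p - 0.6) <= 0.25
```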
In our experimental comparison, we tested GOB.Lin and its variants against two baselines: a baseline LinUCB-IND that runs an independent instance of the algorithm in Figure 1 at each node (this is equivalent to running GOB.Lin in Figure 2 with $A_\otimes = I_{dn}$) and a baseline LinUCB-SIN, which runs a single instance of the algorithm in Figure 1 shared by all the nodes. LinUCB-IND turns out to be
[Figure 4 panels: cumulative reward vs. time on Last.fm (left) and Delicious (right) for LinUCB-SIN, LinUCB-IND, GOB.Lin, GOB.Lin.MACRO and GOB.Lin.BLOCK.]
Figure 4: Cumulative reward for all the bandit algorithms introduced in this section.
a reasonable comparator when, as in the Delicious dataset, there are many moderately popular items. On the other hand, LinUCB-SIN is a competitive baseline when, as in the Last.fm dataset, there are few very popular items. The two scalable variants of GOB.Lin which we empirically analyzed are based on node clustering,6 and are defined as follows.
GOB.Lin.MACRO: GOB.Lin is run on a weighted graph whose nodes are the clusters of the original graph. The edges are weighted by the number of inter-cluster edges in the original graph. When all nodes are clustered together, GOB.Lin.MACRO recovers the baseline LinUCB-SIN as a special case. In order to strike a good trade-off between the speed of the algorithms and the loss of information resulting from clustering, we tested three different cluster sizes: 50, 100, and 200. Our plots refer to the best performing choice.
GOB.Lin.BLOCK: GOB.Lin is run on a disconnected graph whose connected components are the clusters. This makes $A_\otimes$ and $M_t$ (Figure 2) block-diagonal matrices. When each node is clustered individually, GOB.Lin.BLOCK recovers the baseline LinUCB-IND as a special case. Similarly to GOB.Lin.MACRO, in order to trade off running time and cluster sizes, we tested three different cluster sizes (5, 10, and 20), and report only on the best performing choice.
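The edge handling behind the two compression schemes can be sketched as follows (a toy representation of ours, not the Graclus clustering pipeline used to obtain the clusters): MACRO aggregates inter-cluster edges into weighted cluster-level edges, while BLOCK keeps only intra-cluster edges, which is what makes the resulting matrices block-diagonal.

```python
from collections import Counter

def compress_graph(edges, cluster_of):
    """Split original edges into the structures used by the two variants.

    Returns (macro, block): macro maps an unordered cluster pair to the
    number of original edges crossing it; block lists the edges internal
    to a cluster.
    """
    macro = Counter()
    block = []
    for i, j in edges:
        ci, cj = cluster_of[i], cluster_of[j]
        if ci == cj:
            block.append((i, j))
        else:
            macro[(min(ci, cj), max(ci, cj))] += 1
    return dict(macro), block

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cluster_of = {0: 0, 1: 0, 2: 1, 3: 1}
macro, block = compress_graph(edges, cluster_of)
assert macro == {(0, 1): 2}       # two edges cross the cluster boundary
assert block == [(0, 1), (2, 3)]  # intra-cluster edges
```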
As the running time of GOB.Lin scales quadratically with the number of nodes, the computational savings provided by the clustering are also quadratic. Moreover, as we will see in the experiments, the clustering acts as a regularizer, limiting the influence of noise. In all cases, the parameter $\alpha$ in Figures 1 and 2 was selected based on the scale of the instance vectors $\bar{x}_t$ and $\widetilde{\phi}_{t,k_t}$, respectively, and tuned across appropriate ranges. Table 2 and Figure 4 show the cumulative reward for each algorithm, as compared ("normalized") to that of the random predictor, that is $\sum_t (a_t - \bar{a}_t)$, where $a_t$ is the payoff obtained by the algorithm and $\bar{a}_t$ is the payoff obtained by the random predictor, i.e., the average payoff over the context vectors available at time t. Table 2 (synthetic datasets) shows that GOB.Lin and LinUCB-SIN are more robust to payoff noise than LinUCB-IND. Clearly, LinUCB-SIN is also unaffected by graph noise, but it never outperforms GOB.Lin. When the payoff noise is low and the graph noise grows, GOB.Lin's performance tends to degrade. Figure 4 reports the results on the two real-world datasets. Notice that GOB.Lin and its variants always outperform the baselines (not relying on graphical information) on both datasets. As expected, GOB.Lin.MACRO works best on Last.fm, where many users gave positive payoffs to the same few items. Hence, macro nodes apparently help GOB.Lin.MACRO to perform better than its corresponding baseline LinUCB-SIN. In fact, GOB.Lin.MACRO also outperforms GOB.Lin, thus showing the regularization effect of using macro nodes. On Delicious, where we have many moderately popular items, GOB.Lin.BLOCK tends to perform best, GOB.Lin being the runner-up. As expected, LinUCB-IND works better than LinUCB-SIN, since the former is clearly more prone to personalize item recommendations than the latter. Future work will consider experiments with different methods for sharing contextual and feedback information among a set of users, such as the feature hashing technique of [22].
Acknowledgments
NCB and GZ gratefully acknowledge partial support by MIUR (project ARS TechnoMedia, PRIN
2010-2011, contract no. 2010N5K7EB-003). We thank the Laboratory for Web Algorithmics at
Dept. of Computer Science of University of Milan.
6 We used the freely available Graclus (see, e.g., [11]) graph clustering tool with normalized cut, zero local search steps, and no spectral clustering options.
References
[1] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 2011.
[2] K. Amin, M. Kearns, and U. Syed. Graphical models for bandit problems. Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, 2011.
[3] D. Asanov. Algorithms and methods in recommender systems. Berlin Institute of Technology, Berlin, Germany, 2011.
[4] P. Auer. Using confidence bounds for exploration-exploitation trade-offs. Journal of Machine Learning Research, 3:397-422, 2002.
[5] T. Bogers. Movie recommendation using random walks over the contextual graph. In CARS'10: Proceedings of the 2nd Workshop on Context-Aware Recommender Systems, 2010.
[6] I. Cantador, P. Brusilovsky, and T. Kuflik. 2nd Workshop on Information Heterogeneity and Fusion in Recommender Systems (HetRec 2011). In Proceedings of the 5th ACM Conference on Recommender Systems, RecSys 2011. ACM, 2011.
[7] S. Caron, B. Kveton, M. Lelarge, and S. Bhagat. Leveraging side observations in stochastic bandits. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, pages 142-151, 2012.
[8] G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. Linear algorithms for online multitask classification. Journal of Machine Learning Research, 11:2597-2630, 2010.
[9] W. Chu, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandits with linear payoff functions. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 208-214, 2011.
[10] K. Crammer and C. Gentile. Multiclass classification with bandit feedback using adaptive regularization. Machine Learning, 90(3):347-383, 2013.
[11] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: a multilevel approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1944-1957, 2007.
[12] T. Evgeniou and M. Pontil. Regularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, pages 109-117, New York, NY, USA, 2004. ACM.
[13] I. Guy, N. Zwerdling, D. Carmel, I. Ronen, E. Uziel, S. Yogev, and S. Ofek-Koifman. Personalized recommendation of social software items based on social relations. In Proceedings of the Third ACM Conference on Recommender Systems, pages 53-60. ACM, 2009.
[14] S. Kar, H. V. Poor, and S. Cui. Bandit problems in networks: Asymptotically efficient distributed allocation rules. In Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, pages 1771-1778. IEEE, 2011.
[15] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661-670. ACM, 2010.
[16] S. Mannor and O. Shamir. From bandits to experts: On the value of side-observations. In Advances in Neural Information Processing Systems, pages 684-692, 2011.
[17] C. A. Micchelli and M. Pontil. Kernels for multi-task learning. In Advances in Neural Information Processing Systems, pages 921-928, 2004.
[18] A. Said, E. W. De Luca, and S. Albayrak. How social relationships affect user similarities. In Proceedings of the 2010 Workshop on Social Recommender Systems, pages 1-4, 2010.
[19] A. Slivkins. Contextual bandits with similarity information. Journal of Machine Learning Research - Proceedings Track, 19:679-702, 2011.
[20] B. Swapna, A. Eryilmaz, and N. B. Shroff. Multi-armed bandits in the presence of side observations in social networks. In Proceedings of the 52nd IEEE Conference on Decision and Control (CDC), 2013.
[21] B. Szörényi, R. Busa-Fekete, I. Hegedus, R. Ormándi, M. Jelasity, and B. Kégl. Gossip-based distributed stochastic bandit algorithms. Proceedings of the 30th International Conference on Machine Learning, 2013.
[22] K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th International Conference on Machine Learning, pages 1113-1120. Omnipress, 2009.
Contrastive Learning Using Spectral Methods
James Zou, Harvard University
Daniel Hsu, Columbia University
David Parkes, Harvard University
Ryan Adams, Harvard University
Abstract
In many natural settings, the analysis goal is not to characterize a single data set in
isolation, but rather to understand the difference between one set of observations
and another. For example, given a background corpus of news articles together
with writings of a particular author, one may want a topic model that explains
word patterns and themes specific to the author. Another example comes from
genomics, in which biological signals may be collected from different regions
of a genome, and one wants a model that captures the differential statistics observed in these regions. This paper formalizes this notion of contrastive learning
for mixture models, and develops spectral algorithms for inferring mixture components specific to a foreground data set when contrasted with a background data
set. The method builds on recent moment-based estimators and tensor decompositions for latent variable models, and has the intuitive feature of using background
data statistics to appropriately modify moments estimated from foreground data.
A key advantage of the method is that the background data need only be coarsely
modeled, which is important when the background is too complex, noisy, or not
of interest. The method is demonstrated on applications in contrastive topic modeling and genomic sequence analysis.
1 Introduction
Generative latent variable models offer an intuitive way to explain data in terms of hidden structure,
and are a cornerstone of exploratory data analysis. Popular examples of generative latent variable
models include Latent Dirichlet Allocation (LDA) [1] and Hidden Markov Models (HMMs) [2],
although the modularity of the generative approach has led to a wide range of variations. One of
the challenges of using latent variable models for exploratory data analysis, however, is developing
models and learning techniques that accurately reflect the intuitions of the modeler. In particular,
when analyzing multiple specialized data sets, it is often the case that the most salient statistical
structure?that most easily found by fitting latent variable models?is shared across all the data and
does not reflect interesting specific local structure. For example, if we apply a topic model to a set
of English-language scientific papers on computer science, we might hope to identify different co-occurring words within subfields such as theory, systems, graphics, etc. Instead, such a model will
simply learn about English syntactic structure and invent topics that reflect uninteresting statistical
correlations between stop words [3]. Intuitively, what we would like from such an exploratory
analysis is to answer the question: What makes these data different from other sets of data in the
same broad category?
To answer this question, we develop a new set of techniques that we refer to as contrastive learning
methods. These methods differentiate between foreground and background data and seek to learn
a latent variable model that captures statistical relationships that appear in the foreground but do
not appear in the background. Revisiting the previous scientific topics example, contrastive learning
could treat computer science papers as a foreground corpus and (say) English-language news articles
as a background corpus. As both corpora share the same broad syntactic structure, a contrastive
foreground topic model would be more likely to discover semantic relationships between words that
are specific to computer science. This intuition has broad applicability in other models and domains
[Figure 1 panels: (a) PCA; (b) Linear contrastive analysis; foreground and background point clouds with projection lines.]
Figure 1: These figures show foreground and background data from Gaussian distributions. The foreground data has greater variance in its minor direction, but the same variance in its major direction. The means are slightly different. Different projection lines are shown for different methods, to illustrate the difference between (a) the purely unsupervised variance-preserving linear projection of principal component analysis, and (b) the contrastive foreground projection that captures variance that is not present in the background.
as well. For example, in genomics one might use a contrastive hidden Markov model to amplify the
signal of a particular class of sequences, relative to the broader genome.
Note that the objective of contrastive learning is not to discriminate between foreground and background data, but to learn an interpretable generative model that captures the differential statistics
between the two data sets. To clarify this distinction, compare principal component analysis with contrastive analysis. Principal component analysis finds the linear projection
that maximally preserves variance without regard to foreground versus background. A contrastive
approach, however, would try to find a linear projection that maximally preserves the foreground
variance that is not explained by the background. Figure 1 illustrates the differences between these.
Novelty detection [4] is also related, but it does not directly learn a generative model of the novelty.
Our contributions. We formalize the concept of contrastive learning for mixture models and present new spectral contrast algorithms. We prove that by appropriately "subtracting" background moments from the foreground moments, our algorithms recover the model for the foreground-specific data. To achieve this, we extend recent developments in learning latent variable models
with moment matching and tensor decompositions. We demonstrate the effectiveness, robustness,
and scalability of our method in contrastive topic modeling and contrastive genomics.
2 Contrastive learning in mixture models
Many data sets are naturally described by a mixture model. The general mixture model has the form

$$p(\{x_n\}_{n=1}^N;\ \{(\theta_j, w_j)\}_{j=1}^J) = \prod_{n=1}^N \sum_{j=1}^J w_j\, f(x_n \mid \theta_j) \qquad (1)$$
where {θ_j} are the parameters of the mixture components, {w_j} are the mixture weights, and f(· | θ_j) is the density of the j-th mixture component. Each θ_j is a vector in some parameter space, and a common estimation task is to infer the component parameters {(θ_j, w_j)} given the observed data {x_n}.
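To make (1) concrete, here is a minimal sketch (our own illustration, not code from the paper) that evaluates the log-likelihood of a toy one-dimensional Gaussian mixture; the component means, weights, and data points are made up for the example:

```python
import numpy as np

def mixture_log_likelihood(x, params, weights, f):
    """log of (1): log prod_n sum_j w_j f(x_n | theta_j)."""
    # densities[n, j] = f(x_n | theta_j)
    densities = np.array([[f(xn, theta) for theta in params] for xn in x])
    return float(np.log(densities @ weights).sum())

# Toy instance: theta_j is the mean of a unit-variance 1-D Gaussian.
def gauss(x, mean):
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

x = np.array([0.1, -0.2, 3.9])
ll = mixture_log_likelihood(x, params=[0.0, 4.0], weights=np.array([0.7, 0.3]), f=gauss)
print(round(ll, 3))
```

The same skeleton applies to any component family f, which is all that the contrastive setup below assumes.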
In many applications, we have two sets of observations {x_n^f} and {x_n^b}, which we call the foreground data and the background data, respectively. The foreground and background are generated by two possibly overlapping sets of mixture components. More concretely, let {θ_j}_{j∈A}, {θ_j}_{j∈B}, and {θ_j}_{j∈C} be three disjoint sets of parameters, with A, B, and C being three disjoint index sets. The foreground {x_n^f} is generated from the mixture model {(θ_j, w_j^f)}_{j∈A∪B}, and the background {x_n^b} is generated from {(θ_j, w_j^b)}_{j∈B∪C}.
The goal of contrastive learning is to infer the parameters {(θ_j, w_j^f)}_{j∈A}, which we call the foreground-specific model. The direct approach would be to infer {(θ_j, w_j^f)}_{j∈A∪B} just from {x_n^f}, and in parallel infer {(θ_j, w_j^b)}_{j∈B∪C} just from {x_n^b}, and then pick out the components specific to the foreground. However, this involves explicitly learning a model for the background data, which is undesirable if the background is too complex, if {x_n^b} is too noisy, or if we do not want to devote computational power to learning the background. In many applications, we are only interested in
learning a generative model for the difference between the foreground and background, because that
contrast is the interesting signal.
In this paper, we introduce an efficient and general approach to learn the foreground-specific model without having to learn an accurate model of the background. Our approach is based on a method-of-moments that uses higher-order tensor decompositions for estimation [5]; we generalize the tensor decomposition technique to deal with our task of contrastive learning. Many other recent spectral learning algorithms for latent variable models are also based on the method-of-moments (e.g., [6-13]), but their parameter estimation cannot account for the asymmetry between foreground and background.
We demonstrate spectral contrastive learning through two concrete applications: contrastive topic
modeling and contrastive genomics. In contrastive topic modeling we are given a foreground corpus of documents and a background corpus. We want to learn a fully generative topic model that
explains the foreground-specific documents (the contrast). We show that even when the background
is extremely sparse, too noisy to learn a good background topic model, our spectral contrast algorithm still recovers foreground-specific topics. In contrastive genomics, sequence data is modeled by HMMs. The foreground data is generated by a mixture of two HMMs; one is foreground-specific, and the other captures some background process. The background data is generated by this second HMM. Contrastive learning amplifies the foreground-specific signal, which has meaningful biological interpretations.
3 Contrastive topic modeling
To illustrate contrastive analysis and introduce tensor methods, we consider a simple topic model where each document is generated by exactly one topic. In LDA [1], this corresponds to setting the Dirichlet prior hyper-parameter α → 0. The techniques here can be extended to the general α > 0 case using the moment transformations given in [10]. The generative topic model for a document is as follows.
• A word x is represented by an indicator vector e_x ∈ R^D which is 1 in its x-th entry and 0 elsewhere. D is the size of the vocabulary. A document is a bag-of-words and is represented by a vector c ∈ R^D with non-negative integer word counts.
• A topic is first chosen according to the distribution on [K] := {1, 2, ..., K} specified by the probability vector w ∈ R^K.
• Given that the chosen topic is t, the words in the document are drawn independently from the distribution specified by the probability vector μ_t ∈ R^D.
Following previous work (e.g., [10]) we assume that μ_1, μ_2, ..., μ_K are linearly independent probability vectors in R^D. Let the foreground corpus of documents be generated by the mixture of |A| + |B| topics {(μ_t, w_t^f)}_{t∈A} ∪ {(μ_t, w_t^f)}_{t∈B}, and the background topics be generated by the mixture of |B| + |C| topics {(μ_t, w_t^b)}_{t∈B} ∪ {(μ_t, w_t^b)}_{t∈C} (here, we assume (A, B, C) is a nontrivial partition of [K], and that w_t^f, w_t^b > 0 for all t). Our goal is to learn {(μ_t, w_t^f)}_{t∈A}.
3.1 Moment decompositions
We use the symbol ⊗ to denote the tensor product of vectors, so a ⊗ b is the matrix whose (i, j)-th entry is a_i b_j, and a ⊗ b ⊗ c is the third-order tensor whose (i, j, k)-th entry is a_i b_j c_k. Given a third-order tensor T ∈ R^{d_1 × d_2 × d_3} and vectors a ∈ R^{d_1}, b ∈ R^{d_2}, and c ∈ R^{d_3}, we let T(I, b, c) ∈ R^{d_1} denote the vector whose i-th entry is ∑_{j,k} T_{i,j,k} b_j c_k, and T(a, b, c) denote the scalar ∑_{i,j,k} T_{i,j,k} a_i b_j c_k.
We review the moments of the word observations in this model (see, e.g., [10]). Let x_1, x_2, x_3 ∈ [D] be three random words sampled from a random document generated by the foreground model (the discussion here also applies to the background model). The second-order (cross) moment matrix M_2^f := E[e_{x_1} ⊗ e_{x_2}] is the matrix whose (i, j)-th entry is the probability that x_1 = i and x_2 = j. Similarly, the third-order (cross) moment tensor M_3^f := E[e_{x_1} ⊗ e_{x_2} ⊗ e_{x_3}] is the
Algorithm 1 Contrastive Topic Model estimator
input: Foreground and background documents {c_n^f}, {c_n^b}; parameter γ > 0; number of topics K.
output: Foreground-specific topics Topics^f.
1: Let M̂_2^f and M̂_3^f (M̂_2^b and M̂_3^b) be the foreground (background) second- and third-order moment estimates based on {c_n^f} ({c_n^b}), and let M̂_2 := M̂_2^f − γ M̂_2^b and M̂_3 := M̂_3^f − γ M̂_3^b.
2: Run Algorithm 2 with input M̂_2, M̂_3, K, and N to obtain {(â_t, λ̂_t) : t ∈ [K]}.
3: Topics^f := {(â_t / ‖â_t‖_1, 1/λ̂_t²) : t ∈ [K], λ̂_t > 0}.
third-order tensor whose (i, j, k)-th entry is the probability that x_1 = i, x_2 = j, x_3 = k. Observe that for any t ∈ A ∪ B, the i-th entry of E[e_{x_1} | topic = t] is precisely the probability that x_1 = i given topic = t, which is the i-th entry of μ_t. Therefore, E[e_{x_1} | topic = t] = μ_t. Since the words are independent given the topic, the (i, j)-th entry of E[e_{x_1} ⊗ e_{x_2} | topic = t] is the product of the i-th and j-th entries of μ_t, i.e., E[e_{x_1} ⊗ e_{x_2} | topic = t] = μ_t ⊗ μ_t. Similarly, E[e_{x_1} ⊗ e_{x_2} ⊗ e_{x_3} | topic = t] = μ_t ⊗ μ_t ⊗ μ_t. Averaging over the choices of t ∈ A ∪ B with the weights w_t^f implies that the second- and third-order moments are

$$M_2^f = E[e_{x_1} \otimes e_{x_2}] = \sum_{t \in A \cup B} w_t^f\, \mu_t \otimes \mu_t \quad \text{and} \quad M_3^f = E[e_{x_1} \otimes e_{x_2} \otimes e_{x_3}] = \sum_{t \in A \cup B} w_t^f\, \mu_t \otimes \mu_t \otimes \mu_t.$$
(We discuss how to efficiently use documents of length > 3 in Section 5.2.) We can similarly decompose the background moments M_2^b and M_3^b in terms of tensor products of {μ_t}_{t∈B∪C}. These equations imply the following proposition (proved in Appendix A).
Proposition 1. Let M_2^f, M_3^f and M_2^b, M_3^b be the second- and third-order moments from the foreground and background data, respectively. Define

$$M_2 := M_2^f - \gamma M_2^b \quad \text{and} \quad M_3 := M_3^f - \gamma M_3^b.$$

If γ ≥ max_{j∈B} w_j^f / w_j^b, then

$$M_2 = \sum_{t=1}^K \sigma_t\, \mu_t \otimes \mu_t \quad \text{and} \quad M_3 = \sum_{t=1}^K \sigma_t\, \mu_t \otimes \mu_t \otimes \mu_t \qquad (2)$$

where σ_t = w_t^f > 0 for t ∈ A (foreground-specific topic), and σ_t ≤ 0 for t ∈ B ∪ C.
Using tensor decompositions. Proposition 1 implies that the modified moments M_2 and M_3 have low-rank decompositions in which the components t with positive multipliers σ_t correspond to the foreground-specific topics {(μ_t, w_t^f)}_{t∈A}. A main technical innovation of this paper is a generalized tensor power method, described in Section 5, which takes as input (estimates of) second- and third-order tensors of the form in (2), and approximately recovers the individual components. We argue that under some natural conditions, the generalized power method is robust to large perturbations in M_2^b and M_3^b, which suggests that foreground-specific topics can be learned even when it is not possible to accurately model the background. We use the generalized tensor power method to estimate the foreground-specific topics in our Contrastive Topic Model estimator (Algorithm 1). Proposition 1 gives the lower bound on γ; we empirically find that γ ≈ max_{j∈B} w_j^f / w_j^b gives good results. When γ is too large, the convergence of the tensor power method worsens. Where possible in practice, we recommend using prior belief about foreground and background compositions to estimate max_{j∈B} w_j^f / w_j^b, and then vary γ as part of the exploratory analysis.
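As a quick numerical check of Proposition 1 (a sketch under assumed toy weights and topics; not the paper's code), we can form exact population second moments for a small vocabulary and verify that the contrastive combination puts positive weight only on the foreground-specific topic:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 3                               # vocabulary size, total number of topics
mu = rng.dirichlet(np.ones(D), size=K).T  # topic distributions as columns (D x K)
wf = np.array([0.5, 0.5, 0.0])            # foreground mixes topics A = {0}, B = {1}
wb = np.array([0.0, 0.4, 0.6])            # background mixes topics B = {1}, C = {2}

def second_moment(w):
    return (mu * w) @ mu.T                # sum_t w_t mu_t mu_t^T

gamma = 1.5                               # >= max_{t in B} wf_t / wb_t = 1.25
M2 = second_moment(wf) - gamma * second_moment(wb)

sigma = wf - gamma * wb                   # the multipliers of Proposition 1
M2_check = (mu * sigma) @ mu.T            # reconstruct M2 from the sigma decomposition
print(np.round(sigma, 2), np.allclose(M2, M2_check))
```

Only σ_0 is positive, so a decomposition of M2 with a sign test on the multipliers isolates the foreground-specific topic, exactly as Algorithm 1 does.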
3.2 Experiments with contrastive topic modeling
We test our contrastive topic models on the RCV1 dataset, which consists of ≈ 800,000 news articles. Each document comes with multiple category labels (e.g., economics, entertainment) and
region labels (e.g., USA, Europe, China). The corpus spans a large set of complex and overlapping
categories, making this a good dataset to validate our contrastive learning algorithm.
In one set of experiments, we take documents associated with one region as the foreground corpus,
and documents associated with a general theme, such as economics, as the background. The goal
of the contrast is to find the region-specific topics which are not relevant to the background theme.
The top half of Table 1 shows the example where we take USA-related documents as the foreground
[Table 1 appears here. Its four panels list the top words of representative topics.
USA foreground: lbs, bond, million, usda, municipal, week, hog, index, sale, gilt, year, export, barrow, trade, total.
China foreground: china, share, billion, shanghai, ton, market, reserve, yuan, percent, bank, firm, import, million, balance, alumin, trade, exchange, week, rate, wheat.
USA foreground, Economics background: play, research, result, basketball, game, round, science, hockey, run, golf, cancer, nation, hit, open, cell, cap, la, win, hole, study, ny, association, inn.
China foreground, Economics background: yuan, china, panda, earthquake, interest, year, east, office, bond, bank, typhoon, court, million, foreign, storm, xinhua, richt, smuggle, cost, invest, flood, zoo, scale, ship, moody, stock, price, close, trade, index.]
Table 1: Top words from representative topics: foreground alone (left); foreground/background contrast (right). Each column corresponds to one topic.
[Figure 2 appears here. Panel (a): bar chart of the relative classification (AUC) score as a function of γ, with bars for background sample sizes N = 10000, 1000, 100, and 50. Panel (b): emission probability matrices of the HMM states.]
Figure 2: (a) Relative AUC as function of γ (Sec. 3.2). (b) Emission probabilities of HMM states (Sec. 4).
and Economics as the background theme. We first set the contrast parameter γ = 0 in Algorithm 1; this learns the topics from the foreground dataset alone. Due to the composition of the corpus, the foreground topics for USA are dominated by topics relevant to stock markets and trade; representative topics and keywords are shown on the left of Table 1. Then we increase γ to observe the effects of contrast. In the right half of Table 1, we show the heavily weighted topics and keywords for when γ = 2. The topics involving market and trade are also present in the background corpus, so their weights are reduced through contrast. Topics which are very USA-specific and distinct from economics rise to the top: basketball, baseball, scientific research, etc. A similar experiment with China-related articles as foreground, and the same economics-themed background, is shown in the bottom of Table 1.
These examples illustrate that Algorithm 1 learns topics which are unique to the foreground. To
quantify this effect, we devised a specificity test. Using the RCV1 labels, we partition the foreground
USA documents into two disjoint groups: documents with any economics-related labels (group 0)
and the rest (group 1). Because Algorithm 1 learns the full probabilistic model, we use the inferred
topic parameters to compute the marginal likelihood for each foreground document given the model.
We then use the likelihood value to classify each foreground document as belonging to group 0 or 1.
The performance of the classifier is summarized by the AUC score.
We first set γ = 0 and compute the AUC score, which corresponds to how well a topic model learned from only the foreground can distinguish between the two groups. We use this score as the baseline and normalize so it is equal to 1. The hope is that by using the background data, the contrastive model can better identify the documents that are generated by foreground-specific topics. Indeed, as γ increases, the AUC score improves significantly over the benchmark (dark blue bars in Figure 2(a)). For γ > 2 we find that the foreground-specific topics do not change qualitatively.
A major advantage of our approach is that we do not need to learn a very accurate background model to learn the contrast. To validate this, we downsample the background corpus to 1000, 100, and 50 documents. This simulates settings where the background is very sparsely sampled, so it is not possible to learn a background model very accurately. Qualitatively, we observe that even with only 50 randomly sampled background documents, Algorithm 1 still recovers topics specific to
USA and not related to Economics. At γ = 2, it learns sports and NASA/space as the most prominent foreground-specific topics. This is supported by the specificity test, where contrastive topic models with sparse background better identify foreground-specific documents relative to the γ = 0 (foreground data-only) model.
4 Contrastive Hidden Markov Models
Hidden Markov Models (HMMs) are commonly used to model sequence and time series data. For example, a biologist may collect several sequences from an experiment; some of the sequences are generated by a biological process of interest (modeled by an HMM), while others are generated by a different "background" process, e.g., noise or a process that is not of primary interest.
Consider a simple generative process where foreground data are generated by a mixture of two HMMs, (1 − ε) HMM_A + ε HMM_B, and background data are generated by HMM_B. The goal is to learn the parameters of HMM_A, which models the biological process of interest. As we did for topic models, we can estimate a contrastive HMM by taking appropriate combinations of observable moments. Let x_1^f, x_2^f, x_3^f, ... be a random emission sequence in R^D generated by the foreground model (1 − ε) HMM_A + ε HMM_B, and x_1^b, x_2^b, x_3^b, ... be the sequence generated by the background model HMM_B. Following [5], we estimate the following cross moment matrices and tensors: M_{1,2}^f := E[x_1^f ⊗ x_2^f], M_{1,3}^f := E[x_1^f ⊗ x_3^f], M_{2,3}^f := E[x_2^f ⊗ x_3^f], and M_{1,2,3}^f := E[x_1^f ⊗ x_2^f ⊗ x_3^f], as well as the corresponding moments for the background model. This is similar to the estimation of the word pair and triple frequencies in LDA. Here we only use the first three observations in the sequence, but it is also justifiable to average over all consecutive observation triplets [14]. Then, analogous to Proposition 1, we define the contrastive moments as M_{u,v} := M_{u,v}^f − γ M_{u,v}^b (for {u, v} ⊂ {1, 2, 3}) and M_{1,2,3} := M_{1,2,3}^f − γ M_{1,2,3}^b. In the Appendix (Sec. D and Algorithm 3), we describe how to recover the foreground-specific model HMM_A. The key technical difference from contrastive LDA lies in the asymmetric generalization of the Tensor Power Method of Algorithm 2.
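A sketch of how these cross moments could be estimated empirically (the function names and the random toy data below are our assumptions, not code from the paper):

```python
import numpy as np

def cross_moments(X):
    """Empirical cross moments from the first three observations of each
    sequence; X has shape (num_sequences, 3, D) with rows x_1, x_2, x_3."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    n = X.shape[0]
    M12 = x1.T @ x2 / n                                # E[x_1 (x) x_2]
    M13 = x1.T @ x3 / n
    M23 = x2.T @ x3 / n
    M123 = np.einsum('ni,nj,nk->ijk', x1, x2, x3) / n  # E[x_1 (x) x_2 (x) x_3]
    return M12, M13, M23, M123

def contrastive_moments(Xf, Xb, gamma):
    fg, bg = cross_moments(Xf), cross_moments(Xb)
    return tuple(f - gamma * b for f, b in zip(fg, bg))

# Toy usage with random "emission" vectors in R^4.
rng = np.random.default_rng(1)
Xf = rng.normal(size=(500, 3, 4))
Xb = rng.normal(size=(400, 3, 4))
M12, M13, M23, M123 = contrastive_moments(Xf, Xb, gamma=1.0)
print(M12.shape, M123.shape)  # (4, 4) (4, 4, 4)
```

The contrastive matrices and tensor then play the role that M̂_2 and M̂_3 play in the topic-model case.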
Application to contrastive genomics. For many biological problems, it is important to understand
how signals in certain data are enriched relative to some related background data. For instance, we
may want to contrast foreground data composed of gene expressions (or mutation rates, protein
levels, etc) from one population against background data taken from (say) a control experiment, a
different cell type, or a different time point. The contrastive analysis methods developed here can be
a powerful exploratory tool for biology.
As a concrete illustration, we use spectral contrast to refine the characterization of chromatin states. The human genome consists of ≈ 3 billion DNA bases, and it has recently been shown that these bases can be naturally segmented into a handful of chromatin states [15, 16]. Each state describes a set of genomic properties: several states describe different active and regulatory features, while other states describe repressive features. The chromatin state varies across the genome, remaining constant for relatively short regions (say, several thousand bases). Learning the nature of the chromatin states is of great interest in genomics. The state-of-the-art approach for modeling chromatin states uses an HMM [16]. The observable data are, at every 200 bases, a binary feature vector in {0, 1}^10. Each feature indicates the presence/absence of a specific chemical feature at that site (assumed independent given the chromatin state). This corresponds to ≈ 15 million observations across the genome, which are used to learn the parameters of an HMM. Each chromatin state corresponds to a latent state, characterized by a vector of 10 emission probabilities.
We take as foreground data the observations from exons, introns and promoters, which account for
about 30% of the genome; as background data, we take observations from intergenic regions. Because exons and introns are transcribed, we expect the foreground to be a mixture of functional
chromatin states and spurious states due to noise, and expect more of the background observations
to be due to non-functional process. The contrastive HMM should capture biologically meaningful
signals in the foreground data. In Figure 2(b), we show the emission matrix for the foreground HMM
and for the contrastive HMM. We learn K = 7 latent states, corresponding to 7 chromatin states.
Algorithm 2 Generalized Tensor Power Method
input: M̂_2 ∈ R^{D×D}; M̂_3 ∈ R^{D×D×D}; target rank K; number of iterations N.
output: Estimates {(â_t, λ̂_t) : t ∈ [K]}.
1: Let M̂_2^† := Moore-Penrose pseudoinverse of the rank-K approximation to M̂_2; initialize T := M̂_3.
2: for t = 1 to K do
3:   Randomly draw u^(0) ∈ R^D from any distribution with full support in the range of M̂_2.
4:   Repeat the power iteration update N times: u^(i+1) := T(I, M̂_2^† u^(i), M̂_2^† u^(i)).
5:   â_t := u^(N) / |⟨u^(N), M̂_2^† u^(N)⟩|^{1/2}; λ̂_t := T(M̂_2^† â_t, M̂_2^† â_t, M̂_2^† â_t); T := T − |λ̂_t| â_t ⊗ â_t ⊗ â_t.
6: end for
Each row is a chemical feature of the genome. The foreground states recover the known biological chromatin states from the literature [16]. For example, state 6, with high emission for K36me3, corresponds to transcribed genes; state 5 to active enhancers; state 4 to poised enhancers. In the contrastive HMM, most of the states are the same as before. Interestingly, state 7, which is associated with feature K20me1, drops from the largest component of the foreground to a very small component of the contrast. This finding suggests that state 7 and K20me1 are less specific to gene bodies than previously thought [17], and raises more questions regarding its function, which is relatively unknown.
5 Generalized tensor power method
We now describe our general approach for tensor decomposition used in Algorithm 1. Let a_1, a_2, ..., a_K ∈ R^D be linearly independent vectors, and set A := [a_1 | a_2 | ··· | a_K]. Let M_2 := ∑_{i=1}^K σ_i a_i ⊗ a_i and M_3 := ∑_{i=1}^K σ_i a_i ⊗ a_i ⊗ a_i, where π_i = sign(σ_i) ∈ {±1}. The goal is to recover {(a_t, σ_t) : t ∈ [K]} from (estimates of) M_2 and M_3.
The following proposition shows that one of the vectors a_i (and its associated σ_i) can be obtained from M_2 and M_3 using a simple power method similar to that from [5, 18] (note that which of the K components is obtained depends on the initialization of the procedure). Note that the error ε is exponentially small in 2^t after t iterations, so the number of iterations required to converge is very small. Below, we use (·)^† to denote the Moore-Penrose pseudoinverse.
Proposition 2 (Informal statement). Consider the sequence u^(0), u^(1), ... in R^D determined by u^(i+1) := M_3(I, M_2^† u^(i), M_2^† u^(i)). Then for almost all u^(0) ∈ range(A), there exist t* ∈ [K] and c_1, c_2 > 0 (all depending on u^(0) and {(a_t, σ_t) : t ∈ [K]}) such that

$$\|\hat{u}^{(i)} - a_{t^*}\|_2 \le \epsilon \quad \text{and} \quad |\hat{\lambda} - \lambda_{t^*}| \le |\lambda_{t^*}|\,\epsilon + \max_{t \ne t^*} |\lambda_t|\,\epsilon^{3/2} \quad \text{for } \epsilon := c_1 \exp(-c_2\, 2^i),$$

where û^(i) := π_{t*} u^(i) / ‖A^† u^(i)‖, λ_t := π_t |σ_t|^{−1/2}, and λ̂ := M_3(M_2^† û^(i), M_2^† û^(i), M_2^† û^(i)).
See Appendix B for the formal statement and proof, which give explicit dependencies. We use the iterations from Proposition 2 in our main decomposition algorithm (Algorithm 2), which is a variant of the main algorithm from [5]. The main difference is that we do not require M_2 to be positive semi-definite, which is essential for our application, but requires subtle modifications. For simplicity, we assume we run Algorithm 2 with exact moments M_2 and M_3; a detailed perturbation analysis would be similar to that in [5] but is beyond the scope of this paper. Proposition 2 shows that a single component can be accurately recovered, and we use deflation to recover subsequent components (normalization and deflation are further discussed in Appendix B). As noted in [5], errors introduced in this deflation step have only a lower-order effect, and therefore it can be used reliably to recover all K components. For increased robustness, we actually repeat steps 3-5 in Algorithm 2 several times, and use the results of the trial in which |λ̂_t| takes the median value.
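To make the procedure concrete, here is a self-contained NumPy sketch of Algorithm 2 (our own illustration; it runs with exact moments and omits the repeated-trial median heuristic) applied to a synthetic mixed-sign decomposition:

```python
import numpy as np

def t_Ivv(T, v):
    # T(I, v, v): vector whose i-th entry is sum_{j,k} T[i,j,k] v[j] v[k]
    return np.einsum('ijk,j,k->i', T, v, v)

def generalized_power_method(M2, M3, K, n_iter=30, rng=None):
    """Sketch of Algorithm 2: given M2 = sum_t sigma_t a_t (x) a_t and
    M3 = sum_t sigma_t a_t (x) a_t (x) a_t with mixed-sign sigma_t,
    return estimates {(a_hat_t, lambda_hat_t)}."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Pseudoinverse of the rank-K approximation to M2 (M2 need not be PSD).
    U, s, Vt = np.linalg.svd(M2)
    B = Vt[:K].T @ np.diag(1.0 / s[:K]) @ U[:, :K].T
    T = M3.copy()
    estimates = []
    for _ in range(K):
        u = M2 @ rng.normal(size=M2.shape[0])    # init in range(M2)
        for _ in range(n_iter):
            u = t_Ivv(T, B @ u)
            u /= np.linalg.norm(u)               # positive rescaling only
        a_hat = u / abs(u @ (B @ u)) ** 0.5
        w = B @ a_hat
        lam = np.einsum('ijk,i,j,k->', T, w, w, w)
        T = T - abs(lam) * np.einsum('i,j,k->ijk', a_hat, a_hat, a_hat)
        estimates.append((a_hat, lam))
    return estimates

# Synthetic check: one positive and two negative multipliers.
rng = np.random.default_rng(3)
D, K = 6, 3
A = rng.normal(size=(D, K))                      # linearly independent w.h.p.
sigma = np.array([0.7, -0.5, -0.9])
M2 = (A * sigma) @ A.T
M3 = np.einsum('t,it,jt,kt->ijk', sigma, A, A, A)
est = generalized_power_method(M2, M3, K, rng=np.random.default_rng(7))
print(np.round(sorted(1.0 / lam**2 for _, lam in est), 4))
```

On this toy instance, the recovered values 1/λ̂_t² match the multiplier magnitudes |σ_t|, and exactly one λ̂_t is positive, flagging the single component with σ_t > 0.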
5.1 Robustness to sparse background sampling
Algorithm 1 can recover the foreground-specific {μ_t}_{t∈A} even with relatively small numbers of background data. We can illustrate this robustness under the assumption that the support of the foreground-specific topics S_0 := ∪_{t∈A} supp(μ_t) is disjoint from that of the other topics S_1 := ∪_{t∈B∪C} supp(μ_t) (similar to Brown clusters [19]). Suppose that M_2^f is estimated accurately using a large sample of foreground documents. Then because S_0 and S_1 are disjoint, Algorithm 1 (using sufficiently large γ) will accurately recover the topics {(μ_t, w_t^f) : t ∈ A} in Topics^f. The remaining concern is that sampling errors will cause Algorithm 1 to mistakenly return additional topics in Topics^f, namely the topics t ∈ B ∪ C. It thus suffices to guarantee that the signs of the λ̂_t returned by Algorithm 2 are correct. The sample size requirement for this is independent of the desired accuracy level for the foreground-specific topics; it depends only on γ and fixed properties of the background model.¹ As reported in Section 3.2, this robustness is borne out in our experiments.
5.2 Scalability
Our algorithms are scalable to large datasets when implemented to exploit sparsity and low-rank structure (each experiment we report runs on a standard laptop in a few minutes). Two important details are (i) how the moments M_2 and M_3 are represented, and (ii) how to execute the power iteration update in Algorithm 2. These issues are only briefly mentioned in [5] and without proof, so in this section, we address these issues in detail.
Efficient moment estimates for topic models. We first discuss how to represent empirical estimates of the second- and third-order moments M_2^f and M_3^f for the foreground documents (the same will hold for the background documents). Let document n ∈ [N] have length ℓ_n, and let c_n ∈ N^D be its word count vector (its i-th entry c_n(i) is the number of times word i appears in document n).
Proposition 3 (Estimator for M_2^f). Assume ℓ_n ≥ 2. For any distinct i, j ∈ [D], E[(c_n(i)² − c_n(i))/(ℓ_n(ℓ_n − 1))] = [M_2^f]_{i,i} and E[c_n(i)c_n(j)/(ℓ_n(ℓ_n − 1))] = [M_2^f]_{i,j}.
By Proposition 3, an unbiased estimator of M_2^f is M̂_2^f := N^{−1} ∑_{n=1}^N (ℓ_n(ℓ_n − 1))^{−1} (c_n ⊗ c_n − diag(c_n)). Since M̂_2^f is a sum of sparse matrices, it can be represented efficiently, and we may use sparsity-aware methods for computing its low-rank spectral decompositions. It is similarly easy to obtain such a decomposition for M̂_2^f − γ M̂_2^b, from which one can compute its pseudoinverse and represent it in factored form as P Qᵀ for some P, Q ∈ R^{D×K}.
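In code, the estimator of Proposition 3 can be written as the following dense-NumPy sketch (a practical implementation would keep the sparse-matrix representation described above):

```python
import numpy as np

def m2_hat(C):
    """Unbiased estimator of M2^f from Proposition 3.
    C: (N, D) array of word-count vectors c_n, all document lengths >= 2."""
    N = C.shape[0]
    L = C.sum(axis=1)                    # document lengths l_n
    S = C / (L * (L - 1))[:, None]       # row n scaled by 1/(l_n (l_n - 1))
    # (1/N) sum_n (c_n c_n^T - diag(c_n)) / (l_n (l_n - 1))
    return (S.T @ C - np.diag(S.sum(axis=0))) / N

# A single document with counts (2, 1, 0): length 3, so the scale is 1/6.
M2 = m2_hat(np.array([[2.0, 1.0, 0.0]]))
print(M2[0, 0], M2[0, 1])  # both equal 1/3: (2^2 - 2)/6 and (2 * 1)/6
```

The diagonal correction c_n(i)² − c_n(i) is what removes the bias that would otherwise come from pairing a word occurrence with itself.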
Proposition 4 (Estimator for M_3^f). Assume ℓ_n ≥ 3. For any distinct i, j, k ∈ [D], E[(c_n(i)³ − 3c_n(i)² + 2c_n(i))/(ℓ_n(ℓ_n − 1)(ℓ_n − 2))] = [M_3^f]_{i,i,i}, E[(c_n(i)²c_n(j) − c_n(i)c_n(j))/(ℓ_n(ℓ_n − 1)(ℓ_n − 2))] = [M_3^f]_{i,i,j}, and E[c_n(i)c_n(j)c_n(k)/(ℓ_n(ℓ_n − 1)(ℓ_n − 2))] = [M_3^f]_{i,j,k}.
By Proposition 4, an unbiased estimator of M_3^f(I, v, v) for any vector v ∈ R^D is M̂_3^f(I, v, v) := N^{−1} ∑_{n=1}^N (ℓ_n(ℓ_n − 1)(ℓ_n − 2))^{−1} (⟨c_n, v⟩² c_n − 2⟨c_n, v⟩(c_n ∘ v) − ⟨c_n, v ∘ v⟩ c_n + 2 c_n ∘ v ∘ v) (where ∘ denotes the component-wise product of vectors). Let nnz(c_n) be the number of non-zero entries in c_n; then each term in the sum takes only O(nnz(c_n)) operations to compute. So the time to compute M̂_3^f(I, v, v) is proportional to the number of non-zero entries of the term-document matrix, using just a single pass over the document corpus.
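The one-pass estimator of M̂_3^f(I, v, v) can be transcribed directly (again a dense sketch of ours; an implementation exploiting nnz(c_n) would use sparse vectors):

```python
import numpy as np

def m3_hat_Ivv(C, v):
    """Unbiased one-pass estimate of M3^f(I, v, v) via Proposition 4.
    C: (N, D) word counts with document lengths >= 3; v: vector in R^D."""
    N = C.shape[0]
    out = np.zeros(C.shape[1])
    for c in C:                          # single pass over the documents
        l = c.sum()
        cv = c @ v                       # <c_n, v>
        # <c,v>^2 c - 2 <c,v> (c o v) - <c, v o v> c + 2 c o v o v
        term = cv**2 * c - 2*cv*(c*v) - (c @ (v*v))*c + 2*c*v*v
        out += term / (l * (l - 1) * (l - 2))
    return out / N

v = np.array([0.5, -1.0, 2.0, 0.3])
C = np.array([[3.0, 2.0, 1.0, 0.0]])     # one document of length 6
print(m3_hat_Ivv(C, v))
```

Every term inside the loop touches only the nonzero coordinates of c_n (and of v where it multiplies c_n), which is where the O(nnz(c_n)) cost comes from.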
Power iteration computation. Each power iteration update in Algorithm 2 just requires evaluating M̂_3^f(I, v, v) − γ M̂_3^b(I, v, v) (one-pass linear time, as shown above) for v := M̂_2^† u^(i), and computing the deflation ∑_{τ<t} |λ̂_τ| ⟨â_τ, v⟩² â_τ (O(DK) time). Since M̂_2^† is kept in rank-K factored form, v can also be computed in O(DK) time.
6 Discussion
In this paper, we formalize a model of contrastive learning and introduce efficient spectral methods to
learn the model parameters specific to the foreground. Experiments with contrastive topic modeling
show that Algorithm 1 can learn foreground-specific topics even when the background data is noisy.
Our application in contrastive genomics illustrates the utility of this method in exploratory analysis
of biological data. The contrast identifies an intriguing change associated with K20me1, which
can be followed up with biological experiments. While we have focused in this work on a natural
contrast model for mixture models, we also discuss an alternative approach in Appendix E.
Acknowledgement This work was partially supported by DARPA Young Faculty Award DARPA
N66001-12-1-4219.
¹ For instance, if the background model consists only of one topic μ, then the analyses from [5, 10] can be adapted to bound the sample size requirement by a polynomial in 1/γ and the background topic parameters.
References
[1] David M. Blei, Andrew Ng, and Michael Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[2] Leonard E. Baum and J. A. Eagon. An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology. Bull. Amer. Math. Soc., 73(3):360-363, 1967.
[3] J. Zou and R. Adams. Priors for diversity in generative latent variable models. In Advances in Neural Information Processing Systems 25, 2012.
[4] B. Schölkopf, R. Williamson, A. Smola, J. Shawe-Taylor, and J. Platt. Support vector method for novelty detection. In Advances in Neural Information Processing Systems 12, 2000.
[5] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models, 2012. arXiv:1210.7559.
[6] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, 78(5):1460-1480, 2012.
[7] S. Siddiqi, B. Boots, and G. Gordon. Reduced rank hidden Markov models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[8] B. Balle, A. Quattoni, and X. Carreras. Local loss optimization in operator models: A new insight into spectral learning. In Twenty-Ninth International Conference on Machine Learning, 2012.
[9] S. B. Cohen, K. Stratos, M. Collins, D. P. Foster, and L. Ungar. Spectral learning of latent variable PCFGs. In Proceedings of the Association for Computational Linguistics, 2012.
[10] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y. K. Liu. A spectral algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 25, 2012.
[11] D. Hsu, S. M. Kakade, and P. Liang. Identifiability and unmixing of latent parse trees. In Advances in Neural Information Processing Systems 25, 2012.
[12] S. B. Cohen, K. Stratos, M. Collins, D. P. Foster, and L. Ungar. Experiments with spectral learning of latent-variable PCFGs. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 2013.
[13] A. T. Chaganty and P. Liang. Spectral experts for estimating mixtures of linear regressions. In Thirtieth International Conference on Machine Learning, 2013.
[14] A. Kontorovich, B. Nadler, and R. Weiss. On learning parametric-output HMMs. In Thirtieth International Conference on Machine Learning, 2013.
[15] J. Zhu et al. Genome-wide chromatin state transitions associated with developmental and environmental cues. Cell, 152(3):642-654, 2013.
[16] J. Ernst et al. Mapping and analysis of chromatin state dynamics in nine human cell types. Nature, 473(7345):43-49, 2011.
[17] D. Beck et al. Signal analysis for genome wide maps of histone modifications measured by ChIP-seq. Bioinformatics, 28(8):1062-1069, 2012.
[18] L. De Lathauwer, B. De Moor, and J. Vandewalle. On the best rank-1 and rank-(R_1, R_2, ..., R_N) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl., 21(4):1324-1342, 2000.
[19] Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. Class-based n-gram models of natural language. Comput. Linguist., 18(4):467-479, 1992.
9
Fast Determinantal Point Process Sampling with
Application to Clustering
Byungkon Kang*
Samsung Advanced Institute of Technology
Yongin, Korea
[email protected]
Abstract
Determinantal Point Process (DPP) has gained much popularity for modeling sets
of diverse items. The gist of DPP is that the probability of choosing a particular
set of items is proportional to the determinant of a positive definite matrix that defines the similarity of those items. However, computing the determinant requires
time cubic in the number of items, and is hence impractical for large sets. In this
paper, we address this problem by constructing a rapidly mixing Markov chain,
from which we can acquire a sample from the given DPP in sub-cubic time. In addition, we show that this framework can be extended to sampling from cardinality-constrained DPPs. As an application, we show how our sampling algorithm can
be used to provide a fast heuristic for determining the number of clusters, resulting
in better clustering.
There are some crucial errors in the proofs of the theorem which invalidate the
theoretical claims of this paper. Please consult the appendix for more details.
1 Introduction
Determinantal Point Process (DPP) [1] is a well-known framework for representing a probability
distribution that models diversity. Originally proposed to model repulsion among physical particles,
it has found its way into many applications in AI, such as image search [2] and text summarization [3].
In a nutshell, given an itemset S = [n] = {1, 2, . . . , n} and a symmetric positive definite (SPD) matrix L ∈ R^{n×n} that describes pairwise similarities, a (discrete) DPP is a probability distribution over 2^S proportional to the determinant of the corresponding submatrix of L. It is known that this distribution assigns more probability mass to sets of points that have larger diversity, as quantified by the entries of L.
Although the size of the support is exponential, DPPs offer tractable inference and sampling algorithms. However, sampling from a DPP requires O(n^3) time, as an eigen-decomposition of L is necessary [4]. This presents a serious computational problem when there are a large number of items, e.g., n > 10^4. A motivating problem we consider is that of kernelized clustering [5]. In this problem, we are given a large number of points plus a kernel function that serves as a dot product between the points in a feature space. The objective is to partition the points into some number of clusters, each represented by a point called a centroid, in a way that a certain cost function is minimized. Our approach is to sample the centroids via a DPP. This heuristic is based on the fact that each cluster should be as different from the others as possible, which is precisely what DPPs prefer. Naively using the cubic-complexity sampling algorithm is inefficient, since it can take up to a whole day to eigen-decompose a 10000 × 10000 matrix.
In this paper, we present a rapidly mixing Markov chain whose stationary distribution is the DPP of interest. Our Markov chain does not require the eigen-decomposition of L, and is hence time-efficient. Moreover, our algorithm works seamlessly even when new items are added to S (and L), while the previous sampling algorithm requires expensive decompositions whenever S is updated.

* This work was submitted when the author was a graduate student at KAIST.
1.1 Settings
More specifically, a DPP over the set S = [n], given a positive-definite similarity matrix L ≻ 0, is a probability distribution P_L over subsets Y ⊆ S of the following form:

    P_L(Y = Y) = det(L_Y) / Σ_{Y′⊆S} det(L_{Y′}) = det(L_Y) / det(L + I),

where I is the identity matrix of the corresponding dimension, Y is a random subset of S, and L_Y ≻ 0 is the principal minor of L whose rows and columns are restricted to the elements of Y, i.e., L_Y = [L(i, j)]_{i,j∈Y}, where L(i, j) is the (i, j) entry of L. Much of the literature introduces DPPs in terms of a marginal kernel that describes marginal probabilities of inclusion. However, since directly modeling probabilities over each subset of S¹ offers a more flexible framework, we will focus on the latter representation.
There is a variant of DPPs that places a constraint on the size of the random subsets. Given an integer k, a k-DPP is a DPP over size-k sets [2]:

    P_L^k(Y = Y) = det(L_Y) / Σ_{|Y′|=k} det(L_{Y′})   if |Y| = k,   and   P_L^k(Y = Y) = 0   otherwise.
During the discussions, we will use a characteristic vector representation of each Y ⊆ S; i.e., v_Y ∈ {0, 1}^{|S|} for each Y ⊆ S, such that v_Y(i) = 1 if i ∈ Y, and 0 otherwise. With abuse of notation, we will often use set operations on characteristic vectors to indicate the same operation on the corresponding sets; e.g., v_Y \ {u} is equivalent to setting v_Y(u) = 0 and, correspondingly, Y \ {u}.
2 Algorithm
The overall idea of our algorithm is to design a rapidly-mixing Markov chain whose stationary distribution is P_L. The state space of our chain consists of the characteristic vectors of each subset of S. This Markov chain is generated by a standard Metropolis-Hastings algorithm, where the transition probability from state v_Y to v_Z is given as the ratio of P_L(Z) to P_L(Y). In particular, we will only consider transitions between adjacent states, i.e., states that have Hamming distance 1. Hence, the transition probability of removing an element u is of the following form:

    Pr(Y ∪ {u} → Y) = min{1, det(L_Y) / det(L_{Y∪{u}})}.
The addition probability is defined similarly. The overall chain is an insertion/deletion chain, where a uniformly proposed element is either added to, or removed from, the current state. This procedure is outlined in Algorithm 1. Note that this algorithm has a potentially high computational complexity, as the determinant of L_Y for a given Y ⊆ S must be computed on every iteration. If the size of Y is large, then a single iteration will become very costly. Before discussing how to address this issue in Section 2.1, we analyze the properties of Algorithm 1 to show that it efficiently samples from P_L.
First, we state that the chain induced by Algorithm 1 does indeed represent our desired distribution².

Proposition 1. The Markov chain in Algorithm 1 has a stationary distribution P_L.

The computational complexity of sampling from P_L using Algorithm 1 depends on the mixing time of the Markov chain, i.e., the number of steps required in the Markov chain to ensure that the current distribution is "close enough" to the stationary distribution. More specifically, we are interested in the ε-mixing time τ(ε), which guarantees a distribution that is ε-close to P_L in terms of total variation. In other words, we must spend at least this many time steps in order to acquire a sample from a distribution close to P_L. Our next result states that the chain in Algorithm 1 mixes rapidly:

¹ Also known as L-ensembles.
² All proofs, including those of irreducibility of our chains, are given in the Appendix of the full version of our paper.
Algorithm 1 Markov chain for sampling from P_L
Require: itemset S = [n], similarity matrix L ≻ 0
  Randomly initialize state Y ⊆ S
  while not mixed do
    Sample u ∈ S uniformly at random
    Set
      p_u^+(Y) ← min{1, det(L_{Y∪{u}}) / det(L_Y)}
      p_u^−(Y) ← min{1, det(L_{Y\{u}}) / det(L_Y)}
    if u ∉ Y then
      Y ← Y ∪ {u} with prob. p_u^+(Y)
    else
      Y ← Y \ {u} with prob. p_u^−(Y)
    end if
  end while
  return Y
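A direct transcription of Algorithm 1, using naive determinant evaluation rather than the incremental updates of Section 2.1, might look as follows. The step budget follows Theorem 1's τ(ε) = O(n log(n/ε)); given the authors' note about errors in the proofs, treat that count as a heuristic rather than a guarantee (function names are ours, not the paper's):

```python
import math
import numpy as np

def sample_dpp_chain(L, eps=0.01, rng=None):
    """Insertion/deletion Metropolis chain for the DPP defined by L (naive determinants)."""
    rng = np.random.default_rng() if rng is None else rng
    n = L.shape[0]

    def det_sub(Y):
        # det of the empty principal minor is 1 by convention
        return np.linalg.det(L[np.ix_(Y, Y)]) if Y else 1.0

    Y = set()
    steps = int(math.ceil(n * math.log(n / eps)))  # tau(eps) = O(n log(n/eps))
    for _ in range(steps):
        u = int(rng.integers(n))
        cur = det_sub(sorted(Y))
        if u not in Y:
            if rng.random() < min(1.0, det_sub(sorted(Y | {u})) / cur):
                Y.add(u)
        else:
            if rng.random() < min(1.0, det_sub(sorted(Y - {u})) / cur):
                Y.remove(u)
    return Y
```

Since L is SPD, every principal minor is positive, so the acceptance ratios are well defined.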
Theorem 1. The Markov chain in Algorithm 1 has a mixing time τ(ε) = O(n log(n/ε)).
One advantage of having a rapidly-mixing Markov chain as a means of sampling from a DPP is that it is robust to the addition/deletion of elements. That is, when a new element is introduced to or removed from S, we may simply continue the current chain until it is mixed again to obtain a sample from the new distribution. The previous sampling algorithm, on the other hand, must expensively eigen-decompose the updated L.
The mixing time of the chain in Algorithm 1 seems to offer a promising direction for sampling from P_L. However, note that this is subject to the presence of an efficient procedure for computing det(L_Y). Unfortunately, computing the determinant already costs O(|Y|^3) operations, rendering Algorithm 1 impractical for large Y's. In the following sections, we present a linear-algebraic manipulation of the determinant ratio so that explicit computation of the determinants is unnecessary.
2.1 Determinant Ratio Computation
It turns out that we do not need to explicitly compute the determinants, but rather the ratio of determinants. Suppose we wish to compute det(L_{Y∪{u}}) / det(L_Y). Since the determinant is permutation-invariant with respect to the index set, we can represent L_{Y∪{u}} in the following block matrix form, due to its symmetry:

    L_{Y∪{u}} = [ L_Y    b_u ]
                [ b_u^T  c_u ],

where b_u = (L(i, u))_{i∈Y} ∈ R^{|Y|} and c_u = L(u, u). With this, the determinant of L_{Y∪{u}} is expressed as

    det(L_{Y∪{u}}) = det(L_Y) (c_u − b_u^T L_Y^{-1} b_u).   (1)

This allows us to re-formulate the insertion transition probability as a determinant-free ratio:

    p_u^+(Y) = min{1, det(L_{Y∪{u}}) / det(L_Y)} = min{1, c_u − b_u^T L_Y^{-1} b_u}.

The deletion transition probability p_u^−(Y ∪ {u}) is computed likewise:

    p_u^−(Y ∪ {u}) = min{1, det(L_Y) / det(L_{Y∪{u}})} = min{1, (c_u − b_u^T L_Y^{-1} b_u)^{-1}}.   (2)

However, this transformation alone does not seem to result in enhanced computation time, as computing the inverse of a matrix is just as time-consuming as computing the determinant.
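Equation 1 is the standard Schur-complement determinant identity, which is easy to sanity-check numerically (a sketch using an arbitrary random SPD matrix, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
L = A @ A.T + 5 * np.eye(5)   # a random SPD "similarity" matrix

Y = [0, 2, 3]   # current subset
u = 4           # element to insert
LY = L[np.ix_(Y, Y)]
b = L[np.ix_(Y, [u])].ravel()
c = L[u, u]

# det of the bordered matrix vs. det(L_Y) times the Schur complement
lhs = np.linalg.det(L[np.ix_(Y + [u], Y + [u])])
rhs = np.linalg.det(LY) * (c - b @ np.linalg.solve(LY, b))
print(np.isclose(lhs, rhs))  # → True
```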
To save time on computing L_{Y′}^{-1}, we incrementally update the inverse through blockwise matrix inversion. Suppose that the matrix L_Y^{-1} has already been computed at the current iteration of the chain. First, consider the case when an element u is added ("if" clause). The new inverse L_{Y∪{u}}^{-1} must be updated from the current L_Y^{-1}. This is achieved by the following block-inversion formula [6]:

    L_{Y∪{u}}^{-1} = [ L_Y    b_u ]^{-1}
                     [ b_u^T  c_u ]
                   = [ L_Y^{-1} + L_Y^{-1} b_u b_u^T L_Y^{-1} / d_u    −L_Y^{-1} b_u / d_u ]
                     [ −b_u^T L_Y^{-1} / d_u                            1 / d_u            ],   (3)

where d_u = c_u − b_u^T L_Y^{-1} b_u is the Schur complement of L_Y. Since L_Y^{-1} is already given, computing each block of the new inverse matrix costs O(|Y|^2), which is an order faster than the O(|Y|^3) complexity required by the determinant. Moreover, only half of the entries need be computed, due to symmetry.
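The insertion update of Equation 3 can be written as a pure O(|Y|^2) routine; the helper below is an illustrative sketch (the function name is ours, not the paper's):

```python
import numpy as np

def inverse_insert(LY_inv, b, c):
    """Equation 3: inverse of [[L_Y, b], [b^T, c]] from LY_inv = inv(L_Y), in O(|Y|^2)."""
    v = LY_inv @ b                 # L_Y^{-1} b_u
    d = c - b @ v                  # Schur complement d_u
    top_left = LY_inv + np.outer(v, v) / d
    top_right = -v[:, None] / d
    return np.block([[top_left, top_right],
                     [top_right.T, np.array([[1.0 / d]])]])
```

Its output can be checked against `np.linalg.inv` of the bordered matrix.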
Next, consider the case when an element u is removed ("else" clause) from the current set Y. The matrix to be updated is L_{Y\{u}}^{-1}, and it is given by a rank-1 update formula. Writing u as the last index of Y, we represent L_Y and its current inverse L_Y^{-1} as

    L_Y = [ L_{Y\{u}}  b_u ]        L_Y^{-1} ≜ [ D    e ]
          [ b_u^T      c_u ],                   [ e^T  f ],

where D ∈ R^{(|Y|−1)×(|Y|−1)}, e ∈ R^{|Y|−1}, and f ∈ R. Then, the inverse of the submatrix L_{Y\{u}} is given by

    L_{Y\{u}}^{-1} = D − e e^T / f.   (4)

Again, updating L_{Y\{u}}^{-1} only requires matrix arithmetic, and hence costs O(|Y|^2).
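Equation 4 gives the matching O(|Y|^2) deletion update; the sketch below assumes, for simplicity, that the removed element corresponds to the last row/column of L_Y (helper name is ours):

```python
import numpy as np

def inverse_delete_last(LY_inv):
    """Equation 4: inv(L_{Y minus u}) from inv(L_Y), where u is the last index of Y."""
    D = LY_inv[:-1, :-1]
    e = LY_inv[:-1, -1]
    f = LY_inv[-1, -1]
    return D - np.outer(e, e) / f
```

For an element in an arbitrary position, one would first permute that row/column to the end.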
However, the initial L_Y^{-1}, from which all subsequent inverses are updated, must be computed in full at the beginning of the chain. This complexity can be reduced by restricting the size of the initial Y. That is, we first randomly initialize Y with a small size, say o(n^{1/3}), and compute the inverse L_Y^{-1}. As we proceed with the chain, we update L_Y^{-1} using Equations 3 and 4 whenever an insertion or a deletion proposal is accepted, respectively. Therefore, the average complexity of acquiring a sample from a distribution that is ε-close to P_L is O(T^2 n log(n/ε)), where T is the average size of Y encountered during the progress of the chain. In Section 3, we introduce a scheme to maintain a small-sized Y.
2.2 Extension to k-DPPs
The idea of constructing a Markov chain to obtain a sample can be extended to k-DPPs. The only known algorithm so far for sampling from a k-DPP also requires the eigen-decomposition of L. Extending the previous idea, we provide a Markov chain sampling algorithm, similar to Algorithm 1, that samples from P_L^k.
The main idea behind the k-DPP chain is to propose a new configuration by choosing two elements: one to remove from the current set, and another to add. We accept this move according to the probability defined by the ratio of the proposed determinant to the current determinant. This is equivalent to selecting a row and column of L_X and replacing them with the ones corresponding to the element to be added; i.e., for X = Y ∪ {u},

    L_{X=Y∪{u}} = [ L_Y    b_u ]        L_{X′=Y∪{v}} = [ L_Y    b_v ]
                  [ b_u^T  c_u ],                      [ b_v^T  c_v ],

where u and v are the elements being removed and added, respectively. Following Equation 2, we set the transition probability as the ratio of the determinants of the two matrices:

    det(L_{X′}) / det(L_X) = (c_v − b_v^T L_Y^{-1} b_v) / (c_u − b_u^T L_Y^{-1} b_u).
The final procedure is given in Algorithm 2.
Similarly to Algorithm 1, we present the analysis on the stationary distribution and the mixing time
of Algorithm 2.
Proposition 2. The Markov chain in Algorithm 2 has a stationary distribution P_L^k.
Algorithm 2 Markov chain for sampling from P_L^k
Require: itemset S = [n], similarity matrix L ≻ 0
  Randomly initialize state X ⊆ S s.t. |X| = k
  while not mixed do
    Sample u ∈ X and v ∈ S \ X u.a.r.
    Letting Y = X \ {u}, set
      p ← min{1, (c_v − b_v^T L_Y^{-1} b_v) / (c_u − b_u^T L_Y^{-1} b_u)}   (5)
    X ← Y ∪ {v} with prob. p
  end while
  return X
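Algorithm 2 in code, with the acceptance ratio of Equation 5 computed via linear solves instead of maintained inverses (an illustrative sketch; function names are ours, and Theorem 2's step count, flagged by the authors' erratum, is used only as a budget):

```python
import math
import numpy as np

def _schur(L, Y, w):
    """c_w - b_w^T L_Y^{-1} b_w: the determinant ratio of Equation 1."""
    if not Y:
        return L[w, w]
    LY = L[np.ix_(Y, Y)]
    b = L[np.ix_(Y, [w])].ravel()
    return L[w, w] - b @ np.linalg.solve(LY, b)

def sample_kdpp_chain(L, k, eps=0.01, rng=None):
    """Swap-based Metropolis chain targeting the k-DPP defined by L."""
    rng = np.random.default_rng() if rng is None else rng
    n = L.shape[0]
    X = [int(i) for i in rng.choice(n, size=k, replace=False)]
    steps = int(math.ceil(k * math.log(k / eps))) + 1  # Theorem 2's budget
    for _ in range(steps):
        u = X[rng.integers(k)]
        outside = [i for i in range(n) if i not in X]
        v = outside[rng.integers(len(outside))]
        Y = [i for i in X if i != u]
        p = min(1.0, _schur(L, Y, v) / _schur(L, Y, u))  # Equation 5
        if rng.random() < p:
            X = Y + [v]
    return X
```

Each step keeps |X| = k, so the chain never leaves the k-DPP's support.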
Theorem 2. The Markov chain in Algorithm 2 has a mixing time τ(ε) = O(k log(k/ε)).
The main computational bottleneck of Algorithm 2 is the inversion of L_Y. Since L_Y is a (k−1) × (k−1) matrix, the per-iteration cost is O(k^3). However, this complexity can be reduced by applying Equation 3 on every iteration to update the inverse. This leads to a final sampling complexity of O(k^3 log(k/ε)) for acquiring a single sample from the chain, which dominates the O(k^3) cost of computing the initial inverse. In many cases, k will be a constant much smaller than n, so our algorithm is efficient in general.
3 Application to Clustering
Finally, we show how our algorithms lead to an efficient heuristic for a k-means clustering problem
when the number of clusters is not known. First, we briefly overview the k-means problem.
Given a set of points P = {x_i ∈ R^d}_{i=1}^n, the goal of clustering is to construct a partition C = {C_1, . . . , C_k | C_i ⊆ P} of P such that the distortion

    D_C = Σ_{i=1}^k Σ_{x∈C_i} ||x − m_i||^2   (6)

is minimized, where m_i is the centroid of cluster C_i. It is known that the optimal centroid is the mean of the points of C_i, i.e., m_i = (Σ_{x∈C_i} x) / |C_i|. Iteratively minimizing this expression converges to a local optimum, and is hence the preferred approach in many works. However, determining the
number of clusters k is the factor that makes this problem NP-hard [7]. Note that the problem of
unknown k prevails in other types of clustering algorithms, such as kernel k-means [5] and spectral
clustering [8]: Kernel k-means is exactly the same as regular k-means except that the inner-products
are substituted with a positive semi-definite kernel function, and spectral clustering uses regular
k-means clustering as a subroutine. Some common techniques to determine k include performing
a density-based analysis of the data [9], or selecting k that minimizes the Bayesian information
criterion (BIC) [10].
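Equation 6 and the mean-centroid rule translate directly into code; a minimal sketch (helper name is ours):

```python
import numpy as np

def distortion(points, labels, k):
    """D_C = sum_i sum_{x in C_i} ||x - m_i||^2, with mean centroids m_i."""
    total = 0.0
    for i in range(k):
        Ci = points[labels == i]
        if len(Ci):
            total += ((Ci - Ci.mean(axis=0)) ** 2).sum()
    return total

# Two 1-D clusters {0, 2} and {10, 12}: centroids 1 and 11, distortion 2 + 2.
pts = np.array([[0.0], [2.0], [10.0], [12.0]])
print(distortion(pts, np.array([0, 0, 1, 1]), 2))  # → 4.0
```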
In this work, we propose to sample the initial centroids of the clustering via our DPP sampling
algorithms. The similarity matrix L will be the Gram matrix determined by L(i, j) = κ(x_i, x_j), where κ(·, ·) is simply the inner product for regular k-means, and a specified kernel function for kernel k-means. Since DPPs naturally capture the notion of diversity, the sampled points will tend to be more diverse, and thus serve better as initial representatives for each cluster. Once we have a sample, we perform a Voronoi partition around the elements of the sample to obtain a clustering³.
Note that it is not necessary to determine k beforehand as it can be obtained from the DPP samples.
This approach is closely related to the MAP inference problem for DPPs [11], which is known to be
NP-Hard as well. We use the proposed algorithms to sample the representative points that have high
probability under PL , and cluster the rest of the points around the sample. Subsequently, standard
(kernel) k-means algorithms can be applied to improve this initial clustering.
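The Voronoi partition around the sampled representatives uses the kernel-induced distance from footnote 3. A sketch (the polynomial kernel mirrors the one used in Section 4; the linear kernel in the demo makes the induced distance coincide with the ordinary Euclidean one):

```python
import numpy as np

def kernel_voronoi(points, centers_idx, kappa):
    """Label each point by its nearest center under d(x,y)^2 = k(x,x) - 2k(x,y) + k(y,y)."""
    labels = []
    for x in points:
        d2 = [kappa(x, x) - 2 * kappa(x, points[c]) + kappa(points[c], points[c])
              for c in centers_idx]
        labels.append(int(np.argmin(d2)))
    return np.array(labels)

poly = lambda x, y: (np.dot(x, y) + 0.05) ** 3   # the kernel used in Section 4
linear = lambda x, y: float(np.dot(x, y))        # induced distance = Euclidean

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
print(kernel_voronoi(pts, [0, 2], linear))  # → [0 0 1 1]
```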
³ The distance between x and y is defined as √(κ(x, x) − 2κ(x, y) + κ(y, y)), for any positive semi-definite kernel κ.
Since DPPs model both size and diversity, it seems that we could simply collect samples from
Algorithm 1 directly, and use those samples as representatives. However, as pointed out by [2],
modeling both properties simultaneously can negatively bias the quality of diversity. To reduce this
possible negative influence, we adopt a two-step sampling strategy: First, we gather C samples from
Algorithm 1 and construct a histogram H over the sizes of the samples. Next, we sample from a k-DPP, via Algorithm 2, with k drawn from H. This last sample gives the representatives we use to cluster.
Another problem we may encounter in this scheme is the sensitivity to outliers. The presence of an
outlier in P can cause the DPP in the first phase to favor the inclusion of that outlier, resulting in a
possibly bad clustering. To make our approach more robust to outliers, we introduce the following
cardinality-penalized DPP:

    P_{L;λ}(Y = Y) ∝ exp(tr(log(L_Y)) − λ|Y|) = det(L_Y) / exp(λ|Y|),

where λ ≥ 0 is a hyper-parameter that controls the weight to be put on |Y|. This regularization scheme smooths the original P_L by exponentially discounting the size of Y. This does not increase the order of the mixing time of the induced chain, since only a constant factor of exp(±λ) is multiplied into the transition probabilities. Algorithm 3 describes the overall procedure of our DPP-based clustering.
Algorithm 3 DPP-based clustering
Require: L ≻ 0, λ ≥ 0, R > 0, C > 0
  Gather {S_1, . . . , S_C | S_i ∼ P_{L;λ}} (Algorithm 1)
  Construct histogram H = {|S_i|}_{i=1}^C of the sizes of the S_i's
  for j = 1, . . . , R do
    Sample M_j ∼ P_L^{k_j} (Algorithm 2), where k_j ∼ H
    Voronoi partition around M_j
  end for
  return the clustering with lowest distortion (Equation 6)
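Note that the cardinality penalty only rescales the Metropolis ratios of the first-phase chain: each insertion ratio is multiplied by e^{−λ} and each deletion ratio by e^{+λ}. A sketch of the modified acceptance probabilities (helper name is ours):

```python
import math

def penalized_ratios(det_ratio_insert, lam):
    """Acceptance probabilities under P_{L;lambda}: a factor exp(-lam) per added element."""
    p_insert = min(1.0, det_ratio_insert * math.exp(-lam))
    p_delete = min(1.0, math.exp(lam) / det_ratio_insert)
    return p_insert, p_delete

print(penalized_ratios(2.0, 0.0))  # → (1.0, 0.5): lam = 0 recovers the unpenalized chain
```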
Choosing the right value of λ usually requires a priori knowledge of the data set. Since this information is not always available, one may use a small subset of P to heuristically choose λ. For example, examine the BIC of the initial clustering with respect to the centroids sampled from O(√n) randomly chosen elements P′ ⊆ P, with λ = 0. Then, increase λ by 1 until the BIC hits a local maximum, and choose that value. An additional binary search step may be used between λ and λ + 1 to further fine-tune its value. Because we only use O(√n) points to sample from the DPP, each search step has at most linear complexity, allowing ample time to choose better λ's. This procedure may not appear to have an apparent advantage over standard BIC-based model selection for choosing the number of clusters k. However, tuning λ not only allows one to determine k, but also gives better initial partitions in terms of distortion.
4 Experiments
In this section, we empirically demonstrate how our proposed method, denoted DPP-MC, of choosing an initial clustering compares to other methods, in terms of distortion and running time. The
methods we compare against include:
• DPP-Full: sample using the full DPP sampling procedure as given in [4].
• DPP-MAP: sample the initial centroids according to the MAP configuration, using the algorithm of [11].
• KKM: plain kernel k-means clustering as given by [5], run on the "true" number of clusters.
DPP-Full and DPP-MAP were used only in the first phase of Algorithm 3. To summarize the testing
procedure, DPP-MC, DPP-Full, DPP-MAP were used to choose the initial centroids. After this
initialization, KKM was carried out to improve the initial partitioning. Hence, the only difference
between the algorithms tested and KKM is the initialization.
The real-world data sets we use are the letter recognition data set [12] (LET) and a subset of the power consumption data set [13] (PWC). The LET set is represented as 10,000 points in R^16, and the PWC set as 10,000 points in R^7. While the LET set has 26 ground-truth clusters, the PWC set is only labeled with timestamps. Hence, we manually divided all points into four clusters, based on the month of the timestamps. Since this partitioning is not a ground truth given by the data collector, we expected the KKM algorithm to perform badly on this set.
In addition, we also tested our algorithm on an artificially-generated set consisting of 15,000 points in R^10 drawn from a mixture of five Gaussians (MG). However, this task is made challenging by roughly merging the five Gaussians, so that it is more likely that fewer clusters are discovered. The purpose of this set is to examine how well our algorithm finds the appropriate number of clusters. For the MG set, we also present the result of DBSCAN [9], another clustering algorithm that does not require k beforehand.
We used a simple polynomial kernel of the form κ(x, y) = (x · y + 0.05)^3 for the real-world data sets, and a dot product for the artificial set. Algorithm 3 was run with τ_1 = n log(n/0.01) and τ_2 = k log(k/0.01) mixing steps for the first and second phases, respectively, and C = R = 10.
The running time of our algorithm includes the time taken to heuristically search for λ using the following BIC [14]:

    BIC_k ≜ Σ_{x∈P} log Pr(x | {m_i}_{i=1}^k, σ) − (kd/2) log n,

where σ is the average of each cluster's distortion, and d is the dimension of the data set. The tuning procedure is the same as the one given at the end of the previous section, without the binary search step.
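The BIC above can be instantiated for spherical clusters; a minimal sketch, taking σ to be the average distortion as in the text (the authors do not fully specify their likelihood model, so this is an approximation with our own helper name):

```python
import numpy as np

def bic(points, labels, k):
    """Spherical-Gaussian BIC: log-likelihood minus (kd/2) log n."""
    n, d = points.shape
    centroids = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    sq = ((points - centroids[labels]) ** 2).sum(axis=1)
    sigma2 = max(sq.mean(), 1e-12)   # average distortion as the variance proxy
    loglik = -0.5 * (sq / sigma2 + d * np.log(2 * np.pi * sigma2)).sum()
    return loglik - 0.5 * k * d * np.log(n)
```

On well-separated data, splitting into the true clusters raises the score despite the larger penalty.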
4.1 Real-World Data Sets
The plots of the distortion and time for the LET set over the clustering iterations are given in
Figure 1. Recall that KKM was run with the true number of clusters as its input, so one may expect it
to perform relatively better, in terms of distortion and running time, than the other algorithms, which
must compute k. The plots show that this is indeed the case, with our DPP-MC outperforming its
competitors. Both DPP-Full and DPP-MAP require long running time for the eigen-decomposition
of the similarity matrix. It is interesting to note that DPP-MAP does not perform better than a
plain DPP-Full. We conjecture that this phenomenon is due to the approximate nature of the MAP
inference.
[Figure 1: two line plots over 10 clustering iterations for DPP-MC, KKM, DPP-Full, and DPP-MAP — distortion (×10^4) on the left and cumulative time in seconds on the right.]
Figure 1: Distortion (left) and cumulative runtime (right) of the clustering induced by the competing algorithms on the LET set.
In Table 1, we give a summary of the DPP-based initialization procedures. The reported values are
the immediate results of the initialization. For DPP-MC, the running time includes the automated λ tuning. Taking this fact into account, DPP-MC was able to recover the true value of k quickly.
                LET set                          PWC set
                DPP-MC   DPP-Full   DPP-MAP     DPP-MC   DPP-Full   DPP-MAP
  Distortion    36020    42841      43719       9.78     20.15      150
  Time (sec.)   20       820        2850        15       50         220
  k             26       6          16          13       6          1
  λ             2        -          -           4        -          -

Table 1: Comparison among the DPP-based initializations for the LET set (left) and the PWC set (right).

In Figure 2, we show the same results on the PWC set. As in the previous case, DPP-MC exhibits the lowest distortion with the fastest running time. For this set, we have omitted the results for DPP-MAP from the figure, as it yielded a degenerate result of a single cluster. Nevertheless, we give its final result in Table 1.
[Figure 2: two line plots over 9 clustering iterations for DPP-MC, KKM, and DPP-Full — distortion on the left and cumulative time in seconds on the right.]
Figure 2: Distortion (left) and time (right) of the clustering induced by the competing algorithms on the PWC set.
4.2 Artificial Data Set
Finally, we present results on clustering the artificial MG set. On this set, we compare our algorithm with another clustering algorithm, DBSCAN, which does not require k a priori. Due to page constraints, we summarize the results in Table 2.
                DPP-MC   DBSCAN
  Distortion    6.127    35.967
  Time (sec.)   416      60
  k             34       1

Table 2: Comparison between DPP-MC and DBSCAN on the MG set.
Due to the merged configuration of the MG set, DBSCAN is not able to successfully discover multiple clusters, and ends up with a singleton clustering. On the other hand, DPP-MC managed to find many distinct clusters in a way that lowers the distortion.
5 Discussion and Future Work
We have proposed a fast method for sampling from an ε-close DPP distribution and its application to kernelized clustering. Although the exact computational complexity of sampling, O(T^2 n log(n/ε)), is not explicitly superior to the previous approach (O(n^3)), we empirically show that T is generally small enough to account for our algorithm's efficiency. Furthermore, the extension to k-DPP sampling yields a very fast speed-up compared to the previous sampling algorithm.
However, one must keep in mind that the mixing time analysis is for a single sample only: i.e., we must mix the chain for each sample we need. For a small number of samples, this may compensate for the cubic complexity of the previous approach. For a larger number of samples, we must further investigate the effect of sample correlation after mixing in order to prove long-term efficiency.
Computing the Stationary Distribution, Locally
Asuman Ozdaglar
LIDS, Department of EECS
Massachusetts Institute of Technology
[email protected]
Christina E. Lee
LIDS, Department of EECS
Massachusetts Institute of Technology
[email protected]
Devavrat Shah
Department of EECS
Massachusetts Institute of Technology
[email protected]
Abstract
Computing the stationary distribution of a large finite or countably infinite state
space Markov Chain (MC) has become central in many problems such as statistical inference and network analysis. Standard methods involve large matrix multiplications as in power iteration, or simulations of long random walks, as in Markov
Chain Monte Carlo (MCMC). Power iteration is costly, as it involves computation
at every state. For MCMC, it is difficult to determine whether the random walks
are long enough to guarantee convergence. In this paper, we provide a novel algorithm that answers whether a chosen state in a MC has stationary probability
larger than some Δ ∈ (0, 1), and outputs an estimate of the stationary probability.
Our algorithm is constant time, using information from a local neighborhood of
the state on the graph induced by the MC, which has constant size relative to the
state space. The multiplicative error of the estimate is upper bounded by a function of the mixing properties of the MC. Simulation results show MCs for which
this method gives tight estimates.
1 Introduction
Computing the stationary distribution of a Markov chain (MC) with a very large state space (finite,
or countably infinite) has become central to statistical inference. The ability to tractably simulate
MCs along with the generic applicability has made Markov Chain Monte Carlo (MCMC) a method
of choice and arguably the top algorithm of the twentieth century [1]. However, MCMC and its variations suffer from limitations in large state spaces, motivating the development of super-computation
capabilities, be it nuclear physics [2, Chapter 8], Google's computation of PageRank [3], or stochastic simulation at-large [4]. MCMC methods involve sampling states from a long random walk over the entire state space [5, 6]. It is difficult to determine when the algorithm has walked "long enough"
to produce reasonable approximations for the stationary distribution.
Power iteration is another method commonly used for computing leading eigenvectors and stationary
distributions of MCs. The method involves iterative multiplication of the transition matrix of the MC
[7]. However, there is no clearly defined stopping condition in general settings, and computations
must be performed at every state of the MC.
In this paper, we provide a novel algorithm that addresses these limitations. Our algorithm answers
the following question: for a given node i of a countable state space MC, is the stationary probability
of i larger than a given threshold Δ ∈ (0, 1), and can we approximate it? For chosen parameters Δ, ε, and α, our algorithm guarantees that for nodes such that the estimate π̂_i < Δ/(1 + ε), the true value π_i is also less than Δ with probability at least 1 − α. In addition, if π̂_i ≥ Δ/(1 + ε), then with probability at least 1 − α the estimate is within an ε · Zmax(i) multiplicative factor of the true π_i, where Zmax(i) is effectively a "local mixing time" for i derived from the fundamental matrix of the transition probability matrix P.
The running time of the algorithm is upper bounded by Õ(ln(1/α) / (ε^3 Δ)), which is constant with respect to the MC. Our algorithm uses only a "local" neighborhood of the state i, defined with respect to the Markov graph. Stopping conditions are easy to verify and have provable performance guarantees.
Its correctness relies on a basic property: the stationary probability of each node is inversely proportional to the mean of its "return time." Therefore, we sample return times to the node and use
the empirical average as an estimate. Since return times can be arbitrarily long, we truncate sample
return times at a chosen threshold. Hence, our algorithm is a truncated Monte Carlo method.
We utilize the exponential concentration of return times in Markov chains to establish theoretical
guarantees for the algorithm. For finite state Markov chains, we use results from Aldous and Fill
[8]. For countably infinite state space Markov chains, we build upon a result by Hajek [9] on the
concentration of certain types of hitting times to derive concentration of return times to a given node.
We use these concentration results to upper bound the estimation error and the algorithm runtime
as a function of the truncation threshold and the mixing properties of the graph. For graphs that
mix quickly, the distribution over return times concentrates more sharply around its mean, resulting
in tighter performance guarantees. We illustrate the wide applicability of our local algorithm for
computing network centralities and stationary distributions of queuing models.
Related Work. MCMC was originally proposed in [5], and a tractable way to design a random
walk for a target distribution was proposed by Hastings [6]. Given a distribution π(x), the method
designs a Markov chain such that the stationary distribution of the Markov chain is equal to the target
distribution. Without using the full transition matrix of the Markov chain, Monte Carlo sampling
techniques estimate the distribution by sampling random walks via the transition probabilities at each
node. As the length of the random walk approaches infinity, the distribution over possible states of
the random walk approaches stationary distribution. Articles by Diaconis and Saloff-Coste [10] and
Diaconis [11] provide a summary of major developments from a probability theoretic perspective.
The majority of work following the initial introduction of the algorithm involves analyzing the convergence rates and mixing times of this random walk [8, 12]. Techniques involve spectral analysis or
coupling arguments. Graph properties such as conductance help characterize the graph spectrum for
reversible Markov chains. For general non-reversible countably infinite state space Markov chains,
little is known about the mixing time. Thus, it is difficult to verify if the random walk has sufficiently converged to the stationary distribution, and before that point there is no guarantee whether
the estimate obtained from the random walk is larger or smaller than the true stationary probability.
Power iteration is an equally old and well-established method for computing leading eigenvectors of
matrices [7]. Given a matrix A and a seed vector x_0, power iteration recursively computes x_{t+1} = A x_t / ||A x_t||. The convergence rate of x_t to the leading eigenvector is governed by the spectral gap. As
mentioned above, techniques for analyzing the spectrum are not well developed for general nonreversible MCs, thus it is difficult to know how many iterations are sufficient. Although power
iteration can be implemented in a distributed manner, each iteration requires computation to be
performed by every state in the MC, which is expensive for large state space MCs. For countably
infinite state space MCs, there is no clear analog to matrix multiplication.
In the specialized setting of PageRank, the goal is to compute the stationary distribution of a specific
Markov chain described by a transition matrix P = (1 − β)Q + β · 1 r^T, where Q is a stochastic transition probability matrix, and β is a scalar in (0, 1). This can be interpreted as a random walk in which every step either follows Q with probability 1 − β, or with probability β jumps to a node
according to the distribution specified by vector r. By exploiting this special structure, numerous
recent results have provided local algorithms for computing PageRank efficiently. This includes
work by Jeh and Widom [13], Fogaras et al. [14], Avrachenkov et al. [15], Bahmani et al. [16] and
most recently, Borgs et al. [17]: it outputs a set of "important" nodes such that, with probability 1 − o(1), it includes all nodes with PageRank greater than a given threshold Δ, and does not include nodes with PageRank less than Δ/c for a given c > 1. The algorithm runs in time O((1/Δ) polylog(n)).
Unfortunately, these approaches are specific to PageRank and do not extend to general MCs.
2 Setup, problem statement & algorithm
Consider a discrete time, irreducible, positive-recurrent MC {X_t}_{t≥0} on a countable state space Σ having transition probability matrix P. Let P_ij^(n) be the (i, j)-coordinate of P^n, such that

    P_ij^(n) := P(X_n = j | X_0 = i).

Throughout the paper, we will use the notation E_i[·] = E[· | X_0 = i] and P_i(·) = P(· | X_0 = i). Let T_i be the return time to a node i, and let H_i be the maximal hitting time to a node i, such that

    T_i = inf{t ≥ 1 | X_t = i}   and   H_i = max_{j ∈ Σ} E_j[T_i].     (1)
The stationary distribution is a function π : Σ → [0, 1] such that ∑_{i∈Σ} π_i = 1 and π_i = ∑_{j∈Σ} π_j P_ji for all i ∈ Σ. An irreducible positive recurrent Markov chain has a unique stationary distribution satisfying [18, 8]:

    π_i = E_i[ ∑_{t=1}^{T_i} 1{X_t = i} ] / E_i[T_i] = 1 / E_i[T_i]   for all i ∈ Σ.     (2)
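As a quick numerical sanity check of Eq. (2) (illustrative; the small chain below is arbitrary and not from the paper), one can estimate π_i by averaging simulated return times and compare against the exact stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary 3-state chain (rows sum to 1), used only for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def mean_return_time(P, i, num_samples=50_000):
    """Monte Carlo estimate of E_i[T_i], the expected return time to i."""
    n, total = P.shape[0], 0
    for _ in range(num_samples):
        state, t = i, 0
        while True:
            state = rng.choice(n, p=P[state])
            t += 1
            if state == i:
                break
        total += t
    return total / num_samples

# Exact stationary distribution: the left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

pi_hat = 1.0 / mean_return_time(P, 0)
print(round(pi[0], 3), round(pi_hat, 3))  # the two values agree closely
```

The agreement is exactly the identity π_i = 1/E_i[T_i] that the algorithm exploits.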
The Markov chain can be visualized as a random walk over a weighted directed graph G = (Σ, E, P), where Σ is the set of nodes, E = {(i, j) ∈ Σ × Σ : P_ij > 0} is the set of edges, and P describes the weights of the edges.¹ The local neighborhood of size r around node i ∈ Σ is defined as {j ∈ Σ | d_G(i, j) ≤ r}, where d_G(i, j) is the length of the shortest directed path (in terms of number of edges) from i to j in G. An algorithm is local if it only uses information within a local neighborhood of size r around i, where r is constant with respect to the size of the state space.
The fundamental matrix Z of a finite state space Markov chain is

    Z := ∑_{t=0}^∞ ( P^t − 1 π^T ) = ( I − P + 1 π^T )^{−1},   such that   Z_jk := ∑_{t=0}^∞ ( P_jk^(t) − π_k ).

Since P_jk^(t) denotes the probability that a random walk beginning at node j is at node k after t steps, Z_jk represents how quickly the probability mass at node k from a random walk beginning at node j converges to π_k. We will use this to provide bounds for the performance of our algorithm.
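To make this concrete, the matrix-inverse form above and the quantity Zmax(i) = max_j |Z_ji| can be computed directly for a small chain (an illustrative sketch; the chain is the same arbitrary example, not from the paper):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution via the left eigenvector of P.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

n = P.shape[0]
Pi = np.outer(np.ones(n), pi)            # rank-one matrix 1 pi^T
Z = np.linalg.inv(np.eye(n) - P + Pi)    # fundamental matrix (inverse form)

# Zmax(i) = max_j |Z_ji|: the per-node "local mixing" quantity in the bounds.
Zmax = np.abs(Z).max(axis=0)
print(Zmax.round(3))
```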
2.1 Problem Statement

Consider a discrete time, irreducible, aperiodic, positive recurrent MC {X_t}_{t≥0} on a countable state space Σ with transition probability matrix P : Σ × Σ → [0, 1]. Given node i and threshold Δ, is π_i > Δ? If so, what is π_i? We answer this with a local algorithm, which uses only edges within a local neighborhood around i of constant size with respect to the state space.
We illustrate the limitations of using a local algorithm for answering this question. Consider the Clique-Cycle Markov chain shown in Figure 1(a) with n nodes, composed of a size k clique connected to a size (n − k + 1) cycle. For node j in the clique excluding i, with probability 1/2 the random walk stays at node j, and with probability 1/2 the random walk chooses a random neighbor uniformly. For node j in the cycle, with probability 1/2 the random walk stays at node j, and with probability 1/2 the random walk travels counterclockwise to the subsequent node in the cycle. For node i, with probability ε the random walk enters the cycle, with probability 1/2 the random walk chooses any neighbor in the clique, and with probability 1/2 − ε the random walk stays at node i. We can show that the expected return time to node i is (1 − 2ε)k + 2εn.

Therefore, E_i[T_i] scales linearly in n and k. Suppose we observe only the local neighborhood of constant size r around node i. All Clique-Cycle Markov chains with more than k + 2r nodes have identical local neighborhoods. Therefore, for any Δ ∈ (0, 1), there exist two Clique-Cycle Markov chains which have the same ε and k, but two different values for n, such that even though their local neighborhoods are identical, π_i > Δ in the MC with a smaller n, while π_i < Δ in the MC with a larger n. Therefore, by restricting ourselves to a local neighborhood around i of constant size, we will not be able to correctly determine whether π_i > Δ for every node i in any arbitrary MC.
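The claimed expected return time can be checked by direct simulation (a sketch; the neighbor conventions below, e.g. which cycle node follows i, are our own assumptions consistent with the description above):

```python
import numpy as np

rng = np.random.default_rng(3)

def clique_cycle_return_time(k, n, eps):
    """One sampled return time to node i = 0 in the Clique-Cycle chain.

    States 0..k-1 form the clique (node i is 0); states k..n-1 are the
    remaining cycle nodes, traversed k -> k+1 -> ... -> n-1 -> 0.
    """
    state, t = 0, 0
    while True:
        u = rng.random()
        if state == 0:
            if u < eps:                        # enter the cycle
                state = k
            elif u < eps + 0.5:                # uniform neighbor in the clique
                state = int(rng.integers(1, k))
            # otherwise stay at i, with probability 1/2 - eps
        elif state < k:                        # clique node other than i
            if u < 0.5:
                nbr = int(rng.integers(0, k - 1))
                state = nbr + (nbr >= state)   # uniform over the other clique nodes
        else:                                  # cycle node
            if u < 0.5:
                state = state + 1 if state < n - 1 else 0
        t += 1
        if state == 0:
            return t

k, n, eps = 5, 12, 0.1
times = [clique_cycle_return_time(k, n, eps) for _ in range(20_000)]
print(np.mean(times))  # should be close to (1 - 2*eps)*k + 2*eps*n = 6.4
```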
¹ Throughout the paper, Markov chain and random walk on a network are used interchangeably; similarly, nodes and states are used interchangeably.
[Figure panels omitted: (a) Clique-Cycle Markov chain; (b) MM1 Queue over states 1, 2, 3, 4, 5.]
Figure 1: Examples of Markov Chains
2.2 Algorithm

Given a threshold Δ ∈ (0, 1) and a node i ∈ Σ, the algorithm obtains an estimate π̂_i of π_i, and uses π̂_i to determine whether to output 0 (π_i ≤ Δ) or 1 (π_i > Δ). The algorithm relies on the characterization of π_i given in Eq. (2): π_i = 1/E_i[T_i]. It takes many independent samples of a truncated random walk that begins at node i and stops either when the random walk returns to node i, or when the length exceeds a predetermined maximum denoted by θ. Each sample is generated by simulating the random walk using "crawl" operations over the MC. The expected length of each random walk sample is E_i[min(T_i, θ)], which is close to E_i[T_i] when θ is large.

As the number of samples and θ go to infinity, the estimate will converge almost surely to π_i, due to the strong law of large numbers and positive recurrence of the MC. We use Chernoff's bound to choose a sufficiently large number of samples as a function of θ to guarantee that with probability 1 − α, the average length of the sample random walks will lie within (1 ± ε) of E_i[min(T_i, θ)]. We also need to choose a suitable value for θ that balances between accuracy and computation cost. The algorithm searches for an appropriate size for the local neighborhood by beginning small and increasing the size geometrically. In our analysis, we will show that the total computation summed over all iterations is only a constant factor more than the computation in the final iteration.

Input: Anchor node i ∈ Σ and parameters Δ = threshold for importance, ε = closeness of the estimate, and α = probability of failure.

Initialize: Set t = 1, θ^(1) = 2, N^(1) = ⌈6(1 + ε) ln(8/α) / ε²⌉.

Step 1 (Gather Samples): For each k in {1, 2, 3, ..., N^(t)}, generate independent samples s_k ~ min(T_i, θ^(t)) by simulating paths of the MC beginning at node i, and setting s_k to be the length of the k-th sample path. Let p̂^(t) = fraction of samples truncated at θ^(t),

    T̂_i^(t) = (1/N^(t)) ∑_{k=1}^{N^(t)} s_k,   π̂_i^(t) = 1 / T̂_i^(t),   and   π̃_i^(t) = (1 − p̂^(t)) / T̂_i^(t).

Step 2 (Termination Conditions):
• If (a) π̂_i^(t) < Δ/(1 + ε), then stop and return 0, and estimates π̂_i^(t) and π̃_i^(t).
• Else if (b) p̂^(t) · π̂_i^(t) < εΔ, then stop and return 1, and estimates π̂_i^(t) and π̃_i^(t).
• Else continue.

Step 3 (Update Rules): Set

    θ^(t+1) ← 2 · θ^(t),   N^(t+1) ← ⌈3(1 + ε) θ^(t+1) ln(4 θ^(t+1)/α) / (T̂_i^(t) ε²)⌉,   and   t ← t + 1.

Return to Step 1.

Output: 0 or 1 indicating whether π_i > Δ, and estimates π̂_i^(t) and π̃_i^(t).
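The loop above can be sketched in Python as follows. This is an illustrative simplification, not the paper's implementation: it simulates on an explicit transition matrix rather than through neighbor queries, and it uses the termination and update rules as we have reconstructed them from the (garbled) extraction:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def local_stationary_test(P, i, delta=0.05, eps=0.2, alpha=0.1):
    """Truncated Monte Carlo test of pi_i > delta (illustrative sketch)."""
    n = P.shape[0]
    theta = 2
    N = math.ceil(6 * (1 + eps) * math.log(8 / alpha) / eps ** 2)
    while True:
        total_len, truncated = 0.0, 0
        for _ in range(N):
            state, t = i, 0
            while t < theta:
                state = rng.choice(n, p=P[state])
                t += 1
                if state == i:
                    break
            total_len += t
            truncated += state != i          # sample hit the cap theta
        T_hat = total_len / N
        p_hat = truncated / N
        pi_hat = 1.0 / T_hat                 # estimate via 1/E[min(T_i, theta)]
        pi_tilde = (1.0 - p_hat) / T_hat
        if pi_hat < delta / (1 + eps):       # condition (a): report pi_i <= delta
            return 0, pi_hat, pi_tilde
        if p_hat * pi_hat < eps * delta:     # condition (b): few truncations
            return 1, pi_hat, pi_tilde
        theta *= 2                           # grow the local neighborhood
        N = math.ceil(3 * (1 + eps) * theta * math.log(4 * theta / alpha)
                      / (T_hat * eps ** 2))

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
decision, pi_hat, pi_tilde = local_stationary_test(P, 0)
print(decision, round(pi_hat, 3))  # pi_0 is about 0.321, well above delta
```

On this toy chain the test returns 1 after a few doublings of θ, once the truncated fraction becomes negligible.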
This algorithm outputs two estimates for the anchor node i: π̂_i, which relies on the second expression in Eq. (2), and π̃_i, which relies on the first expression in Eq. (2). We refer to the total number of iterations used in the algorithm as the value of t at the time of termination, denoted by t_max. The total number of random walk steps taken within the first t iterations is ∑_{k=1}^t N^(t) · T̂_i^(t).

The algorithm will always terminate within ln(1/(εΔ)) iterations. Since θ^(t) governs the radius of the local neighborhood that the algorithm utilizes, this implies that our algorithm is local, since the maximum distance is strictly upper bounded by 1/(εΔ), which is constant with respect to the MC.

With high probability, the estimate π̂_i^(t) is larger than π_i/(1 + ε) due to the truncation. Thus when the algorithm terminates at stopping condition (a), π_i < Δ with high probability. When the algorithm terminates at condition (b), the fraction of samples truncated is small, which will imply that the percentage error of estimate π̂_i^(t) is upper bounded as a function of ε and properties of the MC.
3 Theoretical guarantees

The following theorems give correctness and convergence guarantees for the algorithm. The proofs have been omitted and can be found in the extended version of this paper [19].

Theorem 3.1. For an aperiodic, irreducible, positive recurrent, countable state space Markov chain, and for any i ∈ Σ, with probability greater than 1 − α:

1. Correctness. For all iterations t, π̂_i^(t) ≥ π_i/(1 + ε). Therefore, if the algorithm terminates at condition (a) and outputs 0, then π_i < Δ.

2. Convergence. The number of iterations t_max and the total number of steps (or neighbor queries) used by the algorithm are bounded above by²,³

    t_max ≤ ln(1/(εΔ)),   and   ∑_{k=1}^{t_max} N^(t) · T̂_i^(t) ≤ Õ( ln(1/α) / (ε³ Δ) ).
Part 1 is proved by using Chernoff's bound to show that N^(t) is large enough to guarantee that with probability greater than 1 − α, for all iterations t, T̂_i^(t) concentrates around its mean. Part 2 asserts that the algorithm terminates in finite time as a function of the parameters of the algorithm, independent of the size of the MC state space. This implies that our algorithm is local. It is proved by observing that T̂_i^(t) > p̂^(t) θ^(t). Therefore when θ^(t) > 1/(εΔ), termination condition (b) must be satisfied.
3.1 Finite-state space Markov Chain
We can obtain characterizations for the approximation error and the running time as functions of
specific properties of the MC. The analysis depends on how sharply the distribution over return
times concentrates around the mean.
Theorem 3.2. For an irreducible Markov chain {X_t} with finite state space Σ and transition probability matrix P, for any i ∈ Σ, with probability greater than 1 − α, for all iterations t,

    |π̂_i^(t) − π_i| / π_i ≤ 2(1 − ε) P_i(T_i > θ^(t)) Zmax(i) + ε ≤ 4(1 − ε) 2^{−θ^(t)/2H_i} Zmax(i) + ε,

where H_i is defined in Eq. (1), and Zmax(i) = max_j |Z_ji|.

Therefore, with probability greater than 1 − α, if the algorithm terminates at condition (b), then

    |π̂_i^(t) − π_i| / π_i ≤ (3 Zmax(i) + 1) ε.
² We use the notation Õ(f(a)g(b)) to mean Õ(f(a)) Õ(g(b)) = O(f(a) polylog f(a)) O(g(b) polylog g(b)).
³ The bound for t_max is always true (stronger than with high probability).
Theorem 3.2 shows that the percentage error in the estimate π̂_i^(t) decays exponentially in θ^(t), which doubles in each iteration. The proof relies on the fact that the distribution of the return time T_i has an exponentially decaying tail [8], ensuring that the return time T_i concentrates around its mean E_i[T_i]. When the algorithm terminates at stopping condition (b), P(T_i > θ^(t)) ≤ (4/3 + ε)ε with high probability, thus the percentage error is bounded by O(ε Zmax(i)).

Similarly, we can analyze the error between the second estimate π̃_i^(t) and π_i, in the case when θ^(t) is large enough such that P(T_i > θ^(t)) < 1/2. This is required to guarantee that (1 − p̂^(t)) lies within an ε multiplicative interval around its mean with high probability. Observe that 2Zmax(i) is replaced by max(2Zmax(i) − 1, 1). Thus for some values of Zmax(i), the error bound for π̃_i is smaller than the equivalent bound for π̂_i. We will show simulations of computing PageRank, in which π̃_i estimates π_i more closely than π̂_i.
Theorem 3.3. For an irreducible Markov chain {X_t} with finite state space Σ and transition probability matrix P, for any i ∈ Σ, with probability greater than 1 − α, for all iterations t such that P(T_i > θ^(t)) < 1/2,

    |π̃_i^(t) − π_i| / π_i ≤ ((1 + ε)/(1 − ε)) · ( P_i(T_i > θ^(t)) / (1 − P_i(T_i > θ^(t))) ) · max(2 Zmax(i) − 1, 1) + 2ε/(1 − ε).
Theorem 3.4 also uses the property of an exponentially decaying tail as a function of H_i to show that for large θ^(t), with high probability, P_i(T_i > θ^(t)) will be small and π̂_i^(t) will be close to π_i,
and thus the algorithm will terminate at one of the stopping conditions. The bound is a function
of how sharply the distribution over return times concentrates around the mean. Theorem 3.4(a)
states that for low probability nodes, the algorithm will terminate at stopping condition (a) for large
enough iterations. Theorem 3.4(b) states that for all nodes, the algorithm will terminate at stopping
condition (b) for large enough iterations.
Theorem 3.4. For an irreducible Markov chain {X_t} with finite state space Σ,

(a) For any node i ∈ Σ such that π_i < (1 − ε)Δ/(1 + ε), with probability greater than 1 − α, the total number of steps used by the algorithm is bounded above by

    ∑_{k=1}^{t_max} N^(t) · T̂_i^(t) ≤ Õ( (ln(1/α)/ε²) · H_i · ln( (1 + ε) / (π_i (1 − ε) Δ) ) · (1 − 2^{−1/2H_i})^{−1} ).

(b) For all nodes i ∈ Σ, with probability greater than 1 − α, the total number of steps used by the algorithm is bounded above by

    ∑_{k=1}^{t_max} N^(t) · T̂_i^(t) ≤ Õ( (ln(1/α)/ε²) · H_i · ln( (1 + ε) / (π_i ε Δ) ) · (1 − 2^{−1/2H_i})^{−1} ).
3.2 Countable-state space Markov Chain

The proofs of Theorems 3.2 and 3.4 require the state space of the MC to be finite, so we can upper bound the tail of the distribution of T_i using the maximal hitting time H_i. In fact, these results can be extended to many countably infinite state space Markov chains as well. We prove that the tail of the distribution of T_i decays exponentially for any node i in any countable state space Markov chain that satisfies Assumption 3.5.

Assumption 3.5. The Markov chain {X_t} is aperiodic and irreducible. There exists a Lyapunov function V : Σ → R_+ and constants ν_max, γ > 0, and b ≥ 0, that satisfy the following conditions:

1. The set B = {x ∈ Σ : V(x) ≤ b} is finite,
2. For all x, y ∈ Σ such that P(X_{t+1} = y | X_t = x) > 0, |V(y) − V(x)| ≤ ν_max,
3. For all x ∈ Σ such that V(x) > b, E[V(X_{t+1}) − V(X_t) | X_t = x] < −γ.

At first glance, this assumption may seem very restrictive. But in fact, it is quite reasonable: by the Foster-Lyapunov criteria [20], a countable state space Markov chain is positive recurrent if and only if there exists a Lyapunov function V : Σ → R_+ that satisfies conditions (1) and (3), as well as (2′): E[V(X_{t+1}) | X_t = x] < ∞ for all x ∈ Σ. Assumption 3.5 has (2), which is a restriction of condition (2′). The existence of the Lyapunov function allows us to decompose the state space into sets B and B^c such that for all nodes x ∈ B^c, there is an expected decrease in the Lyapunov function in the next step or transition. Therefore, for all nodes in B^c, there is a negative drift towards set B. In addition, in any single step, the random walk cannot escape "too far".
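As a standard illustration (ours, not taken from the paper): for a discrete-time M/M/1-style queue in which the length increases by 1 with probability λ and decreases by 1 with probability μ > λ, the choice V(x) = x satisfies Assumption 3.5:

```latex
% Illustrative drift check for a birth-death queue with V(x) = x (assumes mu > lambda)
\mathbb{E}[V(X_{t+1}) - V(X_t) \mid X_t = x] = \lambda - \mu < 0 \quad \text{for } x > 0,
\qquad \nu_{\max} = 1, \quad \gamma = \mu - \lambda, \quad b = 0, \quad B = \{0\}.
```

Here the set B is the single state {0}, and condition (2) holds because each step changes the queue length by at most one.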
Using the concentration bounds for the countable state space settings, we can prove the following
theorems that parallel the theorems stated for the finite state space setting. The formal statements
are restricted to nodes in B = {i ∈ Ω : V(i) ≤ b}. This is not actually restrictive, as for any i such
that V(i) > b, we can define a new Lyapunov function where V′(i) = b, and V′(j) = V(j) for all
j ≠ i. Then B′ = B ∪ {i}, and V′ still satisfies Assumption 3.5 for new values of ν_max, γ, and b.
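As a concrete check of these conditions (our own illustration; the up-step probability p = 0.3 and the boundary convention are hypothetical), the sketch below verifies that V(x) = x works for a birth-death walk on Z+ when p < 1/2: single steps change V by at most ν_max = 1, and off the finite set B = {0} the expected drift is 2p − 1 < 0.

```python
# Hypothetical example: V(x) = x as a Lyapunov function for a birth-death
# walk on Z+ with up-probability p < 1/2 and a reflecting boundary at 0.
p = 0.3

def expected_drift(x):
    # E[V(X_{t+1}) - V(X_t) | X_t = x] for V(x) = x
    if x == 0:
        return p  # up with probability p, otherwise stay at 0
    return p * 1 + (1 - p) * (-1)  # up with prob p, down otherwise

gamma = 1 - 2 * p  # drift magnitude: positive whenever p < 1/2
# Assumption 3.5 then holds with nu_max = 1, b = 0, and B = {0}.
```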
Theorem 3.6. For a Markov chain satisfying Assumption 3.5, for any i ∈ B, with probability
greater than 1 − δ, for all iterations t,
$$\hat{\pi}_i^{(t)} - \pi_i \;\le\; \hat{\pi}_i^{(t)} \left( \epsilon + \frac{2^{-\theta^{(t)}/R_i}}{4(1-\epsilon)\big(1 - 2^{-1/R_i}\big)} \right),$$
where R_i is defined such that
$$R_i = O\!\left( \frac{H_i^B \, e^{2 \eta \nu_{\max}}}{(1-\rho)\big(e^{\eta \nu_{\max}} - \rho\big)} \right),$$
and H_i^B is the maximal hitting time over the Markov chain with its state space restricted to the
subset B. The scalars ρ and η are functions of γ and ν_max (defined in [9]).
Theorem 3.7. For a Markov chain satisfying Assumption 3.5,
(a) For any node i ∈ B such that π_i < (1 − ε)Δ/(1 + ε), with probability greater than 1 − δ, the
total number of steps used by the algorithm is bounded above by
$$\sum_{k=1}^{t_{\max}} N^{(k)} \hat{T}_i^{(k)} \;\le\; O\!\left( \frac{\ln(1/\delta)}{\epsilon^2} \cdot \frac{R_i}{1 - 2^{-1/R_i}} \, \ln\!\left( \frac{1}{\pi_i} \Big( 1 + \frac{\epsilon}{(1-\epsilon)\Delta} \Big)^{-1} \right) \right).$$
(b) For all nodes i ∈ B, with probability greater than 1 − δ, the total number of steps used by the
algorithm is bounded above by
$$\sum_{k=1}^{t_{\max}} N^{(k)} \hat{T}_i^{(k)} \;\le\; O\!\left( \frac{\ln(1/\delta)}{\epsilon^2} \cdot \frac{R_i \ln\!\big( \tfrac{1}{\Delta}(1+\epsilon) \big)}{1 - 2^{-1/R_i}} \right).$$
In order to prove these theorems, we build upon results of [9], and establish that return times have
exponentially decaying tails for countable state space MCs that satisfy Assumption 3.5.
4 Example applications: PageRank and MM1 Queue
PageRank is frequently used to compute the importance of web pages in the web graph. Given a
scalar parameter β and a stochastic transition matrix P, let {Xt} be the Markov chain with transition
matrix (β/n) 1·1^T + (1 − β)P. In every step, there is a β probability of jumping uniformly randomly
to any other node in the network. PageRank is defined as the stationary distribution of this Markov
chain. We apply our algorithm to compute PageRank on a random graph generated according to the
configuration model with a power law degree distribution, for β = 0.15.
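As a sanity check on this construction (a sketch with a made-up four-node graph, not the configuration-model experiment from the text), one can build the PageRank transition matrix and recover its stationary distribution by power iteration:

```python
# Sketch: PageRank chain M = (beta/n) * 1 1^T + (1 - beta) * P on a toy
# 4-node graph (the graph itself is made up for illustration).
beta, n = 0.15, 4
P = [[0.0, 1.0, 0.0, 0.0],   # each row of P is uniform over out-links
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0, 0.0]]
M = [[beta / n + (1 - beta) * P[i][j] for j in range(n)] for i in range(n)]

pi = [1.0 / n] * n
for _ in range(500):         # power iteration: pi <- pi * M
    pi = [sum(pi[i] * M[i][j] for i in range(n)) for j in range(n)]
```

The random-jump term guarantees a spectral gap of at least β, so the iteration converges geometrically; the local algorithm of the text instead estimates a single entry of pi without touching the whole chain.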
In queuing theory, Markov chains are used to model the queue length at a server, which evolves over
time as requests arrive and are processed. We use the basic MM1 queue, equivalent to a random
walk on Z+ . Assume we have a single server where the requests arrive according to a Poisson
process, and the processing time for a single request is distributed exponentially. The queue length
is modeled with the Markov chain shown in Figure 1(b), where p is the probability that a new request
arrives before the current request is fully processed.
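For this chain the stationary distribution has a familiar closed form; the sketch below (our own check, with the boundary convention P(0,0) = 1 − p and a hypothetical p = 0.3) verifies it through detailed balance:

```python
# Our own check: for the MM1-style walk on Z+ with up-probability p < 1/2,
# detailed balance pi_x * p = pi_{x+1} * (1 - p) gives the geometric law
# pi_i = (1 - rho) * rho**i with rho = p / (1 - p).
p = 0.3
rho = p / (1 - p)
pi = [(1 - rho) * rho**i for i in range(200)]  # truncated geometric tail
```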
Figures 2(a) and 2(b) plot π̂_i^{(t_max)} and π̃_i^{(t_max)} for each node in the PageRank or MM1 queue MC,
respectively. For both examples, we choose algorithm parameters Δ = 0.02, ε = 0.15, and δ = 0.2.
[Figure 2 panels: (a) PageRank Estimates and (b) MM1 Estimates plot stationary probability against anchor node ID, comparing the true value π with the estimates π̂ and π̃; (c) PageRank Total Steps vs. Δ and (d) MM1 Queue Total Steps vs. Δ plot the total steps taken by three nodes against Δ on a log-log scale.]
Figure 2: Simulations showing results of our algorithm applied to PageRank and MM1 Queue setting
Observe that the algorithm indeed obtains close estimates for nodes such that π_i > Δ, and for nodes
such that π_i ≤ Δ, the algorithm successfully outputs 0 (i.e., π̃_i ≤ Δ). We observe that the method
for bias correction makes significant improvements for estimating PageRank. We computed the
fundamental matrix for the PageRank MC and observed that Zmax(i) ≈ 1 for all i.
Figures 2(c) and 2(d) show the computation time, or total number of random walk steps taken by
our algorithm, as a function of ?. Each figure shows the results from three different nodes, chosen
to illustrate the behavior on nodes with varying π_i. The figures are shown on a log-log scale. The
results confirm that the computation time of the algorithm is upper bounded by O(1/Δ), which is
linear when plotted in log-log scale. When Δ > π_i, the computation time behaves as 1/Δ. When
Δ < π_i, the computation time grows slower than O(1/Δ), and is close to constant with respect to Δ.
5 Summary
We proposed a local algorithm for estimating the stationary probability of a node in a MC. The
algorithm is a truncated Monte Carlo method, sampling return paths to the node of interest. The
algorithm has many practical benefits. First, it can be implemented easily in a distributed and parallelized fashion, as it only involves sampling random walks using neighbor queries. Second, it only
uses a constant size neighborhood around the node of interest, upper bounded by 1/Δ. Third, it only
performs computation at the node of interest. The computation only involves counting and taking
an average, thus it is simple and memory efficient. We guarantee that the estimate π̂_i^{(t)} is an upper
bound for π_i with high probability. For MCs that mix well, the estimate will be tight with high
probability for nodes such that π_i > Δ. The computation time of the algorithm is upper bounded by
parameters of the algorithm, and constant with respect to the size of the state space. Therefore, this
algorithm is suitable for MCs with large state spaces.
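The core loop can be sketched as follows (a minimal reimplementation of the idea described above, with our own parameter names; unlike the actual algorithm, this sketch fixes the truncation threshold in advance rather than adapting it across iterations):

```python
# Minimal sketch of the truncated Monte Carlo estimate of pi_i: sample
# return paths to node i, truncate each at theta steps, and return
# 1 / (average truncated return time), which biases the estimate upward.
import random

def estimate_pi(step, i, num_paths=20000, theta=200, seed=0):
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(num_paths):
        x, t = step(i, rng), 1
        while x != i and t < theta:
            x, t = step(x, rng), t + 1
        total_steps += t
    return num_paths / total_steps

# Toy chain: a 3-state lazy cycle (doubly stochastic, so pi_0 = 1/3).
def step(x, rng):
    return x if rng.random() < 0.5 else (x + 1) % 3

est = estimate_pi(step, 0)  # should land close to 1/3
```

Only the `step` neighbor query touches the chain, which is what makes the method local and easy to distribute.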
Acknowledgements: This work is supported in parts by ARO under MURI awards 58153-MA-MUR and
W911NF-11-1-0036, and grant 56549-NS, and by NSF under grant CIF 1217043 and a Graduate Fellowship.
References
[1] B. Cipra. The best of the 20th century: Editors name top 10 algorithms. SIAM News, 33(4):1,
May 2000.
[2] T.M. Semkow, S. Pomm, S. Jerome, and D.J. Strom, editors. Applied Modeling and Computations in Nuclear Science. American Chemical Society, Washington, DC, 2006.
[3] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing
order to the web. Technical Report 1999-66, November 1999.
[4] S. Asmussen and P. Glynn. Stochastic Simulation: Algorithms and Analysis (Stochastic Modelling and Applied Probability). Springer, 2010.
[5] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller. Equation of
state calculations by fast computing machines. The Journal of Chemical Physics, 21:1087,
1953.
[6] W.K. Hastings. Monte Carlo sampling methods using Markov chains and their applications.
Biometrika, 57(1):97?109, 1970.
[7] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, 1996.
[8] D. Aldous and J. Fill. Reversible Markov chains and random walks on graphs: Chapter 2
(General Markov chains). Book in preparation. URL: http://www.stat.berkeley.edu/~aldous/RWG/Chap2.pdf, pages 7, 19-20, 1999.
[9] B. Hajek. Hitting-time and occupation-time bounds implied by drift analysis with applications.
Advances in Applied probability, pages 502?525, 1982.
[10] P. Diaconis and L. Saloff-Coste. What do we know about the Metropolis algorithm? Journal
of Computer and System Sciences, 57(1):20?36, 1998.
[11] P. Diaconis. The Markov chain Monte Carlo revolution. Bulletin of the American Mathematical
Society, 46(2):179?205, 2009.
[12] D.A. Levin, Y. Peres, and E.L. Wilmer. Markov chains and mixing times. Amer Mathematical
Society, 2009.
[13] G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of the 12th international conference on World Wide Web, pages 271?279, New York, NY, USA, 2003.
[14] D. Fogaras, B. Racz, K. Csalogany, and T. Sarlos. Towards scaling fully personalized PageRank: Algorithms, lower bounds, and experiments. Internet Mathematics, 2(3):333?358, 2005.
[15] K. Avrachenkov, N. Litvak, D. Nemirovsky, and N. Osipova. Monte Carlo methods in PageRank computation: When one iteration is sufficient. SIAM Journal on Numerical Analysis,
45(2):890?904, 2007.
[16] B. Bahmani, A. Chowdhury, and A. Goel. Fast incremental and personalized PageRank. Proc.
VLDB Endow., 4(3):173?184, December 2010.
[17] C. Borgs, M. Brautbar, J. Chayes, and S.-H. Teng. Sublinear time algorithm for PageRank
computations and related applications. CoRR, abs/1202.2771, 2012.
[18] SP. Meyn and RL. Tweedie. Markov chains and stochastic stability. Springer-Verlag, 1993.
[19] C.E. Lee, A. Ozdaglar, and D. Shah. Computing the stationary distribution locally. MIT LIDS
Report 2914, Nov 2013. URL: http://www.mit.edu/~celee/LocalStationaryDistribution.pdf.
[20] F.G. Foster. On the stochastic matrices associated with certain queuing processes. The Annals
of Mathematical Statistics, 24(3):355?360, 1953.
Optical Implementation of a Self-Organizing
Feature Extractor
Dana Z. Anderson*, Claus Benkert, Verena Hebler, Ju-Seog Jang,
Don Montgomery, and Mark Saffman.
Joint Institute for Laboratory Astrophysics, University of Colorado and the
Department of Physics, University of Colorado, Boulder Colorado 80309-0440
Abstract
We demonstrate a self-organizing system based on photorefractive ring oscillators. We employ the system in two ways that
can both be thought of as feature extractors; one acts on a set
of images exposed repeatedly to the system strictly as a linear
feature extractor, and the other serves as a signal demultiplexer for fiber optic communications. Both systems implement
unsupervised competitive learning embedded within the mode
interaction dynamics between the modes of a set of ring oscillators. After a training period, the modes of the rings become associated with the different image features or carrier
frequencies within the incoming data stream.
1 Introduction
Self-organizing networks (Kohonen, Hertz, Domany) discover features or qualities about their input environment on their own; they learn without a teacher
making explicit what is to be learned. This property reminds us of several
ubiquitous behaviors we see in the physical and natural sciences such as pattern formation, morphogenesis and phase transitions (Domany). While in the
natural case one is usually satisfied simply to analyze and understand the behavior of a self-organizing system, we usually have a specific function in mind
that we wish a neural network to perform. That is, in the network case we wish
to synthesize a system that will perform the desired function. Self-organizing
principles are particularly valuable when one does not know ahead of time exactly what to expect from the input to be processed and when it is some property of the input itself that is of interest. For example, one may wish to
determine some quality about the input statistics - this one can often do by applying self-organization principles. However, when one wishes to attribute
some meaning to the data, self-organization principles are probably poor candidates for this task.
It is the behavioral similarity between self-organizing network models and
physical systems that has lead us to investigate the possibility of implementing
a self-organizing network function by designing the dynamics for a set of optical oscillators. Modes of sets of oscillators undergo competition (Anderson,
Benkert) much like that employed in competitive learning network models.
Using photorefractive elements, we have tailored the dynamics of the mode interaction to perfonn a learning task. A physical optical implementation of selforganized learning serves two functions. Unlike a computer simulation, the
physical system must obey certain physical laws just like a biological system
does. We have in mind the consequences of energy conservation, finite gain
and the effects of noise. Therefore, we might expect to learn something about
general principles applicable to biological systems from our physical versions.
Second, there are some applications where an optical system serves as an ideal
"front end" to signal processing.
Here we take a commonly used supervised approach for extracting features
from a stream of images and demonstrate how this task can be done in a selforganizing manner. The conventional approach employs a holographic correlator (Vander Lugt). In this technique, various patterns are chosen for recognition by the optical system and then recorded in holographic media using
angle-encoded reference beams. When a specific pattern is presented to the holographic correlator, the output is determined by the correlation between the
presented pattern and the patterns that have been recorded as holograms during the 'learning phase'. The angles and intensities of the reconstructed reference beams identify the features present in the pattern. Because the
processing time-scale in holographic systems is determined by the time necessary for light to scatter off of the holographic grating, the optical correlation
takes place virtually instantaneously. It is the speed of this correlation that
makes the holographic approach so interesting.
While its speed is an asset, the holographic correlator approach to feature extraction from images is a supervised approach to the problem: an external supervisor must choose the relevant image features to store in the correlator
holograms. Moreover the supervisor must provide an angle-encoded reference
beam for each stored feature. For many applications, it is desirable to have an
adaptive system that has the innate capacity to discover, in an unsupervised
fashion, the underlying structure within the input data.
A photorefractive ring resonator circuit that learns to extract spatially orthogonal features from images is illustrated schematically in figure 1. The resonator rings in figure 1 are constructed physically from optical fiber cables. The
resonator is self-starting and is pumped by images containing the input data
(White). The resonator learns to associate each feature in the input data set
with one and only one of the available resonator rings. In other words, when
the proper feature is present in the input data, the resonator ring with which
it has become associated will light up. When this feature is absent from the
input data, the corresponding resonator ring will be dark.
Figure 1: Schematic diagram of the self-organizing photorefractive ring resonator. The two signal frequencies, ω1 and ω2, are identical when the circuit is used as a feature extractor and are separated by 280 MHz when the system is used as a frequency demultiplexer.

The self-organizing capabilities of this system arise from the nonlinear dynamics of competition between resonator modes for optical energy within the common photorefractive pump crystal (Benkert). We have used this system to accomplish two optical signal processing tasks. In the first case, the resonator
can learn to distinguish between two spatially orthogonal input images that
are impressed on the common pump beam in a piece-wise constant fashion. In
the second case, frequency demultiplexing of a composite input image constructed from two spatially orthogonal image components of different optical
frequencies can be accomplished (Saffman, 1991b). In both cases, the optical
system has no a priori knowledge of the input data and self-discovers the important structural elements.
2 A Self-Organizing Photorefractive Ring Resonator
The experimental design that realizes an optical self-organizing feature extractor is shown in figure 1. The optical system consists of a two ring, multimode,
unidirectional photorefractive ring resonator in which the rings are spatially
distinct. The resonator rings are defined by loops of 100 µm core multimode optical fiber. The gain for both modes is provided by a common BaTiO3 crystal
that is pumped by optical images presented as speckle patterns from a single
100 µm multimode optical fiber. The light source is a single frequency argon-ion
laser operating at 514.5 nm. The second BaTiO3 crystal provides reflexive coupling within the resonator, which ensures that each resonator ring becomes associated with only one input feature.
The input images are generated by splitting the source beam and passing it
through two acousto-optic modulator cells. The optical signals generated by
the acousto-optic modulators are then focused into a single 1.5 meter long step-index, 100 µm core, multimode optical fiber. The difference in the angle of incidence for the two signal beams at the fiber end face is sufficient to ensure that
the corresponding speckle pattern images are spatially orthogonal (Saffman,
1991a). The acousto-optic cells are used in a conventional fashion to shift the
optical frequency of the carrier signal, and are also used as shutters to impress
time modulated information on the input signals. When the resonator is operating as a feature extractor, both input signals are carried on the same optical
frequency, but are presented to the resonator sequentially. The presentation
cycle time of 500 Hz was chosen to be much smaller than the characteristic
time constant of the BaTiO3 pump crystal. When operating as a frequency demultiplexer, the acousto-optic modulators shift the optical carrier frequencies
of the input signals such that they are separated by 280 MHz. The two input
carrier signals are time modulated and mixed into the optical fiber to form a
composite image composed of two spatially orthogonal speckle patterns having
different optical frequencies. This composite image is used as the pump beam
for the resonator.
3 Unsupervised Competitive Learning
Correlations between the optical electric fields in images establish the criterion
for a measure of similarity between different image features. The best measure of these correlations is the inner product between the complex-valued spatial electric field distributions across the input images, S12 = ⟨E1, E2⟩.
When S12 = 0 the images are uncorrelated and we define such images as spatially orthogonal. When the resonator begins to oscillate, neither resonator
ring has any preference for a particular input feature or frequency. The system
modes have no internal bias (i.e., no a priori knowledge) for the input data. As
the gain for photorefractive two-beam coupling in the common BaTiO3 pump
crystal saturates, the two resonator rings begin to compete with each other for
the available pump energy. This competitive coupling leads to 'winner-takes-all' dynamics in the resonator in which each resonator ring becomes associated
with one or the other spatially orthogonal input images. In other words, the
rings become labels for each spatially orthogonal feature present in the input
image set.
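This correlation measure is easy to illustrate numerically (our own toy model of the fields, not measured data): independent random-phasor "speckle" vectors have normalized overlap of order 1/sqrt(N), i.e., they are nearly orthogonal in high dimensions.

```python
# Toy model: two independent random-phasor "speckle" fields of N unit-
# magnitude complex samples; their normalized overlap S12 is O(1/sqrt(N)).
import cmath, random

rng = random.Random(1)
N = 4096
E1 = [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(N)]
E2 = [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(N)]

def overlap(a, b):
    # normalized inner product |<a, b>| / N (each sample has unit modulus)
    return abs(sum(x.conjugate() * y for x, y in zip(a, b))) / N

S11 = overlap(E1, E1)  # self-overlap: 1
S12 = overlap(E1, E2)  # cross-overlap: small in high dimensions
```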
Phenomenologically, the dynamics of this mode competition can be described
by Lotka-Volterra equations (Benkert, Lotka, Volterra),
$$\frac{dI_{i,p}}{dt} = I_{i,p}\left( a_{i,p} - \beta_{i,p} I_{i,p} - \sum_{(j,l)\neq(i,p)} \theta_{i,p;j,l}\, I_{j,l} \right),$$
where I_{i,p} is the intensity of the oscillating energy in ring i due to energy
transferred from the input feature p, a_{i,p} is the gain for two-beam coupling between ring i and feature p, β_{i,p} is the self-saturation coefficient, and θ_{i,p;j,l} are
the cross-saturation coefficients. The self-organizing dynamics are determined
by the values of the cross coupling coefficients. Thus the competitive learning
algorithm that drives the self-organization in this optical system is embedded
Figure 2: Reflexive gain interaction. A fraction, δ, of the incident intensity is removed from the resonator beam, and then coupled back into
itself by photorefractive two-beam coupling. This ensures 'winner-takes-all' competitive dynamics between the resonator rings.
within the nonlinear dynamics of mode competition in the pump crystal.
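The winner-takes-all character of these dynamics is easy to see numerically. The following toy integration (our own parameters, a = 1 and cross-saturation θ = 2, not measured values) shows that when cross-saturation exceeds self-saturation, the initially stronger mode suppresses the other:

```python
# Toy Euler integration of two-mode Lotka-Volterra competition:
#   dI1/dt = I1 * (a - I1 - theta * I2)   (and symmetrically for I2).
# With theta > 1, coexistence is unstable and one mode wins.
a, theta, dt = 1.0, 2.0, 0.01
I1, I2 = 0.11, 0.10  # ring 1 starts with slightly more energy
for _ in range(20000):  # integrate to t = 200
    d1 = I1 * (a - I1 - theta * I2)
    d2 = I2 * (a - I2 - theta * I1)
    I1, I2 = I1 + dt * d1, I2 + dt * d2
```

By the end of the run the stronger mode has saturated near its gain limit while the weaker mode has collapsed toward zero.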
Once the system has learned, the spatially orthogonal features in the training
set are represented as holograms in the BaTiO3 pump crystal. These holograms act as linear projection operators, and any new image constructed from
features in the training set will be projected in a linear fashion onto the learned
feature basis set. The relative intensity of light oscillating in each ring corresponds to the fraction of each learned feature in the new image. Thus, the resonator functions as a feature extractor (Kohonen).
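The readout stage can be mimicked with a few lines (our own toy version, with trivially orthonormal vectors standing in for the learned speckle features): a new image decomposes linearly, and the squared projections play the role of the ring intensities.

```python
# Toy version of the readout: project a new image onto two learned,
# orthonormal features; squared coefficients mimic the ring intensities.
f1 = [1.0, 0.0, 0.0, 0.0]  # stand-ins for the learned feature patterns
f2 = [0.0, 1.0, 0.0, 0.0]

def project(image, feature):
    return sum(x * y for x, y in zip(image, feature))

image = [0.6 * a + 0.8 * b for a, b in zip(f1, f2)]  # mixture of features
I1 = project(image, f1) ** 2  # fraction of feature 1: 0.36
I2 = project(image, f2) ** 2  # fraction of feature 2: 0.64
```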
4 Reflexive Gain
If each resonator ring was single mode, then competitive dynamics in the common pump crystal would be sufficient for feature extraction. However, a multimode ring system allows stability for certain pathological feature extracting
states. The multimode character of each resonator ring can permit simultaneous oscillation of two spatially orthogonal modes within a single ring. Ostensibly, the system is performing feature extraction, but this form of output is not
useful for further processing. These pathological states are excluded by introducing reflexive gain into the cavity.
Any system that twists back upon itself and closes a loop is referred to as reflexive (Hofstadter, pg. 3). A reflexive gain interaction is achieved by removing
a portion of the oscillating energy from each ring and then coupling it back into
the same ring by photorefractive two-beam coupling, as illustrated in figure 2.
The standard equations for photorefractive two-beam coupling (Kukhtarev,
Hall) can be used to derive an expression for the steady-state transmission, T,
through the reflexive gain element in terms of the number of spatially orthogonal modes, N, that are oscillating simultaneously within a single ring,
Here, exp(G0) is the small signal gain and δ is the fraction of light removed
Figure 3: Time evolution of the intensities within each resonator ring due to ω1 (I1) and ω2 (I2). After about 30 seconds, the system has learned to demultiplex the two input frequencies. Ring 1 has become associated with ω1 and Ring 2 has become associated with ω2. The contrast ratio between I1 and I2 in each ring is about 40:1.
from the resonator. The transmission decreases for N > 1 causing larger cavity
losses for the case of simultaneous oscillation of spatially orthogonal modes
within a single ring. Therefore, the system favors 'winner-takes-all' dynamics
over other pathological feature extracting states.
5 Experimental Results
The self-organizing dynamics within the optical circuit require several seconds
to reach steady state. In the case of frequency demultiplexing, the dynamical
evolution of the system was observed by detecting the envelopes of the carrier
modulation, as shown in figure 3. In the case of the feature extractor, transient
system dynamics were observed by synchronizing the observation with the
modulation of one feature or the other, as shown in figure 4. The frequency demultiplexing (figure 3) and feature extracting (figure 4) states develop a high
contrast ratio and are stable for as long as the pump beam is present. Measurements with a spectrum analyzer show an output contrast ratio of better than
40:1 in the frequency demultiplexing case.
Figure 4: Time evolution of the intensities in each resonator ring due to the two input pictures. The system requires about 30 seconds to learn to extract features from the input images. Picture 1 is associated with Ring 1 and picture 2 is associated with Ring 2.

The circuit described here extracts spatially orthogonal features while continuously adapting to slow variations in the spatial mode superposition due to drifts in the carrier frequency or perturbations to the fibers. Thus, the system
is adaptive as well as unsupervised.
6 Summary
An optical implementation of a self-organizing feature extractor that is adaptive has been demonstrated. The circuit exhibits the desirable dynamical property that is often referred to in the parlance of the neural networks as
'unsupervised learning'. The essential properties of this system arise from the
nonlinear dynamics of mode competition within the optical ring resonator. The
learning algorithm is embedded in these dynamics and they contribute to its
capacity to adapt to slow changes in the input signal. The circuit learns to associate different spatially orthogonal images with different rings in an optical
resonator. The learned feature set can represent orthogonal basis vectors in an
image or different frequencies in a multiplexed optical signal. Because a wide
variety of information can be encoded onto the input images presented to the
feature extractor described here, it has the potential to find general application
for tasks where the speed and adaptability of self-organizing and all-optical
processing is desirable.
Acknowledgements
We are grateful for the support of both the Office of Naval Research, contract
#N00014-91.J-1212 and the Air Force Office of Scientific Research, contract
#AFOSR-90-0198. Mark Saffman would like to acknowledge support provided
by a U.S. Air Force Office of Scientific Research laboratory graduate fellowship.
References
D.Z. Anderson and R. Saxena, Theory of Multimode Operation of a Unidirectional Ring Oscillator having Photorefractive Gain: Weak Field Limit, J. Opt.
Soc. Am. B, 4, 164 (1987).
C. Benkert and D.Z. Anderson, Controlled competitive dynamics in a photorefractive ring oscillator: 'Winner-takes-all" and the "voting-paradox" dynamics,
Phys. Rev. A, 44,4633 (1991).
E. Domany, J.L. van Hemmen and K Schulten, eds., Models of Neural Networks; Springer-Verlag (1991).
T.J. Hall, R. Jaura, L.M. Connors and P.D. Foote, The Photorefractive Effect - A Review; Prog. Quant. Electr., 10, 77 (1985).
J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of Neural Computation; Addison-Wesley (1991).
D. R. Hofstadter, Metamagical Themas: Questing for the Essence of Mind and
Pattern; Bantam Books (1985).
827
828
Anderson, Benkert, Hebler, Jang, Montgomery, and Saffman
PART XII
LEARNING AND GENERALIZATION
with Strategic Buyers
Kareem Amin
University of Pennsylvania
[email protected]
Afshin Rostamizadeh
Google Research
[email protected]
Umar Syed
Google Research
[email protected]
Abstract
Inspired by real-time ad exchanges for online display advertising, we consider the problem of inferring a buyer's value distribution for a good when the buyer is repeatedly interacting with a seller through a posted-price mechanism. We model the buyer as a strategic agent, whose goal is to maximize her long-term surplus, and we are interested in mechanisms that maximize the seller's long-term revenue. We define the natural notion of strategic regret: the lost revenue as measured against a truthful (non-strategic) buyer. We present seller algorithms that are no-(strategic)-regret when the buyer discounts her future surplus, i.e. the buyer prefers showing advertisements to users sooner rather than later. We also give a lower bound on strategic regret that increases as the buyer's discounting weakens and shows, in particular, that any seller algorithm will suffer linear strategic regret if there is no discounting.
1 Introduction
Online display advertising inventory (e.g., space for banner ads on web pages) is often sold via automated transactions on real-time ad exchanges. When a user visits a web page whose advertising inventory is managed by an ad exchange, a description of the web page, the user, and other relevant properties of the impression, along with a reserve price for the impression, is transmitted to bidding servers operating on behalf of advertisers. These servers process the data about the impression and respond to the exchange with a bid. The highest bidder wins the right to display an advertisement on the web page to the user, provided that the bid is above the reserve price. The amount charged the winner, if there is one, is settled according to a second-price auction. The winner is charged the maximum of the second-highest bid and the reserve price.
Ad exchanges have been a boon for advertisers, since rich and real-time data about impressions allow them to target their bids to only those impressions that they value. However, this precise targeting has an unfortunate side effect for web page publishers. A nontrivial fraction of ad exchange auctions involve only a single bidder. Without competitive pressure from other bidders, the task of maximizing the publisher's revenue falls entirely to the reserve price setting mechanism. Second-price auctions with a single bidder are equivalent to posted-price auctions. The seller offers a price for a good, and a buyer decides whether to accept or reject the price (i.e., whether to bid above or below the reserve price).
In this paper, we consider online learning algorithms for setting prices in posted-price auctions where the seller repeatedly interacts with the same buyer over a number of rounds, a common occurrence in ad exchanges where the same buyer might be interested in buying thousands of user impressions daily. In each round t, the seller offers a good to a buyer for price p_t. The buyer's value v_t for the good is drawn independently from a fixed value distribution. Both v_t and the value distribution are known to the buyer, but neither is observed by the seller. If the buyer accepts price p_t, the seller receives revenue p_t, and the buyer receives surplus v_t − p_t. Since the same buyer participates in the auction in each round, the seller has the opportunity to learn about the buyer's value distribution and set prices accordingly. Notice that in worst-case repeated auctions there is no such opportunity to learn, while standard Bayesian auctions assume knowledge of a value distribution, but avoid addressing how or why the auctioneer was ever able to estimate this distribution.
Taken as an online learning problem, we can view this as a "bandit" problem [18, 16], since the revenue for any price not offered is not observed (e.g., even if a buyer rejects a price, she may well have accepted a lower price). The seller's goal is to maximize his expected revenue over all T rounds. One straightforward way for the seller to set prices would therefore be to use a no-regret bandit algorithm, which minimizes the difference between the seller's revenue and the revenue that would have been earned by offering the best fixed price p* in hindsight for all T rounds; for a no-regret algorithm (such as UCB [3] or EXP3 [4]), this difference is o(T). However, we argue that traditional no-regret algorithms are inadequate for this problem. Consider the motivations of a buyer interacting with an ad exchange where the prices are set by a no-regret algorithm, and suppose for simplicity that the buyer has a fixed value v_t = v for all t. The goal of the buyer is to acquire the most valuable advertising inventory for the least total cost, i.e., to maximize her total surplus Σ_t (v − p_t), where the sum is over rounds where the buyer accepts the seller's price. A naive buyer might simply accept the seller's price p_t if and only if v_t ≥ p_t; a buyer who behaves this way is called truthful. Against a truthful buyer any no-regret algorithm will eventually learn to offer prices p_t ≈ v on nearly all rounds. But a more savvy buyer will notice that if she rejects prices in earlier rounds, then she will tend to see lower prices in later rounds. Indeed, suppose the buyer only accepts prices below some small amount ε. Then any no-regret algorithm will learn that offering prices above ε results in zero revenue, and will eventually offer prices below that threshold on nearly all rounds. In fact, the smaller the learner's regret, the faster this convergence occurs. If v ≫ ε then the deceptive buyer strategy results in a large gain in total surplus for the buyer, and a large loss in total revenue for the seller, relative to the truthful buyer. While the no-regret guarantee certainly holds (in hindsight, the best price is indeed ε), it seems fairly useless.
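This incentive failure is easy to reproduce numerically. The sketch below is illustrative and not from the paper: the greedy explore-then-exploit seller, the price grid, and all parameter values are assumptions, standing in for a generic no-regret learner. It contrasts a truthful buyer (accept iff price ≤ v) with a deceptive buyer who only accepts prices up to a small threshold ε.

```python
import random

def run_auction(prices, v, threshold, T, seed=0):
    """Posted-price rounds against a greedy seller that mostly plays the
    empirically best price. The buyer accepts iff price <= threshold.
    Returns (seller revenue, undiscounted buyer surplus)."""
    rng = random.Random(seed)
    rev = {p: 0.0 for p in prices}   # cumulative revenue per price
    cnt = {p: 0 for p in prices}     # times each price was offered
    revenue = surplus = 0.0
    for t in range(T):
        if t < len(prices):              # try every price once
            p = prices[t]
        elif rng.random() < 0.05:        # occasional exploration
            p = rng.choice(prices)
        else:                            # play the empirically best price
            p = max(prices, key=lambda q: rev[q] / cnt[q])
        a = 1 if p <= threshold else 0
        rev[p] += a * p
        cnt[p] += 1
        revenue += a * p
        surplus += a * (v - p)
    return revenue, surplus

grid = [round(0.1 * k, 1) for k in range(1, 10)]
truthful = run_auction(grid, v=0.9, threshold=0.9, T=2000)   # accept iff p <= v
deceptive = run_auction(grid, v=0.9, threshold=0.1, T=2000)  # only accept cheap prices
```

Against this seller the threshold buyer sacrifices a little early surplus but ends up paying roughly ε = 0.1 per item, collecting far more total surplus than the truthful buyer while slashing the seller's revenue.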
In this paper, we propose a definition of strategic regret that accounts for the buyer's incentives, and give algorithms that are no-regret with respect to this definition. In our setting, the seller chooses a learning algorithm for selecting prices and announces this algorithm to the buyer. We assume that the buyer will examine this algorithm and adopt whatever strategy maximizes her expected surplus over all T rounds. We define the seller's strategic regret to be the difference between his expected revenue and the expected revenue he would have earned if, rather than using his chosen algorithm to set prices, he had instead offered the best fixed price p* on all rounds and the buyer had been truthful. As we have seen, this revenue can be much higher than the revenue of the best fixed price in hindsight (in the example above, p* = v). Unless noted otherwise, throughout the remainder of the paper the term "regret" will refer to strategic regret.
We make one further assumption about buyer behavior, which is based on the observation that in many important real-world markets, and particularly in online advertising, sellers are far more willing to wait for revenue than buyers are willing to wait for goods. For example, advertisers are often interested in showing ads to users who have recently viewed their products online (this practice is called "retargeting"), and the value of these user impressions decays rapidly over time. Or consider an advertising campaign that is tied to a product launch. A user impression that is purchased long after the launch (such as the release of a movie) is almost worthless. To model this phenomenon we multiply the buyer's surplus in each round by a discount factor: If the buyer accepts the seller's price p_t in round t, she receives surplus γ_t(v_t − p_t), where {γ_t} is a nonincreasing sequence contained in the interval (0, 1]. We call T_γ = Σ_{t=1}^T γ_t the buyer's "horizon", since it is analogous to the seller's horizon T. The buyer's horizon plays a central role in our analysis.
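For geometric discounting, the case analyzed later, the horizon has a simple closed form. The short sketch below is an illustrative numeric check, not code from the paper.

```python
def buyer_horizon(gamma, T):
    """Effective horizon T_gamma = sum_{t=1..T} gamma^(t-1)."""
    return sum(gamma ** (t - 1) for t in range(1, T + 1))

# For gamma < 1 the geometric series gives T_gamma = (1 - gamma^T) / (1 - gamma),
# which is bounded by 1/(1 - gamma) no matter how large T grows; with gamma = 1
# there is no discounting and T_gamma = T.
```

So a strictly discounting buyer automatically satisfies T_γ = o(T), while an undiscounted buyer has T_γ = T.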
Summary of results: In Sections 4 and 5 we assume that discount rates decrease geometrically: γ_t = γ^{t−1} for some γ ∈ (0, 1]. In Section 4 we consider the special case where the buyer has a fixed value v_t = v for all rounds t, and give an algorithm with regret at most O(T_γ √T). In Section 5 we allow the v_t to be drawn from any distribution that satisfies a certain smoothness assumption, and give an algorithm with regret at most Õ(T^α + T_γ^{1/α}), where α ∈ (0, 1) is a user-selected parameter. Note that for either algorithm to be no-regret (i.e., for regret to be o(T)), we need that T_γ = o(T). In Section 6 we prove that this requirement is necessary for no-regret: any seller algorithm has regret at least Ω(T_γ). The lower bound is proved via a reduction to a non-repeated, or "single-shot", auction. That our regret bounds should depend so crucially on T_γ is foreshadowed by the example above, in which a deceptive buyer foregoes surplus in early rounds to obtain even more surplus in later rounds. A buyer with a short horizon T_γ will be unable to execute this strategy, as she will not be capable of bearing the short-term costs required to manipulate the seller.
2 Related work
Kleinberg and Leighton study a posted price repeated auction with goods sold sequentially to T bidders who either all have the same fixed private value, private values drawn from a fixed distribution,
or private values that are chosen by an oblivious adversary (an adversary that acts independently of
observed seller behavior) [15] (see also [7, 8, 14]). Cesa-Bianchi et al. study a related problem of
setting the reserve price in a second price auction with multiple (but not repeated) bidders at each
round [9]. Note that none of these previous works allow for the possibility of a strategic buyer, i.e.
one that acts non-truthfully in order to maximize its surplus. This is because a new buyer is considered at each time step and if the seller behavior depends only on previous buyers, then the setting
immediately becomes strategyproof.
Contrary to what is studied in these previous theoretical settings, electronic exchanges in practice see the same buyer appearing in multiple auctions and, thus, the buyer has incentive to act strategically. In fact, [12] finds empirical evidence of buyers' strategic behavior in sponsored search auctions, which in turn negatively affects the seller's revenue. In the economics literature, "intertemporal price discrimination" refers to the practice of using a buyer's past purchasing behavior to set future prices. Previous work [1, 13] has shown, as we do in Section 6, that a seller cannot benefit from conditioning prices on past behavior if the buyer is not myopic and can respond strategically. However, in contrast to our work, these results assume that the seller knows the buyer's value distribution.
Our setting can be modeled as a nonzero sum repeated game of incomplete information, and there is
extensive literature on this topic. However, most previous work has focused only on characterizing
the equilibria of these games. Further, our game has a particular structure that allows us to design
seller algorithms that are much more efficient than generic algorithms for solving repeated games.
Two settings that are distinct from what we consider in this paper, but where mechanism design and
learning are combined, are the multi-armed bandit mechanism design problem [6, 5, 11] and the
incentive compatible regression/classification problem [10, 17]. The former problem is motivated
by sponsored search auctions, where the challenge is to elicit truthful values from multiple bidding
advertisers while also efficiently estimating the click-through rate of the set of ads that are to be
allocated. The latter problem involves learning a discriminative classifier or regression function
in the batch setting with training examples that are labeled by selfish agents. The goal is then to
minimize error with respect to the truthful labels.
Finally, Arora et al. proposed a notion of regret for online learning algorithms, called policy regret, that accounts for the possibility that the adversary may adapt to the learning algorithm's behavior [2]. This resembles the ability, in our setting, of a strategic buyer to adapt to the seller algorithm's behavior. However, even this stronger definition of regret is inadequate for our setting. This is because policy regret is equivalent to standard regret when the adversary is oblivious, and as we explained in the previous section, there is an oblivious buyer strategy such that the seller's standard regret is small, but his regret with respect to the best fixed price against a truthful buyer is large.
3 Preliminaries and Model
We consider a posted-price model for a single buyer repeatedly purchasing items from a single seller. Associated with the buyer is a fixed distribution D over the interval [0, 1], which is known only to the buyer. On each round t, the buyer receives a value v_t ∈ V ⊆ [0, 1] from the distribution D. The seller, without observing this value, then posts a price p_t ∈ P ⊆ [0, 1]. Finally, the buyer selects an allocation decision a_t ∈ {0, 1}. On each round t, the buyer receives an instantaneous surplus of a_t(v_t − p_t), and the seller receives an instantaneous revenue of a_t p_t.
We will be primarily interested in designing the seller's learning algorithm, which we will denote A. Let v_{1:t} denote the sequence of values observed on the first t rounds, (v_1, ..., v_t), defining p_{1:t} and a_{1:t} analogously. A is an algorithm that selects each price p_t as a (possibly randomized) function of (p_{1:t−1}, a_{1:t−1}). As is common in mechanism design, we assume that the seller announces his
choice of algorithm A in advance. The buyer then selects her allocation strategy in response. The buyer's allocation strategy B generates allocation decisions a_t as a (possibly randomized) function of (D, v_{1:t}, p_{1:t}, a_{1:t−1}).
Notice that a choice of A, B and D fixes a distribution over the sequences a_{1:T} and p_{1:T}. This in turn defines the seller's total expected revenue:

    SellerRevenue(A, B, D, T) = E[ Σ_{t=1}^T a_t p_t | A, B, D ].
In the most general setting, we will consider a buyer whose surplus may be discounted through time. In fact, our lower bounds will demonstrate that a sufficiently decaying discount rate is necessary for a no-regret learning algorithm. We will imagine therefore that there exists a nonincreasing sequence {γ_t ∈ (0, 1]} for the buyer. For a choice of T, we will define the effective "time-horizon" for the buyer as T_γ = Σ_{t=1}^T γ_t. The buyer's expected total discounted surplus is given by:

    BuyerSurplus(A, B, D, T) = E[ Σ_{t=1}^T γ_t a_t (v_t − p_t) | A, B, D ].
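Both expectations are easy to estimate by Monte Carlo for any concrete pair of strategies. The harness below is an illustrative sketch (not code from the paper); as an example it evaluates a fixed-price seller against a truthful buyer with uniform values.

```python
import random

def estimate_outcomes(seller, buyer, draw_value, gammas, T, trials=200, seed=0):
    """Monte-Carlo estimates of SellerRevenue and discounted BuyerSurplus.
    seller(history) -> price; buyer(value, price) -> 0 or 1."""
    rng = random.Random(seed)
    revenue = surplus = 0.0
    for _ in range(trials):
        history = []                       # list of (price, allocation) pairs
        for t in range(T):
            v = draw_value(rng)
            p = seller(history)
            a = buyer(v, p)
            revenue += a * p
            surplus += gammas[t] * a * (v - p)
            history.append((p, a))
    return revenue / trials, surplus / trials

T = 100
rev, sur = estimate_outcomes(
    seller=lambda hist: 0.5,                 # fixed posted price
    buyer=lambda v, p: 1 if v >= p else 0,   # truthful buyer
    draw_value=lambda rng: rng.random(),     # D = Uniform[0, 1]
    gammas=[1.0] * T,                        # no discounting
    T=T,
)
```

With D uniform, SellerRevenue = T · 0.5 · Pr[v ≥ 0.5] = 25 and the undiscounted BuyerSurplus = T · E[(v − 0.5) 1{v ≥ 0.5}] = 12.5; the estimates land close to these values.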
We assume that the seller is faced with a strategic buyer who adapts to the choice of A. Thus, let B*(A, D) be a surplus-maximizing buyer for seller algorithm A when the value distribution is D. In other words, for all strategies B we have

    BuyerSurplus(A, B*(A, D), D, T) ≥ BuyerSurplus(A, B, D, T).

We are now prepared to define the seller's regret. Let p* = argmax_{p ∈ P} p Pr_D[v ≥ p], the revenue-maximizing choice of price for a seller that knows the distribution D, and simply posts a price of p* on every round. Against such a pricing strategy, it is in the buyer's best interest to be truthful, accepting if and only if v_t ≥ p*, and the seller would receive a revenue of T p* Pr_{v∼D}[v ≥ p*]. Informally, a no-regret algorithm is able to learn D from previous interactions with the buyer, and converge to selecting a price close to p*. We therefore define regret as:

    Regret(A, D, T) = T p* Pr_{v∼D}[v ≥ p*] − SellerRevenue(A, B*(A, D), D, T).

Finally, we will be interested in algorithms that attain o(T) regret (meaning the average regret goes to zero as T → ∞) for the worst-case D. In other words, we say A is no-regret if sup_D Regret(A, D, T) = o(T). Note that this definition of worst-case regret only assumes that Nature's behavior (i.e., the value distribution) is worst-case; the buyer's behavior is always presumed to be surplus maximizing.
4 Fixed Value Setting
In this section we consider the case of a single unknown fixed buyer value, that is V = {v} for some v ∈ (0, 1]. We show that in this setting a very simple pricing algorithm with monotonically decreasing price offerings is able to achieve regret O(T_γ √T) when the buyer discount is γ_t = γ^{t−1}. Due to space constraints many of the proofs for this section appear in Appendix A.
Monotone algorithm: Choose parameter β ∈ (0, 1), and initialize a_0 = 1 and p_0 = 1. In each round t ≥ 1 let p_t = β^{1−a_{t−1}} p_{t−1}.
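The sketch below (illustrative code, not from the paper) implements Monotone together with a brute-force search over the buyer's "switch time": by Lemma 1 below, an optimal buyer rejects up to some round and accepts thereafter, so searching over switch times recovers the buyer's best response. All parameter values are arbitrary choices for illustration.

```python
def monotone_prices(beta, T, accepts):
    """Price path of Monotone: start at 1, multiply by beta after each rejection."""
    prices, p = [], 1.0
    for t in range(T):
        prices.append(p)
        if not accepts[t]:
            p *= beta
    return prices

def best_switch_time(v, beta, gamma, T):
    """Discounted-surplus-maximizing switch time k for a buyer who
    rejects rounds 0..k-1 and accepts rounds k..T-1."""
    best_k, best_s = 0, float("-inf")
    for k in range(T + 1):
        accepts = [t >= k for t in range(T)]
        prices = monotone_prices(beta, T, accepts)
        s = sum(gamma ** t * (v - prices[t]) for t in range(k, T))
        if s > best_s:
            best_k, best_s = k, s
    return best_k

patient = best_switch_time(v=0.5, beta=0.9, gamma=0.999, T=200)
impatient = best_switch_time(v=0.5, beta=0.9, gamma=0.5, T=200)
```

The patient buyer (γ near 1) keeps rejecting far longer to drive the price down, which is exactly the behavior that a short buyer horizon rules out.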
In the Monotone algorithm, the seller starts at the maximum price of 1, and decreases the price by a factor of β whenever the buyer rejects the price, and otherwise leaves it unchanged. Since Monotone is deterministic and the buyer's value v is fixed, the surplus-maximizing buyer algorithm B*(Monotone, v) is characterized by a deterministic allocation sequence a*_{1:T} ∈ {0, 1}^T.¹
The following lemma partially characterizes the optimal buyer allocation sequence.
Lemma 1. The sequence a*_1, . . . , a*_T is monotonically nondecreasing.
¹ If there are multiple optimal sequences, the buyer can then choose to randomize over the set of sequences. In such a case, the worst-case distribution (for the seller) is the one that always selects the revenue-minimizing optimal sequence. In that case, let a*_{1:T} denote the revenue-minimizing buyer-optimal sequence.
In other words, once a buyer decides to start accepting the offered price at a certain time step, she will keep accepting from that point on. The main idea behind the proof is to show that if there does exist some time step t′ where a*_{t′} = 1 and a*_{t′+1} = 0, then swapping the values so that a*_{t′} = 0 and a*_{t′+1} = 1 (as well as potentially swapping another pair of values) will result in a sequence with strictly better surplus, thereby contradicting the optimality of a*_{1:T}. The full proof is shown in Section A.1.
Now, to finish characterizing the optimal allocation sequence, we provide the following lemma, which describes time steps where the buyer has with certainty begun to accept the offered price.
Lemma 2. Let c_{γ,β} = 1 + (1 − β)T_γ and d_{γ,β} = log(c_{γ,β}/v) / log(1/β). Then for any t > d_{γ,β} we have a*_{t+1} = 1.
A detailed proof is presented in Section A.2. These lemmas imply the following regret bound.
Theorem 1. Regret(Monotone, v, T) ≤ vT(1 − β/c_{γ,β}) + vβ(d_{γ,β} + 1)/c_{γ,β}.
Proof. By Lemmas 1 and 2 we receive no revenue until at most round ⌈d_{γ,β}⌉ + 1, and from that round onwards we receive at least revenue β^{⌈d_{γ,β}⌉} per round. Thus

    Regret(Monotone, v, T) ≤ vT − Σ_{t=⌈d_{γ,β}⌉+1}^T β^{⌈d_{γ,β}⌉} ≤ vT − (T − d_{γ,β} − 1) β^{d_{γ,β}+1}.

Noting that β^{d_{γ,β}} = v/c_{γ,β} and rearranging proves the theorem. □
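The identity β^{d_{γ,β}} = v/c_{γ,β} used in the last step follows directly from the definition of d_{γ,β}. A quick numeric check, with arbitrary illustrative parameter values and geometric discounting assumed:

```python
import math

def lemma2_constants(gamma, beta, v, T):
    """c_{gamma,beta} and d_{gamma,beta} from Lemma 2, with
    T_gamma computed for geometric discounting gamma_t = gamma^(t-1)."""
    T_gamma = (1 - gamma ** T) / (1 - gamma)
    c = 1 + (1 - beta) * T_gamma
    d = math.log(c / v) / math.log(1 / beta)
    return c, d

c, d = lemma2_constants(gamma=0.9, beta=0.95, v=0.5, T=1000)
# beta**d equals v/c by construction: d * log(beta) = -log(c/v).
```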
Tuning the learning parameter simplifies the bound further and provides an O(T_γ √T) regret bound. Note that this tuning parameter does not assume knowledge of the buyer's discount parameter γ.
Corollary 1. If β = √T/(1 + √T) then Regret(Monotone, v, T) ≤ √T (4vT_γ + 2v log(1/v) + v).
The computations used to derive this corollary are found in Section A.3. This corollary shows that it is indeed possible to achieve no-regret against a strategic buyer with an unknown fixed value as long as T_γ = o(√T). That is, the effective buyer horizon must be more than a constant factor smaller than the square root of the game's finite horizon.
5 Stochastic Value Setting
We next give a seller algorithm that attains no-regret when the set of prices P is finite, the buyer's discount is γ_t = γ^{t−1}, and the buyer's value v_t for each round is drawn from a fixed distribution D that satisfies a certain continuity assumption, detailed below.
Phased algorithm: Choose parameter α ∈ (0, 1). Define T_i ≡ 2^i and S_i ≡ min(T_i/|P|, T_i^α). For each phase i = 1, 2, 3, . . . of length T_i rounds: Offer each price p ∈ P for S_i rounds, in some fixed order; these are the explore rounds. Let A_{p,i} = number of explore rounds in phase i where price p was offered and the buyer accepted. For the remaining T_i − |P|S_i rounds of phase i, offer price p̂_i = argmax_{p ∈ P} p A_{p,i}/S_i in each round; these are the exploit rounds.
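A runnable sketch of the Phased seller follows. It is illustrative, not the paper's exact code: phase indexing is assumed to start at the first i with 2^i ≥ |P| so every price can be explored at least once, S_i is truncated to an integer, and the buyer here is a simple deterministic threshold buyer.

```python
import math

def phased_revenue(P, alpha, T, accept):
    """Run the Phased seller for T rounds against accept(price) -> 0/1.
    Returns total (undiscounted) seller revenue."""
    t, revenue = 0, 0.0
    i = math.ceil(math.log2(len(P)))      # first phase long enough to explore
    while t < T:
        Ti = 2 ** i
        Si = min(Ti // len(P), int(Ti ** alpha))
        A = {p: 0 for p in P}
        for p in P:                        # explore rounds
            for _ in range(Si):
                if t >= T:
                    return revenue
                a = accept(p)
                A[p] += a
                revenue += a * p
                t += 1
        best = max(P, key=lambda p: p * A[p] / Si)
        for _ in range(Ti - len(P) * Si):  # exploit rounds
            if t >= T:
                return revenue
            a = accept(best)
            revenue += a * best
            t += 1
        i += 1
    return revenue

grid = [round(0.1 * k, 1) for k in range(1, 10)]
rev = phased_revenue(grid, alpha=0.5, T=2048, accept=lambda p: 1 if p <= 0.6 else 0)
```

Against this buyer, who accepts any price up to 0.6, the exploit price settles on 0.6, and per-round revenue approaches 0.6 as the explore fraction shrinks with the phase length.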
The Phased algorithm proceeds across a number of phases. Each phase consists of explore rounds followed by exploit rounds. During explore rounds, the algorithm selects each price in some fixed order. During exploit rounds, the algorithm repeatedly selects the price that realized the greatest revenue during the immediately preceding explore rounds.
First notice that a strategic buyer has no incentive to lie during exploit rounds (i.e. it will accept any price p_t < v_t and reject any price p_t > v_t), since its decisions there do not affect any of its future prices. Thus, the exploit rounds are the time at which the seller can exploit what it has learned from the buyer during exploration. Alternatively, if the buyer has successfully manipulated the seller into offering a low price, we can view the buyer as "exploiting" the seller.
During explore rounds, on the other hand, the strategic buyer can benefit by telling lies which will cause it to witness better prices during the corresponding exploit rounds. However, the value of these lies to the buyer will depend on the fraction of the phase consisting of explore rounds. Taken to the extreme, if the entire phase consists of explore rounds, the buyer is not interested in lying. In general, the more explore rounds, the more revenue has to be sacrificed by a buyer that is lying during the explore rounds. For the myopic buyer, the loss of enough immediate revenue at some point ceases to justify her potential gains in the future exploit rounds.
Thus, while traditional algorithms like UCB balance exploration and exploitation to ensure confidence in the observed payoffs of sampled arms, our Phased algorithm explores for two purposes: to ensure accurate estimates, and to dampen the buyer's incentive to mislead the seller. The seller's balancing act is to explore for long enough to learn the buyer's value distribution, but leave enough exploit rounds to benefit from the knowledge.
Continuity of the value distribution. The preceding argument requires that the distribution D not exhibit a certain pathology: there cannot be two prices p, p′ that are very close while p Pr_{v∼D}[v ≥ p] and p′ Pr_{v∼D}[v ≥ p′] are very different. Otherwise, the buyer is largely indifferent to being offered prices p or p′, but distinguishing between the two prices is essential for the seller during exploit rounds. Thus, we assume that the value distribution D is K-Lipschitz, which eliminates this problem: Defining F(p) ≡ Pr_{v∼D}[v ≥ p], we assume there exists K > 0 such that |F(p) − F(p′)| ≤ K|p − p′| for all p, p′ ∈ [0, 1]. This assumption is quite mild, as our Phased algorithm does not need to know K, and the dependence of the regret rate on K will be logarithmic.
Theorem 2. Assume F(p) ≡ Pr_{v∼D}[v ≥ p] is K-Lipschitz. Let Δ = min_{p ∈ P\{p*}} p*F(p*) − pF(p), where p* = argmax_{p ∈ P} pF(p). For any parameter α ∈ (0, 1) of the Phased algorithm there exist constants c_1, c_2, c_3, c_4 such that

    Regret(Phased, D, T) ≤ c_1 |P| T^α + c_2 (|P|/Δ^{2/α}) (log T)^{1/α} + c_3 (|P|/Δ^{1/α}) T_γ^{1/α} (log T + log(K/Δ))^{1/α} + c_4 |P| = Õ(T^α + T_γ^{1/α}).
The complete proof of Theorem 2 is rather technical, and is provided in Appendix B.
To gain further intuition about the upper bounds proved in this section and the previous section, it helps to parametrize the buyer's horizon T_γ as a function of T, e.g. T_γ = T^c for 0 ≤ c ≤ 1. Writing it in this fashion, we see that the Monotone algorithm has regret at most O(T^{c+1/2}), and the Phased algorithm has regret at most Õ(T^{√c}) if we choose α = √c. The lower bound proved in the next section states that, in the worst case, any seller algorithm will incur a regret of at least Ω(T^c).
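The trade-off in the exponent is elementary to verify: with T_γ = T^c, the Phased bound Õ(T^α + T_γ^{1/α}) is governed by max(α, c/α), which is minimized at α = √c. An illustrative numeric check:

```python
import math

def regret_exponent(c, alpha):
    """Exponent of the dominant term of T^alpha + T_gamma^(1/alpha)
    when T_gamma = T^c."""
    return max(alpha, c / alpha)

c = 0.25
grid = [i / 100 for i in range(5, 100, 5)]          # alpha in {0.05, ..., 0.95}
best_alpha = min(grid, key=lambda a: regret_exponent(c, a))
# The minimizer balances the two terms: alpha = c/alpha, i.e. alpha = sqrt(c).
```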
6 Lower Bound
In this section we state the main lower bound, which establishes a connection between the regret of any seller algorithm and the buyer's discounting. Specifically, we prove that the regret of any seller algorithm is Ω(T_γ). Note that when T = T_γ, i.e., the buyer does not discount her future surplus, our lower bound proves that no-regret seller algorithms do not exist, and thus it is impossible for the seller to take advantage of learned information. For example, consider the seller algorithm that uniformly selects prices p_t from [0, 1]. The optimal buyer algorithm is truthful, accepting if p_t < v_t, as the seller algorithm is non-adaptive, and the buyer does not gain any advantage by being more strategic. In such a scenario the seller would quickly learn a good estimate of the value distribution D. What is surprising is that a seller cannot use this information if the buyer does not discount her future surplus. If the seller attempts to leverage information learned through interactions with the buyer, the buyer can react accordingly to negate this advantage.
The lower bound further relates regret in the repeated setting to regret in a particular single-shot game between the buyer and the seller. This demonstrates that, against a non-discounted buyer, the seller is no better off in the repeated setting than he would be by repeatedly implementing such a single-shot mechanism (ignoring previous interactions with the buyer). In the following section we describe the simple single-shot game.
6.1 Single-Shot Auction
We call the following game the single-shot auction. A seller selects a family of distributions $\mathcal{S}$
indexed by $b \in [0, 1]$, where each $S_b$ is a distribution on $[0, 1] \times \{0, 1\}$. The family $\mathcal{S}$ is revealed to
a buyer with unknown value $v \in [0, 1]$, who then must select a bid $b \in [0, 1]$, and then $(p, a) \sim S_b$
is drawn from the corresponding distribution.
As usual, the buyer gets a surplus of $a(v - p)$, while the seller enjoys a revenue of $ap$. We restrict
the set of seller strategies to distributions that are incentive compatible and rational. $\mathcal{S}$ is incentive
compatible if for all $b, v \in [0, 1]$, $\mathbb{E}_{(p,a)\sim S_b}[a(v-p)] \le \mathbb{E}_{(p,a)\sim S_v}[a(v-p)]$. It is rational if for all $v$,
$\mathbb{E}_{(p,a)\sim S_v}[a(v-p)] \ge 0$ (i.e. any buyer maximizing expected surplus is actually incentivized to play
the game). Incentive compatible and rational strategies exist: drawing $p$ from a fixed distribution
(i.e. all $S_b$ are the same), and letting $a = 1\{b \ge p\}$ suffices.²
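The fixed-price construction can be checked numerically. The sketch below (an illustration only; the price distribution and value grid are arbitrary choices) verifies that drawing $p$ from a fixed distribution with $a = 1\{b \ge p\}$ is incentive compatible and rational: truthful bidding $b = v$ maximizes expected surplus, and that surplus is nonnegative.

```python
import numpy as np

def expected_surplus(b, v, prices, probs):
    # S_b: draw p from the fixed distribution, allocate iff b >= p.
    a = (b >= prices).astype(float)
    return float(np.sum(probs * a * (v - prices)))

# A fixed price distribution (the same for every bid b), chosen arbitrarily.
prices = np.array([0.2, 0.5, 0.8])
probs = np.array([0.3, 0.4, 0.3])

bids = np.linspace(0.0, 1.0, 101)
for v in [0.1, 0.45, 0.9]:
    truthful = expected_surplus(v, v, prices, probs)
    # Incentive compatibility: no bid beats the truthful bid b = v.
    assert all(expected_surplus(b, v, prices, probs) <= truthful + 1e-12 for b in bids)
    # Rationality: truthful surplus is nonnegative.
    assert truthful >= 0.0
```

Intuitively, raising the bid only adds prices $p$ with $p > v$ to the accepted set (each contributing negative surplus), while lowering it removes prices with $p < v$ (each contributing positive surplus), so bidding $b = v$ is optimal.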
We define the regret in the single-shot setting of any incentive-compatible and rational strategy $\mathcal{S}$
with respect to value $v$ as
$$\mathrm{SSRegret}(\mathcal{S}, v) = v - \mathbb{E}_{(p,a)\sim S_v}[ap].$$
The following loose lower bound on $\mathrm{SSRegret}(\mathcal{S}, v)$ is straightforward, and establishes that a
seller's revenue cannot be a constant fraction of the buyer's value for all $v$. The full proof is provided
in the appendix (Section C.1).
Lemma 3. For any incentive compatible and rational strategy $\mathcal{S}$ there exists $v \in [0, 1]$ such that
$\mathrm{SSRegret}(\mathcal{S}, v) \ge \frac{1}{12}$.
6.2 Repeated Auction
Returning to the repeated setting, our main lower bound will make use of the following technical
lemma, the full proof of which is provided in the appendix (Section C.1). Informally, the lemma
states that the surplus enjoyed by an optimal buyer algorithm would only increase if this surplus
were viewed without discounting.
Lemma 4. Let the buyer's discount sequence $\{\gamma_t\}$ be positive and nonincreasing. For any
seller algorithm $\mathcal{A}$, value distribution $D$, and surplus-maximizing buyer algorithm $\mathcal{B}^*(\mathcal{A}, D)$,
$$\mathbb{E}\Big[\sum_{t=1}^{T} \gamma_t a_t (v_t - p_t)\Big] \le \mathbb{E}\Big[\sum_{t=1}^{T} a_t (v_t - p_t)\Big].$$
Notice that if $a_t(v_t - p_t) \ge 0$ for all $t$, then Lemma 4 is trivial. This would occur if the buyer only
ever accepts prices less than its value ($a_t = 1$ only if $p_t \le v_t$). However, Lemma 4 is interesting
in that it holds for any seller algorithm $\mathcal{A}$. It is easy to imagine a seller algorithm that incentivizes
the buyer to sometimes accept a price $p_t > v_t$ with the promise that this will generate better prices
in the future (e.g. setting $p_{t'} = 1$, then offering $p_t = 0$ for all $t > t'$ only if $a_{t'} = 1$ and otherwise
setting $p_t = 1$ for all $t > t'$).
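The "threat" seller just described can be simulated. The toy sketch below (horizon, value, and geometric discount rate are arbitrary choices, and only the two natural pure buyer plans are compared) shows that accepting the initial price $p_1 = 1 > v$ is optimal for an undiscounted buyer, and that discounting the same play only shrinks its surplus, which is the direction Lemma 4 asserts.

```python
def surplus(accept_first, v, T, gamma):
    """Play against the threat seller: p_1 = 1; afterwards p_t = 0 if the
    buyer accepted at t = 1, else p_t = 1 (which this buyer rejects).
    Returns (undiscounted surplus, discounted surplus with gamma_t = gamma**t)."""
    undisc = disc = 0.0
    for t in range(1, T + 1):
        if t == 1:
            a, p = (1, 1.0) if accept_first else (0, 1.0)
        else:
            a, p = (1, 0.0) if accept_first else (0, 1.0)
        undisc += a * (v - p)
        disc += gamma ** t * a * (v - p)
    return undisc, disc

v, T, gamma = 0.5, 20, 0.9
undisc_acc, disc_acc = surplus(True, v, T, gamma)
undisc_rej, disc_rej = surplus(False, v, T, gamma)
# Accepting p_1 = 1 > v is strictly better for an undiscounted buyer...
assert undisc_acc > undisc_rej
# ...and, as in Lemma 4, the discounted surplus of that play is no larger.
assert disc_acc <= undisc_acc
```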
Lemmas 3 and 4 let us prove our main lower bound.
Theorem 3. Fix a positive, nonincreasing discount sequence $\{\gamma_t\}$. Let $\mathcal{A}$ be any seller algorithm
for the repeated setting. There exists a buyer value distribution $D$ such that $\mathrm{Regret}(\mathcal{A}, D, T) \ge
\frac{1}{12} T_\gamma$. In particular, if $T_\gamma = \Omega(T)$, no-regret is impossible.
Proof. Let $\{a_{b,t}, p_{b,t}\}$ be the sequence of prices and allocations generated by playing $\mathcal{B}^*(\mathcal{A}, b)$
against $\mathcal{A}$. For each $b \in [0, 1]$ and $(p, a) \in [0, 1] \times \{0, 1\}$, let
$$\rho_b(p, a) = \frac{1}{T_\gamma} \sum_{t=1}^{T} \gamma_t 1\{a_{b,t} = a\}\, 1\{p_{b,t} = p\}.$$
Notice that $\rho_b(p, a) > 0$ for countably many $(p, a)$, and let $\Omega_b = \{(p, a) \in
[0, 1] \times \{0, 1\} : \rho_b(p, a) > 0\}$. We think of $\rho_b$ as being a distribution. It is in fact a random measure,
since the $\{a_{b,t}, p_{b,t}\}$ are themselves random. One could imagine generating $\rho_b$ by playing $\mathcal{B}^*(\mathcal{A}, b)$
against $\mathcal{A}$ and observing the sequence $\{a_{b,t}, p_{b,t}\}$: every time we observe a price $p_{b,t} = p$ and
allocation $a_{b,t} = a$, we assign $\frac{1}{T_\gamma}\gamma_t$ additional mass to $(p, a)$ in $\rho_b$. This is impossible in practice,
but the random measure $\rho_b$ has a well-defined distribution.
Now consider the following strategy $\mathcal{S}$ for the single-shot setting. $S_b$ is induced by drawing a $\rho_b$,
then drawing $(p, a) \sim \rho_b$. Note that for any $b \in [0, 1]$ and any measurable function $f$,
$$\mathbb{E}_{(p,a)\sim S_b}[f(a, p)] = \mathbb{E}_{\rho_b}\big[\mathbb{E}_{(p,a)\sim \rho_b}[f(a, p) \mid \rho_b]\big] = \frac{1}{T_\gamma}\, \mathbb{E}\Big[\sum_{t=1}^{T} \gamma_t f(a_{b,t}, p_{b,t})\Big].$$

² This subclass of auctions is even ex post rational.
Thus the strategy $\mathcal{S}$ is incentive compatible, since for any $b, v \in [0, 1]$
$$\mathbb{E}_{(p,a)\sim S_b}[a(v - p)] = \frac{1}{T_\gamma}\, \mathbb{E}\Big[\sum_{t=1}^{T} \gamma_t a_{b,t}(v - p_{b,t})\Big] = \frac{1}{T_\gamma}\, \mathrm{BuyerSurplus}(\mathcal{A}, \mathcal{B}^*(\mathcal{A}, b), v, T)$$
$$\le \frac{1}{T_\gamma}\, \mathrm{BuyerSurplus}(\mathcal{A}, \mathcal{B}^*(\mathcal{A}, v), v, T) = \frac{1}{T_\gamma}\, \mathbb{E}\Big[\sum_{t=1}^{T} \gamma_t a_{v,t}(v - p_{v,t})\Big] = \mathbb{E}_{(p,a)\sim S_v}[a(v - p)],$$
where the inequality follows from the fact that $\mathcal{B}^*(\mathcal{A}, v)$ is a surplus-maximizing algorithm for a
buyer whose value is $v$. The strategy $\mathcal{S}$ is also rational, since for any $v \in [0, 1]$
$$\mathbb{E}_{(p,a)\sim S_v}[a(v - p)] = \frac{1}{T_\gamma}\, \mathbb{E}\Big[\sum_{t=1}^{T} \gamma_t a_{v,t}(v - p_{v,t})\Big] = \frac{1}{T_\gamma}\, \mathrm{BuyerSurplus}(\mathcal{A}, \mathcal{B}^*(\mathcal{A}, v), v, T) \ge 0,$$
where the inequality follows from the fact that a surplus-maximizing buyer algorithm cannot earn
negative surplus, as a buyer can always reject every price and earn zero surplus.
Let $r_t = 1 - \gamma_t$ and $T_r = \sum_{t=1}^{T} r_t$. Note that $r_t \ge 0$. We have the following for any $v \in [0, 1]$:
$$T_\gamma\, \mathrm{SSRegret}(\mathcal{S}, v) = T_\gamma\Big(v - \mathbb{E}_{(p,a)\sim S_v}[ap]\Big) = T_\gamma v - \mathbb{E}\Big[\sum_{t=1}^{T} \gamma_t a_{v,t} p_{v,t}\Big]$$
$$= (T - T_r)v - \mathbb{E}\Big[\sum_{t=1}^{T} (1 - r_t) a_{v,t} p_{v,t}\Big] = Tv - \mathbb{E}\Big[\sum_{t=1}^{T} a_{v,t} p_{v,t}\Big] + \mathbb{E}\Big[\sum_{t=1}^{T} r_t a_{v,t} p_{v,t}\Big] - T_r v$$
$$= \mathrm{Regret}(\mathcal{A}, v, T) + \mathbb{E}\Big[\sum_{t=1}^{T} r_t a_{v,t} p_{v,t}\Big] - T_r v = \mathrm{Regret}(\mathcal{A}, v, T) + \mathbb{E}\Big[\sum_{t=1}^{T} r_t (a_{v,t} p_{v,t} - v)\Big].$$
A closer look at the quantity $\mathbb{E}\big[\sum_{t=1}^{T} r_t (a_{v,t} p_{v,t} - v)\big]$ tells us that
$$\mathbb{E}\Big[\sum_{t=1}^{T} r_t (a_{v,t} p_{v,t} - v)\Big] \le \mathbb{E}\Big[\sum_{t=1}^{T} r_t a_{v,t} (p_{v,t} - v)\Big] = -\mathbb{E}\Big[\sum_{t=1}^{T} (1 - \gamma_t) a_{v,t} (v - p_{v,t})\Big] \le 0,$$
where the last inequality follows from Lemma 4. Therefore $T_\gamma\, \mathrm{SSRegret}(\mathcal{S}, v) \le \mathrm{Regret}(\mathcal{A}, v, T)$, and taking $D$ to be the
point-mass on the value $v \in [0, 1]$ which realizes Lemma 3 proves the statement of the theorem.
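The random measure $\rho_b$ used in the proof can be formed explicitly from any simulated play. The sketch below (with an arbitrary price/allocation sequence and geometric discounting, purely for illustration) confirms that $\rho_b$ always has total mass 1, since $\sum_t \gamma_t / T_\gamma = 1$.

```python
from collections import defaultdict

def empirical_measure(plays, gammas):
    """plays: list of (p_t, a_t) pairs; gammas: positive nonincreasing discounts.
    Returns rho_b, assigning mass gamma_t / T_gamma to each observed (p, a)."""
    T_gamma = sum(gammas)
    rho = defaultdict(float)
    for (p, a), g in zip(plays, gammas):
        rho[(p, a)] += g / T_gamma
    return dict(rho)

gammas = [0.9 ** t for t in range(1, 11)]
plays = [(0.3, 1), (0.7, 0), (0.3, 1), (0.5, 1)] + [(1.0, 0)] * 6
rho = empirical_measure(plays, gammas)
assert abs(sum(rho.values()) - 1.0) < 1e-12   # rho_b is a probability measure
assert all(mass > 0 for mass in rho.values())
```

Repeated pairs simply accumulate mass, exactly as in the proof's description of assigning $\gamma_t / T_\gamma$ per observation.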
7 Conclusion
In this work, we have analyzed the performance of revenue-maximizing algorithms in the setting of
a repeated posted-price auction with a strategic buyer. We show that if the buyer values inventory in
the present more than in the far future, no-regret learning (with respect to revenue gained against a truthful
buyer) is possible. Furthermore, we provide lower bounds that show such an assumption
is in fact necessary. These are the first bounds of this type for the presented setting. Future directions of study include analyzing buyer behavior under weaker polynomial discounting rates, as well as
understanding when existing "off-the-shelf" bandit algorithms (UCB or EXP3), perhaps with slight
modifications, are able to perform well against strategic buyers.
Acknowledgements
We thank Corinna Cortes, Gagan Goel, Yishay Mansour, Hamid Nazerzadeh and Noam Nisan for
early comments on this work and pointers to relevant literature.
References
[1] Alessandro Acquisti and Hal R. Varian. Conditioning prices on purchase history. Marketing Science, 24(3):367–381, 2005.
[2] Raman Arora, Ofer Dekel, and Ambuj Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In ICML, 2012.
[3] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[4] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[5] Moshe Babaioff, Robert D. Kleinberg, and Aleksandrs Slivkins. Truthful mechanisms with implicit payment computation. In Proceedings of the Conference on Electronic Commerce, pages 43–52. ACM, 2010.
[6] Moshe Babaioff, Yogeshwer Sharma, and Aleksandrs Slivkins. Characterizing truthful multi-armed bandit mechanisms. In Proceedings of the Conference on Electronic Commerce, pages 79–88. ACM, 2009.
[7] Ziv Bar-Yossef, Kirsten Hildrum, and Felix Wu. Incentive-compatible online auctions for digital goods. In Proceedings of the Symposium on Discrete Algorithms, pages 964–970. SIAM, 2002.
[8] Avrim Blum, Vijay Kumar, Atri Rudra, and Felix Wu. Online learning in online auctions. In Proceedings of the Symposium on Discrete Algorithms, pages 202–204. SIAM, 2003.
[9] Nicolò Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. Regret minimization for reserve prices in second-price auctions. In Proceedings of the Symposium on Discrete Algorithms. SIAM, 2013.
[10] Ofer Dekel, Felix Fischer, and Ariel D. Procaccia. Incentive compatible regression learning. Journal of Computer and System Sciences, 76(8):759–777, 2010.
[11] Nikhil R. Devanur and Sham M. Kakade. The price of truthfulness for pay-per-click auctions. In Proceedings of the Conference on Electronic Commerce, pages 99–106. ACM, 2009.
[12] Benjamin Edelman and Michael Ostrovsky. Strategic bidder behavior in sponsored search auctions. Decision Support Systems, 43(1):192–198, 2007.
[13] Drew Fudenberg and J. Miguel Villas-Boas. Behavior-Based Price Discrimination and Customer Recognition. Elsevier Science, Oxford, 2007.
[14] Jason Hartline. Dynamic posted price mechanisms, 2001.
[15] Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003.
[16] Volodymyr Kuleshov and Doina Precup. Algorithms for the multi-armed bandit problem. Journal of Machine Learning, 2010.
[17] Reshef Meir, Ariel D. Procaccia, and Jeffrey S. Rosenschein. Strategyproof classification with shared inputs. In Proc. of 21st IJCAI, pages 220–225, 2009.
[18] Herbert Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169–177. Springer, 1985.
Efficient Algorithm for Privately Releasing Smooth Queries
Ziteng Wang
Key Laboratory of Machine Perception, MOE
School of EECS
Peking University
[email protected]
Kai Fan
Key Laboratory of Machine Perception, MOE
School of EECS
Peking University
[email protected]
Jiaqi Zhang
Key Laboratory of Machine Perception, MOE
School of EECS
Peking University
[email protected]
Liwei Wang
Key Laboratory of Machine Perception, MOE
School of EECS
Peking University
[email protected]
Abstract
We study differentially private mechanisms for answering smooth queries on
databases consisting of data points in $\mathbb{R}^d$. A $K$-smooth query is specified by a
function whose partial derivatives up to order $K$ are all bounded. We develop an
$\epsilon$-differentially private mechanism which for the class of $K$-smooth queries has
accuracy $O(n^{-\frac{K}{2d+K}}/\epsilon)$. The mechanism first outputs a summary of the database.
To obtain an answer to a query, the user runs a public evaluation algorithm which
contains no information about the database. Outputting the summary runs in time
$O(n^{1+\frac{d}{2d+K}})$, and the evaluation algorithm for answering a query runs in time
$\tilde{O}(n^{\frac{d+2+2d/K}{2d+K}})$. Our mechanism is based on $L_\infty$-approximation of (transformed)
smooth functions by low-degree even trigonometric polynomials with small and
efficiently computable coefficients.
1 Introduction
Privacy is an important problem in data analysis. Often people want to learn useful information from
data that are sensitive. But when releasing statistics of sensitive data, one must trade off between the
accuracy and the amount of privacy loss of the individuals in the database.
In this paper we consider differential privacy [9], which has become a standard concept of privacy.
Roughly speaking, a mechanism which releases information about the database is said to preserve
differential privacy, if the change of a single database element does not affect the probability distribution of the output significantly. Differential privacy provides strong guarantees against attacks. It
ensures that the risk of any individual to submit her information to the database is very small. An
adversary can discover almost nothing new from the database that contains the individual's information compared with that from the database without the individual's information. Recently there
have been extensive studies of machine learning, statistical estimation, and data mining under the
differential privacy framework [29, 5, 18, 17, 6, 30, 20, 4].
Accurately answering statistical queries is an important problem in differential privacy. A simple
and efficient method is the Laplace mechanism [9], which adds Laplace noise to the true answers.
Laplace mechanism is especially useful for query functions with low sensitivity, which is the maximal difference of the query values of two databases that are different in only one item. A typical
class of queries that has low sensitivity is linear queries, whose sensitivity is O(1/n), where n is the
size of the database.
The Laplace mechanism has a limitation: it can answer at most $O(n^2)$ queries. If the number
of queries is substantially larger than $n^2$, the Laplace mechanism is not able to provide differentially
private answers with nontrivial accuracy. Considering that potentially there are many users and
each user may submit a set of queries, limiting the total number of queries to be smaller than $n^2$ is
too restrictive in some situations. A remarkable result due to Blum, Ligett and Roth [2] shows that
information theoretically it is possible for a mechanism to answer far more than $n^2$ linear queries
while preserving differential privacy and nontrivial accuracy simultaneously.
There are a series of works [10, 11, 21, 16] improving the result of [2]. All these mechanisms
are very powerful in the sense that they can answer general and adversarially chosen queries. On the
other hand, even the fastest algorithms [16, 14] run in time linear in the size of the data universe to
answer a query. Often the size of the data universe is much larger than that of the database, so these
mechanisms are inefficient. Recently, [25] showed that there is no polynomial time algorithm that
can answer $n^{2+o(1)}$ general queries while preserving privacy and accuracy (assuming the existence
of one-way functions).
Given the hardness result, there has recently been growing interest in studying efficient and differentially
private mechanisms for restricted classes of queries. From a practical point of view, if there exists a
class of queries which is rich enough to contain most queries used in applications and allows one to
develop fast mechanisms, then the hardness result is not a serious barrier for differential privacy.
One class of queries that attracts a lot of attention is the class of $k$-way conjunctions. The data universe for
this problem is $\{0, 1\}^d$; thus each individual record has $d$ binary attributes. A $k$-way conjunction
query is specified by $k$ features. The query asks what fraction of the individual records in the
database has all these $k$ features equal to 1. A series of works attack this problem using several different
techniques [1, 13, 7, 15, 24]. They propose elegant mechanisms which run in time $\mathrm{poly}(n)$ when
$k$ is a constant. Another class of queries that yields efficient mechanisms is sparse queries. A query
is $m$-sparse if it takes non-zero values on at most $m$ elements in the data universe. [3] develops
mechanisms which are efficient when $m = \mathrm{poly}(n)$.
When the data universe is $[-1, 1]^d$, where $d$ is a constant, [2] considers rectangle queries. A rectangle
query is specified by an axis-aligned rectangle. The answer to the query is the fraction of the data
points that lie in the rectangle. [2] shows that if $[-1, 1]^d$ is discretized to $\mathrm{poly}(n)$ bits of precision,
then there are efficient mechanisms for the class of rectangle queries. There are also works studying
related range queries [19].
In this paper we study smooth queries, defined also on the data universe $[-1, 1]^d$ for constant $d$. A smooth
query is specified by a smooth function, which has bounded partial derivatives up to a certain order.
The answer to the query is the average of the function values on the data points in the database. Smooth
functions are widely used in machine learning and data analysis [28]. There are extensive studies
on the relation between smoothness, regularization, reproducing kernels and generalization ability
[27, 22].
Our main result is an $\epsilon$-differentially private mechanism for the class of $K$-smooth queries, which
are specified by functions with bounded partial derivatives up to order $K$. The mechanism has
$(\alpha, \beta)$-accuracy, where $\alpha = O(n^{-\frac{K}{2d+K}}/\epsilon)$ for $\beta \ge e^{-O(n^{\frac{d}{2d+K}})}$. The mechanism first outputs a
summary of the database. To obtain an answer to a smooth query, the user runs a public evaluation
procedure which contains no information about the database. Outputting the summary has running time
$O(n^{1+\frac{d}{2d+K}})$, and the evaluation procedure for answering a query runs in time $\tilde{O}(n^{\frac{d+2+2d/K}{2d+K}})$. The
mechanism has the advantage that both the accuracy and the running time for answering a query
improve quickly as $K/d$ increases (see also Table 1 in Section 3).
Our algorithm is an $L_\infty$-approximation based mechanism and is motivated by [24], which considers
approximation of $k$-way conjunctions by low degree polynomials. The basic idea is to approximate
the whole query class by linear combinations of a small set of basis functions. The technical difficulty lies in the fact that, in order for the approximation to induce an efficient and differentially private mechanism, all the linear coefficients of the basis functions must be small and efficiently computable.
To guarantee these properties, we first transform the query function. Then, by using even trigonometric polynomials as basis functions, we prove a constant upper bound for the linear coefficients.
The smoothness of the functions also allows us to use an efficient numerical method to compute the
coefficients to a precision such that the accuracy of the mechanism is not affected significantly.
2 Background
Let $D$ be a database containing $n$ data points in the data universe $\mathcal{X}$. In this paper, we consider the
case that $\mathcal{X} \subseteq \mathbb{R}^d$, where $d$ is a constant. Typically, we assume that the data universe is $\mathcal{X} = [-1, 1]^d$.
Two databases $D$ and $D'$ are called neighbors if $|D| = |D'| = n$ and they differ in exactly one data
point. The following is the formal definition of differential privacy.
Definition 2.1 ($(\epsilon, \delta)$-differential privacy). A sanitizer $\mathcal{S}$, which is an algorithm that maps an input
database into some range $\mathcal{R}$, is said to preserve $(\epsilon, \delta)$-differential privacy if for all pairs of neighboring
databases $D, D'$ and for any subset $A \subseteq \mathcal{R}$, it holds that
$$\mathbb{P}(\mathcal{S}(D) \in A) \le e^{\epsilon}\, \mathbb{P}(\mathcal{S}(D') \in A) + \delta.$$
If $\mathcal{S}$ preserves $(\epsilon, 0)$-differential privacy, we say $\mathcal{S}$ is $\epsilon$-differentially private.
We consider linear queries. Each linear query $q_f$ is specified by a function $f$ which maps the data
universe $[-1, 1]^d$ to $\mathbb{R}$, and $q_f$ is defined by $q_f(D) := \frac{1}{|D|} \sum_{x \in D} f(x)$.
Let $Q$ be a set of queries. The accuracy of a mechanism with respect to $Q$ is defined as follows.
Definition 2.2 ($(\alpha, \beta)$-accuracy). Let $Q$ be a set of queries. A sanitizer $\mathcal{S}$ is said to have $(\alpha, \beta)$-accuracy for size-$n$ databases with respect to $Q$ if for every database $D$ with $|D| = n$ the following
holds:
$$\mathbb{P}(\exists q \in Q,\ |\mathcal{S}(D, q) - q(D)| \ge \alpha) \le \beta,$$
where $\mathcal{S}(D, q)$ is the answer to $q$ given by $\mathcal{S}$.
We will make use of the Laplace mechanism [9] in our algorithm. The Laplace mechanism adds Laplace
noise to the output. We denote by $\mathrm{Lap}(\lambda)$ a random variable distributed according to the Laplace
distribution with parameter $\lambda$: $\mathbb{P}(\mathrm{Lap}(\lambda) = x) = \frac{1}{2\lambda} \exp(-|x|/\lambda)$.
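As a concrete illustration of the classical Laplace mechanism for a single linear query (not the multi-query mechanism of this paper; the query $f = \sin$ and all sizes are arbitrary choices), note that if $|f| \le B$ then $q_f$ has sensitivity at most $2B/n$, so adding $\mathrm{Lap}(2B/(n\epsilon))$ noise gives an $\epsilon$-differentially private answer:

```python
import numpy as np

def laplace_mechanism(data, f, eps, B, rng):
    """eps-DP answer to the linear query q_f(D) = mean of f over D,
    assuming |f| <= B so the sensitivity is at most 2B/n."""
    n = len(data)
    true_answer = np.mean(f(data))
    scale = (2.0 * B / n) / eps          # Lap(sensitivity / eps)
    return true_answer + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
data = rng.uniform(-1.0, 1.0, size=1000)
f = np.sin                               # |sin| <= 1, so B = 1
ans = laplace_mechanism(data, f, eps=1.0, B=1.0, rng=rng)
# The noise scale is 2/(n*eps) = 0.002, so the private answer is close to the truth.
assert abs(ans - np.mean(np.sin(data))) < 0.1
```

Answering many queries this way forces the noise scale to grow with the number of queries, which is exactly the limitation discussed in the introduction.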
We will design a differentially private mechanism which is accurate with respect to a query set
$Q$ possibly consisting of an infinite number of queries. Given a database $D$, the sanitizer outputs a
summary which preserves differential privacy. For any $q_f \in Q$, the user makes use of an evaluation
procedure to measure $f$ on the summary and obtain an approximate answer to $q_f(D)$. Although we
may think of the evaluation procedure as part of the mechanism, it does not contain any information
about the database and is therefore public. We will study the running time for the sanitizer to output
the summary; ideally it is $O(n^c)$ for some constant $c$ not much larger than 1. For the evaluation
procedure, the running time per query is the focus; ideally it is sublinear in $n$. Here and in the rest
of the paper, we assume that calculating the value of $f$ at a data point $x$ can be done in unit time.
In this work we will frequently use trigonometric polynomials. In the univariate case, a function
$p(\theta)$ is called a trigonometric polynomial of degree $m$ if $p(\theta) = a_0 + \sum_{l=1}^{m}(a_l \cos l\theta + b_l \sin l\theta)$,
where $a_l, b_l$ are constants. If $p(\theta)$ is an even function, we say that it is an even trigonometric
polynomial, and $p(\theta) = a_0 + \sum_{l=1}^{m} a_l \cos l\theta$. In the multivariate case, if $p(\theta_1, \ldots, \theta_d) =
\sum_{l=(l_1,\ldots,l_d)} a_l \cos(l_1\theta_1) \cdots \cos(l_d\theta_d)$, then $p$ is said to be an even trigonometric polynomial (with
respect to each variable), and the degree of $\theta_i$ is the upper limit of $l_i$.
3 Efficient differentially private mechanism
Let us first describe the set of queries considered in this work. Since each query $q_f$ is specified by a
function $f$, a set of queries $Q_F$ can be specified by a set of functions $F$. Remember that each $f \in F$
maps $[-1, 1]^d$ to $\mathbb{R}$. For any point $x = (x_1, \ldots, x_d) \in [-1, 1]^d$, if $k = (k_1, \ldots, k_d)$ is a $d$-tuple
of nonnegative integers, then we define
$$D^k := D_1^{k_1} \cdots D_d^{k_d} := \frac{\partial^{k_1}}{\partial x_1^{k_1}} \cdots \frac{\partial^{k_d}}{\partial x_d^{k_d}}.$$
Parameters: Privacy parameter $\epsilon > 0$; failure probability $\beta > 0$;
smoothness order $K \in \mathbb{N}$; set $t = n^{\frac{1}{2d+K}}$.
Input: Database $D \in ([-1, 1]^d)^n$.
Output: A $t^d$-dimensional vector as the summary.
Algorithm:
For each $x = (x_1, \ldots, x_d) \in D$:
    Set $\theta_i(x) = \arccos(x_i)$, $i = 1, \ldots, d$;
For every $d$-tuple of nonnegative integers $m = (m_1, \ldots, m_d)$ with $\|m\|_\infty \le t - 1$:
    Compute $\mathrm{Su}_m(D) = \frac{1}{n} \sum_{x \in D} \cos(m_1\theta_1(x)) \cdots \cos(m_d\theta_d(x))$;
    $\widehat{\mathrm{Su}}_m(D) \leftarrow \mathrm{Su}_m(D) + \mathrm{Lap}\big(\frac{t^d}{n\epsilon}\big)$;
Let $\widehat{\mathrm{Su}}(D) = \big(\widehat{\mathrm{Su}}_m(D)\big)_{\|m\|_\infty \le t-1}$ be a $t^d$-dimensional vector;
Return $\widehat{\mathrm{Su}}(D)$.
Algorithm 1: Outputting the summary
Parameters: $t = n^{\frac{1}{2d+K}}$.
Input: A query $q_f$, where $f : [-1, 1]^d \to \mathbb{R}$ and $f \in C_B^K$;
    the summary $\widehat{\mathrm{Su}}(D)$ (a $t^d$-dimensional vector).
Output: Approximate answer to $q_f(D)$.
Algorithm:
Let $g_f(\theta) = f(\cos(\theta_1), \ldots, \cos(\theta_d))$, $\theta = (\theta_1, \ldots, \theta_d) \in [-\pi, \pi]^d$;
Compute a trigonometric polynomial approximation $p_t(\theta)$ of $g_f(\theta)$,
where the degree of each $\theta_i$ is $t$; // see Section 4 for details of the computation.
Denote $p_t(\theta) = \sum_{m=(m_1,\ldots,m_d),\, \|m\|_\infty < t} c_m \cos(m_1\theta_1) \cdots \cos(m_d\theta_d)$;
Let $c = (c_m)_{\|m\|_\infty < t}$ be a $t^d$-dimensional vector;
Return the inner product $\langle c, \widehat{\mathrm{Su}}(D) \rangle$.
Algorithm 2: Answering a query
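For intuition, here is a minimal $d = 1$ instantiation of Algorithms 1 and 2 (a sketch, not the paper's actual construction: the degree $t$, the noise scale, and the use of numpy's Chebyshev interpolation to obtain the cosine coefficients are all simplifications). It exploits the identity $T_m(\cos\theta) = \cos(m\theta)$, so the cosine coefficients of $g_f$ are exactly the Chebyshev coefficients of $f$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def summary(data, t, eps, rng):
    """Algorithm 1 sketch for d = 1: noisy first t cosine moments of arccos(x)."""
    theta = np.arccos(data)
    moments = np.array([np.mean(np.cos(m * theta)) for m in range(t)])
    return moments + rng.laplace(scale=t / (len(data) * eps), size=t)

def answer(f, noisy_moments):
    """Algorithm 2 sketch for d = 1: inner product of the summary with the
    Chebyshev coefficients of f (= cosine coefficients of g_f)."""
    t = len(noisy_moments)
    coeffs = C.Chebyshev.interpolate(f, deg=t - 1).coef
    return float(np.dot(coeffs, noisy_moments))

rng = np.random.default_rng(1)
data = rng.uniform(-1.0, 1.0, size=5000)
f = np.exp                               # a very smooth query function
su = summary(data, t=8, eps=5.0, rng=rng)
priv = answer(f, su)
assert abs(priv - np.mean(np.exp(data))) < 0.05
```

Because $\sum_m c_m \cos(m\,\theta(x))$ is just the Chebyshev truncation of $f$ evaluated at $x$, the noiseless inner product equals the database average of that truncation, and for smooth $f$ both the truncation error and the Laplace noise contribution are small.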
Let $|k| := k_1 + \ldots + k_d$. Define the $K$-norm as
$$\|f\|_K := \sup_{|k| \le K}\ \sup_{x \in [-1,1]^d} |D^k f(x)|.$$
We will study the set $C_B^K$ which contains all smooth functions whose derivatives up to order $K$ have
$\infty$-norm upper bounded by a constant $B > 0$. Formally, $C_B^K := \{f : \|f\|_K \le B\}$. The set
of queries specified by $C_B^K$, denoted $Q_{C_B^K}$, is our focus. Smooth functions have been studied in
depth in machine learning [26, 28, 27] and found wide applications [22].
The following theorem is our main result. It says that if the query class is specified by smooth
functions, then there is a very efficient mechanism which preserves $\epsilon$-differential privacy and good
accuracy. The mechanism consists of two parts: one for outputting a summary of the database,
the other for answering a query. The two parts are described in Algorithm 1 and Algorithm 2
respectively. The second part of the mechanism contains no private information about the database.
Theorem 3.1. Let the query set be $Q_{C_B^K} = \{q_f : q_f(D) = \frac{1}{n}\sum_{x \in D} f(x),\ f \in C_B^K\}$, where $K \in \mathbb{N}$
and $B > 0$ are constants. Let the data universe be $[-1, 1]^d$, where $d \in \mathbb{N}$ is a constant. Then the
mechanism $\mathcal{S}$ given in Algorithm 1 and Algorithm 2 satisfies, for any $\epsilon > 0$, the following:
1) The mechanism is $\epsilon$-differentially private.
2) For any $\beta \ge 10 \cdot e^{-\frac{1}{5} n^{\frac{d}{2d+K}}}$ the mechanism is $(\alpha, \beta)$-accurate, where $\alpha = O\big(\big(\tfrac{1}{n}\big)^{\frac{K}{2d+K}}/\epsilon\big)$,
and the hidden constant depends only on $d$, $K$ and $B$.
3) The running time for $\mathcal{S}$ to output the summary is $O(n^{\frac{3d+K}{2d+K}})$.
4) The running time for $\mathcal{S}$ to answer a query is $O(n^{\frac{d+2+2d/K}{2d+K}}\, \mathrm{polylog}(n))$.

Table 1: Performances vs. order of smoothness
Order of smoothness      | Accuracy $\alpha$                 | Time: outputting summary | Time: answering a query
$K = 1$                  | $O((1/n)^{\frac{1}{2d+1}})$       | $O(n^{3/2})$             | $\tilde{O}(n^{\frac{3}{2}+\frac{1}{4d+2}})$
$K = 2d$                 | $O(1/\sqrt{n})$                   | $O(n^{5/4})$             | $\tilde{O}(n^{\frac{1}{4}+\frac{3}{4d}})$
$d/K = \epsilon_0 \ll 1$ | $O((1/n)^{1-2\epsilon_0})$        | $O(n^{1+\epsilon_0})$    | $\tilde{O}(n^{\epsilon_0(1+\frac{3}{d})})$
The proof of Theorem 3.1 is given in the supplementary material. To have a better idea of how the performances depend on the order of smoothness, let us consider three cases. The first case is $K = 1$, i.e., the query functions only have first order derivatives. Another extreme case is $K \gg d$, and we assume $d/K = \epsilon_0 \ll 1$. We also consider a case in the middle by assuming $K = 2d$. Table 1 gives simplified upper bounds for the error and running time in these cases. We have the following observations:

1) The accuracy $\alpha$ improves dramatically from roughly $O(n^{-1/(2d)})$ to nearly $O(n^{-1})$ as $K$ increases. For $K > 2d$, the error is smaller than the sampling error $O(\frac{1}{\sqrt{n}})$.
2) The running time for outputting the summary does not change too much, because reading through the database requires $\Omega(n)$ time.
3) The running time for answering a query reduces significantly from roughly $O(n^{3/2})$ to nearly $O(n^{\epsilon_0})$ as $K$ gets large. When $K = 2d$, it is about $n^{1/4}$ if $d$ is not too small. In practice, the speed for answering a query may be more important than that for outputting the summary, since the sanitizer outputs the summary only once. Thus having an $n^c$-time ($c \ll 1$) algorithm for query answering is appealing.
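The three regimes above can be checked mechanically from the exponents in items 2)-4) of Theorem 3.1. The short script below is our own illustration of that arithmetic (the function names are ours, not the paper's):

```python
from fractions import Fraction

def accuracy_exponent(K, d):
    # alpha = O((1/n)^e / eps) with e = K / (2d + K)   (Theorem 3.1, item 2)
    return Fraction(K, 2 * d + K)

def summary_time_exponent(K, d):
    # summary time O(n^e) with e = (3d + K) / (2d + K)  (item 3)
    return Fraction(3 * d + K, 2 * d + K)

def query_time_exponent(K, d):
    # query time O(n^e * polylog n) with e = (d + 2 + 2d/K) / (2d + K)  (item 4)
    return (d + 2 + Fraction(2 * d, K)) / (2 * d + K)

d = 10
for K in (1, 2 * d, 100 * d):  # the three regimes of Table 1
    print(K, accuracy_exponent(K, d), summary_time_exponent(K, d),
          query_time_exponent(K, d))
```

For $K = 2d$ this reproduces the middle row of Table 1 exactly: accuracy exponent $1/2$, summary exponent $5/4$, query exponent $1/4 + 3/(4d)$.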
Conceptually our mechanism is simple. First, by the change of variables $g_f(\theta_1, \ldots, \theta_d) = f(\cos\theta_1, \ldots, \cos\theta_d)$, the data universe is transformed from $[-1,1]^d$ to $[-\pi,\pi]^d$. Note that for each variable $\theta_i$, $g_f$ is an even function. To compute the summary, the mechanism just gives noisy answers to queries specified by even trigonometric monomials $\cos(m_1\theta_1)\cdots\cos(m_d\theta_d)$. For each trigonometric monomial, the highest degree of any variable is $t := \max_i m_i = O(n^{1/(2d+K)})$. The summary is an $O(n^{d/(2d+K)})$-dimensional vector. To answer a query specified by a smooth function $f$, the mechanism computes a trigonometric polynomial approximation of $g_f$. The answer to the query $q_f$ is a linear combination of the summary with the coefficients of the approximating trigonometric polynomial.
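As a quick numerical illustration of this change of variables (our own toy code; the helper name is invented): for a smooth $f$ on $[-1,1]$, the transformed $g_f(\theta) = f(\cos\theta)$ is even and $2\pi$-periodic, so it has a pure cosine expansion whose coefficients can be read off by quadrature.

```python
import math

def cosine_series_coeffs(f, t, m=2048):
    # Coefficients c_l with f(cos(theta)) ~= sum_{0 <= l < t} c_l cos(l*theta),
    # computed by the rectangle rule, which is spectrally accurate here
    # because the integrand is smooth and 2*pi-periodic.
    thetas = [-math.pi + 2 * math.pi * j / m for j in range(m)]
    g = [f(math.cos(th)) for th in thetas]
    coeffs = []
    for l in range(t):
        scale = 1 / (2 * math.pi) if l == 0 else 1 / math.pi
        s = sum(gv * math.cos(l * th) for gv, th in zip(g, thetas))
        coeffs.append(scale * s * (2 * math.pi / m))
    return coeffs

f = math.exp                          # a smooth query function on [-1, 1]
c = cosine_series_coeffs(f, t=12)
err = max(
    abs(f(math.cos(th)) - sum(c[l] * math.cos(l * th) for l in range(len(c))))
    for th in [-math.pi + 2 * math.pi * j / 1000 for j in range(1001)]
)
print(err)   # tiny: the cosine coefficients of a smooth f decay very fast
```

This is the classical Chebyshev expansion in disguise; the point of the paper is that such low-degree, small-coefficient expansions are exactly what a private summary can support.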
Our algorithm is an $L_\infty$-approximation based mechanism, motivated by [24]. An approximation based mechanism relies on three conditions: 1) there exists a small set of basis functions such that every query function can be well approximated by a linear combination of them; 2) all the linear coefficients are small; 3) the whole set of linear coefficients can be computed efficiently. If these conditions hold, then the mechanism just outputs noisy answers to the set of queries specified by the basis functions as the summary. When answering a query, the mechanism computes the coefficients with which the linear combination of the basis functions approximates the query function. The answer to the query is simply the inner product of the coefficients and the summary vector.
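A minimal sketch of this template, in our own notation and with invented function names: the summary is a vector of Laplace-noised basis-query answers, and each query answer is an inner product against that vector. (The noise calibration below uses simple composition over $B$ bounded basis queries; the paper's actual mechanism and constants may differ.)

```python
import math, random

def laplace_noise(scale):
    # Inverse-CDF sampling from the Laplace distribution with the given scale.
    u = random.random() - 0.5
    while u == -0.5:                      # probability-zero guard
        u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_summary(data, basis, eps):
    # Noisy answers to the basis queries q_phi(D) = (1/n) sum_x phi(x).
    # If each |phi| <= 1, replacing one record moves each average by at most
    # 2/n, so the L1 sensitivity of the whole vector is 2B/n; Laplace noise of
    # scale 2B/(n*eps) per coordinate gives eps-differential privacy.
    n, B = len(data), len(basis)
    scale = 2.0 * B / (n * eps)
    return [sum(phi(x) for x in data) / n + laplace_noise(scale) for phi in basis]

def answer_query(summary, coeffs):
    # The query is approximated by sum_j coeffs[j] * basis_j; its private
    # answer is the inner product of the coefficients with the noisy summary.
    return sum(c * s for c, s in zip(coeffs, summary))

random.seed(0)
data = [(-1) ** i * i / 10 for i in range(10)]      # records in [-1, 1]
basis = [lambda x: 1.0, lambda x: x]                # two bounded basis queries
summary = private_summary(data, basis, eps=1.0)
print(answer_query(summary, [3.0, 2.0]))            # noisy answer to 3 + 2x
```

Note that the second part of this mechanism touches only the summary and the public coefficients, matching the remark above that query answering contains no private information.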
The following theorem guarantees that by change of variables and using even trigonometric polynomials as the basis functions, the class of smooth functions has all the three properties described
above.
Theorem 3.2. Let $\gamma > 0$. For every $f \in C_B^K$ defined on $[-1,1]^d$, let
$$g_f(\theta_1, \ldots, \theta_d) = f(\cos\theta_1, \ldots, \cos\theta_d), \qquad \theta_i \in [-\pi,\pi].$$
Then there is an even trigonometric polynomial $p$ whose degree in each variable is $t(\gamma) = \lceil (\tfrac{1}{\gamma})^{1/K} \rceil$,
$$p(\theta_1, \ldots, \theta_d) = \sum_{0 \le l_1, \ldots, l_d < t(\gamma)} c_{l_1,\ldots,l_d} \cos(l_1\theta_1)\cdots\cos(l_d\theta_d),$$
such that
1) $\|g_f - p\|_\infty \le \gamma$.
2) All the linear coefficients $c_{l_1,\ldots,l_d}$ can be uniformly upper bounded by a constant $M$ independent of $t(\gamma)$ (i.e., $M$ depends only on $K$, $d$, and $B$).
3) The whole set of linear coefficients can be computed in time $O\big((\tfrac{1}{\gamma})^{\frac{d+2}{K} + \frac{2d}{K^2}} \cdot \mathrm{polylog}(\tfrac{1}{\gamma})\big)$.
Theorem 3.2 is proved in Section 4. Based on Theorem 3.2, the proof of Theorem 3.1 is mainly the argument for the Laplace mechanism together with an optimization of the approximation error $\gamma$, trading off against the Laplace noise. (Please see the supplementary material.)
4 $L_\infty$-approximation of smooth functions: small and efficiently computable coefficients
In this section we prove Theorem 3.2. That is, for every $f \in C_B^K$ the corresponding $g_f$ can be approximated by a low degree trigonometric polynomial in the $L_\infty$-norm. We also require that the linear coefficients of the trigonometric polynomial are all small and can be computed efficiently. These properties are crucial for the differentially private mechanism to be accurate and efficient.

In fact, $L_\infty$-approximation of smooth functions in $C_B^K$ by polynomials (and other basis functions) is an important topic in approximation theory. It is well known that for every $f \in C_B^K$ there is a low degree polynomial with small approximation error. However, it is not clear whether there is an upper bound for the linear coefficients that is sufficiently good for our purpose. Instead we transform $f$ to $g_f$ and use trigonometric polynomials as the basis functions in the mechanism. Then we are able to give a constant upper bound for the linear coefficients. We also need to compute the coefficients efficiently. But results from approximation theory give the coefficients as complicated integrals. We adopt an algorithm which fully exploits the smoothness of the function and thus can efficiently compute approximations of the coefficients to a precision at which the errors involved do not affect the accuracy of the differentially private mechanism too much.

Below, Section 4.1 describes the classical theory on trigonometric polynomial approximation of smooth functions. Section 4.2 shows that the coefficients have a small upper bound and can be efficiently computed. Theorem 3.2 then follows from these results.
4.1 Trigonometric polynomial approximation with generalized Jackson kernel
This section mainly contains known results of trigonometric polynomial approximation, stated in a
way tailored to our problem. For a comprehensive description of univariate approximation theory,
please refer to the excellent book of [8]; and to [23] for multivariate approximation theory.
Let $g_f$ be the function obtained from $f \in C_B^K([-1,1]^d)$: $g_f(\theta_1,\ldots,\theta_d) = f(\cos\theta_1,\ldots,\cos\theta_d)$. Note that $g_f \in C_{B'}^K([-\pi,\pi]^d)$ for some constant $B'$ depending only on $B$, $K$, $d$, and $g_f$ is even with respect to each variable. The key tool in trigonometric polynomial approximation of smooth functions is the generalized Jackson kernel.

Definition 4.1. Define the generalized Jackson kernel as $J_{t,r}(s) = \frac{1}{\lambda_{t,r}} \big(\frac{\sin(ts/2)}{\sin(s/2)}\big)^{2r}$, where $\lambda_{t,r}$ is determined by $\int_{-\pi}^{\pi} J_{t,r}(s)\,ds = 1$.

$J_{t,r}(s)$ is an even trigonometric polynomial of degree $r(t-1)$. Let $H_{t,r}(s) = J_{t',r}(s)$, where $t' = \lfloor t/r \rfloor + 1$. Then $H_{t,r}$ is an even trigonometric polynomial of degree at most $t$. We write
$$H_{t,r}(s) = a_0 + \sum_{l=1}^{t} a_l \cos(ls). \qquad (1)$$
Suppose that $g$ is a univariate function defined on $[-\pi,\pi]$ which satisfies $g(-\pi) = g(\pi)$. Define the approximation operator $I_{t,K}$ as
$$I_{t,K}(g)(x) = -\int_{-\pi}^{\pi} H_{t,r}(s) \sum_{l=1}^{K+1} (-1)^l \binom{K+1}{l} g(x+ls)\,ds, \qquad (2)$$
where $r = \lceil \frac{K+3}{2} \rceil$. It is not difficult to see that $I_{t,K}$ maps $g$ to a trigonometric polynomial of degree at most $t$.
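The kernel and the operator can be prototyped directly by quadrature. The sketch below is our own numerical rendering of Definition 4.1 and equation (2), not the paper's implementation; it checks that $I_{t,K}$ nearly reproduces a smooth even periodic function.

```python
import math

def jackson_H(t, K, m=2048):
    # H_{t,r}(s) = J_{t',r}(s) with r = ceil((K+3)/2) and t' = floor(t/r) + 1,
    # evaluated on a uniform grid and normalized so its integral equals 1.
    r = math.ceil((K + 3) / 2)
    tp = t // r + 1
    grid = [-math.pi + 2 * math.pi * j / m for j in range(m)]
    def ratio(s):
        den = math.sin(s / 2)
        return tp if abs(den) < 1e-12 else math.sin(tp * s / 2) / den
    vals = [ratio(s) ** (2 * r) for s in grid]
    lam = sum(vals) * (2 * math.pi / m)          # numerical lambda_{t,r}
    return grid, [v / lam for v in vals]

def I_tK(g, t, K, m=2048):
    # The operator of equation (2), computed by quadrature on the grid.
    grid, H = jackson_H(t, K, m)
    def approx(x):
        acc = 0.0
        for s, h in zip(grid, H):
            diff = sum((-1) ** l * math.comb(K + 1, l) * g(x + l * s)
                       for l in range(1, K + 2))
            acc += h * diff
        return -acc * (2 * math.pi / m)
    return approx

# Sanity check: for the smooth even periodic g = cos, the operator nearly
# reproduces g, with error shrinking polynomially as t grows (Theorem 4.1).
p = I_tK(math.cos, t=40, K=2)
err = max(abs(p(x) - math.cos(x)) for x in (-3.0, -1.5, 0.0, 1.5, 3.0))
print(err)
```

The alternating binomial sum inside the integral is what cancels the low-order error terms, which is why the construction exploits higher-order smoothness.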
Next suppose that $g$ is a $d$-variate function defined on $[-\pi,\pi]^d$, and is even with respect to each variable. Define an operator $I_{t,K}^d$ as the sequential composition of $I_{t,K,1}, \ldots, I_{t,K,d}$, where $I_{t,K,j}$ is the approximation operator given in (2) with respect to the $j$th variable of $g$. Thus $I_{t,K}^d(g)$ is a trigonometric polynomial of $d$ variables and each variable has degree at most $t$.

Theorem 4.1. Suppose that $g$ is a $d$-variate function defined on $[-\pi,\pi]^d$, and is even with respect to each variable. Let $D_j^{(K)} g$ be the $K$th order partial derivative of $g$ with respect to the $j$th variable. If $\|D_j^{(K)} g\|_\infty \le M$ for some constant $M$ for all $1 \le j \le d$, then there is a constant $C$ such that
$$\|g - I_{t,K}^d(g)\|_\infty \le \frac{C}{t^{K}},$$
where $C$ depends only on $M$, $d$ and $K$.
4.2 The linear coefficients

In this subsection we study the linear coefficients in the trigonometric polynomial $I_{t,K}^d(g_f)$. The previous subsection established that $g_f$ can be approximated by $I_{t,K}^d(g_f)$ for a small $t$. Here we consider the upper bound and the approximate computation of the coefficients. Since $I_{t,K}^d(g_f)(\theta_1,\ldots,\theta_d)$ is even with respect to each variable, we write
$$I_{t,K}^d(g_f)(\theta_1,\ldots,\theta_d) = \sum_{0 \le n_1,\ldots,n_d \le t} c_{n_1,\ldots,n_d} \cos(n_1\theta_1)\cdots\cos(n_d\theta_d). \qquad (3)$$
Fact 4.2. The coefficients $c_{n_1,\ldots,n_d}$ of $I_{t,K}^d(g_f)$ can be written as
$$c_{n_1,\ldots,n_d} = (-1)^d \sum_{\substack{1 \le k_1,\ldots,k_d \le K+1 \\ 0 \le l_1,\ldots,l_d \le t \\ l_i = k_i n_i \;\forall i \in [d]}} m_{l_1,k_1,\ldots,l_d,k_d}, \qquad (4)$$
where
$$m_{l_1,k_1,\ldots,l_d,k_d} = \prod_{i=1}^{d} (-1)^{k_i} a_{l_i} \binom{K+1}{k_i} \cdot \int_{[-\pi,\pi]^d} \prod_{i=1}^{d} \cos\Big(\frac{l_i}{k_i}\theta_i\Big)\, g_f(\theta)\,d\theta, \qquad (5)$$
and $a_{l_i}$ is the linear coefficient of $\cos(l_i s)$ in $H_{t,r}(s)$ as given in (1).
The following lemma shows that the coefficients $c_{n_1,\ldots,n_d}$ of $I_{t,K}^d(g_f)$ can be uniformly upper bounded by a constant independent of $t$.

Lemma 4.3. There exists a constant $M$ which depends only on $K$, $B$, $d$ but is independent of $t$, such that for every $f \in C_B^K$, all the linear coefficients $c_{n_1,\ldots,n_d}$ of $I_{t,K}^d(g_f)$ satisfy
$$|c_{n_1,\ldots,n_d}| \le M.$$
The proof of Lemma 4.3 is given in the supplementary material. Now we consider the computation of the coefficients $c_{n_1,\ldots,n_d}$ of $I_{t,K}^d(g_f)$. Note that each coefficient involves $d$-dimensional integration of smooth functions, so we have to compute approximations of them numerically. For the function class $C_B^K$ defined on $[-1,1]^d$, traditional numerical integration methods run in time $O((\frac{1}{\gamma})^{d/K})$ in order that the error is less than $\gamma$. Here we adopt the sparse grids algorithm due to Gerstner and Griebel [12], which fully exploits the smoothness of the integrand. By choosing a particular quadrature rule as the algorithm's subroutine, we are able to prove that the running time of the sparse grids algorithm is bounded by $O((\frac{1}{\gamma})^{2/K})$. The sparse grids algorithm, the theorem giving the bound for the running time, and its proof are all given in the supplementary material. Based on these results, we establish the running time for computing the approximate coefficients of the trigonometric polynomial, which is stated in the following lemma.
Lemma 4.4. Let $\hat{c}_{n_1,\ldots,n_d}$ be an approximation of the coefficient $c_{n_1,\ldots,n_d}$ of $I_{t,K}^d(g_f)$ obtained by approximately computing the integral in (5) with a version of the sparse grids algorithm [12] (given in the supplementary material). Let
$$\hat{I}_{t,K}^d(g_f)(\theta_1,\ldots,\theta_d) = \sum_{0 \le n_1,\ldots,n_d \le t} \hat{c}_{n_1,\ldots,n_d} \cos(n_1\theta_1)\cdots\cos(n_d\theta_d).$$
Then for every $f \in C_B^K$, in order that $\|\hat{I}_{t,K}^d(g_f) - I_{t,K}^d(g_f)\|_\infty \le O(t^{-K})$, it suffices that the computation of all the coefficients $\hat{c}_{n_1,\ldots,n_d}$ runs in time $O\big(t^{(1+\frac{2}{K})d+2} \cdot \mathrm{polylog}(t)\big)$. In addition, $\max_{n_1,\ldots,n_d} |\hat{c}_{n_1,\ldots,n_d} - c_{n_1,\ldots,n_d}| = o(1)$ as $t \to \infty$.

The proof of Lemma 4.4 is given in the supplementary material. Theorem 3.2 then follows easily from Lemma 4.3 and Lemma 4.4.
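To see why a sparse-grids subroutine pays off, it is enough to count quadrature nodes. The count below uses the classical Smolyak construction with nested one-dimensional rules of $2^l + 1$ points; this is an illustration of the general idea, not necessarily the exact variant analyzed in the supplementary material.

```python
from itertools import product

def full_grid_nodes(d, L):
    # Tensor-product rule: (2^L + 1) points per dimension.
    return (2 ** L + 1) ** d

def new_1d_points(l):
    # New points added at level l of a nested 1D rule with 2^l + 1 points.
    if l == 0:
        return 1
    if l == 1:
        return 2
    return 2 ** (l - 1)

def sparse_grid_nodes(d, L):
    # Smolyak grid: union of anisotropic tensor grids with level sum <= L,
    # counted via the new points contributed by each level vector.
    total = 0
    for levels in product(range(L + 1), repeat=d):
        if sum(levels) <= L:
            prod = 1
            for l in levels:
                prod *= new_1d_points(l)
            total += prod
    return total

print(full_grid_nodes(5, 5), sparse_grid_nodes(5, 5))  # ~3.9e7 vs a few thousand
```

In one dimension the two grids coincide; the gap opens up exponentially in $d$, which is the same phenomenon that turns the naive $O((1/\gamma)^{d/K})$ cost into $O((1/\gamma)^{2/K})$.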
Proof of Theorem 3.2. Setting $t = t(\gamma) = \lceil(\tfrac{1}{\gamma})^{1/K}\rceil$, let $p = \hat{I}_{t,K}^d(g_f)$. Combining Lemma 4.3 and Lemma 4.4, and noting that the coefficients $\hat{c}_{n_1,\ldots,n_d}$ are upper bounded by a constant, the theorem follows.
5 Conclusion

In this paper we propose an $\epsilon$-differentially private mechanism for efficiently releasing $K$-smooth queries. The accuracy of the mechanism is $O((\frac{1}{n})^{K/(2d+K)})$. The running time for outputting the summary is $O(n^{1+\frac{d}{2d+K}})$, and is $O(n^{\frac{d+2+2d/K}{2d+K}})$ for answering a query. The result can be generalized to $(\epsilon,\delta)$-differential privacy straightforwardly using the composition theorem [11]. The accuracy improves slightly to $O\big((\frac{1}{n})^{2K/(3d+2K)} \log(\frac{1}{\delta})^{K/(3d+2K)}\big)$, while the running times for outputting the summary and answering a query increase slightly. Our mechanism is based on approximation of smooth functions by a linear combination of a small set of basis functions with small and efficiently computable coefficients. Directly approximating functions in $C_B^K([-1,1]^d)$ by polynomials does not guarantee small coefficients and is less efficient. To achieve these goals we use trigonometric polynomials to approximate a transformation of the query functions.

It is worth pointing out that the approximation considered here for differential privacy is $L_\infty$-approximation, because the accuracy is defined in the worst-case sense with respect to databases and queries. $L_\infty$-approximation is different from $L_2$-approximation, which is simply the Fourier transform if we use trigonometric polynomials as the basis functions. $L_2$-approximation does not guarantee (worst-case) accuracy.

For the class of smooth functions defined on $[-1,1]^d$ where $d$ is a constant, it is in fact not difficult to design a $\mathrm{poly}(n)$-time differentially private mechanism. One can discretize $[-1,1]^d$ to $O(\frac{1}{\sqrt{n}})$ precision, and use the differentially private mechanism for answering general queries (e.g., [16]). However, that mechanism runs in time $\tilde{O}(n^{d/2})$ to answer a query and provides $\tilde{O}(n^{-1/2})$ accuracy. In contrast, our mechanism exploits higher order smoothness of the queries. It is always more efficient, and for highly smooth queries it is more accurate.
Acknowledgments
This work was supported by NSFC(61222307, 61075003) and a grant from MOE-Microsoft Key
Laboratory of Statistics and Information Technology of Peking University. We also thank Di He for
very helpful discussions.
References
[1] B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: a holistic solution to contingency table release. In PODS, pages 273–282. ACM, 2007.
[2] A. Blum, K. Ligett, and A. Roth. A learning theory approach to non-interactive database privacy. In STOC, pages 609–618. ACM, 2008.
[3] A. Blum and A. Roth. Fast private data release algorithms for sparse queries. arXiv preprint arXiv:1111.6842, 2011.
[4] K. Chaudhuri and D. Hsu. Sample complexity bounds for differentially private learning. In COLT, 2011.
[5] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. JMLR, 12:1069, 2011.
[6] K. Chaudhuri, A. Sarwate, and K. Sinha. Near-optimal differentially private principal components. In NIPS, pages 998–1006, 2012.
[7] M. Cheraghchi, A. Klivans, P. Kothari, and H. K. Lee. Submodular functions are noise stable. In SODA, pages 1586–1592. SIAM, 2012.
[8] R. A. DeVore and G. G. Lorentz. Constructive Approximation, volume 303. Springer Verlag, 1993.
[9] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. TCC, pages 265–284, 2006.
[10] C. Dwork, M. Naor, O. Reingold, G. N. Rothblum, and S. Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In STOC, pages 381–390. ACM, 2009.
[11] C. Dwork, G. N. Rothblum, and S. Vadhan. Boosting and differential privacy. In FOCS, pages 51–60. IEEE, 2010.
[12] T. Gerstner and M. Griebel. Numerical integration using sparse grids. Numerical Algorithms, 18(3-4):209–232, 1998.
[13] A. Gupta, M. Hardt, A. Roth, and J. Ullman. Privately releasing conjunctions and the statistical query barrier. In STOC, pages 803–812. ACM, 2011.
[14] M. Hardt, K. Ligett, and F. McSherry. A simple and practical algorithm for differentially private data release. In NIPS, 2012.
[15] M. Hardt, G. N. Rothblum, and R. A. Servedio. Private data release via learning thresholds. In SODA, pages 168–187. SIAM, 2012.
[16] M. Hardt and G. N. Rothblum. A multiplicative weights mechanism for privacy-preserving data analysis. In FOCS, pages 61–70. IEEE Computer Society, 2010.
[17] D. Kifer and B. R. Lin. Towards an axiomatization of statistical privacy and utility. In PODS, pages 147–158. ACM, 2010.
[18] J. Lei. Differentially private M-estimators. In NIPS, 2011.
[19] C. Li, M. Hay, V. Rastogi, G. Miklau, and A. McGregor. Optimizing linear counting queries under differential privacy. In PODS, pages 123–134. ACM, 2010.
[20] P. Jain, P. Kothari, and A. Thakurta. Differentially private online learning. In COLT, 2012.
[21] A. Roth and T. Roughgarden. Interactive privacy via the median mechanism. In STOC, pages 765–774. ACM, 2010.
[22] A. Smola, B. Schölkopf, and K. Müller. The connection between regularization operators and support vector kernels. Neural Networks, 11(4):637–649, 1998.
[23] V. N. Temlyakov. Approximation of Periodic Functions. Nova Science Pub Inc, 1994.
[24] J. Thaler, J. Ullman, and S. Vadhan. Faster algorithms for privately releasing marginals. In ICALP, pages 810–821. Springer, 2012.
[25] J. Ullman. Answering $n^{2+o(1)}$ counting queries with differential privacy is hard. In STOC. ACM, 2013.
[26] A. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.
[27] G. Wahba et al. Support vector machines, reproducing kernel Hilbert spaces and the randomized GACV. Advances in Kernel Methods: Support Vector Learning, 6:69–87, 1999.
[28] L. Wang. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. Journal of Machine Learning Research, 12:2269–2292, 2011.
[29] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375–389, 2010.
[30] O. Williams and F. McSherry. Probabilistic inference and differential privacy. In NIPS, 2010.
Learning in Full-information and Bandit Settings
Adam Smith*
Pennsylvania State University
[email protected]
Abhradeep Thakurta†
Stanford University and
Microsoft Research Silicon Valley Campus
[email protected]
Abstract
We give differentially private algorithms for a large class of online learning algorithms, in both the full information and bandit settings. Our algorithms aim
to minimize a convex loss function which is a sum of smaller convex loss terms,
one for each data point. To design our algorithms, we modify the popular mirror
descent approach, or rather a variant called follow the approximate leader.
The technique leads to the first private algorithms for online learning in
the bandit setting. In the full information setting, our algorithms improve over the
regret bounds of previous work (due to Dwork, Naor, Pitassi and Rothblum (2010)
and Jain, Kothari and Thakurta (2012)). In many cases, our algorithms (in both
settings) match the dependence on the input length, T , of the optimal nonprivate
regret bounds up to logarithmic factors in T . Our algorithms require logarithmic
space and update time.
1 Introduction
This paper looks at the information leaked by online learning algorithms, and seeks to design accurate learning algorithms with rigorous privacy guarantees ? that is, algorithms that provably leak
very little about individual inputs.
Even the output of offline (batch) learning algorithms can leak private information. The dual form
of a support vector machine?s solution, for example, is described in terms of a small number of exact
data points, revealing these individuals? data in the clear. Considerable effort has been devoted to
designing batch learning algorithms satisfying differential privacy (a rigorous notion of privacy that
emerged from the cryptography literature [DMNS06, Dwo06]), for example [BDMN05, KLN+ 08,
CM08, CMS11, Smi11, KST12, JT13, DJW13].
In this work we provide a general technique for making a large class of online learning algorithms
differentially private, in both the full information and bandit settings. Our technique applies to
algorithms that aim to minimize a convex loss function which is a sum of smaller convex loss terms,
one for each data point. We modify the popular mirror descent approach (or rather a variant called
follow the approximate leader) [Sha11, HAK07].
In most cases, the modified algorithms provide similar accuracy guarantees to their nonprivate counterparts, with a small (logarithmic in the stream length) blowup in space and time complexity.
Online (Convex) Learning: We begin with the full information setting. Consider an algorithm that receives a stream of inputs $F = \langle f_1, \ldots, f_T \rangle$, each corresponding to one individual's data. We interpret each input as a loss function on a parameter space $C$ (for example, it might be one term
* Supported by NSF awards #0941553 and #0747294.
† Supported by Sloan Foundation fellowship and Microsoft Research.
in a convex program such as the one for logistic regression). The algorithm's goal is to output a sequence of parameter estimates $w_1, w_2, \ldots$, with each $w_t$ in $C$, that roughly minimizes the total error $\sum_t f_t(w_t)$. The difficulty for the algorithm is that it computes $w_t$ based only on $f_1, \ldots, f_{t-1}$. We seek to minimize the a posteriori regret,
$$\mathrm{Regret}(T) = \sum_{t=1}^{T} f_t(w_t) - \min_{w \in C} \sum_{t=1}^{T} f_t(w). \qquad (1)$$
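As a concrete (nonprivate) baseline, here is online gradient descent on one-dimensional squared losses, with the a posteriori regret computed exactly as in (1). The losses and constants are our own toy choices.

```python
import math

def project(w, radius=1.0):
    # Euclidean projection onto the interval C = [-radius, radius].
    return max(-radius, min(radius, w))

def ogd_regret(targets, eta0=1.0):
    # Losses f_t(w) = (w - z_t)^2 with gradient 2*(w - z_t); step eta0/sqrt(t).
    w, played = 0.0, []
    for t, z in enumerate(targets, start=1):
        played.append(w)
        w = project(w - (eta0 / math.sqrt(t)) * 2.0 * (w - z))
    loss = sum((wt - z) ** 2 for wt, z in zip(played, targets))
    # For squared losses the best fixed comparator is the (projected) mean.
    w_star = project(sum(targets) / len(targets))
    best = sum((w_star - z) ** 2 for z in targets)
    return loss - best

targets = [0.5 if t % 2 == 0 else 0.3 for t in range(200)]
print(ogd_regret(targets))   # much smaller than T = 200
```

The regret stays far below $T$ because the iterates settle near the hindsight-optimal point; the private algorithms in this paper aim to preserve this behavior while noising the information flow.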
In the bandit setting, the input to the algorithm consists only of $f_1(w_1), f_2(w_2), \ldots$. That is, at each time step $t$, the algorithm learns only the cost $f_{t-1}(w_{t-1})$ of the choice $w_{t-1}$ it made at the previous time step, rather than the full cost function $f_{t-1}$.

We consider three types of adversarial input selection: An oblivious adversary selects the input stream $f_1, \ldots, f_T$ ahead of time, based on knowledge of the algorithm but not of the algorithm's random coins. A (strongly) adaptive adversary selects $f_t$ based on the outputs so far $w_1, w_2, \ldots, w_t$ (but not on the algorithm's internal random coins).
Both the full-information and bandit settings are extensively studied in the literature (see, e.g., [Sha11, BCB12] for recent surveys). Most of this effort has been spent on online learning problems that are convex, meaning that the loss functions $f_t$ are convex (in $w$) and the parameter set $C \subseteq \mathbb{R}^p$ is a convex set (note that one can typically "convexify" the parameter space by randomization). The problem dimension $p$ is the dimension of the ambient space containing $C$.
We consider various restrictions on the cost functions, such as Lipschitz continuity and strong convexity. A function $f : C \to \mathbb{R}$ is $L$-Lipschitz with respect to the $\ell_2$ metric if $|f(x) - f(y)| \le L\|x - y\|_2$ for all $x, y \in C$. Equivalently, for every $x \in C^0$ (the interior of $C$) and every subgradient $z \in \partial f(x)$, we have $\|z\|_2 \le L$. (Recall that $z$ is a subgradient of $f$ at $x$ if the function $\tilde{f}(y) = f(x) + \langle z, y - x \rangle$ is a lower bound for $f$ on all of $C$. If $f$ is convex, then a subgradient exists at every point, and the subgradient is unique if and only if $f$ is differentiable at that point.) The function $f$ is $H$-strongly convex w.r.t. $\ell_2$ if for every $x \in C$, we can bound $f$ below on $C$ by a quadratic function of the form $\tilde{f}(y) = f(x) + \langle z, y - x \rangle + \frac{H}{2}\|y - x\|_2^2$. If $f$ is twice differentiable, $H$-strong convexity is equivalent to the requirement that all eigenvalues of $\nabla^2 f(w)$ be at least $H$ for all $w \in C$.
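Both definitions are easy to sanity-check numerically on concrete losses; the toy examples below are ours (a linear loss for the Lipschitz condition, a squared distance for 2-strong convexity).

```python
import math, random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

random.seed(1)
p, trials = 4, 100
c = [0.3, -0.1, 0.5, 0.2]
a = [0.1, 0.2, -0.3, 0.0]
H = 2.0

for _ in range(trials):
    x = [random.uniform(-1, 1) for _ in range(p)]
    y = [random.uniform(-1, 1) for _ in range(p)]
    diff = [yi - xi for yi, xi in zip(y, x)]
    # f(w) = <c, w> is L-Lipschitz with L = ||c||_2 (Cauchy-Schwarz).
    assert abs(dot(c, x) - dot(c, y)) <= norm(c) * norm(diff) + 1e-12
    # f(w) = ||w - a||_2^2 is H-strongly convex with H = 2: the quadratic
    # lower bound from the definition holds (with equality for this f).
    fx = sum((xi - ai) ** 2 for xi, ai in zip(x, a))
    fy = sum((yi - ai) ** 2 for yi, ai in zip(y, a))
    grad_x = [2 * (xi - ai) for xi, ai in zip(x, a)]
    assert fy >= fx + dot(grad_x, diff) + (H / 2) * norm(diff) ** 2 - 1e-9

print("Lipschitz and strong-convexity checks passed")
```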
We denote by $D$ the set of allowable cost functions; the input sequence thus lies in $D^T$.
Differential Privacy, and Challenges for Privacy in the Online Setting: We seek to design online learning algorithms that satisfy differential privacy [DMNS06, Dwo06], which ensures that the amount of information an adversary learns about a particular cost function $f_t$ in the function sequence $F$ is almost independent of its presence or absence in $F$. Each $f_t$ can be thought of as private information belonging to an individual. The appropriate notion of privacy here is the one in which the entire sequence of outputs of the algorithm $(\hat{w}_1, \ldots, \hat{w}_T)$ is revealed to an attacker (the continual observation setting [DNPR10]). Formally, we say two input sequences $F, F' \in D^T$ are neighbors if they differ only in one entry (say, replacing $f_t$ by $f_t'$).

Definition 2 (Differential privacy [DMNS06, Dwo06, DNPR10]). A randomized algorithm $\mathcal{A}$ is $(\epsilon, \delta)$-differentially private if for every two neighboring sequences $F, F' \in D^T$, and for every event $O$ in the output space $C^T$,
$$\Pr[\mathcal{A}(F) \in O] \le e^{\epsilon} \Pr[\mathcal{A}(F') \in O] + \delta. \qquad (2)$$
If $\delta$ is zero, then we simply say $\mathcal{A}$ is $\epsilon$-differentially private.

Here $\mathcal{A}(F)$ refers to the entire sequence of outputs produced by the algorithm during its execution.1 Our protocols all satisfy $\epsilon$-differential privacy (that is, with $\delta = 0$). We include $\delta$ in the definition for comparison with previous work.
1 As defined, differential privacy requires indistinguishable outputs only for nonadaptively chosen sequences (that is, sequences where the inputs at time $t$ are fixed ahead of time and do not depend on the outputs at times $1, \ldots, t-1$). The algorithms in our paper (and in previous work) in fact satisfy a stronger adaptive variant, in which an adversary selects the input online as the computation proceeds. When $\delta = 0$, the nonadaptive and adaptive variants are equivalent [DNPR10]. Moreover, protocols based on "randomized response" or the "tree-based sum" protocol of [DNPR10, CSS10] are adaptively secure, even when $\delta > 0$. We do not define the adaptive variant here explicitly, but we use it implicitly when proving privacy.
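The tree-based sum protocol mentioned here can be sketched in a few lines: maintain Laplace-noised sums over dyadic intervals, and assemble each prefix sum from at most $\log T$ of them. This is our own simplified rendering of the idea in [DNPR10, CSS10], not their exact protocol.

```python
import math, random

def laplace_noise(scale):
    # Inverse-CDF sampling from the Laplace distribution.
    u = random.random() - 0.5
    while u == -0.5:                      # probability-zero guard
        u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_prefix_sums(stream, eps):
    # Dyadic ("tree-based") aggregation: every element lands in at most
    # (levels + 1) dyadic intervals, so Laplace noise of scale (levels+1)/eps
    # on each interval sum yields eps-DP for the whole release (entries
    # assumed to lie in [0, 1]), while each prefix sum is reconstructed from
    # at most log2(T) noisy nodes, giving polylog(T) error per prefix.
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T))) if T > 1 else 1
    scale = (levels + 1) / eps
    noisy = {}
    for lvl in range(levels + 1):
        width = 2 ** lvl
        for idx in range((T + width - 1) // width):
            block = stream[idx * width:(idx + 1) * width]
            noisy[(lvl, idx)] = sum(block) + laplace_noise(scale)
    prefixes = []
    for t in range(1, T + 1):
        total, pos = 0.0, 0
        while pos < t:                    # greedy cover of [0, t) by dyadic blocks
            lvl = levels
            while lvl > 0 and (pos % (2 ** lvl) != 0 or pos + 2 ** lvl > t):
                lvl -= 1
            total += noisy[(lvl, pos // (2 ** lvl))]
            pos += 2 ** lvl
        prefixes.append(total)
    return prefixes

random.seed(0)
release = private_prefix_sums([1] * 10, eps=1000.0)
print([round(v, 2) for v in release])     # close to the true prefix sums 1..10
```

The same structure is what later lets this paper maintain a private running sum of gradients with only logarithmic space and update time.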
Differential privacy provides meaningful guarantees against an attacker who has access to considerable side information: the attacker learns the same things about someone whether or not their data were actually used (see [KS08, DN10, KM12] for further discussion).

Differential privacy is particularly challenging to analyze for online learning algorithms, since a change in a single input at the beginning of the sequence may affect outputs at all future times in ways that are hard to predict. For example, a popular algorithm for online learning is online gradient descent: at each time step, the parameter is updated as $w_{t+1} = \Pi_C(w_t - \eta_t \nabla f_t(w_t))$, where $\Pi_C(x)$ is the nearest point to $x$ in $C$, and $\eta_t > 0$ is a parameter called the learning rate. A change in an input $f_i$ (replacing it with $f_i'$) leads to changes in all subsequent outputs $w_{i+1}, w_{i+2}, \ldots$, roughly pushing them in the direction of $\nabla f_i(w_i) - \nabla f_i'(w_i)$. The effect is amplified by the fact that the gradients of subsequent functions $f_{i+1}, f_{i+2}, \ldots$ will be evaluated at different points in the two streams.
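This cascade is easy to observe numerically. In the toy run below (our example), changing a single loss in the stream moves every subsequent iterate of online gradient descent.

```python
def ogd(zs, eta=0.1):
    # Online gradient descent for f_t(w) = (w - z_t)^2 on C = [-1, 1].
    w, out = 0.0, []
    for z in zs:
        out.append(w)
        w = max(-1.0, min(1.0, w - eta * 2.0 * (w - z)))
    return out

stream_a = [0.5] * 20
stream_b = list(stream_a)
stream_b[3] = -0.5                      # neighboring stream: one loss changed
wa, wb = ogd(stream_a), ogd(stream_b)
diverged = [t for t in range(20) if abs(wa[t] - wb[t]) > 1e-12]
print(diverged)                         # rounds 4..19: every later output moves
```

Here the gap decays geometrically but never vanishes, which is exactly why naive per-output noise addition is too costly and a more global accounting (such as summed gradients) is needed.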
Previous Approaches: Despite the challenges, there are several results on differentially private
online learning. A special case, ?learning from experts? in the full information setting, was discussed
in the seminal paper of Dwork, Naor, Pitassi and Rothblum [DNPR10] on privacy under continual
observation. In this case, the set of available actions is the simplex ?({1, ..., p}) and the functions fi
are linear with coefficients in {0, 1} (that is, ft (w) = hw, ct i where ct ? {0, 1}p ). Their algorithm
guarantees a weaker notion of privacy ?
than the one we consider2 but, when adapted to our stronger
setting, it yields a regret bound of O(p T /).
Jain, Kothari and Thakurta [JKT12] defined the general problem of private online learning, and gave algorithms for learning convex functions over convex domains in the full information setting. They gave algorithms that satisfy (ε, δ)-differential privacy with δ > 0 (our algorithms satisfy the stronger variant with δ = 0). Specifically, their algorithms have regret Õ(√T log(1/δ)/ε) for Lipschitz-bounded, strongly convex cost functions and Õ(T^{2/3} log(1/δ)/ε) for general Lipschitz convex costs. The idea of [JKT12] for learning strongly convex functions is to bound the sensitivity of the entire vector of outputs w_1, w_2, ... to a change in one input (roughly, they show that when f_i is changed, a subsequent output w_j changes by O(1/|j − i|)).
Unfortunately, the regret bounds obtained by previous work remain far from the best nonprivate bounds. [Zin03] gave an algorithm with regret O(√T) for general Lipschitz functions, assuming L and the diameter ‖C‖₂ of C are constants. Ω(√T) regret is necessary (see, e.g., [HAK07]), so the dependence on T of [Zin03] is tight. When the cost functions in F are H-strongly convex for constant H, the regret can be improved to O(log T) [HAK07], which is also tight. In this work, we give new algorithms that match these nonprivate bounds' dependence on T, up to (poly log T)/ε factors. We note that [JKT12] give one algorithm for a specific strongly convex problem, online linear regression, with regret poly(log T); one can view that algorithm as a special case of our results.
We are not aware of any previous work on privacy in the bandit setting. One might expect that bandit
learning algorithms are easier to make private, since they access data in a much more limited way.
However, even nonprivate algorithms for bandit learning are very delicate, and private versions had
until now proved elusive.
Our Results: In this work we provide a technique for making a large class of online learning algorithms differentially private, in both the full information and bandit settings. In both cases, the idea is to search for algorithms whose decisions at time t depend on previous time steps only through a sum of observations made at times 1, 2, ..., t. Specifically, our algorithms work by measuring the gradient ∇f_t(w_t) when f_t is learned, and maintaining a differentially private running sum of the gradients observed so far. We maintain this sum using the tree-based sum protocol of [DNPR10, CSS10]. We then show that a class of learning algorithms known collectively as follow the approximate leader (the version we use is due to [HAK07]) can be run given only these noisy sums, and that their regret can be bounded even when these sums are inaccurate.
Our algorithms can be run with space O(log T ), and require O(log T ) running time at each step.
² Specifically, Dwork et al. [DNPR10] provide single-entry-level privacy, in the sense that a neighboring data set may only differ in one entry of the cost vector for one round. In contrast, we allow the entire cost vector to change at one round. Hiding that larger set of possible changes is more difficult, so our algorithms also satisfy the weaker notion of Dwork et al.
Our contributions for the full information setting and their relation to previous work are summarized in Table 1. Our main algorithm, for strongly convex functions, achieves regret O(log^{2.5} T/ε), ignoring factors of the dimension p, the Lipschitz constant L, and the strong convexity H. When strong convexity is not guaranteed, we use regularization to ensure it (similar to what is done in nonprivate settings, e.g. [Sha11]). Setting parameters carefully, we get regret of O(√T log^{2.5} T/ε). These bounds essentially match the nonprivate lower bounds of Ω(log T) and Ω(√T), respectively.
The results in the full information setting apply even when the input stream is chosen adaptively as a function of the algorithm's choices at previous time steps. In the bandit setting, we distinguish between oblivious and adaptive adversaries. Furthermore, in the bandit setting, we assume that C is sandwiched between two concentric L2-balls of radii r and R (where r < R). We also assume that |f_t(w)| ≤ B for all w ∈ C and all t ∈ [T]. Similar assumptions were made in [FKM05, ADX10]. Our results are summarized in Table 2. For most of the settings we consider, we match the dependence on T of the best nonprivate algorithm, though generally not the dependence on the dimension p.
Function class | Previous private upper bound | Our algorithm | Nonprivate lower bound
Learning with experts (linear functions over C = Δ({1, ..., p})) | O(p√T/ε) [DNPR10] | O(√(pT) log^{2.5} T/ε) | Ω(√(T log p))
Lipschitz | Õ(√p T^{2/3} log(1/δ)/ε) [JKT12] | O(√(pT) log^{2.5} T/ε) | Ω(√T)
Lipschitz and strongly convex | Õ(√(pT) log²(1/δ)/ε) [JKT12] | O(p log^{2.5} T/ε) | Ω(log T)

Table 1: Regret bounds for online learning in the full information setting. Bounds in lines 2 and 3 hide the (polynomial) dependencies on the parameters L, H. The notation Õ(·) hides poly(log T) factors.
Function class | Our result | Best nonprivate bound
Learning with experts (linear functions over C = Δ({1, ..., p})) | Õ(pT^{3/4}/ε) | O(√T) [AHR08]
Lipschitz | Õ(pT^{3/4}/ε) | O(pT^{3/4}) [FKM05]
Lipschitz and strongly convex (Adaptive) | Õ(pT^{3/4}/ε) | O(p^{2/3} T^{3/4}) [ADX10]
Lipschitz and strongly convex (Oblivious) | Õ(pT^{2/3}/ε) | O(p^{2/3} T^{2/3}) [ADX10]

Table 2: Regret bounds for online learning in the bandit setting. In all these settings, the best known nonprivate lower bound is √T. The Õ(·) notation hides poly log factors in T. Bounds hide polynomial dependencies on L, H, r and R.
In the remainder of the text, we refer to appendices for many of the details of algorithms and proofs.
The appendices can be found in the ?Supplementary Materials? associated to this paper.
2  Private Online Learning: Full-information Setting

In this section we adapt the Follow The Approximate Leader (FTAL) algorithm of [HAK07] to design a differentially private variant. Our modified algorithm, which we call Private Follow The Approximate Leader (PFTAL), needs a new regret analysis, as we have to deal with the randomness due to differential privacy.
2.1  Private Follow The Approximate Leader (PFTAL) with Strongly Convex Costs
Algorithm 1 Differentially Private Follow the Approximate Leader (PFTAL)
Input: cost functions f_1, ..., f_T (in an online sequence), strong convexity parameter H, Lipschitz constant L, convex set C ⊆ R^p, and privacy parameter ε.
1: w̃_1 ← any vector from C. Output w̃_1.
2: Pass ∇f_1(w̃_1), the L2-bound L and the privacy parameter ε to the tree-based aggregation protocol, and receive the current partial sum in v̂_1.
3: for time steps t ∈ {1, ..., T − 1} do
4:   w̃_{t+1} ← arg min_{w∈C} ⟨v̂_t, w⟩ + (H/2) Σ_{τ=1}^t ‖w − w̃_τ‖₂². Output w̃_{t+1}.
5:   Pass ∇f_{t+1}(w̃_{t+1}), the L2-bound L and the privacy parameter ε to the tree-based protocol (Algorithm 2), and receive the current partial sum in v̂_{t+1}.
6: end for
The main idea in the PFTAL algorithm is to execute the well-known Follow The Leader (FTL) algorithm [Han57] using quadratic approximations f̃_1, ..., f̃_T of the cost functions f_1, ..., f_T. Roughly, at every time step t + 1, PFTAL outputs a vector w that approximately minimizes the sum of the approximations f̃_1, ..., f̃_t over the convex set C.
Let ŵ_1, ..., ŵ_t be the sequence of outputs produced in the first t time steps, and let f_t be the cost function at step t. Consider the following quadratic approximation to f_t (as in [HAK07]). Define

    f̃_t(w) = f_t(ŵ_t) + ⟨∇f_t(ŵ_t), w − ŵ_t⟩ + (H/2)‖w − ŵ_t‖₂²    (3)

where H is the strong convexity parameter. Notice that f_t and f̃_t have the same value and gradient at ŵ_t (that is, f_t(ŵ_t) = f̃_t(ŵ_t) and ∇f_t(ŵ_t) = ∇f̃_t(ŵ_t)). Moreover, f̃_t is a lower bound for f_t everywhere on C.
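The surrogate in (3) is mechanical to construct given value- and gradient-oracle access to f_t; the following sketch does so (the quadratic test function used in the check below is a hypothetical example, for which the surrogate happens to be exact):

```python
import numpy as np

def quadratic_lower_bound(f, grad_f, w_hat, H):
    # Build f_tilde(w) = f(w_hat) + <grad f(w_hat), w - w_hat> + (H/2)||w - w_hat||_2^2,
    # which agrees with f in value and gradient at w_hat, as in equation (3).
    f0, g0 = f(w_hat), grad_f(w_hat)
    def f_tilde(w):
        d = np.asarray(w) - w_hat
        return f0 + g0 @ d + 0.5 * H * (d @ d)
    return f_tilde
```

For an H-strongly convex f, the surrogate lower-bounds f everywhere while touching it at w_hat, which is the property the FTL analysis needs.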
Let ŵ_{t+1} = arg min_{w∈C} Σ_{τ=1}^t f̃_τ(w) be the "leader" corresponding to the cost functions f̃_1, ..., f̃_t. Minimizing the sum of the f̃_τ(w) is the same as minimizing the sum of f̃_τ(w) − f_τ(ŵ_τ), since subtracting a constant term won't change the minimizer. We can thus write ŵ_{t+1} as

    ŵ_{t+1} = arg min_{w∈C} ⟨Σ_{τ=1}^t ∇f_τ(ŵ_τ), w⟩ + (H/2) Σ_{τ=1}^t ‖w − ŵ_τ‖₂²    (4)
Suppose ŵ_1, ..., ŵ_t have been released so far. To release a private approximation to ŵ_{t+1}, it suffices to approximate v_{t+1} = Σ_{τ=1}^t ∇f_τ(ŵ_τ) while ensuring differential privacy. If we fix the previously released information ŵ_τ, then changing any one cost function will only change one of the summands in v_{t+1}.
With the above observation, we abstract out the following problem: given a set of vectors z_1, ..., z_T ∈ R^p, compute all the partial sums v_t = Σ_{τ=1}^t z_τ while preserving privacy. This problem is well studied in the privacy literature. Assuming each z_t has L2-norm at most L₀, the following tree-based aggregation scheme ensures that, in expectation, the noise (in terms of L2-error) in each v_t is O(√p L₀ log^{1.5} T/ε), and that the whole sequence v̂_1, ..., v̂_T is ε-differentially private. We now describe the tree-based scheme.
Tree-based Aggregation [DNPR10, CSS10]: Consider a complete binary tree whose leaf nodes are the vectors z_1, ..., z_T. (For ease of exposition, assume T is a power of two; in general, we can work with the smallest power of two greater than T.) Each internal node in the tree stores the sum of all the leaves in its sub-tree. In a differentially private version of this tree, we ensure that each node's sub-tree sum is (ε/log₂ T)-differentially private, by adding a noise vector b ∈ R^p whose L2-norm is Gamma distributed with standard deviation O(√p L₀ log T/ε). Since each z_t only affects log₂ T nodes in the tree, by the composition property [DMNS06], the complete tree will be ε-differentially private. Moreover, the algorithm's error in estimating any partial sum v_t = Σ_{τ=1}^t z_τ grows as O(√p L₀ log² T/ε), since one can compute v_t from at most log T nodes in the tree. A formal description of the tree-based aggregation scheme is given in Appendix A.
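To make the mechanism concrete, here is a minimal sketch of the tree-based partial-sum idea. It simplifies in two ways: it adds per-coordinate Laplace noise rather than a vector with Gamma-distributed L2-norm, and it materializes all dyadic block sums at once rather than streaming (Algorithm 2 in the paper's Appendix A is the authoritative, streaming description):

```python
import numpy as np

def private_partial_sums(zs, eps, rng):
    # Each dyadic block of the stream gets its own noisy sum; a partial sum v_t
    # is then assembled from at most log2(T)+1 noisy blocks, one per set bit in
    # the binary expansion of t, so error grows only polylogarithmically in T.
    T, p = len(zs), len(zs[0])
    levels = int(np.ceil(np.log2(T))) if T > 1 else 1
    eps_node = eps / (levels + 1)          # composition across tree levels
    noisy = {}
    for l in range(levels + 1):
        width = 1 << l
        for j in range((T + width - 1) // width):
            block = zs[j * width:(j + 1) * width]
            noisy[(l, j)] = sum(block, np.zeros(p)) + rng.laplace(scale=1.0 / eps_node, size=p)
    sums = []
    for t in range(1, T + 1):
        v, start = np.zeros(p), 0
        for l in reversed(range(levels + 1)):   # greedy dyadic decomposition of [0, t)
            width = 1 << l
            if start + width <= t:
                v, start = v + noisy[(l, start // width)], start + width
        sums.append(v)
    return sums
```

With a large ε the noise vanishes and the outputs approach the exact running sums; with small ε each released v̂_t is perturbed, which is exactly the inaccuracy the regret analysis must tolerate.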
Now we complete the PFTAL algorithm by computing the private version w̃_{t+1} of ŵ_{t+1} in (4) as the minimizer of the perturbed loss function:

    w̃_{t+1} = arg min_{w∈C} ⟨v̂_t, w⟩ + (H/2) Σ_{τ=1}^t ‖w − w̃_τ‖₂²    (5)

Here v̂_t is the noisy version of v_t, computed using the tree-based aggregation scheme. A formal description of the algorithm is given in Algorithm 1.
Note on space complexity: For simplicity, in the description of the tree-based aggregation scheme (Algorithm 2 in Appendix A) we maintain the complete binary tree. However, it is not hard to show that at any time step t, it suffices to keep track of the vectors (of partial sums) on the path from z_t to the root of the tree. So the amount of space required by the algorithm is O(log T).
2.1.1  Privacy and Utility Guarantees for PFTAL (Algorithm 1)
In this section we provide the privacy and regret guarantees for the PFTAL algorithm (Algorithm 1).
For detailed proofs of the theorem statements, see Appendix B.
Theorem 3 (Privacy guarantee). Algorithm 1 is ε-differentially private.
Proof Sketch. Given the binary tree, the sequence w̃_2, ..., w̃_T is completely determined. Hence, it suffices to argue privacy for the collection of noisy sums associated to nodes in the binary tree. At first glance, it seems that each loss function affects only one leaf in the tree, and hence at most log T of the nodes' partial sums; if that were true, the analysis would be simple. The analysis is delicate, however, since the value (the gradient z_τ) at a leaf τ in the tree depends on the partial sums that are released before time τ. Hence, changing one loss function f_t actually affects all subsequent partial sums. One can get around this by using the fact that differential privacy composes adaptively [DMNS06]: we can write the computations done on a particular loss function f_t as a sequence of log T smaller differentially private computations, where each computation in the sequence depends on the outcome of previous ones. See Appendix B for details.
In terms of regret, we show that our algorithm enjoys regret O(p log^{2.5} T) (assuming the other parameters to be constants). Compared to the nonprivate regret bound of O(log T), our regret bound has an extra log^{1.5} T factor and an explicit dependence on the dimensionality p. A formal regret bound for the PFTAL algorithm is given in Theorem 4.
Theorem 4 (Regret guarantee). Let f_1, ..., f_T be L-Lipschitz, H-strongly convex functions and let C ⊆ R^p be a fixed convex set. For adaptive adversaries, the expected regret satisfies:

    E[Regret(T)] = O( p(L + H‖C‖₂)² log^{2.5} T / (Hε) ).

Here the expectation is taken over the random coins of the algorithm and the adversary.
Results for Lipschitz Convex Costs: Our algorithm for strongly convex costs can be adapted to arbitrary Lipschitz convex costs by executing Algorithm 1 on the functions h_t(w) = f_t(w) + (H/2)‖w‖₂² instead of the f_t's. Setting H = O(√p log^{2.5} T/(ε√T)) gives a regret bound of Õ(√(pT)/ε). See Appendix C for details.
3  Private Online Learning: Bandit Setting

In this section we adapt the Private Follow the Approximate Leader (PFTAL) algorithm from Section 2 to the bandit setting. Existing (nonprivate) bandit algorithms for online convex optimization follow a generic reduction to the full-information setting [FKM05, ADX10], called the "one-point" (or "one-shot") gradient trick. Our adaptation of PFTAL to the bandit setting also uses this technique: to define the quadratic lower bounds to the input cost functions (as in (3)), we replace the exact gradient of f_t at ŵ_t with a one-point approximation.
In this section we describe our results for strongly convex costs. As in the full information setting, one may obtain regret bounds for general convex functions in the bandit setting by adding a strongly convex regularizer to the cost functions.
One-point Gradient Estimates [FKM05]: Suppose one has to estimate the gradient of a function f : R^p → R at a point w ∈ R^p via a single query access to f. [FKM05] showed that one can approximate ∇f(w) by (p/δ) f(w + δu)u, where δ > 0 is a small real parameter and u is a uniformly random vector from the p-dimensional unit sphere S^{p−1} = {a ∈ R^p : ‖a‖₂ = 1}. More precisely,

    ∇f(w) = lim_{δ→0} E_u[(p/δ) f(w + δu)u].

For finite, nonzero values of δ, one can view this technique as estimating the gradient of a smoothed version of f. Given δ > 0, define f̂(w) = E_{v∼B^p}[f(w + δv)], where B^p is the unit ball in R^p. That is, f̂ = f ∗ U_{δB^p} is the convolution of f with the uniform distribution on the ball δB^p of radius δ. By Stokes' theorem, we have E_{u∼S^{p−1}}[(p/δ) f(w + δu)u] = ∇f̂(w).
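A quick numerical sanity check of the estimator (the quadratic test function below is hypothetical; for quadratics the smoothing introduces no bias in the gradient, so the average of many single-query estimates converges to the true gradient):

```python
import numpy as np

def one_point_gradient(f, w, delta, rng):
    # Single-query estimate (p/delta) * f(w + delta*u) * u with u uniform on S^{p-1}
    # (a normalized Gaussian is uniform on the sphere).
    p = len(w)
    u = rng.normal(size=p)
    u /= np.linalg.norm(u)
    return (p / delta) * f(w + delta * u) * u
```

Note the characteristic trade-off: each single-query estimate has magnitude on the order of p/δ, so small δ reduces smoothing bias but inflates variance, which is why the bandit regret bounds below are worse than their full-information counterparts.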
3.1  Follow the Approximate Leader (Bandit version): Nonprivate Algorithm
Let Ŵ = ⟨ŵ_1, ..., ŵ_T⟩ be a sequence of vectors in C (the outputs of the algorithm). Corresponding to the smoothed function f̂_t = f_t ∗ U_{δB^p}, we define a quadratic lower bound g̃_t:

    g̃_t(w) = f̂_t(ŵ_t) + ⟨∇f̂_t(ŵ_t), w − ŵ_t⟩ + (H/2)‖w − ŵ_t‖₂²    (6)

Notice that g̃_t is a uniform lower bound on f̂_t satisfying g̃_t(ŵ_t) = f̂_t(ŵ_t) and ∇g̃_t(ŵ_t) = ∇f̂_t(ŵ_t).
To define g̃_t, one needs access to ∇f̂_t(ŵ_t). As suggested above, we replace the true gradient with the one-point estimate. Consider the following proxy ĝ_t for g̃_t:

    ĝ_t(w) = f̂_t(ŵ_t) − ⟨∇f̂_t(ŵ_t), ŵ_t⟩ + ⟨(p/δ) f_t(ŵ_t + δu_t)u_t, w⟩ + (H/2)‖w − ŵ_t‖₂²    (7)

where u_t is drawn uniformly from the unit sphere S^{p−1}, and the first two terms (which do not depend on w) are together denoted A. Note that in (7) we replaced the gradient of f̂_t with its one-point approximation in only one of its two occurrences (the inner product with w).
We would like to define ŵ_{t+1} as the minimizer of the sum of proxies Σ_{τ=1}^t ĝ_τ(w). One difficulty remains: because f_t is only assumed to be defined on C, the approximation (p/δ) f_t(ŵ_t + δu_t)u_t is only defined when ŵ_t is sufficiently far inside C. Recall from the introduction that we assume C contains rB^p (the ball of radius r). To ensure that we only evaluate f_t on C, we actually minimize over a smaller set (1 − ξ)C, where ξ = δ/r. We obtain:

    ŵ_{t+1} = arg min_{w∈(1−ξ)C} Σ_{τ=1}^t ĝ_τ(w) = arg min_{w∈(1−ξ)C} ⟨Σ_{τ=1}^t (p/δ) f_τ(ŵ_τ + δu_τ)u_τ, w⟩ + (H/2) Σ_{τ=1}^t ‖w − ŵ_τ‖₂²    (8)

(We have used the fact that to minimize Σ_τ ĝ_τ, one can ignore the constant terms A in (7).)
We can now state the bandit version of FTAL. At each step t = 1, ..., T:

1. Compute ŵ_{t+1} using (8).
2. Output w̃_t = ŵ_t + δu_t.
Theorem 12 (in Appendix D) gives the precise regret guarantees for this algorithm. For adaptive adversaries the regret is bounded by Õ(p^{2/3} T^{3/4}), and for oblivious adversaries the regret is bounded by Õ(p^{2/3} T^{2/3}).
3.2  Follow the Approximate Leader (Bandit version): Private Algorithm
To make the bandit version of FTAL ε-differentially private, we replace the value v_t = Σ_{τ=1}^t (p/δ) f_τ(ŵ_τ + δu_τ)u_τ with a private approximation v̂_t computed using the tree-based sum protocol. Specifically, at each time step t we output

    w̃_{t+1} = arg min_{w∈(1−ξ)C} ⟨v̂_t, w⟩ + (H/2) Σ_{τ=1}^t ‖w − w̃_τ‖₂²    (9)

See Algorithm 3 (Appendix E.1) for details.
Theorem 5 (Privacy guarantee). The bandit version of Private Follow The Approximate Leader (Algorithm 3) is ε-differentially private.

The proof of Theorem 5 is exactly the same as that of Theorem 3, and hence we omit the details. In the following theorem we provide the regret guarantee of Private FTAL (bandit version). For a complete proof, see Appendix E.2.
Theorem 6 (Regret guarantee). Let B^p be the p-dimensional unit ball centered at the origin and let C ⊆ R^p be a convex set such that rB^p ⊆ C ⊆ RB^p (where 0 < r < R). Let f_1, ..., f_T be L-Lipschitz, H-strongly convex functions such that |f_t(w)| ≤ B for all w ∈ C. Setting ξ = δ/r in the bandit version of Private Follow The Approximate Leader (Algorithm 3 in Appendix E.1), we obtain the following regret guarantees.

1. (Oblivious adversary) With δ = 1/T^{1/3}:  E[Regret(T)] ≤ Õ(√p T^{2/3} Λ/ε).
2. (Adaptive adversary) With δ = 1/T^{1/4}:  E[Regret(T)] ≤ Õ(√p T^{3/4} Λ/ε).

Here Λ = BR + (1 + R/r)L + ((H‖C‖₂ + B)²/H)(1 + B). The expectations are taken over the randomness of the algorithm and the adversary.

One can remove the dependence on r in Theorem 6 by rescaling C to isotropic position; this increases the expected regret bound by a factor of (LR + ‖C‖₂). See [FKM05] for details.
Bound for general convex functions: Our results in this section can be extended to the setting of arbitrary Lipschitz convex costs via regularization, as in Appendix C (by adding (H/2)‖w‖₂² to each cost function f_t). With the appropriate choice of H, the regret scales as Õ(T^{3/4}/ε) for both oblivious and adaptive adversaries. See Appendix E.3 for details.
4  Open Questions
Our work raises several interesting open questions. First, our regret bounds for general convex functions have the form Õ(√T/ε). We would like to have a regret bound in which the parameter 1/ε is factored out with lower-order terms in the regret, i.e., a bound of the form O(√T) + o(√T/ε).

Second, our regret bounds for convex bandits are worse than the nonprivate bounds for linear and multi-arm bandits. For multi-arm bandits [ACBF02] and for linear bandits [AHR08], the nonprivate regret bound is known to be Õ(√T). If we use our private algorithm in this setting, we will incur a regret of Õ(T^{2/3}). Can we get Õ(√T) regret for multi-arm or linear bandits?

Finally, bandit algorithms require internal randomness to obtain reasonable regret guarantees. Can we harness the randomness of nonprivate bandit algorithms in the design of private bandit algorithms? Our current privacy analysis ignores this additional source of randomness.
References

[ACBF02] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 2002.
[ADX10] Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT, 2010.
[AHR08] Jacob Abernethy, Elad Hazan, and Alexander Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT, 2008.
[BCB12] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
[BDMN05] Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim. Practical privacy: The SuLQ framework. In PODS, 2005.
[CM08] Kamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In NIPS, 2008.
[CMS11] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069-1109, 2011.
[CSS10] T.-H. Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. In ICALP, 2010.
[DJW13] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Local privacy and statistical minimax rates. In IEEE Symp. on Foundations of Computer Science (FOCS), 2013. http://arxiv.org/abs/1302.3203.
[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In TCC, 2006.
[DN10] Cynthia Dwork and Moni Naor. On the difficulties of disclosure prevention in statistical databases or the case for differential privacy. J. Privacy and Confidentiality, 2(1), 2010.
[DNPR10] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proceedings of the 42nd ACM Symposium on Theory of Computing, 2010.
[Dwo06] Cynthia Dwork. Differential privacy. In ICALP, 2006.
[FKM05] Abraham D. Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In SODA, 2005.
[HAK07] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Journal of Machine Learning Research, 2007.
[Han57] James Hannan. Approximation to Bayes risk in repeated play. 1957.
[JKT12] Prateek Jain, Pravesh Kothari, and Abhradeep Thakurta. Differentially private online learning. In COLT, 2012.
[JT13] Prateek Jain and Abhradeep Thakurta. Differentially private learning with kernels. In ICML, 2013.
[KLN+08] Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? In FOCS, 2008.
[KM12] Daniel Kifer and Ashwin Machanavajjhala. A rigorous and customizable framework for privacy. In PODS, 2012.
[KS08] Shiva Prasad Kasiviswanathan and Adam Smith. A note on differential privacy: Defining resistance to arbitrary side information. CoRR, arXiv:0803.3946 [cs.CR], 2008.
[KST12] Daniel Kifer, Adam Smith, and Abhradeep Thakurta. Private convex empirical risk minimization and high-dimensional regression. In COLT, 2012.
[Sha11] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 2011.
[Smi11] Adam Smith. Privacy-preserving statistical estimators with optimal convergence rates. In STOC, 2011.
[Zin03] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation

John C. Duchi¹, Michael I. Jordan¹,², Martin J. Wainwright¹,²
¹Department of Electrical Engineering and Computer Science, ²Department of Statistics
University of California, Berkeley
{jduchi,jordan,wainwrig}@eecs.berkeley.edu
Abstract
We provide a detailed study of the estimation of probability distributions, both discrete and continuous, in a stringent setting in which data is kept private even from the statistician. We give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental trade-offs between privacy and convergence rate, as well as providing tools to allow movement along the privacy-statistical efficiency continuum. One of the consequences of our results is that Warner's classical work on randomized response is an optimal way to perform survey sampling while maintaining privacy of the respondents.
1  Introduction
The original motivation for providing privacy in statistical problems, first discussed by Warner [23], was that "for reasons of modesty, fear of being thought bigoted, or merely a reluctance to confide secrets to strangers," respondents to surveys might prefer to be able to answer certain questions non-truthfully, or at least without the interviewer knowing their true response. With this motivation, Warner considered the problem of estimating the fractions of the population belonging to certain strata, which can be viewed as probability estimation within a multinomial model. In this paper, we revisit Warner's probability estimation problem, doing so within a theoretical framework that allows us to characterize optimal estimation under constraints on privacy. We also apply our theoretical tools to a further probability estimation problem: that of nonparametric density estimation.
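Warner's randomized-response mechanism is simple enough to sketch directly. In the version below, each respondent reports their true bit with probability e^α/(1+e^α) and lies otherwise (a standard parameterization giving α-differential privacy for a single binary attribute), and the aggregator inverts the resulting affine map to recover the population fraction; this is an illustration, not the estimator analyzed later in the paper:

```python
import numpy as np

def randomized_response(bits, alpha, rng):
    # Each respondent reports the truth with probability e^a / (1 + e^a), else lies.
    p_true = np.exp(alpha) / (1 + np.exp(alpha))
    truthful = rng.random(len(bits)) < p_true
    bits = np.asarray(bits)
    return np.where(truthful, bits, 1 - bits)

def estimate_fraction(reports, alpha):
    # Debias: E[report] = p*pi + (1 - p)*(1 - pi), so invert this affine map in pi.
    p_true = np.exp(alpha) / (1 + np.exp(alpha))
    return (np.mean(reports) - (1 - p_true)) / (2 * p_true - 1)
```

As α shrinks, p_true approaches 1/2 and the denominator 2p_true − 1 approaches 0, inflating the estimator's variance: a concrete instance of the privacy-statistical efficiency trade-off studied in this paper.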
In the large body of research on privacy and statistical inference [e.g., 23, 14, 10, 15], a major focus has been on the problem of reducing disclosure risk: the probability that a member of a dataset can be identified given released statistics of the dataset. The literature has stopped short, however, of providing a formal treatment of disclosure risk that would permit decision-theoretic tools to be used in characterizing trade-offs between the utility of achieving privacy and the utility associated with an inferential goal. Recently, a formal treatment of disclosure risk known as "differential privacy" has been proposed and studied in the cryptography, database and theoretical computer science literatures [11, 1]. Differential privacy has strong semantic privacy guarantees that make it a good candidate for declaring a statistical procedure or data collection mechanism private, and it has been the focus of a growing body of recent work [13, 16, 24, 21, 6, 18, 8, 5, 9].
In this paper, we bring together the formal treatment of disclosure risk provided by differential privacy with the tools of minimax decision theory to provide a theoretical treatment of probability
estimation under privacy constraints. Just as in classical minimax theory, we are able to provide
lower bounds on the convergence rates of any estimator, in our case under a restriction to estimators that guarantee privacy. We complement these results with matching upper bounds that are
achievable using computationally efficient algorithms. We thus bring classical notions of privacy,
as introduced by Warner [23], into contact with differential privacy and statistical decision theory,
obtaining quantitative trade-offs between privacy and statistical efficiency.
1.1 Setting and contributions
Let us develop some basic formalism before describing our main results. We study procedures that
receive private views Z1, . . . , Zn ∈ Z of an original set of observations X1, . . . , Xn ∈ X, where
X is the (known) sample space. In our setting, Zi is drawn conditional on Xi via the channel
distribution Qi(Zi | Xi = x); typically we omit the dependence of Qi on i. We focus in this paper
on the non-interactive setting (in information-theoretic terms, on memoryless channels), where Qi
is chosen prior to seeing data; see Duchi et al. [9] for more discussion.
We assume each of these private views Zi is α-differentially private for the original data Xi. To give
a precise definition for this type of privacy, known as "local privacy," let σ(Z) be the σ-field on Z
over which the channel Q is defined. Then Q provides α-local differential privacy if

    sup { Q(S | Xi = x) / Q(S | Xi = x′) : S ∈ σ(Z), and x, x′ ∈ X } ≤ exp(α).    (1)

This formulation of local privacy was first proposed by Evfimievski et al. [13]. The likelihood ratio
bound (1) is attractive for many reasons. It means that any individual providing data guarantees
his or her own privacy (no further processing or mistakes by a collection agency can compromise
one's data) and the individual has plausible deniability about taking a value x, since any outcome z
is nearly as likely to have come from some other initial value x′. The likelihood ratio also controls
the error rate in tests for the presence of points x in the data [24].
In the current paper, we study minimax convergence rates when the data provided satisfies the local
privacy guarantee (1). Our two main results quantify the penalty that must be paid when local
privacy at a level α is provided in multinomial estimation and density estimation problems. At a
high level, our first result implies that for estimation of a d-dimensional multinomial probability
mass function, the effective sample size of any statistical estimation procedure decreases from n to
nα²/d whenever α is a sufficiently small constant. A consequence of our results is that Warner's
randomized response procedure [23] enjoys optimal sample complexity; it is interesting to note
that even with the recent focus on privacy and statistical inference, the optimal privacy-preserving
strategy for problems such as survey collection has been known for almost 50 years.
Our second main result, on density estimation, exhibits an interesting departure from standard minimax estimation results. If the density being estimated has β continuous derivatives, then classical
results on density estimation [e.g., 26, 25, 22] show that the minimax integrated squared error scales
(in the sample size n) as n^{−2β/(2β+1)}. In the locally private case, we show that there is a difference
in the polynomial rate of convergence: we obtain a scaling of (α²n)^{−2β/(2β+2)}. We give efficiently
implementable algorithms that attain sharp upper bounds as companions to our lower bounds, which
in some cases exhibit the necessity of non-trivial sampling strategies to guarantee privacy.
Notation: Given distributions P and Q defined on a space X, each absolutely continuous with
respect to a measure μ (with densities p and q), the KL-divergence between P and Q is

    D_kl(P ‖ Q) := ∫_X dP log(dP/dQ) = ∫_X p log(p/q) dμ.

Letting σ(X) denote an appropriate σ-field on X, the total variation distance between P and Q is

    ‖P − Q‖_TV := sup_{S ∈ σ(X)} |P(S) − Q(S)| = (1/2) ∫_X |p(x) − q(x)| dμ(x).

Let X be distributed according to P and Y | X be distributed according to Q(· | X), and let
M = ∫ Q(· | x) dP(x) denote the marginal of Y. The mutual information between X and Y is

    I(X; Y) := E_P[ D_kl( Q(· | X) ‖ M(·) ) ] = ∫ D_kl( Q(· | X = x) ‖ M(·) ) dP(x).

A random variable Y has Laplace(α) distribution if its density is p_Y(y) = (α/2) exp(−α|y|). We write
a_n ≲ b_n to denote a_n = O(b_n) and a_n ≍ b_n to denote a_n = O(b_n) and b_n = O(a_n). For a convex
set C ⊂ R^d, we let Π_C denote the orthogonal projection operator onto C.
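As a concrete check on this notation, the KL-divergence, total variation distance, and the mutual information I(X; Y) induced by a channel can all be computed directly for finite alphabets. The sketch below is our own illustration (the helper names and the binary channel are ours, not from the paper):

```python
import numpy as np

def kl(p, q):
    # D_kl(P || Q) = sum_x p(x) log(p(x)/q(x)), with the convention 0 log 0 = 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def tv(p, q):
    # ||P - Q||_TV = (1/2) sum_x |p(x) - q(x)|.
    return 0.5 * float(np.sum(np.abs(np.asarray(p, float) - np.asarray(q, float))))

def mutual_information(p_x, Q):
    # I(X; Y) = sum_x p(x) D_kl(Q(. | x) || M), where M = sum_x p(x) Q(. | x)
    # is the marginal of Y. Q has one row per value of x.
    p_x, Q = np.asarray(p_x, float), np.asarray(Q, float)
    m = p_x @ Q
    return float(sum(px * kl(row, m) for px, row in zip(p_x, Q)))

# Binary example: X ~ Bernoulli(0.3) passed through a noisy channel.
p_x = [0.7, 0.3]
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(kl([0.5, 0.5], [0.9, 0.1]))
print(tv([0.5, 0.5], [0.9, 0.1]))   # = 0.4
print(mutual_information(p_x, Q))
```

With a noiseless channel (the identity matrix), the mutual information reduces to the entropy of X, which is a quick sanity check on the implementation.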
2 Background and Problem Formulation

In this section, we provide the necessary background on the minimax framework used throughout
the paper, more details of which can be found in standard sources [e.g., 17, 25, 26, 22]. We also
reference our longer paper [9] on statistical inference under differential privacy constraints; we restate
two theorems from that paper to keep our presentation self-contained.
2.1 Minimax framework

Let P denote a class of distributions on the sample space X, and let θ : P → Θ denote a function
defined on P. The range Θ depends on the underlying statistical model; for example, for density
estimation, Θ may consist of the set of probability densities defined on [0, 1]. We let ρ denote the
semi-metric on the space Θ that we use to measure the error of an estimator for θ, and let Φ : R+ → R+
be a non-decreasing function with Φ(0) = 0 (for example, Φ(t) = t²).

Recalling that Z is the domain of the private variables Zi, let θ̂ : Z^n → Θ denote an arbitrary
estimator for θ. Let Q_α denote the set of conditional (or channel) distributions guaranteeing α-local
privacy (1). Looking uniformly over all channels Q ∈ Q_α, we define the central object of interest
for this paper, the α-private minimax rate for the family θ(P),

    M_n(θ(P), Φ ∘ ρ, α) := inf_{θ̂, Q ∈ Q_α} sup_{P ∈ P} E_{P,Q}[ Φ( ρ( θ̂(Z1, . . . , Zn), θ(P) ) ) ],    (2)
associated with estimating θ based on (Z1, . . . , Zn). We remark here (see also the discussion in [9])
that the private minimax risk (2) is different from previous work on optimality in differential privacy
(e.g. [2, 16, 8]): prior work focuses on accurate estimation of a sample quantity θ(x_{1:n}) based on
the sample x_{1:n}, while we provide lower bounds on error of the population estimator θ(P). Lower
bounds on population estimation imply those on sample estimation, so our lower bounds are stronger
than most of those in prior work.
A standard route for lower bounding the minimax risk (2) is by reducing the estimation problem to
the testing problem of identifying a point θ ∈ Θ from a collection of well-separated points [26, 25].
Given an index set V, the indexed family of distributions {P_ν, ν ∈ V} ⊂ P is a 2δ-packing of Θ
if ρ(θ(P_ν), θ(P_ν′)) ≥ 2δ for all ν ≠ ν′ in V. The setup is that of a standard hypothesis testing
problem: nature chooses V ∈ V uniformly at random, then data (X1, . . . , Xn) are drawn i.i.d. from
P_ν^n, conditioning on V = ν. The problem is to identify the member ν of the packing set V.

In this work we have the additional complication that all the statistician observes are the private samples Z1, . . . , Zn. To that end, if we let Q^n(· | x_{1:n}) denote the conditional distribution of Z1, . . . , Zn
given that X1 = x1, . . . , Xn = xn, we define the marginal channel M_ν^n via the expression

    M_ν^n(A) := ∫ Q^n(A | x1, . . . , xn) dP_ν(x1, . . . , xn)  for A ∈ σ(Z^n).    (3)
Letting ψ : Z^n → V denote an arbitrary testing procedure, we have the following minimax bound,
whose two parts are known as Le Cam's two-point method [26, 22] and Fano's inequality [25, 7, 22].

Lemma 1 (Minimax risk bound). For the previously described estimation and testing problems,

    M_n(θ(P), Φ ∘ ρ, Q) ≥ Φ(δ) inf_ψ P( ψ(Z1, . . . , Zn) ≠ V ),    (4)

where the infimum is taken over all testing procedures. For a binary test specified by V = {ν, ν′},

    inf_ψ P( ψ(Z1, . . . , Zn) ≠ V ) = 1/2 − (1/2) ‖M_ν^n − M_ν′^n‖_TV,    (5a)

and more generally,

    inf_ψ P( ψ(Z1, . . . , Zn) ≠ V ) ≥ 1 − ( I(Z1, . . . , Zn; V) + log 2 ) / log |V|.    (5b)
2.2 Information bounds

The main step in proving minimax lower bounds is to control the divergences involved in the lower
bounds (5a) and (5b). We review two results from our work [9] that obtain such bounds as a function
of the amount of privacy provided. The second of the results provides a variational upper bound on
the mutual information I(Z1, . . . , Zn; V), in that we optimize jointly over subsets S ⊂ X. To state
the proposition, we require a bit of notation: for each i ∈ {1, . . . , n}, let P_{ν,i} be the distribution of
Xi conditional on the random packing element V = ν, and let M_ν^n be the marginal distribution (3)
induced by passing Xi through Q. Define the mixture distribution P̄_i = (1/|V|) Σ_{ν∈V} P_{ν,i}. We can
then state a proposition summarizing the results we require from Duchi et al. [9]:

Proposition 1 (Information bounds). For any ν, ν′ ∈ V and α ≥ 0,

    D_kl( M_ν^n ‖ M_ν′^n ) ≤ 4(e^α − 1)² Σ_{i=1}^n ‖P_{ν,i} − P_{ν′,i}‖²_TV.    (6)

Additionally, for V chosen uniformly at random from V, we have the variational bound

    I(Z1, . . . , Zn; V) ≤ e^α (e^α − e^{−α})² (1/|V|) Σ_{ν∈V} Σ_{i=1}^n sup_{S ∈ σ(X)} ( P_{ν,i}(S) − P̄_i(S) )².    (7)

By combining Proposition 1 with Lemma 1, it is possible to derive sharp lower bounds on arbitrary
estimation procedures under α-local privacy. In the remainder of the paper, we demonstrate this
combination for probability estimation problems; we provide proofs of all results in [9].
3 Multinomial Estimation under Local Privacy

In this section we return to the classical problem of avoiding answer bias in surveys, the original
motivation for studying local privacy [23].

3.1 Minimax rates of convergence for multinomial estimation

Let Δ_d := { θ ∈ R^d | θ ≥ 0, Σ_{j=1}^d θ_j = 1 } denote the probability simplex in R^d. The multinomial
estimation problem is defined as follows. Given a vector θ ∈ Δ_d, samples X are drawn i.i.d. from
a multinomial with parameters θ, where P_θ(X = j) = θ_j for j ∈ {1, . . . , d}, and the goal is to
estimate θ. In one of the earliest evaluations of privacy, Warner [23] studied the Bernoulli variant of
this problem and proposed randomized response: for a given survey question, respondents provide
a truthful answer with probability p > 1/2 and lie with probability 1 − p.
In our setting, we assume the statistician sees α-locally private (1) random variables Zi for the corresponding samples Xi from the multinomial. In this case, we have the following result, which characterizes the minimax rate of estimation of a multinomial in both mean-squared error E[‖θ̂ − θ‖₂²]
and absolute error E[‖θ̂ − θ‖₁]; the latter may be more relevant for probability estimation problems.

Theorem 1. There exist universal constants 0 < cℓ ≤ cu < 5 such that for all α ∈ [0, 1], the
minimax rate for multinomial estimation satisfies the bounds

    cℓ min{ 1, 1/√(nα²), d/(nα²) } ≤ M_n( Δ_d, ‖·‖₂², α ) ≤ cu min{ 1, d/(nα²) },    (8)

and

    cℓ min{ 1, d/√(nα²) } ≤ M_n( Δ_d, ‖·‖₁, α ) ≤ cu min{ 1, d/√(nα²) }.    (9)
Theorem 1 shows that providing local privacy can sometimes be quite detrimental to the quality
of statistical estimators. Indeed, let us compare this rate to the classical rate in which there is no
privacy. Then estimating θ via proportions (i.e., maximum likelihood), we have

    E[ ‖θ̂ − θ‖₂² ] = Σ_{j=1}^d E[ (θ̂_j − θ_j)² ] = (1/n) Σ_{j=1}^d θ_j(1 − θ_j) ≤ (1/n)(1 − 1/d) < 1/n.

By inequality (8), for suitably large sample sizes n, the effect of providing differential privacy at a
level α causes a reduction in the effective sample size of n ↦ nα²/d.
3.2 Optimal mechanisms: attainability for multinomial estimation

An interesting consequence of the lower bound in (8) is the following fact that we now demonstrate:
Warner's classical randomized response mechanism [23] (with minor modification) achieves the
optimal convergence rate. There are also other relatively simple estimation strategies that achieve
convergence rate d/(nα²); the perturbation approach Dwork et al. [11] propose, where Laplace(α)
noise is added to each coordinate of a multinomial sample, is one such strategy. Nonetheless, the
ease of use and explainability of randomized response, coupled with our optimality results, provide support for randomized response as a preferred method for private estimation of population
probabilities.
We now prove that randomized response attains the optimal rate of convergence. There is a bijection
between multinomial samples x ∈ {1, . . . , d} and the d standard basis vectors e1, . . . , ed ∈ R^d,
so we abuse notation and represent samples x as either when designing estimation strategies. In
randomized response, we construct the private vector Z ∈ {0, 1}^d from a multinomial observation
x ∈ {e1, . . . , ed} by sampling d coordinates independently via the procedure

    [Z]_j = { x_j       with probability exp(α/2) / (1 + exp(α/2)),
            { 1 − x_j   with probability 1 / (1 + exp(α/2)).    (10)
We claim that this channel (10) is α-differentially private: indeed, note that for any x, x′ ∈ Δ_d and
any vector z ∈ {0, 1}^d we have

    Q(Z = z | x) / Q(Z = z | x′) = exp( (α/2)( ‖z − x‖₁ − ‖z − x′‖₁ ) ) ∈ [exp(−α), exp(α)],

where we used the triangle inequality to assert that | ‖z − x‖₁ − ‖z − x′‖₁ | ≤ ‖x − x′‖₁ ≤ 2. We
can compute the expected value and variance of the random variables Z; indeed, by definition (10),

    E[Z | x] = ( e^{α/2} / (1 + e^{α/2}) ) x + ( 1 / (1 + e^{α/2}) ) (1 − x)
             = ( (e^{α/2} − 1) / (e^{α/2} + 1) ) x + ( 1 / (1 + e^{α/2}) ) 1.

Since the Z are Bernoulli, we obtain the variance bound E[ ‖Z − E[Z]‖₂² ] < d/4 + 1 < d. Recalling
the definition of the projection Π_{Δ_d} onto the simplex, we arrive at the natural estimator

    θ̂_part := ( (1/n) Σ_{i=1}^n Z_i − 1/(1 + e^{α/2}) · 1 ) · (e^{α/2} + 1)/(e^{α/2} − 1)   and   θ̂ := Π_{Δ_d}( θ̂_part ).    (11)
The projection of θ̂_part onto the probability simplex can be done in time linear in the dimension d
of the problem [3], so the estimator (11) is efficiently computable. Since projections only decrease
distance, vectors in the simplex are at most distance √2 apart, and E_θ[θ̂_part] = θ, we find

    E[ ‖θ̂ − θ‖₂² ] ≤ min{ 2, E[ ‖θ̂_part − θ‖₂² ] } ≤ min{ 2, (d/n) ( (e^{α/2} + 1)/(e^{α/2} − 1) )² } ≲ min{ 1, d/(nα²) }.

A similar argument shows that randomized response is minimax optimal for the ℓ₁-loss as well.
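The channel (10) and the debiased, projected estimator (11) are easy to simulate. The sketch below is our own illustration (NumPy, with a standard sort-based simplex projection in the spirit of [3]); the function names and the test distribution are ours, not the authors' code:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex (sort-based, O(d log d)).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def randomized_response(X, d, alpha, rng):
    # Channel (10): each coordinate of the one-hot sample is kept with
    # probability e^{alpha/2}/(1 + e^{alpha/2}) and flipped otherwise.
    p_keep = np.exp(alpha / 2) / (1 + np.exp(alpha / 2))
    onehot = np.eye(d)[X]                       # n x d one-hot encoding
    flips = rng.random(onehot.shape) > p_keep   # True where the bit is flipped
    return np.where(flips, 1 - onehot, onehot)

def estimate(Z, alpha):
    # Estimator (11): debias the coordinate means, then project onto the simplex.
    s = np.exp(alpha / 2)
    theta_part = (Z.mean(axis=0) - 1 / (1 + s)) * (s + 1) / (s - 1)
    return project_simplex(theta_part)

rng = np.random.default_rng(0)
theta = np.array([0.4, 0.3, 0.2, 0.05, 0.05])
X = rng.choice(5, size=50_000, p=theta)
Z = randomized_response(X, d=5, alpha=1.0, rng=rng)
theta_hat = estimate(Z, alpha=1.0)
print(np.abs(theta_hat - theta).sum())  # small l1 error
```

With n = 50,000 and α = 1, the per-coordinate noise of the debiased mean is on the order of ((e^{α/2}+1)/(e^{α/2}−1))/(2√n), so the ℓ₁ error of θ̂ should be a few hundredths, consistent with the d/√(nα²) rate in (9).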
4 Density Estimation under Local Privacy

In this section, we turn to studying a nonparametric statistical problem in which the effects of local
differential privacy turn out to be somewhat more severe. We show that for the problem of density
estimation, instead of just a multiplicative loss in the effective sample size as in the previous section,
imposing local differential privacy leads to a different convergence rate.
In more detail, we consider estimation of probability densities f : R → R+ with ∫ f(x) dx = 1 and
f ≥ 0, defined on the real line, focusing on a standard family of densities of varying smoothness [e.g.
22]. Throughout this section, we let β ∈ N denote a fixed positive integer. Roughly, we consider
densities that have bounded βth derivative, and we study density estimation using the squared L²
norm ‖f‖₂² := ∫ f(x)² dx as our metric; in formal terms, we impose these constraints in terms of
orthonormal Sobolev classes (e.g. [22, 12]). Let the countable collection of functions {φ_j}_{j=1}^∞ be an
orthonormal basis for L²([0, 1]). Then any function f ∈ L²([0, 1]) can be expanded as a sum Σ_{j=1}^∞ θ_j φ_j in
terms of the basis coefficients θ_j := ∫ f(x) φ_j(x) dx, where {θ_j}_{j=1}^∞ ∈ ℓ²(N). The Sobolev space
F_β[C] is obtained by enforcing a particular decay rate on the coefficients θ:
Definition 1 (Elliptical Sobolev space). For a given orthonormal basis {φ_j} of L²([0, 1]), smoothness parameter β > 1/2 and radius C, the function class F_β[C] is given by

    F_β[C] := { f ∈ L²([0, 1]) | f = Σ_{j=1}^∞ θ_j φ_j  such that  Σ_{j=1}^∞ j^{2β} θ_j² ≤ C² }.
If we choose the trigonometric basis as our orthonormal basis, then membership in the class F_β[C]
corresponds to certain smoothness constraints on the derivatives of f. More precisely, for j ∈ N,
consider the orthonormal basis for L²([0, 1]) of trigonometric functions:

    φ₀(t) = 1,  φ_{2j}(t) = √2 cos(2πjt),  φ_{2j+1}(t) = √2 sin(2πjt).    (12)

Now consider a β-times almost everywhere differentiable function f for which |f^{(β)}(x)| ≤ C for
almost every x ∈ [0, 1], satisfying f^{(k)}(0) = f^{(k)}(1) for k ≤ β − 1. Uniformly for such f, there is
a universal constant c such that f ∈ F_β[cC] [22, Lemma A.3]. Thus, Definition 1 (essentially)
captures densities that have Lipschitz-continuous (β − 1)th derivative. In the sequel, we write F_β
when the bound C in F_β[C] is O(1). It is well known [26, 25, 22] that the minimax risk for non-private estimation of densities in the class F_β scales as

    M_n( F_β, ‖·‖₂², ∞ ) ≍ n^{−2β/(2β+1)}.    (13)
Our main result is to demonstrate that the classical rate (13) is no longer attainable when we require
α-local differential privacy. In Sections 4.2 and 4.3, we show how to achieve the (new) optimal rate
using histogram and orthogonal series estimators.

4.1 Lower bounds on density estimation

We begin by giving our main lower bound on the minimax rate of estimation of densities when the samples are
kept differentially private, providing the proof in the longer paper [9].

Theorem 2. Consider the class of densities F_β defined using the trigonometric basis (12). For some
α ∈ [0, 1], suppose Zi are α-locally private (1) for the samples Xi ∈ [0, 1]. There exists a constant
c_β > 0, dependent only on β, such that

    M_n( F_β, ‖·‖₂², α ) ≥ c_β (nα²)^{−2β/(2β+2)}.    (14)

In comparison with the classical minimax rate (13), the lower bound (14) involves a different polynomial exponent: privacy reduces the exponent from 2β/(2β + 1) to 2β/(2β + 2). For example,
for Lipschitz densities we have β = 1, and the rate degrades from n^{−2/3} to n^{−1/2}.

Interestingly, no estimator based on Laplace (or exponential) perturbation of the samples Xi themselves can attain the rate of convergence (14). In their study of the deconvolution problem, Carroll
and Hall [4] show that if samples Xi are perturbed by additive noise W, where the characteristic function φ_W of the additive noise has tails behaving as |φ_W(t)| = O(|t|^{−a}) for some a > 0,
then no estimator can deconvolve the samples X + W and attain a rate of convergence better than
n^{−2β/(2β+2a+1)}. Since the Laplace distribution's characteristic function has tails decaying as t^{−2},
no estimator based on perturbing the samples directly can attain a rate of convergence better than
n^{−2β/(2β+5)}. If the lower bound (14) is attainable, we must then study privacy mechanisms that are
not simply based on direct perturbation of the samples {Xi}_{i=1}^n.
4.2 Achievability by histogram estimators

We now turn to the mean-squared errors achieved by specific practical schemes, beginning with the
special case of Lipschitz density functions (β = 1), for which it suffices to consider a private version
of a classical histogram estimate. For a fixed positive integer k ∈ N, let {X_j}_{j=1}^k denote the partition
of X = [0, 1] into the intervals

    X_j = [(j − 1)/k, j/k)  for j = 1, 2, . . . , k − 1,  and  X_k = [(k − 1)/k, 1].
Any histogram estimate of the density based on these k bins can be specified by a vector θ ∈ kΔ_k,
where we recall Δ_k ⊂ R^k_+ is the probability simplex. Any such vector defines a density estimate via
the sum f_θ := Σ_{j=1}^k θ_j 1_{X_j}, where 1_E denotes the characteristic (indicator) function of the set E.

Let us now describe a mechanism that guarantees α-local differential privacy. Given a data set
{X1, . . . , Xn} of samples from the distribution f, consider the vectors

    Z_i := e_k(X_i) + W_i,  for i = 1, 2, . . . , n,    (15)

where e_k(X_i) ∈ Δ_k is a k-vector with the jth entry equal to one if X_i ∈ X_j, and zeroes in all
other entries, and W_i is a random vector with i.i.d. Laplace(α/2) entries. The variables {Z_i}_{i=1}^n
so defined are α-locally differentially private for {X_i}_{i=1}^n.

Using these private variables, we then form the density estimate f̂ := f_θ̂ = Σ_{j=1}^k θ̂_j 1_{X_j} based on

    θ̂ := Π_k( (k/n) Σ_{i=1}^n Z_i ),    (16)

where Π_k denotes the Euclidean projection operator onto the set kΔ_k. By construction, we have
f̂ ≥ 0 and ∫₀¹ f̂(x) dx = 1, so f̂ is a valid density estimate.
Proposition 2. Consider the estimate f̂ based on k = (nα²)^{1/4} bins in the histogram. For any
1-Lipschitz density f : [0, 1] → R+, we have

    E_f[ ‖f̂ − f‖₂² ] ≤ 5(α²n)^{−1/2} + √α n^{−3/4}.    (17)

For any fixed α > 0, the first term in the bound (17) dominates, and the O((α²n)^{−1/2}) rate matches
the minimax lower bound (14) in the case β = 1: the privatized histogram estimator is minimax-optimal for Lipschitz densities. This result provides the private analog of the classical result that
histogram estimators are minimax-optimal (in the non-private setting) for Lipschitz densities.
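The mechanism (15) and estimator (16) are only a few lines of code. The sketch below is our own illustration of the scheme (NumPy): the projection onto kΔ_k is the same sort-based simplex projection referenced in Section 3, rescaled by k, and the 1-Lipschitz test density is an arbitrary example of ours:

```python
import numpy as np

def project_scaled_simplex(v, scale):
    # Euclidean projection onto scale * simplex = {w >= 0, sum(w) = scale}.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - scale
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def private_histogram(X, alpha, rng):
    # Mechanism (15) + estimator (16) with k = (n alpha^2)^{1/4} bins.
    n = len(X)
    k = max(1, int(round((n * alpha**2) ** 0.25)))
    onehot = np.eye(k)[np.minimum((X * k).astype(int), k - 1)]
    # Laplace(alpha/2) noise has density (alpha/4) e^{-(alpha/2)|w|}, i.e.
    # NumPy scale parameter 2/alpha; l1-sensitivity of a one-hot vector is 2.
    Z = onehot + rng.laplace(scale=2.0 / alpha, size=(n, k))
    theta_hat = project_scaled_simplex((k / n) * Z.sum(axis=0), scale=k)
    return theta_hat, k  # f_hat(x) = theta_hat[bin(x)] is a valid density

rng = np.random.default_rng(1)
f = lambda x: 1.0 + 0.5 * (x - 0.5)          # a 1-Lipschitz density on [0, 1]
U = rng.random(200_000)
X = U[rng.random(200_000) * 1.25 < f(U)]     # rejection sampling from f
theta_hat, k = private_histogram(X, alpha=1.0, rng=rng)

grid = np.linspace(0, 1, 10_001)[:-1] + 0.5e-4
f_hat = theta_hat[np.minimum((grid * k).astype(int), k - 1)]
ise = np.mean((f_hat - f(grid)) ** 2)        # integrated squared error
print(k, ise)
```

With roughly n ≈ 160,000 accepted samples and α = 1, the empirical integrated squared error lands in the vicinity of the 5(α²n)^{−1/2} term of (17), which is a useful plausibility check on the (α²n)^{−1/2} rate.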
4.3 Achievability by orthogonal projection estimators

For higher degrees of smoothness (β > 1), histogram estimators no longer achieve optimal rates in
the classical setting [20]. Accordingly, we turn to estimators based on orthogonal series and show
that even under local privacy, they achieve the lower bound (14) for all orders of smoothness β ≥ 1.

Recall the elliptical Sobolev space (Definition 1), in which a function f is represented as f =
Σ_{j=1}^∞ θ_j φ_j, where θ_j = ∫ f(x) φ_j(x) dx. This representation underlies the classical method of orthonormal series estimation: given a data set {X1, X2, . . . , Xn} drawn i.i.d. according to a density
f ∈ L²([0, 1]), we first compute the empirical basis coefficients and then set

    θ̂_j = (1/n) Σ_{i=1}^n φ_j(X_i)   and   f̂ = Σ_{j=1}^k θ̂_j φ_j,    (18)

where the value k ∈ N is chosen either a priori based on known properties of the estimation problem
or adaptively, for example, using cross-validation [12, 22].

In the setting of local privacy, we consider a mechanism that, instead of releasing the vector of coefficients (φ₁(X_i), . . . , φ_k(X_i)) for each data point, employs a random vector Z_i = (Z_{i,1}, . . . , Z_{i,k})
with the property that E[Z_{i,j} | X_i] = φ_j(X_i) for each j = 1, 2, . . . , k. We assume the basis functions are uniformly bounded; i.e., there exists a constant B₀ = sup_j sup_x |φ_j(x)| < ∞. For a fixed
number B strictly larger than B₀ (to be specified momentarily), consider the following scheme:

Sampling strategy. Given a vector τ ∈ [−B₀, B₀]^k, construct τ̃ ∈ {−B₀, B₀}^k with coordinates τ̃_j
sampled independently from {−B₀, B₀} with probabilities 1/2 + τ_j/(2B₀) and 1/2 − τ_j/(2B₀),
respectively. Sample T from a Bernoulli(e^α/(e^α + 1)) distribution. Then choose Z ∈ {−B, B}^k via

    Z ~ Uniform on { z ∈ {−B, B}^k : ⟨z, τ̃⟩ > 0 }   if T = 1,
    Z ~ Uniform on { z ∈ {−B, B}^k : ⟨z, τ̃⟩ ≤ 0 }   if T = 0.    (19)
By inspection, Z is α-differentially private for any initial vector in the box [−B₀, B₀]^k, and moreover, the samples (19) are efficiently computable (for example by rejection sampling). Starting from
the vector τ ∈ R^k, τ_j = φ_j(X_i), in the above sampling strategy we have

    E[ [Z]_j | X = x ] = c_k (B/(B₀√k)) ( e^α/(e^α + 1) − 1/(e^α + 1) ) φ_j(x)
                       = c_k (B/(B₀√k)) ( (e^α − 1)/(e^α + 1) ) φ_j(x),    (20)

for a constant c_k that may depend on k but is O(1) and bounded away from 0. Consequently, to
attain the unbiasedness condition E[ [Z_i]_j | X_i ] = φ_j(X_i), it suffices to take B = O(B₀√k/α).

The full sampling and inferential scheme is as follows: (i) given a data point X_i, construct the
vector τ = [φ_j(X_i)]_{j=1}^k; (ii) sample Z_i according to strategy (19) using τ and the bound B =
B₀√k (e^α + 1)/(c_k(e^α − 1)). (The constant c_k is as in the expression (20).) Using the estimator

    f̂ := (1/n) Σ_{i=1}^n Σ_{j=1}^k Z_{i,j} φ_j,    (21)

we obtain the following proposition.
Proposition 3. Let {φ_j} be a B₀-bounded orthonormal basis for L²([0, 1]). There exists a constant
c (depending only on C and B₀) such that the estimator (21) with k = (nα²)^{1/(2β+2)} satisfies

    sup_{f ∈ F_β[C]} E_f[ ‖f − f̂‖₂² ] ≤ c (nα²)^{−2β/(2β+2)}.

Propositions 2 and 3 make clear that the minimax lower bound (14) is sharp, as claimed.

Before concluding our exposition, we make a few remarks on other potential density estimators. Our
orthogonal-series estimator (21) (and sampling scheme (19)), while similar in spirit to that proposed
by Wasserman and Zhou [24, Sec. 6], is different in that it is locally private and requires a different noise strategy to obtain both α-local privacy and an optimal convergence rate. Lei [19] considers
private M-estimators based on first performing a histogram density estimate, then using this to construct a second estimator; his estimator is not locally private, and the resulting M-estimators have
sub-optimal convergence rates. Finally, we remark that density estimators that are based on orthogonal series and Laplace perturbation are sub-optimal: they can achieve (at best) rates of (nα²)^{−2β/(2β+3)},
which is polynomially worse than the sharp result provided by Proposition 3. It appears that appropriately chosen noise mechanisms are crucial for obtaining optimal results.
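The sampling strategy (19) can be implemented by rejection: draw uniform sign vectors until the inner-product condition matches the Bernoulli draw T. The sketch below (our own illustration, with our own function names and parameter choices) empirically checks the key property behind (20): the conditional mean E[Z_j | τ] is proportional to τ_j, with the same proportionality constant for every coordinate:

```python
import numpy as np

def sample_channel(tau, B0, B, alpha, rng):
    # One draw of Z from strategy (19), given tau in [-B0, B0]^k.
    k = len(tau)
    # Randomized rounding: tilde_tau_j = +B0 w.p. 1/2 + tau_j/(2 B0), else -B0.
    tilde = np.where(rng.random(k) < 0.5 + tau / (2 * B0), B0, -B0)
    T = rng.random() < np.exp(alpha) / (np.exp(alpha) + 1)
    while True:  # rejection: uniform over {-B, B}^k, keep the matching half-space
        z = np.where(rng.random(k) < 0.5, B, -B)
        if (z @ tilde > 0) == T:
            return z

rng = np.random.default_rng(2)
k, B0, alpha = 4, 1.0, 1.0
B = B0 * np.sqrt(k)  # any B on the B0*sqrt(k) scale; (20) fixes the exact constant
tau = np.array([0.8, -0.5, 0.2, 0.0])
Z = np.array([sample_channel(tau, B0, B, alpha, rng) for _ in range(20_000)])
print(Z.mean(axis=0))  # approximately proportional to tau
```

The ratio Z.mean(axis=0)[j] / tau[j] should be (approximately) the same for every coordinate with tau[j] ≠ 0, which is exactly the unbiasedness-up-to-scaling that expression (20) asserts; the coordinate with τ_j = 0 should average to zero.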
5 Discussion
We have linked minimax analysis from statistical decision theory with differential privacy, bringing
some of their respective foundational principles into close contact. In this paper particularly, we
showed how to apply our divergence bounds to obtain sharp bounds on the convergence rate for certain nonparametric problems in addition to standard finite-dimensional settings. By providing sharp
convergence rates for many standard statistical inference procedures under local differential privacy,
we have developed and explored some tools that may be used to better understand privacy-preserving
statistical inference and estimation procedures. We have identified a fundamental continuum along
which privacy may be traded for utility in the form of accurate statistical estimates, providing a
way to adjust statistical procedures to meet the privacy or utility needs of the statistician and the
population being sampled. Formally identifying this trade-off in other statistical problems should
allow us to better understand the costs and benefits of privacy; we believe we have laid some of the
groundwork to do so.
Acknowledgments
JCD was supported by a Facebook Graduate Fellowship and an NDSEG fellowship. Our work was
supported in part by the U.S. Army Research Laboratory, U.S. Army Research Office under grant
number W911NF-11-1-0391, and Office of Naval Research MURI grant N00014-11-1-0688.
References
[1] B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: A holistic solution to contingency table release. In Proceedings of the 26th ACM Symposium on Principles of Database Systems, 2007.
[2] A. Beimel, K. Nissim, and E. Omri. Distributed private data analysis: Simultaneously solving how and what. In Advances in Cryptology, volume 5157 of Lecture Notes in Computer Science, pages 451-468. Springer, 2008.
[3] P. Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters, 3(3):163-166, 1984.
[4] R. Carroll and P. Hall. Optimal rates of convergence for deconvolving a density. Journal of the American Statistical Association, 83(404):1184-1186, 1988.
[5] K. Chaudhuri and D. Hsu. Convergence rates for differentially private statistical estimation. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[6] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069-1109, 2011.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory, Second Edition. Wiley, 2006.
[8] A. De. Lower bounds in differential privacy. In Proceedings of the Ninth Theory of Cryptography Conference, 2012. URL http://arxiv.org/abs/1107.2183.
[9] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. arXiv:1302.3203 [math.ST], 2013. URL http://arxiv.org/abs/1302.3203.
[10] G. T. Duncan and D. Lambert. Disclosure-limited data dissemination. Journal of the American Statistical Association, 81(393):10-18, 1986.
[11] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference, pages 265-284, 2006.
[12] S. Efromovich. Nonparametric Curve Estimation: Methods, Theory, and Applications. Springer-Verlag, 1999.
[13] A. V. Evfimievski, J. Gehrke, and R. Srikant. Limiting privacy breaches in privacy preserving data mining. In Proceedings of the Twenty-Second Symposium on Principles of Database Systems, pages 211-222, 2003.
[14] I. P. Fellegi. On the question of statistical confidentiality. Journal of the American Statistical Association, 67(337):7-18, 1972.
[15] S. E. Fienberg, U. E. Makov, and R. J. Steele. Disclosure limitation using perturbation and related methods for categorical data. Journal of Official Statistics, 14(4):485-502, 1998.
[16] M. Hardt and K. Talwar. On the geometry of differential privacy. In Proceedings of the Forty-Second Annual ACM Symposium on the Theory of Computing, pages 705-714, 2010. URL http://arxiv.org/abs/0907.3754.
[17] I. A. Ibragimov and R. Z. Has'minskii. Statistical Estimation: Asymptotic Theory. Springer-Verlag, 1981.
[18] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793-826, 2011.
[19] J. Lei. Differentially private M-estimators. In Advances in Neural Information Processing Systems 25, 2011.
[20] D. Scott. On optimal and data-based histograms. Biometrika, 66(3):605-610, 1979.
[21] A. Smith. Privacy-preserving statistical estimation with optimal convergence rates. In Proceedings of the Forty-Third Annual ACM Symposium on the Theory of Computing, 2011.
[22] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
[23] S. Warner. Randomized response: a survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63-69, 1965.
[24] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375-389, 2010.
[25] Y. Yang and A. Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, 27(5):1564-1599, 1999.
[26] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423-435. Springer-Verlag, 1997.
added:1 strategy:9 degrades:1 dependence:1 exhibit:2 dp:5 detrimental:1 distance:3 nissim:3 considers:1 trivial:1 reason:2 enforcing:1 index:1 providing:9 ratio:2 setup:1 minimaxoptimal:1 countable:1 twenty:1 perform:1 upper:3 observation:2 implementable:1 finite:1 looking:1 precise:1 perturbation:5 ninth:1 sharp:8 arbitrary:3 introduced:1 complement:1 kl:1 specified:3 z1:10 california:1 able:2 scott:1 departure:1 wainwright:1 natural:1 indicator:1 mn:6 minimax:28 scheme:4 imply:1 categorical:1 coupled:1 breach:1 kj:2 prior:3 literature:2 review:1 l2:7 kf:2 asymptotic:1 loss:2 lecture:1 interesting:3 limitation:1 declaring:1 validation:1 contingency:1 degree:1 dq:1 principle:3 achievability:2 supported:2 jth:1 enjoys:1 formal:4 allow:2 bias:2 understand:2 barak:1 characterizing:1 taking:1 absolute:1 distributed:3 benefit:1 curve:1 dimension:1 xn:8 valid:1 qn:2 kz:5 fb:8 collection:5 polynomially:1 preferred:1 keep:1 nonprivate:1 xi:26 truthfully:1 continuous:4 table:1 additionally:1 channel:7 nature:1 learn:1 obtaining:2 domain:1 official:1 pk:2 main:6 privately:1 motivation:3 bounding:1 noise:6 edition:1 cryptography:3 body:2 x1:11 wiley:1 sub:2 exponential:1 candidate:1 lie:1 third:1 omri:1 companion:1 theorem:4 rk:2 specific:1 jt:2 explored:1 decay:1 dominates:1 deconvolution:1 consist:1 exists:3 kx:1 rejection:1 simply:1 likely:1 army:2 contained:1 fear:1 springer:5 corresponds:1 satisfies:3 acm:3 assouad:1 conditional:4 viewed:1 goal:2 presentation:1 consequently:1 exposition:1 lipschitz:6 jordan1:1 reducing:2 uniformly:5 lemma:3 total:1 formally:1 support:1 latter:1 wainwright1:1 absolutely:1 avoiding:1 |
A Stability-based Validation Procedure for
Differentially Private Machine Learning
Kamalika Chaudhuri
Department of Computer Science and Engineering
UC San Diego, La Jolla CA 92093
[email protected]
Staal Vinterbo
Division of Biomedical Informatics
UC San Diego, La Jolla CA 92093
[email protected]
Abstract
Differential privacy is a cryptographically motivated definition of privacy which
has gained considerable attention in the algorithms, machine-learning and data-mining communities. While there has been an explosion of work on differentially
private machine learning algorithms, a major barrier to achieving end-to-end differential privacy in practical machine learning applications is the lack of an effective procedure for differentially private parameter tuning, or, determining the
parameter value, such as a bin size in a histogram, or a regularization parameter,
that is suitable for a particular application.
In this paper, we introduce a generic validation procedure for differentially private
machine learning algorithms that apply when a certain stability condition holds on
the training algorithm and the validation performance metric. The training data
size and the privacy budget used for training in our procedure is independent of
the number of parameter values searched over. We apply our generic procedure to
two fundamental tasks in statistics and machine-learning ? training a regularized
linear classifier and building a histogram density estimator that result in end-toend differentially private solutions for these problems.
1 Introduction
Privacy-preserving machine learning algorithms are increasingly essential for settings where sensitive and personal data are mined. The emerging standard for privacy-preserving computation for
the past few years is differential privacy [7]. Differential privacy is a cryptographically motivated
definition, which guarantees privacy by ensuring that the log-likelihood of any outcome does not
change by more than ε due to the participation of a single individual; an adversary will thus have
difficulty inferring the private value of a single individual when ε is small. This is achieved by
adding random noise to the data or to the result of a function computed on the data. The value ε is
called the privacy budget, and measures the level of privacy risk allowed. As more noise is needed
to achieve a lower ε, the price of higher privacy is reduced utility or accuracy. The past few years
have seen an explosion in the literature on differentially private algorithms, and there currently exist
differentially private algorithms for many statistical and machine-learning tasks such as classification [4, 15, 23, 10], regression [18], PCA [2, 5, 17, 12], clustering [2], density estimation [28, 19],
among others.
Many statistics and machine learning algorithms involve one or more parameters, for example, the
regularization parameter λ in Support Vector Machines and the number of clusters in k-means.
Accurately setting these parameters is critical to performance. However, there is no good a priori way
to set these parameters, and common practice is to run the algorithm for a few different plausible
parameter values on a dataset, and then select the output that yields the best performance on held-out
validation data. This process is often called parameter-tuning, and is an essential component of any
practical machine-learning system.
A major barrier to achieving end-to-end differential privacy in practical machine-learning applications is the absence of an effective procedure for differentially private parameter-tuning. Most
previous experimental works either assume that a good parameter value is known a priori [15, 5] or
use a heuristic to determine a suitable parameter value [19, 28]. Currently, parameter-tuning with
differential privacy is done in two ways. The first is to run the training algorithm on the same data
multiple times. However re-using the data leads to a degradation in the privacy guarantees, and thus
to maintain the privacy budget ?, for each training, we need to use a privacy budget that shrinks
polynomially with the number of parameter values. The second procedure, used by [4], is to divide
the training data into disjoint sets and train for each parameter value using a different set. Both solutions are highly sub-optimal, particularly if a large number of parameter values are involved: the
first due to the lower privacy budget, and the second due to less data. Thus the challenge is to design
a differentially private validation procedure that uses the data and the privacy budget effectively, but
can still do parameter-tuning. This is an important problem, and has been mentioned as an open
question by [28] and [4].
In this paper, we show that it is indeed possible to do effective parameter-tuning with differential
privacy in a fairly general setting, provided the training algorithm and the performance measure
used to evaluate its output on the validation data together obey a certain stability condition. We
characterize this stability condition by introducing a notion of (Δ1, Δ2, δ)-stability; loosely speaking,
stability holds if the validation performance measure does not change very much when one person's
private value in the training set changes, when exactly the same random bits are used in the training
algorithm in both cases, or when one person's private value in the validation set changes. The second
condition is fairly standard, and our key insight is in characterizing the first condition and showing
that it can help in differentially private parameter tuning.
We next design a generic differentially private training and validation procedure that provides end-to-end privacy provided this stability condition holds. The training set size and the privacy budget
used by our training algorithms are independent of k, the number of parameter values, and the
accuracy of our validation procedure degrades only logarithmically with k.
We apply our generic procedure to two fundamental tasks in machine-learning and statistics: training a linear classifier using regularized convex optimization, and building a histogram density estimator. We prove that existing differentially private algorithms for these problems obey our notion
of stability with respect to standard validation performance measures, and we show how to combine
them to provide end-to-end differentially private solutions for these tasks. In particular, our application to linear classification is based on existing differentially private procedures for regularized
convex optimization due to [4], and our application to histogram density estimation is based on the
algorithm variant due to [19].
Finally we provide an experimental evaluation of our procedure for training a logistic regression
classifier on real data. In our experiments, even for a moderate value of k, our procedure outperformed existing differentially private solutions for parameter tuning, and achieved performance
only slightly worse than knowing the best parameter to use ahead of time. We also observed that
our procedure, in contrast to the other procedures we tested, improved the correspondence between
predicted probabilities and observed outcomes, often referred to as model calibration.
Related Work. Differential privacy, proposed by [7], has gained considerable attention in the algorithms, data-mining and machine-learning communities over the past few years, and there has been a
large explosion of theoretical and experimental work on differentially private algorithms for statistical and machine-learning tasks [10, 2, 15, 19, 27, 28, 3]; see [24] for a recent survey of machine
learning methods with a focus on continuous data. In particular, our case study on linear classification is based on existing differentially private procedures for regularized convex optimization,
which were proposed by [4], and extended by [23, 18, 15]. There has also been a large body of
work on differentially private histogram construction in the statistics, algorithms and database literature [7, 19, 27, 28, 20, 29, 14]. We use the algorithm variant due to [19].
While the problem of differentially private parameter tuning has been mentioned in several works,
to the best of our knowledge, an efficient systematic solution has been elusive. Most previous
experimental works either assume that a good parameter value is known a priori [15, 5] or use a
heuristic to determine a suitable parameter value [19, 28]. [4] use a parameter-tuning procedure
where they divide the training data into disjoint sets, and train for a parameter value on each set. [28]
mentions finding a good bin size for a histogram using a differentially private validation procedure as
an open problem.
Finally, our analysis uses ideas similar to the analysis of the Multiplicative Weights Method for
answering a set of linear queries [13].
2 Preliminaries
Privacy Definition and Composition Properties. We adopt differential privacy as our notion of
privacy.
Definition 1 A (randomized) algorithm A whose output lies in a domain 𝒮 is said to be (ε, δ)-differentially private if for all measurable S ⊆ 𝒮, and for all datasets D and D′ that differ in the value
of a single individual, it is the case that: Pr(A(D) ∈ S) ≤ e^ε Pr(A(D′) ∈ S) + δ. An algorithm is
said to be ε-differentially private if δ = 0.
Here ε and δ are privacy parameters, where lower ε and δ imply higher privacy. Differential privacy
has been shown to have many desirable properties, such as robustness to side information [7] and
resistance to composition attacks [11].
An important property of differential privacy is that the privacy guarantees degrade gracefully if
the same sensitive data is used in multiple private computations. In particular, if we apply an ε-differentially private procedure k times on the same data, the result is kε-differentially private, as
well as (ε′, δ)-differentially private for ε′ = kε(e^ε − 1) + √(2k log(1/δ)) · ε [7, 8]. These privacy
composition results are the basis of existing differentially private parameter tuning procedures.
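As a rough illustration of these composition bounds, the following Python sketch (our own, not from the paper) computes both budgets for k repeated ε-differentially private analyses; `basic_composition` and `advanced_composition` are hypothetical helper names:

```python
import math

def basic_composition(eps, k):
    # k-fold use of an eps-DP procedure is (k * eps)-DP.
    return k * eps

def advanced_composition(eps, k, delta):
    # The same k-fold use is also (eps', delta)-DP for
    # eps' = k*eps*(e^eps - 1) + sqrt(2k log(1/delta)) * eps.
    return (k * eps * (math.exp(eps) - 1.0)
            + math.sqrt(2.0 * k * math.log(1.0 / delta)) * eps)

# For small eps and moderate k, the advanced bound is much tighter:
print(round(basic_composition(0.1, 100), 2))            # → 10.0
print(round(advanced_composition(0.1, 100, 1e-6), 2))   # → 6.31
```

This gap is why spending ε/k per parameter value (as in naive re-use of the data) becomes so wasteful as k grows.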
Training Procedure and Validation Score. Typical (non-private) machine learning algorithms
have one or more undetermined parameters, and standard practice is to run the machine learning
algorithm for a number of different parameter values on a training set, and evaluate the outputs on a
separate held-out validation dataset. The final output is the one which performs best on the validation
data. For example, in linear classification, we train logistic regression or SVM classifiers with
several different values of the regularization parameter λ, and then select the classifier which has
the best performance on held-out validation data. Our goal in this paper is to design a differentially
private version of this procedure which uses the privacy budget efficiently.
The full validation process thus has two components ? a training procedure, and a validation score
which evaluates how good the training procedure is.
We assume that training and validation data are drawn from a domain X , and the result of the
differentially private training algorithm lies in a domain C. For example, for linear classification, X
is the set of all labelled examples (x, y) where x ∈ R^d and y ∈ {−1, 1}, and C is the set of linear
classifiers in d dimensions. We use n to denote the size of a training set, m to denote the size of a
held-out validation set, and Λ to denote a set of parameters.
A differentially private training procedure is a randomized algorithm, which takes as input a (sensitive) training dataset, a parameter (of the training procedure), and a privacy parameter ε, and outputs
an element of C; the procedure is expected to be ε-differentially private. For ease of exposition and
proof, we represent a differentially private training procedure T as a tuple T = (G, F ), where G is
a density over sequences of real numbers, and F is a function, which takes as input a training set, a
parameter in the parameter set Λ, a privacy parameter ε, and a random sequence drawn from G, and
outputs an element of C. F is thus a deterministic function, and the randomization in the training
procedure is isolated in the draw from G.
Observe that any differentially private algorithm can be represented as such a tuple. For example,
given x1, . . . , xn ∈ [0, 1], an ε-differentially private approximation to the sample mean x̄ is
x̄ + (1/(εn)) Z, where Z is drawn from the standard Laplace distribution. We can represent this procedure
as a tuple T = (G, F) as follows: G is the standard Laplace density over the reals, and for any λ,
F({x1, . . . , xn}, λ, ε, r) = x̄ + r/(εn). In general, more complicated procedures will require more
involved functions F.
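This separation of the randomness (G) from a deterministic map (F) can be sketched in Python for the Laplace-mean example above; the function names are ours, purely for illustration:

```python
import math
import random

# G: the standard Laplace density over the reals.
def draw_G(rng):
    # Inverse-CDF sampling of a standard Laplace draw.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -sign * math.log(1.0 - 2.0 * abs(u))

# F: a deterministic function of the data, the parameter, the privacy budget,
# and the noise draw r. The parameter lam is unused for this simple procedure.
def F(data, lam, eps, r):
    xbar = sum(data) / len(data)
    return xbar + r / (eps * len(data))

rng = random.Random(0)
noisy_mean = F([0.2, 0.4, 0.9, 0.5], lam=None, eps=1.0, r=draw_G(rng))
```

Isolating the randomness in the draw from G is exactly what makes the training-stability condition below meaningful: one can ask what happens when the data changes but the same r is reused.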
A validation score is a function q : C × X^m → R which takes an object h in C and a validation
dataset V , and outputs a score which reflects the quality of h with respect to V . For example, a
common validation score used in linear classification is classification accuracy. In (non-private)
validation, if h_i is obtained by running the machine learning algorithm with parameter λ_i, then the
goal is to output the i (or equivalently the h_i) which maximizes q(h_i, V); our goal is to output
an i that approximately maximizes q(h_i, V) while still preserving the privacy of V as well as the
sensitive training data used in constructing the h_i's.
3 Stability and Generic Validation Procedure
We now introduce and discuss our notion of stability, and provide a generic validation procedure
that uses the privacy budget efficiently when this notion of stability holds.
Definition 2 ((Δ1, Δ2, δ)-Stability) A validation score q is said to be (Δ1, Δ2, δ)-stable with respect
to a training procedure T = (G, F), a privacy parameter ε, and a parameter set Λ if the following
holds. There exists a set Ω such that Pr_{R∼G}(R ∈ Ω) ≥ 1 − δ, and whenever R ∈ Ω, the following
two conditions hold:
1. Training Stability: For all λ ∈ Λ, all V, and all training sets T and T′ that differ in a single
entry, |q(F(T, λ, ε, R), V) − q(F(T′, λ, ε, R), V)| ≤ Δ1/n.
2. Validation Stability: For all T, all λ ∈ Λ, and for all V and V′ that differ in a single entry,
|q(F(T, λ, ε, R), V) − q(F(T, λ, ε, R), V′)| ≤ Δ2/m.
Condition (1), the training stability condition, bounds the change in the validation score q, when one
person's private data in the training set T changes, and the validation set V as well as the value of the
random variable R remains the same. Our validation procedure critically relies on this condition,
and our main contribution in this paper is to identify and exploit it to provide a validation procedure
that uses the privacy budget efficiently.
As F (T, ?, ?, R) is a deterministic function, Condition (2), the validation stability condition, bounds
the change in q when one person?s private data in the validation set V changes, and the output of the
training procedure remains the same. We observe that (some version of) Condition (2) is a standard
requirement in existing differentially private algorithms that preserve the privacy of the validation
dataset while selecting a h ? C that approximately maximizes q(h, V ), even if it is not required to
maintain privacy with respect to the training data.
Several remarks are in order. First, observe that Condition (1) is a property of the differentially
private training algorithm (in addition to q and the non-private quantity being approximated). Even
if all else remains the same, different differentially private approximations to the same non-private
quantity will have different values of Δ1.
Second, Condition (1) does not always hold for small Δ1 as an immediate consequence of differential
privacy of the training procedure. Differential privacy ensures that the probability of any outcome is
almost the same when the inputs differ in the value of a single individual; Condition (1) requires that
even when the same randomness is used, the validation score evaluated on the actual output of the
algorithm does not change very much when the inputs differ by a single individual's private value.
In Section 6.1, we present an example of a problem and two ε-differentially private training algorithms which approximately optimize the same function; the first algorithm is based on the exponential
mechanism, and the second on a maximum of Laplace random variables mechanism. We show
that while both provide ε-differential privacy guarantees, the first algorithm does not satisfy training stability for Δ1 = o(n) and small enough δ, while the second one ensures training stability for
Δ1 = 1 and δ = 0. In Section 4, we present two case studies of commonly used differentially private
algorithms where Conditions (1) and (2) hold for constant Δ1 and Δ2.
When the (Δ1, Δ2, δ)-stability condition holds, we can design an end-to-end differentially private
parameter tuning algorithm, which is shown in Algorithm 2. The algorithm first uses a validation
procedure to determine which parameter out of the given set Λ is (approximately) optimal based
on the held-out data (see Algorithm 1). In the next step, the training data is re-used along with the
parameter output by Algorithm 1 and fresh randomness to generate the final output. Note that we
use Exp(β) to denote the exponential distribution with expectation β.
Algorithm 1 Validate(Λ, T, T, V, Δ1, Δ2, ε1, ε2)
1: Inputs: Parameter list Λ = {λ_1, . . . , λ_k}, training procedure T = (G, F), validation score q,
   training set T, validation set V, stability parameters Δ1 and Δ2, training privacy parameter ε1,
   validation privacy parameter ε2.
2: for i = 1, . . . , k do
3:   Draw R_i ∼ G. Compute h_i = F(T, λ_i, ε1, R_i).
4:   Let Δ = max(Δ1/n, Δ2/m).
5:   Let t_i = q(h_i, V) + 2ΔZ_i, where Z_i ∼ Exp(1/ε2).
6: end for
7: Output i* = argmax_i t_i.
Algorithm 1 takes as input a training procedure T, a parameter list Λ, a validation score q, training
and validation datasets T and V, and privacy parameters ε1 and ε2. It runs the training procedure
T on the same training set T with privacy budget ε1 for each parameter in Λ to generate outputs
h_1, h_2, . . ., and then uses an ε2-differentially private procedure to select the index i* such that
the validation score q(h_{i*}, V) is (approximately) maximum. For simplicity, we use a maximum of
Exponential random variables procedure, inspired by [1], to find the approximate maximum; an
exponential mechanism [21] may also be used instead. Algorithm 2 then re-uses the training data
set T to train with parameter λ_{i*} to get the final output.
Algorithm 2 End-to-end Differentially Private Training and Validation Procedure
1: Inputs: Parameter list Λ = {λ_1, . . . , λ_k}, training procedure T = (G, F), validation score q,
   training set T, validation set V, stability parameters Δ1 and Δ2, training privacy parameter ε1,
   validation privacy parameter ε2.
2: i* = Validate(Λ, T, T, V, Δ1, Δ2, ε1, ε2).
3: Draw R ∼ G. Output h = F(T, λ_{i*}, ε1, R).
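A minimal, non-optimized Python sketch of Algorithms 1 and 2 might look as follows. Here `train` stands in for F combined with a fresh draw from G, `d1`/`d2` are the stability parameters Δ1/Δ2, and the toy procedures at the bottom are purely illustrative (not the paper's experiments):

```python
import random

def validate(lams, train, q, T, V, d1, d2, eps1, eps2, rng):
    # Algorithm 1: privately select an approximately best parameter index.
    hs = [train(T, lam, eps1, rng) for lam in lams]   # h_i = F(T, lam_i, eps1, R_i)
    delta = max(d1 / len(T), d2 / len(V))             # Delta = max(D1/n, D2/m)
    # t_i = q(h_i, V) + 2*Delta*Z_i, Z_i ~ Exp(1/eps2); expovariate(eps2) has mean 1/eps2.
    ts = [q(h, V) + 2.0 * delta * rng.expovariate(eps2) for h in hs]
    return max(range(len(lams)), key=lambda i: ts[i])

def train_and_validate(lams, train, q, T, V, d1, d2, eps1, eps2, rng):
    # Algorithm 2: re-train on T with the selected parameter and fresh randomness.
    i_star = validate(lams, train, q, T, V, d1, d2, eps1, eps2, rng)
    return train(T, lams[i_star], eps1, rng)

# Toy usage: a deterministic "training" procedure and a score that prefers 0.5.
rng = random.Random(1)
toy_train = lambda T, lam, eps, rng: lam
toy_score = lambda h, V: -abs(h - 0.5)
best = train_and_validate([0.1, 0.5, 0.9], toy_train, toy_score,
                          [0] * 10, [0] * 5, 0.0, 0.0, 0.5, 0.5, rng)
```

Note that only the noisy-argmax step touches V, and the training data T is reused across all k parameter values with the same budget ε1, which is exactly what the stability condition licenses.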
3.1 Performance Guarantees
Theorem 1 shows that Algorithm 1 is (ε2, δ)-differentially private, and Theorem 2 shows privacy
guarantees on Algorithm 2. Detailed proofs of both theorems are provided in the Supplementary
Material. We observe that Conditions (1) and (2) are critical to the proof of Theorem 1.
Theorem 1 (Privacy Guarantees for Validation Procedure) If the validation score q is
(Δ1, Δ2, δ/k)-stable with respect to the training procedure T, the privacy parameter ε1 and the
parameter set Λ, then Algorithm 1 guarantees (ε2, δ)-differential privacy.
Theorem 2 (End-to-end Privacy Guarantees) If the conditions in Theorem 1 hold, and if T is
ε1-differentially private, then Algorithm 2 is (ε1 + ε2, δ)-differentially private.
Theorem 3 shows guarantees on the utility of the validation procedure, namely that it selects an index i*
which is not too suboptimal.
Theorem 3 (Utility Guarantees) Let h_1, . . . , h_k be the output of the differentially private training procedure in Step (3) of Algorithm 1. Then, with probability ≥ 1 − δ0,
   q(h_{i*}, V) ≥ max_{1≤i≤k} q(h_i, V) − 2Δ log(k/δ0)/ε2.
4 Case Studies
We next show that Algorithm 2 may be applied to design end-to-end differentially private training
and validation procedures for two fundamental statistical and machine-learning tasks: training a linear classifier, and building a histogram density estimator. In each case, we use existing differentially
private algorithms and validation scores for these tasks. We show that the validation score satisfies
the (Δ1, Δ2, δ)-stability property with respect to the training procedure for small values of Δ1 and
Δ2, and thus we can apply Algorithm 2 with a small value of δ to obtain end-to-end differential
privacy.
Details of the case study for regularized linear classification are shown in Section 4.1, and those for
histogram density estimation is presented in the Supplementary Material.
4.1 Linear Classification based on Logistic Regression and SVM
Given a set of labelled examples (x_1, y_1), . . . , (x_n, y_n) where x_i ∈ R^d, ‖x_i‖ ≤ 1 for all i, and
y_i ∈ {−1, 1}, the goal in linear classification is to train a linear classifier that largely separates
examples from the two classes. A popular solution in machine learning is to find a classifier w* by
solving a regularized convex optimization problem:
   w* = argmin_{w ∈ R^d} (λ/2)‖w‖² + (1/n) Σ_{i=1}^n ℓ(w, x_i, y_i)    (1)
Here λ is a regularization parameter, and ℓ is a convex loss function. When ℓ is the logistic loss
function ℓ(w, x, y) = log(1 + e^{−y wᵀx}), then we have logistic regression. When ℓ is the hinge loss
ℓ(w, x, y) = max(0, 1 − y wᵀx), then we have Support Vector Machines. The optimal value of λ
is data-dependent, and there is no good pre-defined way to select λ a priori. In practice, the optimal
λ is determined by training a small number of classifiers with different λ values, and picking the one
that has the best performance on a held-out validation dataset.
[4] present two algorithms for computing differentially private approximations to these regularized
convex optimization problems for fixed λ: output perturbation and objective perturbation. We restate
output perturbation as Algorithm 4 (in the Supplementary Material) and objective perturbation as
Algorithm 3. It was shown by [4] that provided certain conditions hold on ℓ and the data, Algorithm 4
is ε-differentially private; moreover, with some additional conditions on ℓ, Algorithm 3 is
(ε + 2 log(1 + c/(λn)))-differentially private, where c is a constant that depends on the loss function ℓ, and
λ is the regularization parameter.
Algorithm 3 Objective Perturbation for Differentially Private Linear Classification
1: Inputs: Regularization parameter λ, training set T = {(x_i, y_i), i = 1, . . . , n}, privacy parameter ε.
2: Let G be the following density over R^d: γ_G(r) ∝ e^{−‖r‖}. Draw R ∼ G.
3: Solve the convex optimization problem:
      w* = argmin_{w ∈ R^d} (λ/2)‖w‖² + (1/n) Σ_{i=1}^n ℓ(w, x_i, y_i) + 2Rᵀw/(εn)    (2)
4: Output w*.
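An illustrative sketch of Algorithm 3 for the logistic loss (not the authors' implementation): the noise density and the perturbed objective follow the algorithm, but plain gradient descent, the step size, and the iteration count are our own choices, where an exact convex solver would be used in practice:

```python
import math
import random

def sample_noise(d, rng):
    # Sample R from the density over R^d proportional to exp(-||r||):
    # uniform random direction, norm drawn from Gamma(d, 1).
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g))
    radius = sum(rng.expovariate(1.0) for _ in range(d))  # Gamma(d,1) = sum of d Exp(1)
    return [radius * v / norm for v in g]

def objective_perturbation(data, lam, eps, rng, steps=500, lr=0.1):
    # Minimize (lam/2)||w||^2 + (1/n) sum_i log(1 + exp(-y_i w.x_i)) + 2 R.w/(eps n)
    # by gradient descent.
    d, n = len(data[0][0]), len(data)
    R = sample_noise(d, rng)
    w = [0.0] * d
    for _ in range(steps):
        grad = [lam * wj + 2.0 * Rj / (eps * n) for wj, Rj in zip(w, R)]
        for x, y in data:
            z = y * sum(wj * xj for wj, xj in zip(w, x))
            c = -y / ((1.0 + math.exp(z)) * n)  # per-example logistic-loss gradient coefficient
            for j in range(d):
                grad[j] += c * x[j]
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

rng = random.Random(0)
w_priv = objective_perturbation([([1.0, 0.0], 1), ([-1.0, 0.0], -1)],
                                lam=1.0, eps=2.0, rng=rng)
```

Because the noise enters only through the linear term 2Rᵀw/(εn), the perturbed problem stays convex and is no harder to solve than the original one.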
In the sequel, we use the notation X to denote the set {x ∈ R^d : ‖x‖ ≤ 1}.
Definition 3 A function g : R^d × X × {−1, 1} → R is said to be L-Lipschitz if for all w, w′ ∈ R^d,
for all x ∈ X, and for all y, |g(w, x, y) − g(w′, x, y)| ≤ L · ‖w − w′‖.
Let V = {(x̃_i, ỹ_i), i = 1, . . . , m} be the validation dataset. For our validation score, we choose a
function of the form:
      q(w, V) = −(1/m) Σ_{i=1}^m g(w, x̃_i, ỹ_i)    (3)
where g is an L-Lipschitz loss function. In particular, the logistic loss and the hinge loss are 1-Lipschitz, whereas the 0/1 loss is not L-Lipschitz for any L. Other examples of 1-Lipschitz but
non-convex losses include the ramp loss: g(w, x, y) = min(1, max(0, 1 − y wᵀx)).
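The validation score of Equation (3) with the ramp loss can be sketched as follows (our own code; function names are illustrative):

```python
def ramp_loss(w, x, y):
    # g(w, x, y) = min(1, max(0, 1 - y * <w, x>)): 1-Lipschitz in w, bounded in [0, 1].
    margin = y * sum(wj * xj for wj, xj in zip(w, x))
    return min(1.0, max(0.0, 1.0 - margin))

def validation_score(w, V):
    # Equation (3): q(w, V) = -(1/m) * sum_i g(w, x_i, y_i) over the validation set.
    return -sum(ramp_loss(w, x, y) for x, y in V) / len(V)

V = [([1.0, 0.0], 1), ([-1.0, 0.0], -1), ([0.0, 1.0], 1)]
score = validation_score([1.0, 0.0], V)   # losses 0, 0, 1, so score is -1/3
```

Boundedness of the ramp loss is what keeps Δ2 small in Theorem 4 below, while its 1-Lipschitz property controls Δ1.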
The following theorem shows that any non-negative and L-Lipschitz validation score is stable with
respect to Algorithms 3 and 4 and a set of regularization parameters Λ; a detailed proof is provided
in the Supplementary Material. Thus we can use Algorithm 2 along with this training procedure
and any L-Lipschitz validation score to get an end-to-end differentially private algorithm for linear
classification.
Theorem 4 (Stability of differentially private linear classifiers) Let Λ = {λ_1, . . . , λ_k} be a set
of regularization parameters, let λ_min = min_{i=1,...,k} λ_i, and let g* = max_{(x,y)∈X, w∈R^d} g(w, x, y). If
ℓ is convex and 1-Lipschitz, and if g is L-Lipschitz and non-negative, then the validation score q in
Equation 3 is (Δ1, Δ2, δ/k)-stable with respect to Algorithms 3 and 4, Λ and ε for:
      Δ1 = 2L/λ_min,    Δ2 = min( g*, (L/λ_min) · (1 + d log(dk/δ)/(εn)) )
Example. If g is chosen to be the hinge loss, then Δ1 = 2/λ_min and
Δ2 = (1/λ_min)(1 + d log(dk/δ)/(εn)). This follows from the fact that the hinge loss is 1-Lipschitz, but may be
unbounded for w of unbounded norm.
If g is chosen to be the ramp loss, then Δ1 = 2/λ_min, and Δ2 = 1 (assuming that λ_min ≤ 1). This
follows from the fact that the ramp loss is 1-Lipschitz, but bounded at 1 for any w and (x, y) ∈ X.
5 Experiments
In order to evaluate Algorithm 2 empirically, we compare the regularizer parameter values and performance of the regularized logistic regression classifiers that the algorithm produces with those produced
by four alternative methods. We used datasets from two domains, and used 10 times 10-fold crossvalidation (CV) to reduce variability in the computed performance averages.
The Methods. Each method takes input (Λ, ε, T, V), where ε denotes the allowed differential
privacy, T is a training set, V is a validation set, and Λ = {λ_1, . . . , λ_k} is a list of k regularizer values.
Also, let oplr(λ, ε, T) denote the application of the objective perturbation training procedure given
in Algorithm 3 such that it yields ε-differential privacy.
The first of the five methods we compare is Stability, the application of Algorithm 2 with oplr used
for learning classifiers, δ chosen in an ad-hoc manner to be 0.01, the average negative ramp loss used as
the validation score q, and with ε1 = ε2 = ε/2.
The four other methods work by performing the following 4 steps: (1) for each λ_i ∈ Λ, train a
differentially private classifier f_i = oplr(λ_i, ε_i, T_i), (2) determine the number of errors e_i each f_i
makes on validation set V, (3) randomly choose i* from {1, 2, . . . , k} with probability P(i* = i | p_i),
and (4) output (λ_{i*}, f_{i*}).
What differentiates the four alternative methods is how ε_i, T_i, and p_i are determined. For
alphaSplit: ε_i = ε/k, T_i = T, p_i ∝ e^{−εe_i/2}; dataSplit: ε_i = ε, partition T into k equally
sized sets T_i, p_i ∝ e^{−εe_i/2} (used in [4]); Random: ε_i = ε, T_i = T, p_i ∝ 1; and Control: ε_i = ε,
T_i = T, p_i ∝ 1(i = argmax_j q(f_j, V)). Note that for alphaSplit, ε/k > ε′, where ε′ is the
solution of ε = kε′(e^{ε′} − 1) + √(2k log(1/δ)) ε′, for all of our experimental settings, except when
ε = 0.3, where ε/k > ε′ − 0.0003. The method Control is not private, and serves to provide an
approximate upper bound on the performance of Stability. The three other alternative methods are differentially private, which we state in the following theorem.

Theorem 5 (Privacy of alternative methods) If T and V are disjoint, both alphaSplit and dataSplit are ε-differentially private. Random is ε-differentially private even if T and V are not disjoint, in which case alphaSplit and dataSplit are 2ε-differentially private.
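The index-selection step (3) used by the alternative methods is an exponential-mechanism draw over validation error counts. It can be sketched as follows (a minimal sketch; the error counts and the privacy budget are assumed to be supplied by the caller, and the function name is hypothetical):

```python
import math
import random

def select_index(errors, epsilon):
    """Draw an index with probability proportional to exp(-epsilon * e_i / 2).

    Since a single change in V moves each error count by at most 1 (sensitivity 1),
    this selection step is epsilon-differentially private.
    """
    weights = [math.exp(-epsilon * e / 2.0) for e in errors]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1  # guard against floating-point rounding
```

With strongly separated error counts and a large budget, the best index is chosen with overwhelming probability; as epsilon shrinks, the draw approaches uniform (the behavior of Random).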
Procedures and Data. We performed 10 times 10-fold CV as follows. For round i in each of the CV experiments, fold i was used as a test set W on which the produced classifiers were evaluated, fold (i mod 10) + 1 was used as V, and the remaining 8 folds were used as T. Furthermore, k = 10 with Λ = {0.001, 0.112, 0.223, 0.334, 0.445, 0.556, 0.667, 0.778, 0.889, 1}. Note that the order of Λ is chosen such that i < j implies λi < λj. By Theorems 2 and 5, all methods except Control produce an (ε, δ)-differentially private classifier. Classifier performance was evaluated using the area under the receiver operating characteristic curve [25] (AUC) as well as mean squared error (MSE). All computations were done using the R environment [22], and data sets were scaled such that covariate vectors were constrained to the unit ball. We used the following data, available from the UCI Machine Learning Repository [9]:
Adult: 98 predictors (14 original, including categorical variables that needed to be recoded). The data set describes measurements on cases taken from the 1994 Census data base. The classification is whether or not a person has an annual income exceeding 50000 USD, which has a prevalence of 0.22. Each experiment involves computing more than 24000 classifiers. In order to reduce computation time, we selected 52 predictors using the step procedure for a model computed by glm with family binomial and logit link function.
Magic: 10 predictors on 19020 cases. The data set describes simulated high-energy gamma particles registered by a ground-based atmospheric Cherenkov gamma telescope. The classification is whether particles are primary gammas (signal) or from hadronic showers initiated by cosmic rays in the upper atmosphere (background). The prevalence of primary gammas is 0.35.
[Figure 1 plots omitted. Each panel plots performance against the privacy level alpha ∈ {0.3, 0.5, 1.0, 2.0, 3.0, 5.0} for the methods Stability, alphaSplit, dataSplit, Random, and Control on the Adult and Magic data sets.]
(a) Averages of AUC for the two data sets. (b) Averages of MSE for the two data sets.
Figure 1: A summary of 10 times 10-fold cross-validation experiments for different privacy levels ε. Each point in the figure represents a summary of 100 data points. The error bars indicate a bootstrap estimate of the 95% confidence interval of the mean. A small amount of jitter was added to positions on the x-axes to avoid over-plotting.
Results. Figure 1 summarizes classifier performances and regularizer choices for the different values of the privacy parameter ε, aggregated over all cross-validation runs. Figure 1a shows average performance in terms of AUC, and Figure 1b shows average performance in terms of MSE.

Looking at AUC in our experiments, Stability significantly outperformed alphaSplit and dataSplit. However, Stability only outperformed Random for ε > 1 in the Magic data set, and was in fact outperformed by Random in the Adult data set. In the Adult data set, regularizer choice did not seem to matter, as Random performed equally well to Control. For MSE, on the other hand, Stability outperformed the differentially private alternatives in all experiments. We suggest the following intuition regarding these results. The calibration of a logistic regression model instance, i.e., the difference between predicted probabilities and a 0/1 encoding of the corresponding labels, is not captured well by AUC (or 0/1 error rate), as AUC is insensitive to all strictly monotonically increasing transformations of the probabilities. MSE is often used as a measure of probabilistic model calibration, and can be decomposed into two terms: reliability (a calibration term) and refinement (a discrimination measure), the latter of which is related to the AUC. In the Adult data set, the minor change in AUC of Control and Random for ε > 0.5, together with the apparent insensitivity of AUC to the regularizer value, suggests that any improvement in Stability performance can only come from (the observed) improved calibration. Unlike in the Adult data set, there is an AUC performance gap between Control and Random in the Magic data set. This means that regularizer choice matters for discrimination, and we observe improvement for Stability in both discrimination and calibration.
Acknowledgements This work was supported by NIH grants R01 LM07273 and U54
HL108460, the Hellman Foundation, and NSF IIS 1253942.
References
[1] R. Bhaskar, S. Laxman, A. Smith, and A. Thakurta. Discovering frequent patterns in sensitive
data. In KDD, 2010.
[2] A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the SuLQ framework. In
PODS, 2005.
[3] K. Chaudhuri and D. Hsu. Convergence rates for differentially private statistical estimation. In
ICML, 2012.
[4] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, March 2011.
[5] K. Chaudhuri, A.D. Sarwate, and K. Sinha. Near-optimal algorithms for differentially-private
principal components. Journal of Machine Learning Research, 2013 (to appear).
[6] L. Devroye and G. Lugosi. Combinatorial methods in density estimation. Springer, 2001.
[7] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private
data analysis. In Theory of Cryptography, Berlin, Heidelberg, 2006.
[8] C. Dwork, G. Rothblum, and S. Vadhan. Boosting and differential privacy. In FOCS, 2010.
[9] A. Frank and A. Asuncion. UCI machine learning repository, 2013.
[10] A. Friedman and A. Schuster. Data mining with differential privacy. In KDD, 2010.
[11] S. R. Ganta, S. P. Kasiviswanathan, and A. Smith. Composition attacks and auxiliary information in data privacy. In KDD, 2008.
[12] M. Hardt and A. Roth. Beyond worst-case analysis in private singular vector computation. In
STOC, 2013.
[13] M. Hardt and G. Rothblum. A multiplicative weights mechanism for privacy-preserving data
analysis. In FOCS, pages 61–70, 2010.
[14] M. Hay, V. Rastogi, G. Miklau, and D. Suciu. Boosting the accuracy of differentially private
histograms through consistency. PVLDB, 3(1):1021–1032, 2010.
[15] P. Jain, P. Kothari, and A. Thakurta. Differentially private online learning. In COLT, 2012.
[16] M. C. Jones, J. S. Marron, and S. J. Sheather. A brief survey of bandwidth selection for density
estimation. JASA, 91(433):401–407, 1996.
[17] M. Kapralov and K. Talwar. On differentially private low rank approximation. In SODA, 2013.
[18] D. Kifer, A. Smith, and A. Thakurta. Private convex optimization for empirical risk minimization with applications to high-dimensional regression. In COLT, 2012.
[19] J. Lei. Differentially private M-estimators. In NIPS 24, 2011.
[20] A. Machanavajjhala, D. Kifer, J. M. Abowd, J. Gehrke, and L. Vilhuber. Privacy: Theory
meets practice on the map. In ICDE, 2008.
[21] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, 2007.
[22] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation.
[23] B. Rubinstein, P. Bartlett, L. Huang, and N. Taft. Learning in a large function space: Privacypreserving mechanisms for svm learning. Journal of Privacy and Confidentiality, 2012.
[24] A.D. Sarwate and K. Chaudhuri. Signal processing and machine learning with differential
privacy: Algorithms and challenges for continuous data. IEEE Signal Process. Mag., 2013.
[25] J. A. Swets and R. M. Pickett. Evaluation of Diagnostic Systems. Methods from Signal Detection Theory. Academic Press, New York, 1982.
[26] Berwin A Turlach. Bandwidth selection in kernel density estimation: A review. In CORE and
Institut de Statistique. Citeseer, 1993.
[27] S. Vinterbo. Differentially private projected histograms: Construction and use for prediction.
In ECML, 2012.
[28] L. Wasserman and S. Zhou. A statistical framework for differential privacy. JASA,
105(489):375–389, 2010.
[29] J. Xu, Z. Zhang, X. Xiao, Y. Yang, and G. Yu. Differentially private histogram publication. In
ICDE, 2012.
6 Appendix

6.1 An Example to Show Training Stability is not a Direct Consequence of Differential Privacy
We now present an example to illustrate that training stability is a property of the training algorithm and not a direct consequence of differential privacy. We present a problem and two ε-differentially private training algorithms which approximately optimize the same function; the first algorithm is based on the exponential mechanism, and the second on a maximum of Laplace random variables mechanism. We show that while both provide ε-differential privacy guarantees, the first algorithm does not satisfy training stability while the second one does.

Let i ∈ {1, ..., l}, and let f : X^n × {1, ..., l} → [0, 1] be a function such that for all i and all datasets D and D′ of size n that differ in the value of a single individual, |f(D, i) − f(D′, i)| ≤ 1/n.
Consider the following training and validation problem. Given a sensitive dataset D, the private training procedure A outputs a tuple (i*, t1, ..., tl), where i* is the output of the ε/2-differentially private exponential mechanism [21] run to approximately maximize f(D, i), and each ti is equal to f(D, i) plus an independent Laplace random variable with standard deviation 2l/(εn). For any validation dataset V, the validation score is q((i*, t1, ..., tl), V) = t_{i*}.
It follows from standard results that A is ε-differentially private. Moreover, A can be represented by a tuple TA = (GA, FA), where GA is the following density over sequences of real numbers of length l + 1:

GA(r0, r1, ..., rl) = 1(0 ≤ r0 ≤ 1) · (1/2^l) e^{−(|r1| + |r2| + ... + |rl|)}

Thus GA is the product of the uniform density on [0, 1] and l standard Laplace densities. Consider the following map E0. For r ∈ [0, 1], let

E0(r) = i,  if  ( Σ_{j<i} e^{nεf(D,j)/4} ) / ( Σ_j e^{nεf(D,j)/4} ) ≤ r ≤ ( Σ_{j≤i} e^{nεf(D,j)/4} ) / ( Σ_j e^{nεf(D,j)/4} )

In other words, E0(r) is the map that converts a random number r drawn from the uniform distribution on [0, 1] to the ε/2-differentially private exponential mechanism distribution that approximately maximizes f(D, i). Given an (l + 1)-tuple R = (R0, R1, ..., Rl), FA is now the following map:

FA(D, ε, R) = ( E0(R0), f(D, 1) + 2lR1/(εn), f(D, 2) + 2lR2/(εn), ..., f(D, l) + 2lRl/(εn) )
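As a concrete illustration, the map E0 and the mechanism FA above can be sketched as follows (a minimal sketch under the notation of this section; the score vector standing in for f(D, ·) is an arbitrary placeholder):

```python
import math

def E0(r, scores, n, eps):
    """Invert a uniform draw r in [0,1] into the (eps/2)-DP exponential
    mechanism distribution over indices, with weights exp(n*eps*f(D,i)/4)."""
    weights = [math.exp(n * eps * s / 4.0) for s in scores]
    total = sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc / total:
            return i
    return len(scores) - 1

def F_A(scores, n, eps, R):
    """Mechanism A: R = (R0, R1, ..., Rl) with R0 uniform on [0,1] and the
    remaining coordinates standard Laplace; returns (i_star, [t_1, ..., t_l])."""
    l = len(scores)
    i_star = E0(R[0], scores, n, eps)
    ts = [s + 2.0 * l * r / (eps * n) for s, r in zip(scores, R[1:])]
    return i_star, ts
```

Feeding both a dataset's scores and a neighboring dataset's scores through F_A with the same R makes the instability visible: a small shift in the scores can flip i_star while the ti change by only O(1/n).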
Let l = 2 and let D and D′ be two datasets that differ in the value of a single individual. Suppose it is the case that f(D, 1) = 1, f(D, 2) = 1/2 and f(D′, 1) = 1 − 1/n, f(D′, 2) = 1/2 + 1/n. Observe that for D, the exponential mechanism picks 1 with probability e^{nε/4}/(e^{nε/4} + e^{nε/8}), and 2 with probability e^{nε/8}/(e^{nε/4} + e^{nε/8}); whereas for D′, it picks 1 with probability e^{(n−1)ε/4}/(e^{(n−1)ε/4} + e^{(n+2)ε/8}) and 2 with probability e^{(n+2)ε/8}/(e^{(n−1)ε/4} + e^{(n+2)ε/8}). Thus, if R0 lies in the interval

[ e^{(n−1)ε/4}/(e^{(n−1)ε/4} + e^{(n+2)ε/8}),  e^{nε/4}/(e^{nε/4} + e^{nε/8}) ],

then FA(D, ε, R) = t1 whereas FA(D′, ε, R) = t2. When n is large enough, with high probability |t1 − t2| ≥ 1/3; thus, the training stability condition does not hold for A for Δ1 = o(n) and

δ < e^{nε/8}(e^{ε/2} − 1) / ( (e^{nε/8} + 1)(e^{nε/8} + e^{ε/2}) ).
Consider a different algorithm A′ which computes t1, ..., tl first, and then outputs the index i* that maximizes t_{i*}. Then A′ can be represented by a tuple TA′ = (GA′, FA′), where GA′ is the following density over sequences of real numbers of length l:

GA′(r1, ..., rl) = (1/2^l) e^{−(|r1| + ... + |rl|)}

and FA′ is the map:

FA′(D, ε, R) = ( argmax_i (f(D, i) + lRi/(εn)), f(D, 1) + lR1/(εn), f(D, 2) + lR2/(εn), ..., f(D, l) + lRl/(εn) )
For the same value of R1, ..., Rl, if i* = i on input dataset D and i* = i′ on input dataset D′, then |f(D, i) − f(D′, i′)| ≤ 1/n; this implies that

|q(FA′(D, ε, R), V) − q(FA′(D′, ε, R), V)| = |ti − t_{i′}| = |f(D, i) − f(D′, i′)| ≤ 1/n

with probability 1 over GA′. Thus the training stability condition holds for Δ1 = 1 and δ = 0.
6.2 Output Perturbation Algorithm

We present the output perturbation algorithm for regularized linear classification.

Algorithm 4 Output Perturbation for Differentially Private Linear Classification
1: Inputs: Regularization parameter λ, training set T = {(xi, yi), i = 1, ..., n}, privacy parameter ε.
2: Let G be the following density over R^d: νG(r) ∝ e^{−||r||}. Draw R ∼ G.
3: Solve the convex optimization problem:

   w* = argmin_{w ∈ R^d} (1/2)λ||w||² + (1/n) Σ_{i=1}^n ℓ(w, xi, yi)    (4)

4: Output w* + (2/(λεn)) R.
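A minimal sketch of Algorithm 4 follows. The exact convex solver is replaced by a plain subgradient method, and the hinge loss is an assumed choice of ℓ; the noise direction/radius decomposition samples the density proportional to e^{−||r||} (radius is Gamma(d, 1)-distributed):

```python
import math
import random

def output_perturbation(T, lam, eps, d, steps=2000):
    """Output perturbation sketch: approximately solve the regularized ERM,
    then add noise with density proportional to exp(-||r||), scaled by 2/(lam*eps*n)."""
    n = len(T)
    w = [0.0] * d
    for t in range(1, steps + 1):
        x, y = T[t % n]
        lr = 1.0 / (lam * t)
        # subgradient of (lam/2)||w||^2 + hinge loss on (x, y)
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        g = [lam * wi for wi in w]
        if margin < 1:
            g = [gi - y * xi for gi, xi in zip(g, x)]
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    # Draw R with density prop. to exp(-||r||): uniform direction, Gamma(d,1) radius
    direction = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in direction)) or 1.0
    radius = sum(-math.log(random.random() or 1e-12) for _ in range(d))
    return [wi + (2.0 / (lam * eps * n)) * radius * v / norm
            for wi, v in zip(w, direction)]
```

The subgradient loop is a stand-in for the exact argmin in step 3; any convex solver could be substituted without affecting the privacy analysis, which depends only on the sensitivity of w*.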
6.3 Case Study: Histogram Density Estimation
Our second case study is developing an end-to-end differentially private solution for histogram-based density estimation. In density estimation, we are given n samples x1, ..., xn drawn from an unknown density f, and our goal is to build an approximation f̂ to f. In a histogram density estimator, we divide the range of the data into equal-sized bins of width h; if ni out of the n input samples lie in bin i, then f̂ is the density function: f̂(x) = Σ_{i=1}^{1/h} (ni/(hn)) 1(x ∈ Bin i).

A critical parameter when constructing the histogram density estimator is the bin size h. There is much theoretical literature on how to choose h; see [16, 26] for surveys. However, the choice of h is usually data-dependent, and in practice, the optimal h is often determined by building a histogram density estimator for a few different values of h, and selecting the one which has the best performance on held-out validation data.
The most popular measure to evaluate the quality of a density estimator is the L2-distance, or the Integrated Square Error (ISE), between the density estimate and the true density:

||f̂ − f||² = ∫ (f̂(x) − f(x))² dx = ∫ f²(x) dx + ∫ f̂²(x) dx − 2 ∫ f(x) f̂(x) dx    (5)

f is typically unknown, so the ISE cannot be computed exactly. Fortunately, it is still possible to compare multiple density estimates based on this distance. The first term on the right hand side of Equation 5 depends only on f, and is equal for all f̂. The second term is a function of f̂ only, and can thus be computed. The third term is 2 E_{x∼f}[f̂(x)], and even though it cannot be computed exactly without knowledge of f, we can estimate it based on a held-out validation dataset. Thus, given a density estimator f̂ and a validation dataset V = {z1, ..., zm}, we will use the following function to evaluate the quality of f̂ on V:

q(f̂, V) = −∫ f̂²(x) dx + (2/m) Σ_{i=1}^m f̂(zi)    (6)

A higher value of q indicates a smaller distance ||f̂ − f||², and thus a higher quality density estimate.
For other measures, see [6].
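For a histogram estimator on [0, 1] with bin width h, the integral ∫f̂² in Equation 6 has the closed form h·Σ_i (ci/(hn))² over the bin counts, so the validation score can be computed exactly. A minimal sketch (bin counts are assumed given; names are hypothetical):

```python
def histogram_density(counts, h):
    """Return f_hat as a function on [0,1], given per-bin counts and bin width h."""
    n = sum(counts)
    def f_hat(x):
        i = min(int(x / h), len(counts) - 1)  # clamp x = 1 into the last bin
        return counts[i] / (h * n)
    return f_hat

def validation_score(counts, h, V):
    """q(f_hat, V) = -integral(f_hat^2) + (2/m) * sum_i f_hat(z_i), Equation 6."""
    n = sum(counts)
    integral_sq = sum((c / (h * n)) ** 2 * h for c in counts)
    f_hat = histogram_density(counts, h)
    m = len(V)
    return -integral_sq + 2.0 * sum(f_hat(z) for z in V) / m
```

For instance, two bins of width 0.5 with one sample each give the uniform density f̂ ≡ 1, for which ∫f̂² = 1 and q = 1 on any validation set.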
In the sequel, we assume that the data lies in the interval [0, 1] and that this interval is known in advance. For ease of notation, we also assume without loss of generality that 1/h is an integer. For ease of exposition, we confine ourselves to one-dimensional data, although the general techniques can be easily extended to higher dimensions. Given n samples and a bin size h, several works, including [7, 19, 27, 28, 20, 29, 14], have shown different ways of constructing and sampling from differentially private histograms. The most basic approach is to construct a non-private histogram and then add Laplace noise to each cell, followed by some post-processing. Algorithm 5 presents a variant of a differentially private histogram density estimator due to [19] in our framework.
Algorithm 5 Differentially Private Histogram Density Estimator
1: Inputs: Bin size h (such that 1/h is an integer), data T = {x1, ..., xn}, privacy parameter ε.
2: for i = 1, ..., 1/h do
3:   Draw Ri independently from the standard Laplace density: νG(r) = (1/2)e^{−|r|}.
4:   Let Ii = [(i−1)h, ih). Define ni = Σ_{j=1}^n 1(xj ∈ Ii), and let ñi = max(0, ni + 2Ri/ε).
5: end for
6: Let ñ = Σ_i ñi. Return the density estimator: f̂(x) = Σ_{i=1}^{1/h} (ñi/(hñ)) 1(x ∈ Ii).
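A minimal sketch of Algorithm 5 in standard-library Python (the Laplace draw uses inverse-CDF sampling; assumes data in [0, 1] and a non-degenerate noisy total):

```python
import math
import random

def private_histogram_counts(T, h, eps):
    """Algorithm 5 sketch: perturb each bin count with (2/eps)-scaled Laplace
    noise and clip at zero; returns (n_tilde_1, ..., n_tilde_{1/h})."""
    bins = round(1.0 / h)
    counts = [0] * bins
    for x in T:
        counts[min(int(x / h), bins - 1)] += 1
    noisy = []
    for c in counts:
        u = random.random() - 0.5           # uniform on [-0.5, 0.5)
        mag = -math.log(1.0 - 2.0 * abs(u))  # standard Laplace via inverse CDF
        r = mag if u >= 0 else -mag
        noisy.append(max(0.0, c + 2.0 * r / eps))
    return noisy

def private_density(T, h, eps):
    """Normalize the noisy counts into a density on [0, 1]."""
    noisy = private_histogram_counts(T, h, eps)
    total = sum(noisy)  # assumed positive; holds w.h.p. for moderate n and eps
    def f_hat(x):
        i = min(int(x / h), len(noisy) - 1)
        return noisy[i] / (h * total)
    return f_hat
```

By construction the returned f̂ integrates to 1 whenever the noisy total is positive, matching step 6 of the algorithm.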
The following theorem shows stability guarantees on the differentially private histogram density estimator described in Algorithm 5.

Theorem 6 (Stability of Private Histogram Density Estimator) Let H = {h1, ..., hk} be a set of bin sizes, and let hmin = min_i hi. For any fixed ε, if the sample size n ≥ 1 + 2 ln(4k/δ)/(ε√hmin), then the validation score q in Equation 6 is (Δ1, Δ2, δ)-stable with respect to Algorithm 5 and H for:

Δ1 = 6/((1 − γ)hmin),  Δ2 = 2/hmin,  where γ = 2 ln(4k/δ)/(nε√hmin).
6.4 Proofs of Theorems 1, 2 and 3
We now present the proofs of Theorems 1, 2 and 3. Our proofs involve ideas similar to those in the analysis of the multiplicative weights update method for answering a set of linear queries in a differentially private manner [13].

Let A(D) denote the output of Algorithm 1 when the input is a sensitive dataset D = (T, V), where T is the training part and V is the validation part. Let D′ = (T′, V) where T and T′ differ in the value of a single individual, and let D′′ = (T, V′) where V and V′ differ in the value of a single individual. The proof of Theorem 1 is a consequence of the following two lemmas.

Lemma 1 Suppose that the conditions in Theorem 1 hold. Then, for all D = (T, V), all D′ = (T′, V) such that T and T′ differ in the value of a single individual, and for any set of outcomes S:

Pr(A(D) ∈ S) ≤ e^{ε2} Pr(A(D′) ∈ S) + δ    (7)

Lemma 2 Suppose that the conditions in Theorem 1 hold. Then, for all D = (T, V), all D′′ = (T, V′) such that V and V′ differ in the value of a single individual, and for any set of outcomes S:

Pr(A(D) ∈ S) ≤ e^{ε2} Pr(A(D′′) ∈ S) + δ    (8)
Proof: (Of Lemma 1) Let S = (I, C), where I ⊆ [k] is a set of indices and C ⊆ C. Let E be the event that all of R1, ..., Rk lie in the set Ψ. We will first show that conditioned on E, for all i, it holds that:

Pr(i* = i | D, E) ≤ e^{ε2} Pr(i* = i | D′, E)    (9)

Since Pr(E) ≥ 1 − δ, from the conditions in Theorem 1, for any subset I of indices, we can write:

Pr(i* ∈ I | D) ≤ Pr(i* ∈ I | D, E) Pr(E) + (1 − Pr(E))
             ≤ e^{ε2} Pr(i* ∈ I | D′, E) Pr(E) + δ
             ≤ e^{ε2} Pr(i* ∈ I, E | D′) + δ
             ≤ e^{ε2} Pr(i* ∈ I | D′) + δ    (10)

We will now prove Equation 9. For this purpose, we adopt the following notation. We use the notation Z\i to denote the random variables Z1, ..., Z_{i−1}, Z_{i+1}, ..., Zk and z\i to denote the set of values z1, ..., z_{i−1}, z_{i+1}, ..., zk. We also use the notation h(·) to represent the density induced on the random variables Z1, ..., Zk by Algorithm 1. In addition, we use the notation R to denote the vector (R1, ..., Rk). We first fix a value z\i for Z\i, and a value of R such that R1, ..., Rk all lie in Ψ, and consider the ratio of probabilities:

Pr(i* = i | Z\i = z\i, D, R) / Pr(i* = i | Z\i = z\i, D′, R)

Observe that this ratio of probabilities is equal to:

Pr(Zi + q(F(T, λi, ε1, Ri), V) ≥ sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V)) / Pr(Zi + q(F(T′, λi, ε1, Ri), V) ≥ sup_{j≠i} zj + q(F(T′, λj, ε1, Rj), V))

which is in turn equal to:

Pr(Zi ≥ sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V) − q(F(T, λi, ε1, Ri), V)) / Pr(Zi ≥ sup_{j≠i} zj + q(F(T′, λj, ε1, Rj), V) − q(F(T′, λi, ε1, Ri), V))

Observe that from the stability condition,

|(q(F(T, λj, ε1, Rj), V) − q(F(T, λi, ε1, Ri), V)) − (q(F(T′, λj, ε1, Rj), V) − q(F(T′, λi, ε1, Ri), V))|
  ≤ |q(F(T, λj, ε1, Rj), V) − q(F(T′, λj, ε1, Rj), V)| + |q(F(T, λi, ε1, Ri), V) − q(F(T′, λi, ε1, Ri), V)|
  ≤ 2Δ1/n ≤ 2Λ

Thus, the ratio of the probabilities is at most the ratio Pr(Zi ≥ ξ) / Pr(Zi ≥ ξ + 2Λ), where ξ = sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V) − q(F(T, λi, ε1, Ri), V), which is at most e^{ε2} by properties of the exponential distribution. Thus, we have established that for all z\i, and for all R in Ψ^k,

Pr(i* = i | Z\i = z\i, D, R) ≤ e^{ε2} · Pr(i* = i | Z\i = z\i, D′, R)

Equation 9 follows by integrating over z\i and R. The lemma follows.
Proof: (Of Lemma 2) Let S = (I, C), where I ⊆ [k] is a set of indices and C ⊆ C. Let E be the event that all of R1, ..., Rk lie in Ψ. We will first show that conditioned on E, for all i, it holds that:

Pr(i* = i | D, E) ≤ e^{ε2} Pr(i* = i | D′′, E)    (11)

Since Pr(E) ≥ 1 − δ, from the conditions in Theorem 1, for any subset I of indices, we can write:

Pr(i* ∈ I | D) ≤ Pr(i* ∈ I | D, E) Pr(E) + (1 − Pr(E))
             ≤ e^{ε2} Pr(i* ∈ I | D′′, E) Pr(E) + δ
             ≤ e^{ε2} Pr(i* ∈ I, E | D′′) + δ
             ≤ e^{ε2} Pr(i* ∈ I | D′′) + δ    (12)

We will now focus on showing Equation 11. We first consider the case when event E holds, that is, Rj ∈ Ψ for j = 1, ..., k. In this case, the stability definition and the conditions of the theorem imply that for all λj ∈ Λ,

|q(F(T, λj, ε1, Rj), V) − q(F(T, λj, ε1, Rj), V′)| ≤ Δ2/m ≤ Λ    (13)

In what follows, we use the notation Z\i to denote the random variables Z1, ..., Z_{i−1}, Z_{i+1}, ..., Zk and z\i to denote the set of values z1, ..., z_{i−1}, z_{i+1}, ..., zk. We also use the notation h(·) to represent the density induced on the random variables Z1, ..., Zk by Algorithm 1. In addition, we use the notation R to denote the vector (R1, ..., Rk). We first fix a value z\i for Z\i, and a value of R such that E holds, and consider the ratio of probabilities:

Pr(i* = i | Z\i = z\i, D, R) / Pr(i* = i | Z\i = z\i, D′′, R)

Observe that this ratio of probabilities is equal to:

Pr(Zi + q(F(T, λi, ε1, Ri), V) ≥ sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V)) / Pr(Zi + q(F(T, λi, ε1, Ri), V′) ≥ sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V′))

which is in turn equal to:

Pr(Zi ≥ sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V) − q(F(T, λi, ε1, Ri), V)) / Pr(Zi ≥ sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V′) − q(F(T, λi, ε1, Ri), V′))

Observe that from Equation 13,

|(q(F(T, λj, ε1, Rj), V) − q(F(T, λi, ε1, Ri), V)) − (q(F(T, λj, ε1, Rj), V′) − q(F(T, λi, ε1, Ri), V′))| ≤ 2Λ

Thus, the ratio of the probabilities is at most the ratio Pr(Zi ≥ ξ) / Pr(Zi ≥ ξ + 2Λ) for ξ = sup_{j≠i} zj + q(F(T, λj, ε1, Rj), V) − q(F(T, λi, ε1, Ri), V), which is at most e^{ε2} by properties of the exponential distribution. Thus, we have established that when R ∈ Ψ^k, for all i,

Pr(i* = i | Z\i = z\i, D, R) / Pr(i* = i | Z\i = z\i, D′′, R) ≤ e^{ε2}

Thus, for any such R, we can write:

Pr(i* = i | D, R) / Pr(i* = i | D′′, R) = ( ∫ Pr(i* = i | Z\i = z\i, D, R) h(z\i) dz\i ) / ( ∫ Pr(i* = i | Z\i = z\i, D′′, R) h(z\i) dz\i ) ≤ e^{ε2}

Equation 11 now follows by integrating R over E.
Proof: (Of Theorem 1) The proof of Theorem 1 follows from a combination of Lemmas 1 and 2.

Proof: (Of Theorem 2) The proof of Theorem 2 follows from privacy composition; Theorem 1 ensures that Step (2) of Algorithm 2 is (ε2, δ)-differentially private; moreover, the training procedure T is ε1-differentially private. The theorem follows by composing these two results.

Proof: (Of Theorem 3) Observe that:

Pr( q(h_{i*}, V) < max_{1≤i≤k} q(hi, V) − 2Λ log(k/δ0)/ε2 ) ≤ Pr( ∃j s.t. Zj ≥ 2Λ log(k/δ0)/ε2 )

By properties of the exponential distribution, for any fixed j, Pr(Zj ≥ 2Λ log(k/δ0)/ε2) ≤ δ0/k. Thus the theorem follows by a Union Bound.

6.5 Proof of Theorem 4
Proof: (Of Theorem 4 for Output Perturbation) Let T and T′ be two training sets which differ in a single labelled example ((xn, yn) vs. (x′n, y′n)), and let w*(T) and w*(T′) be the solutions to the regularized convex optimization problem in Equation 1 when the inputs are T and T′ respectively. We observe that for fixed λ, ε and R,

F(T, λ, ε, R) − F(T′, λ, ε, R) = w*(T) − w*(T′)

When the training sets are T and T′, the objective functions in the regularized convex optimization problems are both λ-strongly convex, and they differ by (1/n)(ℓ(w, xn, yn) − ℓ(w, x′n, y′n)). Combining this fact with Lemma 1 of [4], and using the fact that ℓ is 1-Lipschitz, we have that for all λ and R,

||F(T, λ, ε, R) − F(T′, λ, ε, R)|| ≤ 2/(λn)

Since g is L-Lipschitz, this implies that for any fixed validation set V, and for all λ, ε and R,

|q(F(T, λ, ε, R), V) − q(F(T′, λ, ε, R), V)| ≤ 2L/(λn)    (14)

Now let V and V′ be two validation sets that differ in the value of a single labelled example (x̃m, ỹm). Since g ≥ 0 for all inputs, for any such V and V′, and for a fixed λ, ε and R,

|q(F(T, λ, ε, R), V) − q(F(T, λ, ε, R), V′)| ≤ gmax/m, where

gmax = sup_{(x,y)∈X} g(F(T, λ, ε, R), x, y)

By definition, gmax ≤ g*. Moreover, as g is L-Lipschitz,

gmax ≤ L · ||F(T, λ, ε, R)||

Now, let E be the event that ||R|| ≤ d log(dk/δ). From Lemma 4 of [4], Pr(E) ≥ 1 − δ/k. Thus, provided E holds, we have that:

||F(T, λ, ε, R)|| ≤ ||w*|| + d log(dk/δ)/(λεn) ≤ 1/λ + d log(dk/δ)/(λεn) = (1/λ)(1 + d log(dk/δ)/(εn))

where the bound on ||w*|| follows from an application of Lemma 1 of [4] on the functions (1/2)λ||w||² and (1/2)λ||w||² + (1/n) Σ_{i=1}^n ℓ(w, xi, yi). This implies that provided E holds, for all training sets T, and for all λ,

|q(F(T, λ, ε, R), V) − q(F(T, λ, ε, R), V′)| ≤ (L/(λm))(1 + d log(dk/δ)/(εn))    (15)

The theorem now follows from a combination of Equations 14 and 15, and the definition of g*.
Proof: (Of Theorem 4 for Objective Perturbation) Let T and T′ be two training sets which differ in a single labelled example (xn, yn). We observe that for a fixed R and λ, the objectives of the regularized convex optimization problem in Equation 2 differ in the term (1/n)(ℓ(w, xn, yn) − ℓ(w, x′n, y′n)). Combining this with Lemma 1 of [4], and using the fact that ℓ is 1-Lipschitz, we have that for all λ, ε, R,

||F(T, λ, ε, R) − F(T′, λ, ε, R)|| ≤ 2/(λn)

Since g is L-Lipschitz, this implies that for any fixed validation set V, and for all λ and R,

|q(F(T, λ, ε, R), V) − q(F(T′, λ, ε, R), V)| ≤ 2L/(λn)    (16)

Now let V and V′ be two validation sets that differ in the value of a single labelled example (x̃m, ỹm). Since g ≥ 0, for any such V and V′, |q(F(T, λ, ε, R), V) − q(F(T, λ, ε, R), V′)| ≤ gmax/m, where

gmax = sup_{(x,y)∈X} g(F(T, λ, ε, R), x, y)

By definition, gmax ≤ g*. Moreover, as g is L-Lipschitz,

gmax ≤ L · ||F(T, λ, ε, R)||

Let E be the event that ||R|| ≤ d log(dk/δ). From Lemma 4 of [4], Pr(E) ≥ 1 − δ/k. Thus, provided E holds, we have that:

||F(T, λ, ε, R)|| ≤ (1 + ||R||/(εn))/λ ≤ (1/λ)(1 + d log(dk/δ)/(εn))

This implies that provided E holds, for all training sets T, and for all λ,

|q(F(T, λ, ε, R), V) − q(F(T, λ, ε, R), V′)| ≤ (L/(λm))(1 + d log(dk/δ)/(εn))    (17)

The theorem now follows from a combination of Equations 16 and 17, and the definition of g*.
6.6 Proof of Theorem 6
Lemma 3 (Concentration of Sum of Laplace Random Variables) Let Z1, ..., Zs be s ≥ 2 iid standard Laplace random variables, and let Z = Z1 + ... + Zs. Then, for any λ,

Pr(Z ≥ λ) ≤ (1 − 1/s)^{−s} e^{−λ/√s} ≤ 4 e^{−λ/√s}

Proof: The proof follows from using the method of generating functions. The generating function for the standard Laplace distribution is ϕ(t) = E[e^{tX}] = 1/(1 − t²), for |t| < 1. As Z1, ..., Zs are independently distributed, the generating function for Z is E[e^{tZ}] = (1 − t²)^{−s}. Now, we can write:

Pr(Z ≥ λ) = Pr(e^{tZ} ≥ e^{tλ}) ≤ E[e^{tZ}]/e^{tλ} = e^{−tλ} (1 − t²)^{−s}

Plugging in t = 1/√s, we get that:

Pr(Z ≥ λ) ≤ (1 − 1/s)^{−s} e^{−λ/√s}

The lemma follows by observing that for s ≥ 2, (1 − 1/s)^s ≥ 1/4.
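A quick numerical sanity check of the tail bound in Lemma 3 (a Monte Carlo sketch matching the lemma's constants, not a proof; the function names are hypothetical):

```python
import math
import random

def laplace_sum_tail(s, lam, trials=50_000, seed=0):
    """Empirical Pr(Z1 + ... + Zs >= lam) for iid standard Laplace Zi,
    sampled via the inverse CDF."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z = 0.0
        for _ in range(s):
            u = rng.random() - 0.5
            mag = -math.log(1.0 - 2.0 * abs(u))
            z += mag if u >= 0 else -mag
        hits += z >= lam
    return hits / trials

def lemma3_bound(s, lam):
    """Upper bound 4 * exp(-lam / sqrt(s)) from Lemma 3."""
    return 4.0 * math.exp(-lam / math.sqrt(s))
```

The Chernoff-style bound is loose but always dominates the empirical tail probability, which is what the proof of Theorem 6 relies on.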
Proof: (Of Theorem 6) Let V = {z1, ..., zm} be a validation dataset, and let V′ be a validation dataset that differs from V in a single sample (zm vs z′m). We use the notation R to denote the sequence of values R = (R1, R2, ..., R_{1/h}). Given an input sample T, a bin size h, a privacy parameter ε, and a sequence R, we use the notation f̂_{T,h,ε,R} to denote the density estimator F(T, h, ε, R). For all such T, all h, all ε and all R, we can write:

|q(F(T, h, ε, R), V) − q(F(T, h, ε, R), V′)| = (2/m) |f̂_{T,h,ε,R}(zm) − f̂_{T,h,ε,R}(z′m)| ≤ (2/m) · (max_i ñi)/(hñ) ≤ 2/(mh)    (18)

For a fixed value of h, we define the following event E:

Σ_{i=1}^{1/h} Ri ≥ −ln(4k/δ)/√h

Using the symmetry of Laplace random variables and Lemma 3, we get that Pr(E) ≥ 1 − δ/k. We observe that provided the event E holds,

ñ ≥ n + (2/ε) Σ_{i=1}^{1/h} Ri ≥ n − 2 ln(4k/δ)/(ε√h) ≥ n(1 − γ)    (19)
Let T and T′ be two input datasets that differ in a single sample (xn vs x′n). We fix a bin size h, a value of ε, and a sequence R, and for these fixed values, we use the notation ñi and ñ′i to denote the value of ñi in Algorithm 5 when the inputs are T and T′ respectively. Similarly, we use ñ = Σ_i ñi and ñ′ = Σ_i ñ′i.

For any V, we can write:

q(F(T, h, ε, R), V) − q(F(T′, h, ε, R), V) = (2/m) Σ_{j=1}^m ( f̂_{T,h,ε,R}(zj) − f̂_{T′,h,ε,R}(zj) ) − Σ_{i=1}^{1/h} h ( ñi²/(h²ñ²) − ñ′i²/(h²ñ′²) )    (20)
We now look at bounding the right hand side of Equation 20 term by term. Suppose T′ is obtained from T by moving a single sample xn from bin a to bin b in the histogram. Then, depending on the relative values of ña and ñb, there are four cases:

1. ñ′a = ña − 1, ñ′b = ñb + 1. Thus ñ′ = ñ.
2. ñ′a = ña = 0, ñ′b = ñb + 1. Thus ñ′ = ñ + 1.
3. ñ′a = ña − 1, ñ′b = ñb = 0. Thus ñ′ = ñ − 1.
4. ñ′a = ña = 0, ñ′b = ñb = 0. Thus ñ′ = ñ.

In the fourth case, f̂_{T,h,ε,R} = f̂_{T′,h,ε,R}, and thus the right hand side of Equation 20 is 0. Moreover, the second and the third cases are symmetric. We thus focus on the first two cases.
In the first case, the first term on the right hand side of Equation 20 can be written as:

(2/m) Σ_{j=1}^m Σ_{i=1}^{1/h} ( ñi/(hñ) − ñ′i/(hñ′) ) 1(zj ∈ Ii) = (2/m) Σ_{j=1}^m Σ_{i=1}^{1/h} ( (ñi − ñ′i)/(hñ) ) 1(zj ∈ Ii) ≤ (2/m) · m · 1/(hñ) = 2/(hñ)

The second term on the right hand side of Equation 20 can be written as:

Σ_{i=1}^{1/h} ( ñi²/(hñ²) − ñ′i²/(hñ′²) ) = ( ña² + ñb² − (ña − 1)² − (ñb + 1)² )/(hñ²) = ( 2ña − 2ñb − 2 )/(hñ²) ≤ 2/(hñ)

where the last step follows from the fact that ñ′b = ñb + 1 ≤ ñ. Thus, for the first case, the right hand side of Equation 20 is at most 4/(hñ).

We now consider the second case. The first term on the right hand side of Equation 20 can be written as:

(2/m) Σ_{j=1}^m Σ_{i=1}^{1/h} ( ñi/(hñ) − ñ′i/(hñ′) ) 1(zj ∈ Ii) = (2/(mh)) Σ_{j=1}^m Σ_{i=1}^{1/h} ( ñi/ñ − ñ′i/(ñ + 1) ) 1(zj ∈ Ii)
  ≤ (2/(hm)) · m · max( |ñi(ñ + 1) − ñiñ|, |ñi(ñ + 1) − ñ(ñi + 1)| ) / ( ñ(ñ + 1) )
  ≤ (2/h) · max( |ñi|, |ñ − ñi| ) / ( ñ(ñ + 1) ) ≤ 2/(h(ñ + 1))

where the last step follows from the fact that max(|ñi|, |ñ − ñi|) ≤ ñ. The second term on the right hand side of Equation 20 can be written as:

Σ_{i=1}^{1/h} ( ñi²/(hñ²) − ñ′i²/(hñ′²) ) = Σ_{i≠b} ( ñi²/(hñ²) − ñi²/(h(ñ + 1)²) ) + ñb²/(hñ²) − (ñb + 1)²/(h(ñ + 1)²)
  = ( (2ñ + 1)/(hñ²(ñ + 1)²) ) Σ_{i≠b} ñi² + ( (ñb − ñ)(2ñbñ + ñ + ñb) )/(hñ²(ñ + 1)²)
  ≤ (2ñ + 1)/(h(ñ + 1)²) + ñ · 2ñ(ñ + 1)/(hñ²(ñ + 1)²) ≤ 4/(h(ñ + 1))

Thus, in the second case, the right hand side of Equation 20 is at most 6/(h(ñ + 1)). We observe that the third case is symmetric to the second case, and thus we can carry out very similar calculations in the third case to show that the right hand side is at most 6/(hñ). Thus, we have that for any T and T′, provided the event E holds,

|q(F(T, h, ε, R), V) − q(F(T′, h, ε, R), V)| ≤ 6/(hñ)    (21)

The theorem now follows by combining Equation 21 with Equation 19.
6.7 Proof of Theorem 5

Lemma 4 (Parallel construction) Let A = {A1, A2, ..., Ak} be a list of k independently randomized functions, and let Ai be εi-differentially private. Let {D1, D2, ..., Dk} be k subsets of a set D such that i ≠ j implies Di ∩ Dj = ∅. The algorithm B(D, A) = (A1(D1), A2(D2), ..., Ak(Dk)) is max_{1≤i≤k} εi-differentially private.
Proof: Let \(D, D'\) be two datasets such that their symmetric difference contains one element. We have that
\[
\frac{P(B(D, A) \in S)}{P(B(D', A) \in S)}
= \frac{P(B(D, A) \in S_1 \times \cdots \times S_k)}{P(B(D', A) \in S_1 \times \cdots \times S_k)}
= \frac{P(A_1(D_1) \in S_1) \cdots P(A_k(D_k) \in S_k)}{P(A_1(D'_1) \in S_1) \cdots P(A_k(D'_k) \in S_k)} \tag{22}
\]
by independence of randomness in the \(A_i\). Since \(i \ne j \Rightarrow D_i \cap D_j = \emptyset\), there exists at most one index \(j\) such that \(D_j \ne D'_j\). If \(j\) does not exist, (22) reduces to \(e^0 \le e^{\max_{1\le i\le k} \epsilon_i}\). Let \(j\) exist; then
\[
\frac{P(B(D, A) \in S)}{P(B(D', A) \in S)} = \frac{P(A_j(D_j) \in S_j)}{P(A_j(D'_j) \in S_j)} \le e^{\epsilon_j} \le e^{\max_{1\le i\le k} \epsilon_i},
\]
which concludes the proof.
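Lemma 4 can be illustrated with a toy release. The sketch below is ours (the partition and the \(\epsilon_i\) values are invented): each disjoint partition gets a Laplace-noised count, and because any single record affects exactly one partition, the combined release costs only \(\max_i \epsilon_i\) rather than the sum.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sample from Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def noisy_count(records, eps):
    # eps-DP counting query: sensitivity 1, Laplace noise of scale 1/eps.
    return len(records) + laplace_noise(1.0 / eps)

def parallel_release(partitions, eps_list):
    # The partitions are pairwise disjoint, so by Lemma 4 this combined
    # release is max(eps_list)-differentially private.
    return [noisy_count(p, e) for p, e in zip(partitions, eps_list)]

random.seed(0)
partitions = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]   # disjoint D1, D2, D3
eps_list = [0.5, 1.0, 0.25]
release = parallel_release(partitions, eps_list)
print(release)
print("total privacy cost:", max(eps_list))      # 1.0, not 0.5+1.0+0.25
```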
Proof (Theorem 5): We begin by separating task (a) of producing the \(f_i\) in step 1. from the task (b) of computing \(e_i\) in step 2. and selecting \(i^*\) in step 3.

From the parallel construction Lemma 4 it follows that (a) in dataSplit is \(\epsilon\)-differentially private. From standard composition of privacy it follows that (a) in alphaSplit is \(\epsilon\)-differentially private.

Task (b) is for both alphaSplit and dataSplit an application of the exponential mechanism [21], which for choosing with a probability proportional to \(\exp(-\gamma e_i)\) yields \(2\gamma\Delta\)-differential privacy, where \(\Delta\) is the sensitivity of \(e_i\). Since a single change in \(V\) can change the number of errors any fixed classifier can make by at most \(1 = \Delta\), we get that task (b) is \(\epsilon\)-differentially private for \(\gamma = \epsilon/2\).

If \(T\) and \(V\) are disjoint, we get by parallel construction that both alphaSplit and dataSplit yield \(\epsilon\)-differential privacy. If \(T\) and \(V\) are not disjoint, by standard composition of privacy we get that both alphaSplit and dataSplit yield \(2\epsilon\)-differential privacy.

In Random, the results of step 2. in task (b) are never used in step 3. Step 3 is done without looking at the input data and does not incur loss of differential privacy. We can therefore simulate Random by first choosing \(i^*\) uniformly at random, and then computing \(f_{i^*}\) at \(\epsilon\)-differential privacy, which by standard privacy composition is \(\epsilon\)-differentially private.
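The exponential-mechanism step in task (b) can be sketched as follows. This is our toy illustration, not the authors' released code, and the error counts are invented: scores are validation error counts with sensitivity 1, and sampling \(i^*\) with probability proportional to \(\exp(-(\epsilon/2)\,e_i)\) is \(\epsilon\)-differentially private.

```python
import math
import random

def exponential_mechanism(errors, eps):
    # Pick index i with probability proportional to exp(-(eps/2) * e_i).
    # The error count has sensitivity 1, so the choice is eps-DP.
    weights = [math.exp(-(eps / 2.0) * e) for e in errors]
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(errors) - 1

random.seed(1)
errors = [12, 3, 7, 30]          # hypothetical validation error counts e_i
picks = [exponential_mechanism(errors, eps=2.0) for _ in range(1000)]
print(picks.count(1) / len(picks))   # the low-error index 1 dominates
```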
6.8 Experimental selection of regularizer index
[Figure 2: two panels, "Adult" and "Magic", plotting the selected regularizer index (y-axis, roughly 2 to 6) against the privacy level alpha in {0.3, 0.5, 1.0, 2.0, 3.0, 5.0}, with curves for Stability, alphaSplit, dataSplit, Random, and Control.]

Figure 2: A summary of 10 times 10-fold cross-validation selection of the regularizer index \(i\) for different privacy levels \(\alpha\). Each point in the figure represents a summary of 100 data points. The error bars indicate a bootstrap sample estimate of the 95% confidence interval of the mean. A small amount of jitter was added to positions on the x-axes to avoid over-plotting.
Soravit Changpinyo?
Dept. of Computer Science
U. of Southern California
Los Angeles, CA 90089
[email protected]
Kuan Liu*
Dept. of Computer Science
U. of Southern California
Los Angeles, CA 90089
[email protected]
Fei Sha
Dept. of Computer Science
U. of Southern California
Los Angeles, CA 90089
[email protected]
Abstract
Measuring similarity is crucial to many learning tasks. To this end, metric learning
has been the dominant paradigm. However, similarity is a richer and broader notion than what metrics entail. For example, similarity can arise from the process of
aggregating the decisions of multiple latent components, where each latent component compares data in its own way by focusing on a different subset of features.
In this paper, we propose Similarity Component Analysis (SCA), a probabilistic
graphical model that discovers those latent components from data. In SCA, a latent component generates a local similarity value, computed with its own metric,
independently of other components. The final similarity measure is then obtained
by combining the local similarity values with a (noisy-)OR gate. We derive an
EM-based algorithm for fitting the model parameters with similarity-annotated
data from pairwise comparisons. We validate the SCA model on synthetic datasets
where SCA discovers the ground-truth about the latent components. We also apply SCA to a multiway classification task and a link prediction task. For both
tasks, SCA attains significantly better prediction accuracies than competing methods. Moreover, we show how SCA can be instrumental in exploratory analysis of
data, where we gain insights about the data by examining patterns hidden in its
latent components' local similarity values.
1 Introduction
Learning how to measure similarity (or dissimilarity) is a fundamental problem in machine learning.
Arguably, if we have the right measure, we would be able to achieve a perfect classification or
clustering of data. If we parameterize the desired dissimilarity measure in the form of a metric
function, the resulting learning problem is often referred to as metric learning. In the last few years,
researchers have invented a plethora of such algorithms [18, 5, 11, 13, 17, 9]. Those algorithms have
been successfully applied to a wide range of application domains.
However, the notion of (dis)similarity is much richer than what metric is able to capture. Consider
the classical example of CENTAUR, MAN and HORSE. MAN is similar to CENTAUR and CENTAUR
is similar to HORSE. Metric learning algorithms that model the two similarities well would need to
assign small distances among those two pairs. On the other hand, the algorithms will also need to
strenuously battle against assigning a small distance between MAN and HORSE due to the triangle inequality, so as to avoid the fallacy that MAN is similar to HORSE too! This example (and others [12])
thus illustrates the important properties, such as non-transitiveness and non-triangular inequality, of
(dis)similarity that metric learning has not adequately addressed.
Representing objects as points in high-dimensional feature spaces, most metric learning learning algorithms assume that the same set of features contribute indistinguishably to assessing similarity. In
* Equal contributions
[Figure 1: left, the SCA graphical model: a data pair (x_m, x_n) feeds K latent components s_k, each with parameters \(\theta_k\), whose local similarity values combine into the output s over all N x N pairs; right, the CENTAUR/MAN/HORSE example, where component probabilities such as (p_1 = 0.9, p_2 = 0.1), (p_1 = 0.1, p_2 = 0.9) and (p_1 = 0.1, p_2 = 0.1) yield p(s = 1) = 0.91, 0.91 and 0.19 for the three pairs.]

Figure 1: Similarity Component Analysis and its application to the example of CENTAUR, MAN and HORSE. SCA has K latent components which give rise to local similarity values \(s_k\) conditioned on a pair of data \(x_m\) and \(x_n\). The model's output \(s\) is a combination of all local values through an OR model (straightforward to extend to a noisy-OR model). \(\theta_k\) is the parameter vector for \(p(s_k \mid x_m, x_n)\). See texts for details.
particular, the popular Mahalanobis metric weights each feature (and their interactions) additively
when calculating distances. In contrast, similarity can arise from a complex aggregation of comparing data instances on multiple subsets of features, to which we refer as latent components. For
instance, there are multiple reasons for us to rate two songs being similar: being written by the same
composers, being performed by the same band, or of the same genre. For an arbitrary pair of songs,
we can rate the similarity between them based on one of the many components or an arbitrary subset of components, while ignoring the rest. Note that, in the learning setting, we observe only the
aggregated results of those comparisons ? which components are used is latent.
Multi-component based similarity exists also in other types of data. Consider a social network where
the network structure (i.e., links) is a supposition of multiple networks where people are connected
for various organizational reasons: school, profession, or hobby. It is thus unrealistic to assume that
the links exist due to a single cause. More appropriately, social networks are "multiplex" [6, 15].
In this paper, we propose Similarity Component Analysis (SCA) to model the richer similarity relationships beyond what current metric learning algorithms can offer. SCA is a Bayesian network,
illustrated in Fig. 1. The similarity (node s) is modeled as a probabilistic combination of multiple
latent components. Each latent component (sk ) assigns a local similarity value to whether or not two
objects are similar, inferring from only a subset (but unknown) of features. The (local) similarity
values of those latent components are aggregated with a (noisy-) OR model. Intuitively, two objects
are likely to be similar if they are considered to be similar by at least one component. Two objects
are likely to be dissimilar if none of the components voices up.
We derive an EM-based algorithm for fitting the model with data annotated with similarity relationships. The algorithm infers the intermediate similarity values of latent components and identifies the
parameters for the (noisy-)OR model, as well as each latent component?s conditional distribution,
by maximizing the likelihood of the training data.
We validate SCA on several learning tasks. On synthetic data where ground-truth is available, we
confirm SCA's ability in discovering latent components and their corresponding subsets of features. On a multiway classification task, we contrast SCA to state-of-the-art metric learning algorithms and demonstrate SCA's superior performance in classifying data samples. Finally, we use SCA to
model the network link structures among research articles published at NIPS proceedings. We show
that SCA achieves the best link prediction accuracy among competitive algorithms. We also conduct
extensive analysis on how learned latent components effectively represent link structures.
In section 2, we describe the SCA model and inference and learning algorithms. We report our
empirical findings in section 3. We discuss related work in section 4 and conclude in section 5.
2 Approach
We start by describing in detail Similarity Component Analysis (SCA), a Bayesian network for
modeling similarity between two objects. We then describe the inference procedure and learning
algorithm for fitting the model parameters with similarity-annotated data.
2
2.1 Probabilistic model of similarity
In what follows, let (u, v, s) denote a pair of D-dimensional data points \(u, v \in \mathbb{R}^D\) and their associated value of similarity \(s \in \{\text{DISSIMILAR}, \text{SIMILAR}\}\) or \(\{0, 1\}\) accordingly. We are interested
in modeling the process of assigning s to these two data points. To this end, we propose Similarity
Component Analysis (SCA) to model the conditional distribution p(s|u, v), illustrated in Fig. 1.
In SCA, we assume that p(s|u, v) is a mixture of multiple latent components' local similarity
values. Each latent component evaluates its similarity value independently, using only a subset
of the D features. Intuitively, there are multiple reasons of annotating whether or not two data
instances are similar and each reason focuses locally on one aspect of the data, by restricting itself
to examining only a different subset of features.
Latent components Formally, let u[k] denote the subset of features from u corresponding to the k-th latent component, where \([k] \subseteq \{1, 2, \ldots, D\}\). The similarity assessment \(s_k\) of this component alone is determined by the distance between u[k] and v[k]
\[
d_k = (u - v)^{\mathrm{T}} M_k (u - v) \tag{1}
\]
where \(M_k \succeq 0\) is a \(D \times D\) positive semidefinite matrix, used to measure the distance more flexibly than the standard Euclidean metric. We restrict \(M_k\) to be sparse; in particular, only the corresponding [k]-th rows and columns are non-zeroes. Note that in principle [k] needs to be inferred from data, which is generally hard. Nonetheless, we have found that empirically, even without explicitly constraining \(M_k\), we often obtain a sparse solution.

The distance \(d_k\) is transformed to the probability for the Bernoulli variable \(s_k\) according to
\[
P(s_k = 1 \mid u, v) = (1 + e^{-b_k})\,[1 - \sigma(d_k - b_k)] \tag{2}
\]
where \(\sigma(\cdot)\) is the sigmoid function \(\sigma(t) = (1 + e^{-t})^{-1}\) and \(b_k\) is a bias term. Intuitively, when the (biased) distance \((d_k - b_k)\) is large, \(s_k\) is less probable to be 1 and the two data points are regarded as less similar. Note that the constraint of \(M_k\) being positive semidefinite is important, as this will constrain the probability to be bounded above by 1.
Combining local similarities Assume that there are K latent components. How can we combine all the local similarity assessments? In this work, we use an OR-gate. Namely,
\[
P(s = 1 \mid s_1, s_2, \cdots, s_K) = 1 - \prod_{k=1}^{K} I[s_k = 0] \tag{3}
\]
Thus, the two data points are similar (s = 1) if at least one of the aspects deems so, corresponding to \(s_k = 1\) for a particular k. The OR-model can be extended to the noisy-OR model [14]. To this end, we model the non-deterministic effect of each component on the final similarity value,
\[
P(s = 1 \mid s_k = 1) = \lambda_k = 1 - \theta_k, \qquad P(s = 1 \mid s_k = 0) = 0 \tag{4}
\]
In essence, the uncertainty comes from our probability of failure \(\theta_k\) (false negative) to identify the similarity if we are only allowed to consider one component at a time. If we can consider all components at the same time, this failure probability would be reduced. The noisy-OR model captures precisely this notion:
\[
P(s = 1 \mid s_1, s_2, \cdots, s_K) = 1 - \prod_{k=1}^{K} \theta_k^{I[s_k = 1]} \tag{5}
\]
where the more \(s_k = 1\), the less the false-negative rate is after combination. Note that the noisy-OR model reduces to the OR-model eq. (3) when \(\theta_k = 0\) for all k.
Similarity model Our desired model for the conditional probability p(s|u, v) is obtained by marginalizing all possible configurations of the latent components \(s = \{s_1, s_2, \cdots, s_K\}\)
\[
P(s = 0 \mid u, v) = \sum_{s} P(s = 0 \mid s) \prod_{k} P(s_k \mid u, v)
= \sum_{s} \prod_{k} \theta_k^{I[s_k = 1]} P(s_k \mid u, v)
= \prod_{k} [\theta_k p_k + 1 - p_k] = \prod_{k} [1 - \lambda_k p_k] \tag{6}
\]
where \(p_k = p(s_k = 1 \mid u, v)\) is a shorthand for eq. (2). Note that despite the exponential number of configurations for \(s\), the marginalized probability is tractable. For the OR-model where \(\theta_k = 0\), the conditional probability simplifies to \(P(s = 0 \mid u, v) = \prod_k [1 - p_k]\).
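The generative story of eqs. (1)-(6) is compact enough to state in a few lines. The following sketch is our illustration (identity metrics, zero biases and a made-up failure rate stand in for learned parameters): each component produces \(p_k\) via eq. (2), and the components are combined with the noisy-OR marginal of eq. (6).

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def local_similarity(u, v, M, b):
    # Eqs. (1)-(2): p_k from the distance d_k = (u - v)^T M_k (u - v).
    d = [ui - vi for ui, vi in zip(u, v)]
    dk = sum(d[i] * M[i][j] * d[j] for i in range(len(d)) for j in range(len(d)))
    return (1.0 + math.exp(-b)) * (1.0 - sigmoid(dk - b))

def p_similar(u, v, metrics, biases, thetas):
    # Eq. (6): P(s=1 | u, v) = 1 - prod_k [1 - (1 - theta_k) p_k].
    prob_s0 = 1.0
    for M, b, theta in zip(metrics, biases, thetas):
        prob_s0 *= 1.0 - (1.0 - theta) * local_similarity(u, v, M, b)
    return 1.0 - prob_s0

random.seed(0)
D, K = 4, 2
identity = [[1.0 if i == j else 0.0 for j in range(D)] for i in range(D)]
metrics = [identity] * K          # placeholder metrics
biases = [0.0] * K
thetas = [0.1] * K                # false-negative rates
u = [random.random() for _ in range(D)]
print(p_similar(u, u, metrics, biases, thetas))   # identical pair: ~0.99
```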
2.2 Inference and learning

Given an annotated training dataset \(D = \{(x_m, x_n, s_{mn})\}\), we learn the parameters, which include all the positive semidefinite matrices \(M_k\), the biases \(b_k\) and the false negative rates \(\theta_k\) (if noisy-OR is used), by maximizing the likelihood of \(D\). Note that we will assume that \(K\) is known throughout this work. We develop an EM-style algorithm to find the local optimum of the likelihood.

Posterior The posteriors over the hidden variables are computationally tractable:
\[
q_k = P(s_k = 1 \mid u, v, s = 0) = \frac{p_k \theta_k \prod_{l \ne k} [1 - \lambda_l p_l]}{P(s = 0 \mid u, v)}
\]
\[
r_k = P(s_k = 1 \mid u, v, s = 1) = \frac{p_k \left(1 - \theta_k \prod_{l \ne k} [1 - \lambda_l p_l]\right)}{P(s = 1 \mid u, v)} \tag{7}
\]
For the OR-model eq. (3), these posteriors can be further simplified as all \(\theta_k = 0\).

Note that these posteriors are sufficient to learn the parameters \(M_k\) and \(b_k\). To learn the parameters \(\theta_k\), however, we need to compute the expected likelihood with respect to the posterior \(P(s \mid u, v, s)\). While this posterior is tractable, the expectation of the likelihood is not, and variational inference is needed [10]. We omit the derivation for brevity. In what follows, we focus on learning \(M_k\) and \(b_k\).

For the k-th component, the relevant terms in the expected log-likelihood, given the posteriors, from a single similarity assessment \(s\) on \((u, v)\), are
\[
J_k = q_k^{1-s} r_k^{s} \log P(s_k = 1 \mid u, v) + (1 - q_k^{1-s} r_k^{s}) \log (1 - P(s_k = 1 \mid u, v)) \tag{8}
\]
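Given the \(p_k\) and \(\theta_k\), the posteriors of eq. (7) that drive the E-step are simple products. A minimal sketch (ours, not the authors' implementation):

```python
def posteriors(p, theta):
    # Eq. (7): q_k = P(s_k=1 | u,v, s=0) and r_k = P(s_k=1 | u,v, s=1)
    # for the noisy-OR model, with lambda_k = 1 - theta_k.
    lam = [1.0 - t for t in theta]
    prob_s0 = 1.0
    for pk, lk in zip(p, lam):
        prob_s0 *= 1.0 - lk * pk
    prob_s1 = 1.0 - prob_s0
    q, r = [], []
    for k in range(len(p)):
        rest = 1.0
        for l in range(len(p)):
            if l != k:
                rest *= 1.0 - lam[l] * p[l]
        q.append(p[k] * theta[k] * rest / prob_s0)
        r.append(p[k] * (1.0 - theta[k] * rest) / prob_s1)
    return q, r

q, r = posteriors([0.8, 0.3], [0.2, 0.2])
print(q, r)   # r_k >= q_k: a component is more likely "on" when s = 1
```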
Learning the parameters Note that \(J_k\) is not jointly convex in \(b_k\) and \(M_k\). Thus, we optimize them alternatively. Concretely, fixing \(M_k\), we grid search and optimize over \(b_k\). Fixing \(b_k\), maximizing \(J_k\) with respect to \(M_k\) is convex optimization, as \(J_k\) is a concave function in \(M_k\) given the linear dependency of the distance eq. (1) on this parameter.

We use the method of projected gradient ascent. Essentially, we take a gradient ascent step to update \(M_k\) iteratively. If the update violates the positive semidefinite constraint, we project back to the feasible region by setting all negative eigenvalues of \(M_k\) to zeroes. Alternatively, we have found that reparameterizing \(J_k\) in the form \(M_k = L_k^{\mathrm{T}} L_k\) is more computationally advantageous, as \(L_k\) is unconstrained. We use L-BFGS to optimize with respect to \(L_k\) and obtain faster convergence and better objective function values. (While this procedure only guarantees local optima, we observe no significant detrimental effect of arriving at those solutions.) We give the exact form of gradients with respect to \(M_k\) and \(L_k\) in the Suppl. Material.
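The projection step of the projected gradient ascent, setting negative eigenvalues of \(M_k\) to zero, is a one-liner with an eigendecomposition. A sketch assuming NumPy is available:

```python
import numpy as np

def project_psd(M):
    # Euclidean projection onto the PSD cone: symmetrize, then clip
    # negative eigenvalues to zero.
    M = (M + M.T) / 2.0
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

A = np.array([[2.0, 0.0], [0.0, -1.0]])
P = project_psd(A)
print(P)                          # the negative eigendirection is removed
print(np.linalg.eigvalsh(P))      # all eigenvalues are now >= 0
```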
2.3 Extensions

Variants to local similarity models The choice of using logistic-like functions eq. (2) for modeling local similarity of the latent components is orthogonal to how those similarities are combined in eq. (3) or eq. (5). Thus, it is relatively straightforward to replace eq. (2) with a more suitable one. For instance, in some of our empirical studies, we have constrained \(M_k\) to be a diagonal matrix with nonnegative diagonal elements. This is especially useful when the feature dimensionality is extremely high. We view this flexibility as a modeling advantage.

Disjoint components We could also explicitly express our desiderata that latent components focus on non-overlapping features. To this end, we penalize the likelihood of the data with the following regularizer to promote disjoint components
\[
R(\{M_k\}) = \sum_{k \ne k'} \mathrm{diag}(M_k)^{\mathrm{T}}\, \mathrm{diag}(M_{k'}) \tag{9}
\]
where \(\mathrm{diag}(\cdot)\) extracts the diagonal elements of the matrix. As the metrics are constrained to be positive semidefinite, the inner product attains its minimum of zero when the diagonal elements, which are nonnegative, are orthogonal to each other. This will introduce zero elements on the diagonals of the metrics, which will in turn deselect the corresponding feature dimensions, because the corresponding rows and columns of those elements are necessarily zero due to the positive semidefinite constraints. Thus, metrics that have orthogonal diagonal vectors will use non-overlapping subsets of features.
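The regularizer of eq. (9) is just a sum of inner products between the metrics' diagonals. A small sketch (ours; the example metrics are invented):

```python
import numpy as np

def disjointness_penalty(metrics):
    # Eq. (9): R({M_k}) = sum over k != k' of diag(M_k)^T diag(M_k').
    diags = [np.diag(M) for M in metrics]
    return float(sum(diags[k] @ diags[kp]
                     for k in range(len(diags))
                     for kp in range(len(diags)) if k != kp))

M1 = np.diag([1.0, 1.0, 0.0, 0.0])   # component 1 uses features 0-1
M2 = np.diag([0.0, 0.0, 1.0, 1.0])   # component 2 uses features 2-3
M3 = np.diag([1.0, 0.0, 1.0, 0.0])   # overlaps both
print(disjointness_penalty([M1, M2]))      # 0.0: fully disjoint
print(disjointness_penalty([M1, M2, M3]))  # 4.0: overlap is penalized
```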
[Figure 2: two grids of 30x30 heat maps. In each grid, the top row shows the five "true metrics" and the bottom row the five "recovered metrics". Panel (a): disjoint ground-truth metrics. Panel (b): overlapping ground-truth metrics.]

Figure 2: On synthetic datasets, SCA successfully identifies the sparse structures and (non)overlapping patterns of ground-truth metrics. See texts for details. Best viewed in color.
3 Experimental results
We validate the effectiveness of SCA in modeling similarity relationships on three tasks. In section 3.1, we apply SCA to synthetic datasets where the ground-truth is available to confirm SCA's ability in identifying correctly underlying parameters. In section 3.2, we apply SCA to a multiway classification task to recognize images of handwritten digits, where similarity is equated to having the same class label. SCA attains superior classification accuracy to state-of-the-art metric learning algorithms. In section 3.3, we apply SCA to a link prediction problem for a network of scientific articles. On this task, SCA outperforms competing methods significantly, too.

Our baseline algorithms for modeling similarity are information-theoretic metric learning (ITML) [5] and large margin nearest neighbor (LMNN) [18]. Both methods are discriminative approaches where a metric is optimized to reduce the distances between data points from the same label class (or similar data instances) and increase the distances between data points from different classes (or dissimilar data instances). When possible, we also contrast to multiple metric LMNN (MM-LMNN) [18], a variant of LMNN where multiple metrics are learned from data.
3.1 Synthetic data
Data We generate a synthetic dataset according to the graphical model in Fig. 1. Specifically, our feature dimensionality is D = 30 and the number of latent components is K = 5. For each component k, the corresponding metric \(M_k\) is a \(D \times D\) sparse positive semidefinite matrix where only elements in a \(6 \times 6\) matrix block on the diagonal are nonzero. Moreover, for different k, these block matrices do not overlap in row and column indices. In short, these metrics mimic the setup where each component focuses on its own 1/K-th of the total features, disjoint from the others. The first row of Fig. 2(a) illustrates these 5 matrices, where the black background color indicates zero elements. The values of nonzero elements are randomly generated as long as they maintain the positive semidefiniteness of the metrics. We set the bias terms \(b_k\) to zeroes for all components. We sample N = 500 data points randomly from \(\mathbb{R}^D\). We select a random pair and compute their similarity according to eq. (6) and threshold at 0.5 to yield a binary label \(s \in \{0, 1\}\). We select randomly 74850 pairs for training, 24950 for development, and 24950 for testing.
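The synthetic ground truth described above can be generated along the following lines. This is our sketch of the stated setup, with arbitrary random values for the nonzero blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, block = 30, 5, 6

def random_block_metric(start, size):
    # A D x D PSD metric supported on rows/cols [start, start+size):
    # building B @ B.T on the block makes it positive semidefinite.
    B = rng.normal(size=(size, size))
    M = np.zeros((D, D))
    M[start:start + size, start:start + size] = B @ B.T
    return M

metrics = [random_block_metric(k * block, block) for k in range(K)]
# Each metric touches its own 6 features; together the K blocks tile all 30.
for k, M in enumerate(metrics):
    used = np.flatnonzero(np.diag(M) > 0)
    print(k, int(used.min()), int(used.max()))
```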
Method We use the OR-model eq. (3) to combine latent components. We evaluate the results of
SCA on two aspects: how well we can recover the ground-truth metrics (and biases) and how well
we can use the parameters to predict similarities on the test set.
Results The second row of Fig. 2(a) contrasts the learned metrics to the ground-truth (the first row).
Clearly, these two sets of metrics have almost identical shapes and sparse structures. Note that for
this experiment, we did not use the disjoint regularizer (described in section 2.3) to promote sparsity
and disjointness in the learned metrics. Yet, the SCA model is still able to identify those structures.
For the biases, SCA identifies them as being close to zero (details are omitted for brevity).
Table 1: Similarity prediction accuracies and standard errors (%) on the synthetic dataset

             BASELINES                                SCA
  ITML      LMNN      K=1       K=3       K=5       K=7       K=10      K=20
  72.7±0.0  71.3±0.2  72.8±0.0  82.1±0.1  91.5±0.1  91.7±0.1  91.8±0.1  90.2±0.4
Table 2: Misclassification rates (%) on the MNIST recognition task

               BASELINES                             SCA
  D     EUC.   ITML    LMNN   MM-LMNN   K=1         K=5         K=10
  25    21.6   15.1    20.6   20.2      17.7±0.9    16.0±1.5    14.5±0.6
  50    18.7   13.35   16.5   13.6      13.8±0.3    12.0±1.1    11.4±0.6
  100   18.1   11.85   13.4   9.9       12.1±0.1    10.8±0.6    11.1±0.3
Table 1 contrasts the prediction accuracies by SCA to competing methods. Note that ITML, LMNN
and SCA with K = 1 perform similarly. However, when the number of latent components increases,
SCA outperforms other approaches by a large margin. Also note that when the number of latent
components exceeds the ground-truth K = 5, SCA reaches a plateau until overfitting.
In real-world data, "true metrics" may overlap; that is, it is possible that different components of similarity rely on overlapping sets of features. To examine SCA's effectiveness in this scenario, we create another synthetic dataset where the true metrics heavily overlap, illustrated in the first row of Fig. 2(b). Nonetheless, SCA is able to identify the metrics correctly, as seen in the second row.
3.2 Multiway classification
For this task, we use the MNIST dataset, which consists of 10 classes of hand-written digit images.
We use PCA to reduce the original dimension from 784 to D = 25, 50 and 100, respectively. We
use 4200 examples for training, 1800 for development and 2000 for testing.
The data is in the format of (xn , yn ) where yn is the class label. We convert them into the format
(xm , xn , smn ) that SCA expects. Specifically, for every training data point, we select its 15 nearest
neighbors among samples in the same class and formulate 15 similar relationships. For dissimilar
relationships, we select its 80 nearest neighbors among samples from the rest classes. For testing,
the label y of x is determined by
\[
y = \arg\max_c s_c = \arg\max_c \sum_{x' \in B_c(x)} P(s = 1 \mid x, x') \tag{10}
\]
where sc is the similarity score to the c-th class, computed as the sum of 5 largest similarity values
Bc to samples in that class. In Table 2, we show classification error rates for different values of D.
For K > 1, SCA clearly outperforms single-metric based baselines. In addition, SCA performs well
compared to MM - LMNN, achieving far better accuracy for small D.
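The decision rule in Eq. (10) is straightforward to implement once the pairwise probabilities P(s = 1 | x, x′) are available. The sketch below assumes those probabilities have already been computed by the trained model; the function name and dictionary-based interface are hypothetical, not part of SCA's actual implementation:

```python
import numpy as np

def classify_by_similarity(sim_probs_per_class, top_k=5):
    """Pick the class whose top-k pairwise similarity probabilities sum highest.

    sim_probs_per_class: dict mapping class label c -> array of P(s=1 | x, x')
    for the candidate neighbors x' in class c (hypothetical interface).
    """
    scores = {}
    for c, probs in sim_probs_per_class.items():
        top = np.sort(probs)[-top_k:]  # the k largest similarity values, i.e. B_c(x)
        scores[c] = top.sum()
    return max(scores, key=scores.get)

# toy example: class 1 has several strong matches, class 0 only weak ones
probs = {0: np.array([0.2, 0.1, 0.3, 0.2, 0.1, 0.05]),
         1: np.array([0.9, 0.8, 0.1, 0.7, 0.6, 0.5])}
print(classify_by_similarity(probs))  # -> 1
```

Summing only the few strongest similarities, rather than all of them, keeps the score robust to the many near-zero probabilities a point has with most samples of any class.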
3.3
Link prediction
We evaluate SCA on the task of link prediction in a "social" network of scientific articles. We aim to
demonstrate SCA's power to model similarity/dissimilarity in "multiplex" real-world network data.
In particular, we are interested in not only link prediction accuracies, but also the insights about data
that we gain from analyzing the identified latent components.
Setup We use the NIPS 0-12 dataset [1] to construct the aforementioned network. The dataset
contains papers from the NIPS conferences between 1987 and 1999. The papers are organized into
9 sections (topics) (cf. Suppl. Material). We sample randomly 80 papers per section and use them
to construct the network. Each paper is a vertex and two papers are connected with an edge and
deemed as similar if both of them belong to the same section.
We experiment with three representations for the papers: (1) Bag-of-words (BoW) uses normalized occurrences (frequencies) of words in the documents. As a preprocessing step, we remove words
that appear fewer than 75 times or more than 240 times; such words are either too specialized (and thus generalize poorly) or merely functional. After the removal, we obtain 1067 words. (2)
Topic (ToP) uses the documents' topic vectors (mixture weights over topics) obtained by fitting the corpus
Table 3: Link prediction accuracies and their standard errors (%) on a network of scientific papers

Feature   SVM        ITML       LMNN       SCA-DIAG K=1  SCA-DIAG K*  SCA K=1    SCA K*
BoW       73.3±0.0   -          -          64.8±0.1      87.0±1.2     -          -
ToW       75.3±0.0   -          -          67.0±0.0      88.1±1.4     -          -
ToP       71.2±0.0   81.1±0.1   80.7±0.1   62.6±0.0      81.0±0.8     81.0±0.0   87.6±1.0
to a 50-topic LDA [4]. (3) Topic-words (ToW) is essentially BoW except that we retain only 1036
frequent words used by the topics of the LDA model (top 40 words per topic).
Methods We compare the proposed SCA extensively to several competing methods for link prediction. For BoW- and ToW-represented data, we compare SCA with diagonal metrics (SCA-DIAG,
cf. Section 2.3) to support vector machines (SVM) and logistic regression (LOGIT), to avoid the high
computational costs associated with learning high-dimensional matrices (the feature dimensionality
D ≈ 1000). To apply SVM/LOGIT, we treat link prediction as a binary classification problem
where the input is the absolute difference in feature values between the two data points.
For the 50-dimensional ToP-represented data, we compare SCA and SCA-DIAG to SVM/LOGIT,
information-theoretic metric learning (ITML), and large-margin nearest neighbor (LMNN).
Note that while LMNN was originally designed for nearest-neighbor based classification, it can be
adapted to use similarity information to learn a global metric to compute the distance between any
pair of data points. We learn such a metric and threshold on the distance to render a decision on
whether two data points are similar or not (i.e., whether there is a link between them). On the other
hand, multiple-metric LMNN, while often having better classification performance, cannot be used
for similarity and link prediction as it does not provide a principled way of computing distances
between two arbitrary data points when there are multiple (local) metrics.
Link or not? In Table 3, we report link prediction accuracies, averaged over several runs
of randomly generated 70/30 splits of the data. SVM and LOGIT perform nearly identically, so we
report only SVM. For both SCA and SCA-DIAG, we report results when a single component is used,
as well as when the optimal number of components is used (under columns K*).
Both SCA-DIAG and SCA outperform the other methods by a significant margin, especially when the
number of latent components is greater than 1 (K* ranges from 3 to 13, depending on the method
and the feature type). The only exception is SCA-DIAG with one component (K = 1), which is an
overly restrictive model as the diagonal metrics constrain features to be combined additively. This
restriction is overcome by using a larger number of components.
Edge component analysis Why does learning latent components in SCA achieve superior link prediction accuracies? The (noisy-)OR model used by SCA naturally favors "positive"
opinions: a pair of samples is regarded as similar as long as at least one latent component strongly believes so. This implies that a latent component can be tuned to a specific group of
samples if those samples rely on common feature characteristics to be similar.
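The (noisy-)OR combination just described can be sketched in a couple of lines. This shows only the combination rule; the metrics that produce each local probability p_k are learned by SCA and are not shown here:

```python
import numpy as np

def noisy_or_similarity(local_probs):
    """Combine per-component local similarity probabilities with a noisy-OR gate.

    A pair is judged similar as long as at least one latent component strongly
    believes so: P(s = 1) = 1 - prod_k (1 - p_k).
    """
    local_probs = np.asarray(local_probs, dtype=float)
    return 1.0 - np.prod(1.0 - local_probs)

# one confident component dominates many indifferent ones
print(noisy_or_similarity([0.05, 0.1, 0.95]))  # close to 1
```

Because a single p_k near 1 pushes the product of (1 − p_k) terms toward zero, a component can "vote up" a pair regardless of what the other components think, which is exactly the favoring of positive opinions described above.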
Fig. 3(a) confirms our intuition. The plot displays in relative strength (darker being stronger)
how much each latent component believes a pair of articles from the same section should be similar.
Concretely, after fitting a 9-component SCA (from documents in ToP features), we consider edges
connecting articles in the same section and compute the average local similarity values assigned by
each component. We observe two interesting sparse patterns: for each section, there is a dominant
latent component that strongly supports the fact that the articles from that section should be similar
(e.g., for section 1, the dominant one is the 9-th component). Moreover, for each latent component,
it often strongly "voices up" for one section; the exception is the second component, which seems
to support both sections 3 and 4. Nonetheless, the general picture is that each section has a signature
in terms of how similarity values are distributed across latent components.
This notion is further illustrated, in greater detail, in Fig. 3(b). While Fig. 3(a) depicts the averaged
signature for each section, the scatterplot displays 2D embeddings, computed with the t-SNE algorithm, of each individual edge's signature, i.e., the 9-dimensional similarity values inferred with the 9
latent components. The embeddings are very well organized into 9 clusters, colored by section ID.
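The per-section signatures of Fig. 3(a) are simple averages of the per-edge component-wise similarity values. A small sketch, with hypothetical function and argument names:

```python
import numpy as np

def section_signatures(edge_components, edge_sections, n_sections, n_components):
    """Average the per-component similarity values of edges, grouped by section.

    edge_components: (n_edges, n_components) local similarity values inferred by
    each latent component for each within-section edge (hypothetical input).
    edge_sections: (n_edges,) section id in [0, n_sections) for each edge.
    Returns an (n_sections, n_components) matrix like the one in Fig. 3(a).
    """
    sig = np.zeros((n_sections, n_components))
    counts = np.zeros(n_sections)
    for comps, sec in zip(edge_components, edge_sections):
        sig[sec] += comps
        counts[sec] += 1
    return sig / counts[:, None]

edges_vals = np.array([[0.9, 0.1], [0.7, 0.1], [0.2, 0.8]])
secs = np.array([0, 0, 1])
print(section_signatures(edges_vals, secs, 2, 2))
# section 0 is dominated by component 0, section 1 by component 1
```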
(a) Averaged component-wise similarity values of edges within each section
(b) Embedding of links, represented with component-wise similarity values
(c) Embedding of network nodes (documents), represented in LDA's topics
Figure 3: Edge component analysis. Representing network links with local similarity values reveals interesting structure, such as a nearly one-to-one correspondence between latent components and sections, as well as
clear clusters. However, representing articles in LDA's topics does not reveal useful clustering structure from which
links can be inferred. See text for details. Best viewed in color.
In contrast, embedding documents using their topic representations does not reveal clustering structure from which network links can be inferred. This is shown in Fig. 3(c), where each dot
themselves do not reveal intrinsic (network) structures, latent components are able to achieve so by
applying highly-specialized metrics to measure local similarities and yield characteristic signatures.
We also study whether the lack of an edge between a pair of dissimilar documents from
different sections can give rise to characteristic signatures from the latent components. In summary,
we do not observe those telltale signatures for those pairs. Detailed results are in the Suppl. Material.
4
Related Work
Our model learns multiple metrics, one for each latent component. However, the similarity (or
associated dissimilarity) from our model is definitely non-metric due to the complex combination.
This stands in stark contrast to most metric learning algorithms [19, 8, 7, 18, 5, 11, 13, 17, 9].
[12] gives an information-theoretic definition of (non-metric) similarity, applicable whenever there is a probabilistic model for the data. Our approach in SCA focuses on the relationships between data points rather than on the
data points themselves. [16] proposes visualization techniques for non-metric similarity data.
Our work is reminiscent of probabilistic modeling of overlapping communities in social networks,
such as the mixed membership stochastic blockmodels [3]. The key difference is that those works
model vertices with a mixture of latent components (communities) where we model the interactions
between vertices with a mixture of latent components. [2] studies a social network whose edge
set is the union of multiple edge sets in hidden similarity spaces. Our work explicitly models the
probabilistic process of combining latent components with a (noisy-)OR gate.
5
Conclusion
We propose Similarity Component Analysis (SCA) for probabilistic modeling of similarity relationships for pairwise data instances. The key ingredient of SCA is to model similarity as a complex
combination of multiple latent components, each giving rise to a local similarity value. SCA attains
significantly better accuracies than existing methods on both classification and link prediction tasks.
Acknowledgements We thank reviewers for extensive discussion and references on the topics of similarity and
learning similarity. We plan to include them, as well as other suggested experiments, in a longer version
of this paper. This research is supported by a USC Annenberg Graduate Fellowship (S.C.) and the IARPA via
DoD/ARL contract # W911NF-12-C-0012. The U.S. Government is authorized to reproduce and distribute
reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the
official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.
References
[1] NIPS0-12 dataset. http://www.stats.ox.ac.uk/~teh/data.html.
[2] I. Abraham, S. Chechik, D. Kempe, and A. Slivkins. Low-distortion Inference of Latent Similarities from a Multiplex Social Network. CoRR, abs/1202.0922, 2012.
[3] E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing. Mixed Membership Stochastic
Blockmodels. Journal of Machine Learning Research, 9:1981-2014, June 2008.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. Journal of Machine
Learning Research, 3:993-1022, 2003.
[5] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic Metric Learning.
In ICML, 2007.
[6] S. E. Fienberg, M. M. Meyer, and S. S. Wasserman. Statistical Analysis of Multiple Sociometric Relations. Journal of the American Statistical Association, 80(389):51-67, March 1985.
[7] A. Globerson and S. Roweis. Metric Learning by Collapsing Classes. In NIPS, 2005.
[8] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood Components
Analysis. In NIPS, 2004.
[9] S. Hauberg, O. Freifeld, and M. Black. A Geometric take on Metric Learning. In NIPS, 2012.
[10] T. S. Jaakkola and M. I. Jordan. Variational Probabilistic Inference and the QMR-DT Network.
Journal of Artificial Intelligence Research, 10(1):291-322, May 1999.
[11] P. Jain, B. Kulis, I. Dhillon, and K. Grauman. Online Metric Learning and Fast Similarity
Search. In NIPS, 2008.
[12] D. Lin. An Information-Theoretic Definition of Similarity. In ICML, 1998.
[13] S. Parameswaran and K. Weinberger. Large Margin Multi-Task Metric Learning. In NIPS,
2010.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[15] M. Szell, R. Lambiotte, and S. Thurner. Multirelational Organization of Large-scale Social
Networks in an Online World. Proceedings of the National Academy of Sciences, 2010.
[16] L. van der Maaten and G. Hinton. Visualizing Non-Metric Similarities in Multiple Maps.
Machine Learning, 33:33-55, 2012.
[17] J. Wang, A. Woznica, and A. Kalousis. Parametric Local Metric Learning for Nearest Neighbor
Classification. In NIPS, 2012.
[18] K. Q. Weinberger and L. K. Saul. Distance Metric Learning for Large Margin Nearest Neighbor Classification. Journal of Machine Learning Research, 10:207-244, 2009.
[19] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance Metric Learning, with Application
to Clustering with Side-information. In NIPS, 2002.
A message-passing algorithm
for multi-agent trajectory planning
José Bento*
[email protected]
Nate Derbinsky
[email protected]
Javier Alonso-Mora
[email protected]
Jonathan Yedidia
[email protected]
Abstract
We describe a novel approach for computing collision-free global trajectories for
p agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM). Compared with
existing methods, our approach is naturally parallelizable and allows for incorporating different cost functionals with only minor adjustments. We apply our
method to classical challenging instances and observe that its computational requirements scale well with p for several cost functionals. We also show that a
specialization of our algorithm can be used for local motion planning by solving
the problem of joint optimization in velocity space.
1
Introduction
Robot navigation relies on at least three sub-tasks: localization, mapping, and motion planning. The
latter can be described as an optimization problem: compute the lowest-cost path, or trajectory,
between an initial and final configuration. This paper focuses on trajectory planning for multiple
agents, an important problem in robotics [1, 2], computer animation, and crowd simulation [3].
Centralized planning for multiple agents is PSPACE hard [4, 5]. To contend with this complexity,
traditional multi-agent planning prioritizes agents and computes their trajectories sequentially [6],
leading to suboptimal solutions. By contrast, our method plans for all agents simultaneously. Trajectory planning is also simplified if agents are non-distinct and can be dynamically assigned to a set
of goal positions [1]. We consider the harder problem where robots have a unique identity and their
goal positions are statically pre-specified. Both mixed-integer quadratic programming (MIQP) [7]
and (more efficient, although local) sequential convex programming [8] approaches have been applied to the problem of computing collision-free trajectories for multiple agents with pre-specified
goal positions; however, due to the non-convexity of the problem, these approaches, especially the
former, do not scale well with the number of agents. Alternatively, trajectories may be found by
sampling in their joint configuration space [9]. This approach is probabilistic and, alone, only gives
asymptotic guarantees. See Appendix A for further comments on discrete search methods.
Due to the complexity of planning collision-free trajectories, real-time robot navigation is commonly decoupled into a global planner and a fast local planner that performs collision-avoidance.
Many single-agent reactive collision-avoidance algorithms are based either on potential fields [10],
which typically ignore the velocity of other agents, or "velocity obstacles" [11], which provide
improved performance in dynamic environments by formulating the optimization in velocity space
instead of Cartesian space. Building on an extension of the velocity-obstacles approach, recent work
on centralized collision avoidance [12] computes collision-free local motions for all agents whilst
maximizing a joint utility using either a computationally expensive MIQP or an efficient, though
local, QP. While not the main focus of this paper, we show that a specialization of our approach
*This author would like to thank Emily Hupf and Noa Ghersin for their support while writing this paper.
to global-trajectory optimization also applies for local-trajectory optimization, and our numerical
results demonstrate improvements in both efficiency and scaling performance.
In this paper we formalize the global trajectory planning task as follows. Given p agents of different
radii {r_i}_{i=1}^p with desired initial and final positions, {x_i(0)}_{i=1}^p and {x_i(T)}_{i=1}^p, along with
a cost functional over trajectories, compute collision-free trajectories for all agents that minimize
the cost functional. That is, find a set of intermediate points {x_i(t)}_{i=1}^p, t ∈ (0, T), that satisfies the
"hard" collision-free constraints ‖x_i(t) − x_j(t)‖ > r_i + r_j for all i ≠ j and all t, and that, insofar as
possible, minimizes the cost functional.
The method we propose searches for a solution within the space of piecewise-linear trajectories,
wherein the trajectory of an agent is completely specified by a set of positions at a fixed set of time
instants {t_s}_{s=0}^η. We call these time instants break-points; they are the same for all agents, which
greatly simplifies the mathematics of our method. All other intermediate points of the trajectories
are computed by assuming that each agent moves with constant velocity between break-points: if
t1 and t2 > t1 are consecutive break-points, then x_i(t) = ((t2 − t) x_i(t1) + (t − t1) x_i(t2)) / (t2 − t1) for
t ∈ [t1, t2]. Along with the set of initial and final configurations, the number of interior break-points
(η − 1) is an input to our method, with a corresponding tradeoff: increasing η yields trajectories that
are more flexible and smooth, with possibly higher quality; but increasing η enlarges the problem,
leading to potentially increased computation.
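The constant-velocity interpolation between break-points can be written directly from the formula above. A minimal sketch (the function and variable names are ours, not from the paper's implementation):

```python
import numpy as np

def interp_trajectory(breakpoints, times, t):
    """Evaluate a piecewise-linear trajectory at absolute time t.

    breakpoints: (eta+1, d) agent positions x_i(t_s) at the break-points.
    times: (eta+1,) increasing absolute times t_s shared by all agents.
    Between consecutive break-points the agent moves at constant velocity:
    x(t) = ((t2 - t) * x(t1) + (t - t1) * x(t2)) / (t2 - t1).
    """
    s = np.searchsorted(times, t, side="right") - 1
    s = min(max(s, 0), len(times) - 2)  # clamp to a valid segment
    t1, t2 = times[s], times[s + 1]
    return ((t2 - t) * breakpoints[s] + (t - t1) * breakpoints[s + 1]) / (t2 - t1)

pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0]])
ts = np.array([0.0, 1.0, 2.0])
print(interp_trajectory(pts, ts, 0.5))  # midway along the first segment -> [1. 0.]
```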
The main contributions of this paper are as follows:
i) We formulate the global trajectory planning task as a decomposable optimization problem.
We show how to solve the resulting sub-problems exactly and efficiently, despite their nonconvexity, and how to coordinate their solutions using message-passing. Our method, based on
the "three-weight" version of ADMM [13], is easily parallelized, does not require parameter
tuning, and we present empirical evidence of good scalability with p.
ii) Within our decomposable framework, we describe different sub-problems, called minimizers,
each ensuring the trajectories satisfy a separate criterion. Our method is flexible and can consider different combinations of minimizers. A particularly crucial minimizer ensures there are
no inter-agent collisions, but we also derive other minimizers that allow for finding trajectories
with minimal total energy, avoiding static obstacles, or imposing dynamic constraints, such as
maximum/minimum agent velocity.
iii) We show that our method can specialize to perform local planning by solving the problem of
joint optimization in velocity space [12].
Our work is among the few examples where the success of applying ADMM to find approximate
solutions to a large non-convex problems can be judged with the naked eye, by the gracefulness
of the trajectories found. This paper also reinforces the claim in [13] that small, yet important,
modifications to ADMM can bring an order of magnitude increase in speed. We emphasize the
importance of these modifications in our numerical experiments, where we compare the performance
of our method using the three-weight algorithm (TWA) versus that of standard ADMM.
The rest of the paper is organized as follows. Section 2 provides background on ADMM and the
TWA. Section 3 formulates the global-trajectory-planning task as an optimization problem and describes the separate blocks necessary to solve it (the mathematical details of solving these subproblems are left to appendices). Section 4 evaluates the performance of our solution: its scalability
with p, sensitivity to initial conditions, and the effect of different cost functionals. Section 5 explains
how to implement a velocity-obstacle method using our method and compares its performance with
prior work. Finally, Section 6 draws conclusions and suggests directions for future work.
2
Minimizers in the TWA
In this section we provide a short description of the TWA [13], and, in particular, the role of the
minimizer building blocks that it needs to solve a particular optimization problem. Section B of the
supplementary material includes a full description of the TWA.
As a small illustrative example of how the TWA is used to solve optimization problems, suppose we
want to solve min_{x ∈ R³} f(x) = min_{x1,x2,x3} f1(x1, x3) + f2(x1, x2, x3) + f3(x3), where f_i(·) ∈
R ∪ {+∞}. The functions can represent soft costs, for example f3(x3) = (x3 − 1)², or hard equality
or inequality constraints, such as f1(x1, x3) = J(x1 ≥ x3), where we use the notation J(·) = 0
if (·) is true and +∞ if (·) is false.
The TWA solves this optimization problem iteratively by passing messages on a bipartite graph, in
the form of a Forney factor graph [14]: one minimizer-node per function f_b, one equality-node per
variable x_j, and an edge (b, j), connecting b and j, if f_b depends on x_j (see Figure 1, left).
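The graph construction for this toy objective amounts to listing, for each function, the variables it depends on; an edge then exists for each such dependency. A tiny sketch (the dictionary-based representation is ours, not from [13]):

```python
# Bipartite Forney-style factor graph for the toy objective
# f(x) = f1(x1, x3) + f2(x1, x2, x3) + f3(x3):
# one minimizer-node per function, one equality-node per variable,
# and an edge (b, j) whenever f_b depends on x_j.
dependencies = {"f1": ["x1", "x3"],
                "f2": ["x1", "x2", "x3"],
                "f3": ["x3"]}

edges = [(b, j) for b, vars_ in dependencies.items() for j in vars_]
print(len(edges))  # 6 edges; messages flow in both directions on each
```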
Figure 1: Left: bipartite graph, with one minimizer-node on the left for each function making up
the overall objective function, and one equality-node on the right per variable in the problem. Right:
The input and output variables for each minimizer block.
Apart from the first-iteration message values and two internal parameters¹ that we specify in Section 4,
the algorithm is fully specified by the behavior of the minimizers and the topology of the graph.
What does a minimizer do? The minimizer-node g1, for example, solves a small optimization problem over its local variables x1 and x3. Without going into the full detail presented in [13] and the
supplementary material, the estimates x_{1,1} and x_{1,3} are combined with running sums of the
differences between the minimizer estimates and the equality-node consensus estimates to obtain
messages m_{1,1} and m_{1,3} on each neighboring edge, which are sent to the neighboring equality-nodes
along with corresponding certainty weights ω̄_{1,1} and ω̄_{1,3}. All other minimizers act similarly.
The equality-nodes receive these local messages and weights and produce consensus estimates for
all variables by computing an average of the incoming messages, weighted by the incoming certainty
weights ω̄. From these consensus estimates, correcting messages are computed and communicated
back to the minimizers to help them reach consensus. A certainty weight for the correcting messages,
ρ, is also communicated back to the minimizers. For example, the minimizer g1 receives correcting
messages n_{1,1} and n_{1,3} with corresponding certainty weights ρ_{1,1} and ρ_{1,3} (see Figure 1, right).
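The consensus computation at an equality-node is a certainty-weighted average of the incoming estimates. A one-variable sketch (the full TWA also forms the correcting messages and their weights, which we omit):

```python
import numpy as np

def consensus(estimates, weights):
    """Weighted average an equality-node computes for one variable.

    estimates: local messages x_{b,j} from each neighboring minimizer.
    weights: corresponding certainty weights omega_{b,j} (0 = ignore).
    """
    w = np.asarray(weights, dtype=float)
    x = np.asarray(estimates, dtype=float)
    if w.sum() == 0:  # all neighbors uncertain: fall back to a plain average
        return x.mean()
    return (w * x).sum() / w.sum()

print(consensus([1.0, 3.0], [1.0, 1.0]))  # -> 2.0
print(consensus([1.0, 3.0], [1.0, 0.0]))  # zero-weight message ignored -> 1.0
```

The second call illustrates why 0-weights matter: an estimate coming from an inactive constraint simply drops out of the average instead of dragging the consensus toward it.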
When producing new local estimates, the b-th minimizer-node computes its local estimates {x_{b,j}}_j by
choosing a point that minimizes the sum of the local function f_b and the weighted squared distance from
the incoming messages (ties are broken randomly):

    {x_{b,j}}_j = g_b({n_{b,j}}_j, {ρ_{b,j}}_j) ∈ arg min_{{x_j}_j} [ f_b({x_j}_j) + (1/2) Σ_j ρ_{b,j} (x_j − n_{b,j})² ],    (1)
where {·}_j and the sum over j run over all equality-nodes connected to b. In the TWA, the certainty weights
{ω̄_{b,j}} that this minimizer outputs must be 0 (uncertain), ∞ (certain), or ρ₀, a fixed
value. The logic for setting weights from minimizer-nodes depends on the problem; as we shall
see, in trajectory planning problems we only use 0 or ρ₀ weights. If all minimizers
always output weights equal to ρ₀, the TWA reduces to standard ADMM; however, 0-weights allow
equality-nodes to ignore inactive constraints, traversing the search space much faster.
Finally, notice that all minimizers can operate simultaneously, and the same is true for the consensus
calculation performed by each equality-node. The algorithm is thus easy to parallelize.
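To make the consensus step concrete, here is a minimal Python sketch of how an equality-node could average incoming messages by their certainty weights. This is our own illustration, not the authors' implementation, and the plain-average fallback when all weights are zero is an assumed tie-break.

```python
def equality_node_consensus(messages, weights):
    """Certainty-weighted average of incoming minimizer messages.

    messages: list of (x, y) estimates, one per incident minimizer-node edge
    weights:  matching omega-weights (0 = uncertain/inactive, rho0 = standard)
    If every weight is 0 we fall back to a plain average (an assumed tie-break).
    """
    total = float(sum(weights))
    if total == 0.0:
        weights = [1.0] * len(messages)
        total = float(len(messages))
    x = sum(w * mx for w, (mx, _) in zip(weights, messages)) / total
    y = sum(w * my for w, (_, my) in zip(weights, messages)) / total
    return (x, y)
```

A zero-weight message, e.g. from a minimizer whose collision constraint is inactive, then has no influence on the consensus estimate.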
3 Global trajectory planning
We now turn to describing our decomposition of the global trajectory planning optimization problem in detail. We begin by defining the variables to be optimized in our optimization problem. In our formulation, we are not tracking the points of the trajectories by a continuous-time variable taking values in [0, T]. Rather, our variables are the positions {x_i(s)}_{i∈[p]}, where the trajectories are indexed by i and break-points are indexed by a discrete variable s taking values between 1 and η − 1. Note that {x_i(0)}_{i∈[p]} and {x_i(η)}_{i∈[p]} are the initial and final configuration, sets of fixed values, not variables to optimize.

¹ These are the step-size and ρ0 constants. See Section B in the supplementary material for more detail.
3.1 Formulation as unconstrained optimization without static obstacles
In terms of these variables, the non-collision constraints² are

‖(α x_i(s+1) + (1 − α) x_i(s)) − (α x_j(s+1) + (1 − α) x_j(s))‖ ≥ r_i + r_j,    (2)

for all i, j ∈ [p], s ∈ {0, ..., η − 1} and α ∈ [0, 1].
The parameter α is used to trace out the constant-velocity trajectories of agents i and j between break-points s + 1 and s. The parameter α has no units; it is a normalized time rather than an absolute time. If t1 is the absolute time of the break-point with integer index s, t2 is the absolute time of the break-point with integer index s + 1, and t parametrizes the trajectories in absolute time, then α = (t − t1)/(t2 − t1). Note that in the above formulation, absolute time does not appear, and any solution is simply a set of paths that, when travelled by each agent at constant velocity between break-points, leads to no collisions. When converting this solution into trajectories parameterized by absolute time, the break-points do not need to be chosen uniformly spaced in absolute time.
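Constraint (2) can be checked in closed form: the squared inter-agent distance is a quadratic in α, so its minimum over α ∈ [0, 1] is attained either at an endpoint or at the clamped unconstrained minimizer. The following 2D Python check is our own sketch, not code from the paper:

```python
def min_separation(xi_s, xi_s1, xj_s, xj_s1):
    """Minimum of ||(1-a) d0 + a d1|| over a in [0, 1], where d0 and d1 are the
    inter-agent difference vectors at break-points s and s+1 (2D points)."""
    d0 = (xi_s[0] - xj_s[0], xi_s[1] - xj_s[1])
    d1 = (xi_s1[0] - xj_s1[0], xi_s1[1] - xj_s1[1])
    vx, vy = d1[0] - d0[0], d1[1] - d0[1]          # d(alpha) = d0 + alpha * v
    vv = vx * vx + vy * vy
    a = 0.0 if vv == 0.0 else min(1.0, max(0.0, -(d0[0] * vx + d0[1] * vy) / vv))
    cx, cy = d0[0] + a * vx, d0[1] + a * vy        # closest difference vector
    return (cx * cx + cy * cy) ** 0.5

def satisfies_constraint_2(xi_s, xi_s1, xj_s, xj_s1, ri, rj):
    # True iff the pair (i, j) is collision-free on this segment for all alpha.
    return min_separation(xi_s, xi_s1, xj_s, xj_s1) >= ri + rj
```

Two agents crossing at the origin fail the check, while parallel agents whose paths stay one unit apart pass it (for radii summing to less than one).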
The constraints represented in (2) can be formally incorporated into an unconstrained optimization problem as follows. We search for a solution to the problem:

min_{{x_i(s)}_{i,s}}  f^cost({x_i(s)}_{i,s}) + Σ_{s=0}^{η−1} Σ_{i>j} f^coll_{r_i,r_j}(x_i(s), x_i(s+1), x_j(s), x_j(s+1)),    (3)

where {x_i(0)}_{i∈[p]} and {x_i(η)}_{i∈[p]} are constants rather than optimization variables, and where the function f^cost represents some cost to be minimized (e.g. the integrated kinetic energy or the maximum velocity over all the agents) and the function f^coll_{r,r'} is defined as,

f^coll_{r,r'}(x, x̄, x', x̄') = J(‖α(x̄ − x̄') + (1 − α)(x − x')‖ ≥ r + r' ∀ α ∈ [0, 1]).    (4)

In this section, x and x̄ represent the position of an arbitrary agent of radius r at two consecutive break-points and x' and x̄' the position of a second arbitrary agent of radius r' at the same break-points. In the expression above, J(.) takes the value 0 whenever its argument, a clause, is true and takes the value +∞ otherwise. Intuitively, we pay an infinite cost in f^coll_{r,r'} whenever there is a collision, and we pay zero otherwise.
In (3) we can set f^cost(.) to enforce a preference for trajectories satisfying specific properties. For example, we might prefer trajectories for which the total kinetic energy spent by the set of agents is small. In this case, defining f^cost_C(x, x̄) = C‖x̄ − x‖², we have,

f^cost({x_i(s)}_{i,s}) = (1/(pη)) Σ_{i=1}^{p} Σ_{s=0}^{η−1} f^cost_{C_{i,s}}(x_i(s), x_i(s+1)),    (5)

where the coefficients {C_{i,s}} can account for agents with different masses, different absolute-time intervals between break-points, or different preferences regarding which agents we want to be less active and which agents are allowed to move faster.
More simply, we might want to exclude trajectories in which agents move faster than a certain amount, but without distinguishing among all remaining trajectories. For this case we can write,

f^cost_C(x, x̄) = J(‖x̄ − x‖ ≤ C).    (6)

In this case, associating each break-point to a time instant, the coefficients {C_{i,s}} in expression (5) would represent different limits on the velocity of different agents between different sections of the trajectory. If we want to force all agents to have a minimum velocity we can simply reverse the inequality in (6).
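As an illustration (our own sketch; symbols follow (5)-(6), with the overbar argument written as xbar), the two per-segment cost terms can be coded directly:

```python
INF = float("inf")

def kinetic_cost(x, xbar, C=1.0):
    # f_C^cost(x, xbar) = C * ||xbar - x||^2, the per-segment kinetic-energy term in (5)
    return C * ((xbar[0] - x[0]) ** 2 + (xbar[1] - x[1]) ** 2)

def max_velocity_cost(x, xbar, C):
    # f_C^cost(x, xbar) = J(||xbar - x|| <= C): 0 if the segment is short enough,
    # +inf otherwise (reverse the comparison for a minimum-velocity requirement)
    step = ((xbar[0] - x[0]) ** 2 + (xbar[1] - x[1]) ** 2) ** 0.5
    return 0.0 if step <= C else INF
```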
² We replaced the strict inequality in the condition for non-collision by a simple inequality (≥) to avoid technicalities in formulating the optimization problem. Since the agents are round, this allows for a single point of contact between two agents and does not reduce practical relevance.
3.2 Formulation as unconstrained optimization with static obstacles
In many scenarios agents should also avoid collisions with static obstacles. Given two points in space, x_L and x_R, we can forbid all agents from crossing the line segment from x_L to x_R by adding the following term to the function (3): Σ_{i=1}^{p} Σ_{s=0}^{η−1} f^wall_{x_L,x_R,r_i}(x_i(s), x_i(s+1)). We recall that r_i is the radius of agent i and

f^wall_{x_L,x_R,r}(x, x̄) = J(‖(α x̄ + (1 − α) x) − (β x_R + (1 − β) x_L)‖ ≥ r for all α, β ∈ [0, 1]).    (7)

Notice that f^coll can be expressed using f^wall. In particular,

f^coll_{r,r'}(x, x̄, x', x̄') = f^wall_{0,0,r+r'}(x − x', x̄ − x̄').    (8)

We use this fact later to express the minimizer associated with agent-agent collisions using the minimizer associated with agent-obstacle collisions.
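Identity (8) is easy to exercise numerically. The sketch below is ours, not the paper's code: it treats the degenerate wall x_L = x_R = 0 as a point obstacle at the origin, for which the minimum over α has a closed form, and builds the agent-agent indicator from it exactly as in (8):

```python
def wall_clear_origin(x, xbar, r):
    """J-clause of f^wall with x_L = x_R = 0: does the segment from x to xbar
    keep distance >= r from the origin? (closed-form minimum over alpha)"""
    vx, vy = xbar[0] - x[0], xbar[1] - x[1]
    vv = vx * vx + vy * vy
    a = 0.0 if vv == 0.0 else min(1.0, max(0.0, -(x[0] * vx + x[1] * vy) / vv))
    cx, cy = x[0] + a * vx, x[1] + a * vy
    return (cx * cx + cy * cy) ** 0.5 >= r

def coll_clear(x, xbar, xp, xpbar, r, rp):
    # Identity (8): apply the origin-wall test to the difference trajectories.
    d0 = (x[0] - xp[0], x[1] - xp[1])
    d1 = (xbar[0] - xpbar[0], xbar[1] - xpbar[1])
    return wall_clear_origin(d0, d1, r + rp)
```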
When agents move in the plane, i.e. x_i(s) ∈ R² for all i ∈ [p] and s + 1 ∈ [η + 1], being able to avoid collisions with a general static line segment allows us to automatically avoid collisions with multiple static obstacles of arbitrary polygonal shape. Our numerical experiments only consider agents in the plane and so, in this paper, we only describe the minimizer block for wall collision in a 2D world. In higher dimensions, different obstacle primitives need to be considered.
3.3 Message-passing formulation
To solve (3) using the TWA, we need to specify the topology of the bipartite graph associated with the unconstrained formulation (3) and the operation performed by every minimizer, i.e. the ω-weight update logic and x-variable update equations. We postpone describing the choice of initial values and internal parameters until Section 4.
We first describe the bipartite graph. To be concrete, let us assume that the cost functional has the form of (5). The unconstrained formulation (3) then tells us that the global objective function is the sum of ηp(p + 1)/2 terms: ηp(p − 1)/2 functions f^coll and ηp functions f^cost_C. These functions involve a total of (η + 1)p variables, out of which only (η − 1)p are free (since the initial and final configurations are fixed). Correspondingly, the bipartite graph along which messages are passed has ηp(p + 1)/2 minimizer-nodes that connect to the (η + 1)p equality-nodes. In particular, the equality-node associated with the break-point variable x_i(s), η > s > 0, is connected to 2(p − 1) different g^coll minimizer-nodes and two different g^cost_C minimizer-nodes. If s = 0 or s = η the equality-node only connects to half as many g^coll nodes and g^cost_C nodes.
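These counts can be sanity-checked with a few lines (our illustration; p agents, η + 1 break-points per agent):

```python
def graph_counts(p, eta):
    """Node counts for the bipartite graph of (3) with the cost (5)."""
    coll_minimizers = eta * p * (p - 1) // 2   # one g_coll per agent pair per segment
    cost_minimizers = eta * p                  # one g_cost per agent per segment
    equality_nodes = (eta + 1) * p             # one per break-point variable x_i(s)
    free_variables = (eta - 1) * p             # s = 0 and s = eta are fixed
    return coll_minimizers + cost_minimizers, equality_nodes, free_variables
```

For p = 3 agents and η = 2 segments this gives 12 minimizer-nodes, 9 equality-nodes and 3 free break-point variables, matching ηp(p + 1)/2, (η + 1)p and (η − 1)p.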
We now describe the different minimizers. Every minimizer basically is a special case of (1).
3.3.1 Agent-agent collision minimizer
We start with the minimizer associated with the functions f^coll, which we denote by g^coll. This minimizer receives as parameters the radii, r and r', of the two agents whose collision it is avoiding. The minimizer takes as input a set of incoming n-messages, {n, n̄, n', n̄'}, and associated ρ-weights, {ρ, ρ̄, ρ', ρ̄'}, and outputs a set of updated x-variables according to expression (9). Messages n and n̄ come from the two equality-nodes associated with the positions of one of the agents at two consecutive break-points, and n' and n̄' from the corresponding equality-nodes for the other agent.

g^coll(n, n̄, n', n̄', ρ, ρ̄, ρ', ρ̄', r, r') = arg min_{{x, x̄, x', x̄'}} f^coll_{r,r'}(x, x̄, x', x̄') + (ρ/2)‖x − n‖² + (ρ̄/2)‖x̄ − n̄‖² + (ρ'/2)‖x' − n'‖² + (ρ̄'/2)‖x̄' − n̄'‖².    (9)
The update logic for the weights ω for this minimizer is simple. If the trajectory from n to n̄ for an agent of radius r does not collide with the trajectory from n' to n̄' for an agent of radius r', then set all the outgoing weights ω to zero. Otherwise set them all to ρ0. The outgoing zero weights indicate to the receiving equality-nodes in the bipartite graph that the collision constraint for this pair of agents is inactive and that the values it receives from this minimizer-node should be ignored when computing the consensus values of the receiving equality-nodes.

The solution to (9) is found using the agent-obstacle collision minimizer that we describe next.
3.3.2 Agent-obstacle collision minimizer
The minimizer for f^wall is denoted by g^wall. It is parameterized by the obstacle position {x_L, x_R} as well as the radius of the agent that needs to avoid the obstacle. It receives two n-messages, {n, n̄}, and corresponding weights {ρ, ρ̄}, from the equality-nodes associated with two consecutive positions of an agent that needs to avoid the obstacle. Its output, the x-variables, is defined as

g^wall(n, n̄, r, x_L, x_R, ρ, ρ̄) = arg min_{{x, x̄}} f^wall_{x_L,x_R,r}(x, x̄) + (ρ/2)‖x − n‖² + (ρ̄/2)‖x̄ − n̄‖².    (10)
When agents move in the plane (2D), this minimizer can be solved by reformulating the optimization in (10) as a mechanical problem involving a system of springs that we can solve exactly and
efficiently. This reduction is explained in the supplementary material in Section D and the solution
to the mechanical problem is explained in Section I.
The update logic for the ω-weights is similar to that of the g^coll minimizer. If an agent of radius r going from n to n̄ does not collide with the line segment from x_L to x_R, then set all outgoing weights to zero because the constraint is inactive; otherwise set all the outgoing weights to ρ0.

Notice that, from (8), it follows that the agent-agent minimizer g^coll can be expressed using g^wall. More concretely, as proved in the supplementary material, Section C,

g^coll(n, n̄, n', n̄', ρ, ρ̄, ρ', ρ̄', r, r') = M2 g^wall(M1 · {n, n̄, n', n̄', ρ, ρ̄, ρ', ρ̄', r, r'}),

for a constant rectangular matrix M1 and a matrix M2 that depends on {n, n̄, n', n̄', ρ, ρ̄, ρ', ρ̄'}.
3.3.3 Minimum energy and maximum (minimum) velocity minimizer
When f^cost can be decomposed as in (5), the minimizer associated with the functions f^cost is denoted by g^cost and receives as input two n-messages, {n, n̄}, and corresponding weights, {ρ, ρ̄}. The messages come from two equality-nodes associated with two consecutive positions of an agent. The minimizer is also parameterized by a cost factor c. It outputs a set of updated x-messages defined as

g^cost(n, n̄, ρ, ρ̄, c) = arg min_{{x, x̄}} f^cost_c(x, x̄) + (ρ/2)‖x − n‖² + (ρ̄/2)‖x̄ − n̄‖².    (11)
The update logic for the ω-weights of the minimum energy minimizer is very simple: always set all outgoing weights ω to ρ0. The update logic for the ω-weights of the maximum velocity minimizer is the following. If ‖n̄ − n‖ ≤ c, set all outgoing weights to zero. Otherwise, set them to ρ0. The update logic for the minimum velocity minimizer is similar. If ‖n̄ − n‖ ≥ c, set all the ω-weights to zero. Otherwise set them to ρ0.
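The three weight rules above fit in one small function (our own sketch; the step length stands in for ‖n̄ − n‖):

```python
def cost_minimizer_weight(kind, n, nbar, c, rho0):
    """Outgoing omega-weight shared by both edges of a g_cost minimizer.
    kind is one of 'energy', 'max_velocity', 'min_velocity'."""
    step = ((nbar[0] - n[0]) ** 2 + (nbar[1] - n[1]) ** 2) ** 0.5
    if kind == "energy":
        return rho0                        # the energy preference is always active
    if kind == "max_velocity":
        return 0.0 if step <= c else rho0  # inactive while the cap is respected
    if kind == "min_velocity":
        return 0.0 if step >= c else rho0  # inactive while the floor is respected
    raise ValueError(kind)
```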
The solution to the minimum energy, maximum velocity and minimum velocity minimizer is written
in the supplementary material in Sections E, F, and G respectively.
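For the kinetic-energy cost, the minimization in (11) is an unconstrained quadratic, so it has a closed form. The paper defers its solution to the supplementary material (Section E); the derivation below is our own and assumes ρ, ρ̄ > 0. Setting the gradient of C‖x̄ − x‖² + (ρ/2)‖x − n‖² + (ρ̄/2)‖x̄ − n̄‖² to zero gives a 2×2 linear system per coordinate:

```python
def g_cost_energy(n, nbar, rho, rhobar, C):
    """Closed-form minimizer of C||xbar - x||^2 + rho/2 ||x - n||^2
    + rhobar/2 ||xbar - nbar||^2, solved coordinate-wise by Cramer's rule.
    Stationarity: (2C + rho) x - 2C xbar = rho n,
                  -2C x + (2C + rhobar) xbar = rhobar nbar."""
    det = (2 * C + rho) * (2 * C + rhobar) - 4 * C * C   # > 0 when rho, rhobar > 0
    x, xbar = [], []
    for ni, nbi in zip(n, nbar):
        x.append(((2 * C + rhobar) * rho * ni + 2 * C * rhobar * nbi) / det)
        xbar.append((2 * C * rho * ni + (2 * C + rho) * rhobar * nbi) / det)
    return x, xbar
```

With C = 0 the messages pass through unchanged (x = n, x̄ = n̄); as C grows, x and x̄ are pulled together toward the ρ-weighted average of n and n̄.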
4 Numerical results
We now report on the performance of our algorithm (see Appendix J for an important comment on
the anytime properties of our algorithm). Note that the lack of open-source scalable algorithms for
global trajectory planning in the literature makes it difficult to benchmark our performance against
other methods. Also, in a paper it is difficult to appreciate the gracefulness of the discovered trajectory optimizations, so we include a video in the supplementary material that shows final optimized
trajectories as well as intermediate results as the algorithm progresses for a variety of additional
scenarios, including those with obstacles. All the tests described here are for agents in a two-dimensional plane. All tests but the last were performed using six cores of a 3.4GHz i7 CPU.
The different tests did not require any special tuning of parameters. In particular, the step-size in [13] (their ρ variable) is always 0.1. In order to quickly equilibrate the system to a reasonable set of variables and to wash out the importance of initial conditions, the default weight ρ0 was set equal to a small value (≈ 10⁻⁵) for the first 20 iterations and then set to 1 for all further iterations.
The first test considers scenario CONF1: p (even) agents of radius r, equally spaced around a circle of radius R, are each required to exchange position with the corresponding antipodal agent, r = (5/4)R sin(π/2(p − 4)). This is a classical difficult test scenario because the straight-line motion of all agents to their goal would result in them all colliding in the center of the circle. We compare the convergence time of the TWA with a similar version using standard ADMM to perform the optimizations. In this test, the algorithm's initial value for each variable in the problem was set
to the corresponding initial position of each agent. The objective is to minimize the total kinetic
energy (C in the energy minimizer is set to 1). Figure 2-left shows that the TWA scales better with
p than classic ADMM and typically gives an order of magnitude speed-up. Please see Appendix K
for a further comment on the scaling of the convergence time of ADMM and TWA with p.
[Figure 2 appears here: left — convergence time (sec) vs. number of agents p for η = 4, 6, 8 (ADMM vs. TWA); middle — histograms of objective value and convergence time; right — convergence time (sec) vs. number of cores for p = 40, 60, 80, 100 (up to 12 physical cores).]
Figure 2: Left: Convergence time using standard ADMM (dashed lines) and using TWA (solid lines). Middle: Distribution of total energy and time for convergence with random initial conditions (p = 20 and η = 5). Right: Convergence time using a different number of cores (η = 5).
The second test for CONF1 analyzes the sensitivity of the convergence time and objective value when the variables' values at the first iteration are chosen uniformly at random in the smallest space-time box that includes the initial and final configuration of the robots. Figure 2-middle shows that,
although there is some spread on the convergence time, our algorithm seems to reliably converge to
relatively similar-cost local minima (other experiments show that the objective value of these minima
is around 5 times smaller than that found when the algorithm is run using only the collision avoidance
minimizers without a kinetic energy cost term). As would be expected, the precise trajectories found
vary widely between different random runs.
Still for CONF1, and fixed initial conditions, we parallelize our method using several cores of
a 2.66GHz i7 processor and a very primitive scheduling/synchronization scheme. Although this
scheme does not fully exploit parallelization, Figure 2-right does show a speed-up as the number
of cores increases and the larger p is, the greater the speed-up. We stall when we reach the twelve
physical cores available and start using virtual cores.
Finally, Figure 3-left compares the convergence time to optimize the total energy with the time to simply find a feasible (i.e. collision-free) solution. The agents' initial and final configurations are randomly chosen in the plane (CONF2). Error bars indicate ± one standard deviation. Minimizing
the kinetic energy is orders of magnitude computationally more expensive than finding a feasible
solution, as is clear from the different magnitude of the left and right scale of Figure 3-left.
[Figure 3 appears here: left — convergence time (sec) vs. number of agents p for η = 4, 6, 8, with separate scales for minimum-energy and feasible-only runs; right — per-epoch convergence time for MIQP (pink) vs. TWA (light blue) as p grows.]
Figure 3: Left: Convergence time when minimizing energy (blue scale/dashed lines) and to simply
find a feasible solution (red scale/solid lines). Right: (For Section 5). Convergence-time distribution
for each epoch using our method (blue bars) and using the MIQP of [12] (red bars and star-values).
5 Local trajectory planning based on velocity obstacles
In this section we show how the joint optimization presented in [12], which is based on the concept of velocity obstacles [11] (VO), can also be solved via the message-passing TWA. In VO, given the current position {x_i(0)}_{i∈[p]} and radius {r_i} of all agents, a new velocity command is computed jointly for all agents, minimizing the distance to their preferred velocities {v_i^ref}_{i∈[p]}. This new velocity command must guarantee that the trajectories of all agents remain collision-free for at least a time horizon τ. New collision-free velocities are computed every ετ seconds, ε < 1, until all agents reach their final configuration. Following [12], and assuming an obstacle-free environment and first-order dynamics, the collision-free velocities are given by,

minimize_{{v_i}_{i∈[p]}} Σ_{i∈[p]} C_i ‖v_i − v_i^ref‖²  s.t.  ‖(x_i(0) + v_i t) − (x_j(0) + v_j t)‖ ≥ r_i + r_j  ∀ j > i ∈ [p], t ∈ [0, τ].
Since the velocities {v_i}_{i∈[p]} are related linearly to the final position of each object after τ seconds, {x_i(τ)}_{i∈[p]}, a simple change of variables allows us to reformulate the above problem as,

minimize_{{x_i}_{i∈[p]}} Σ_{i∈[p]} C'_i ‖x_i − x_i^ref‖²
s.t. ‖(1 − α)(x_i(0) − x_j(0)) + α(x_i − x_j)‖ ≥ r_i + r_j  ∀ j > i ∈ [p], α ∈ [0, 1],    (12)

where C'_i = C_i/τ², x_i^ref = x_i(0) + v_i^ref τ, and we have dropped the τ in x_i(τ). The above problem,
extended to account for collisions with the static line segments {x_Rk, x_Lk}_k, can be formulated in an unconstrained form using the functions f^cost, f^coll and f^wall. Namely,

min_{{x_i}_i} Σ_{i∈[p]} f^cost_{C'_i}(x_i, x_i^ref) + Σ_{i>j} f^coll_{r_i,r_j}(x_i(0), x_i, x_j(0), x_j) + Σ_{i∈[p]} Σ_k f^wall_{x_Rk,x_Lk,r_i}(x_i(0), x_i).    (13)
Note that {x_i(0)}_i and {x_i^ref}_i are constants, not variables being optimized. Given this formulation, the TWA can be used to solve the optimization. All corresponding minimizers are special cases of minimizers derived in the previous section for global trajectory planning (see Section H in the supplementary material for details). Figure 3-right shows the distribution of the time to solve (12) for CONF1. We compare the mixed integer quadratic programming (MIQP) approach from [12] with ours. Our method finds a local minimum of exactly (13), while [12] finds a global minimum of an approximation to (13). Specifically, [12] requires approximating the search domain by hyperplanes and an additional branch-and-bound algorithm, while ours does not. Both approaches use a mechanism for breaking the symmetry from CONF1 and avoiding deadlocks: theirs uses a preferential rotation direction for agents, while we use agents with slightly different C coefficients in their energy minimizers (C for the ith agent = 1 + 0.001 i). Both simulations were done on a single 2.66GHz core. The results show the order of magnitude is similar but, because our implementation is done in Java while [12] uses the Matlab-mex interface of CPLEX 11, the results are not exactly comparable.
6 Conclusion and future work
We have presented a novel algorithm for global and local planning of the trajectory of multiple
distinct agents, a problem known to be hard. The solution is based on solving a non-convex optimization problem using TWA, a modified ADMM. Its similarity to ADMM brings scalability and
easy parallelization. However, using TWA improves performance considerably. Our implementation of the algorithm in Java on a regular desktop computer, using a basic scheduler/synchronization
over its few cores, already scales to hundreds of agents and achieves real-time performance for local
planning.
The algorithm can flexibly account for obstacles and different cost functionals. For agents in the
plane, we derived explicit expressions that account for static obstacles, moving obstacles, and dynamic constraints on the velocity and energy. Future work should consider other restrictions on the
smoothness of the trajectory (e.g. acceleration constraints) and provide fast solvers to our minimizers for agents in 3D.
The message-passing nature of our algorithm hints that it might be possible to adapt our algorithm
to do planning in a decentralized fashion. For example, minimizers like g^coll could be solved by
message exchange between pairs of agents within a maximum communication radius. It is an open
problem to build a practical communication-synchronization scheme for such an approach.
References
[1] Javier Alonso-Mora, Andreas Breitenmoser, Martin Rufli, Roland Siegwart, and Paul Beardsley. Image and animation display with multiple mobile robots. 31(6):753–773, 2012.
[2] Peter R. Wurman, Raffaello D'Andrea, and Mick Mountz. Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Magazine, 29(1):9–19, 2008.
[3] Stephen J. Guy, Jatin Chhugani, Changkyu Kim, Nadathur Satish, Ming Lin, Dinesh Manocha, and Pradeep Dubey. ClearPath: highly parallel collision avoidance for multi-agent simulation. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 177–187, 2009.
[4] John H. Reif. Complexity of the mover's problem and generalizations. In IEEE Annual Symposium on Foundations of Computer Science, pages 421–427, 1979.
[5] John E. Hopcroft, Jacob T. Schwartz, and Micha Sharir. On the complexity of motion planning for multiple independent objects; PSPACE-hardness of the "warehouseman's problem". The International Journal of Robotics Research, 3(4):76–88, 1984.
[6] Maren Bennewitz, Wolfram Burgard, and Sebastian Thrun. Finding and optimizing solvable priority schemes for decoupled path planning techniques for teams of mobile robots. Robotics and Autonomous Systems, 41(2–3):89–99, 2002.
[7] Daniel Mellinger, Alex Kushleyev, and Vijay Kumar. Mixed-integer quadratic program trajectory generation for heterogeneous quadrotor teams. In IEEE International Conference on Robotics and Automation, pages 477–483, 2012.
[8] Federico Augugliaro, Angela P. Schoellig, and Raffaello D'Andrea. Generation of collision-free trajectories for a quadrocopter fleet: A sequential convex programming approach. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1917–1922, 2012.
[9] Steven M. LaValle and James J. Kuffner. Randomized kinodynamic planning. The International Journal of Robotics Research, 20(5):378–400, 2001.
[10] Oussama Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90–98, 1986.
[11] Paolo Fiorini and Zvi Shiller. Motion planning in dynamic environments using velocity obstacles. The International Journal of Robotics Research, 17(7):760–772, 1998.
[12] Javier Alonso-Mora, Martin Rufli, Roland Siegwart, and Paul Beardsley. Collision avoidance for multiple agents with joint utility maximization. In IEEE International Conference on Robotics and Automation, 2013.
[13] Nate Derbinsky, José Bento, Veit Elser, and Jonathan S. Yedidia. An improved three-weight message-passing algorithm. arXiv:1305.1961 [cs.AI], 2013.
[14] G. David Forney Jr. Codes on graphs: Normal realizations. IEEE Transactions on Information Theory, 47(2):520–548, 2001.
[15] Sertac Karaman and Emilio Frazzoli. Incremental sampling-based algorithms for optimal motion planning. arXiv preprint arXiv:1005.0416, 2010.
[16] R. Glowinski and A. Marrocco. Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Revue Française d'Automatique, Informatique et Recherche Opérationnelle, 9(2):41–76, 1975.
[17] Daniel Gabay and Bertrand Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40, 1976.
[18] Hugh Everett III. Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Operations Research, 11(3):399–417, 1963.
[19] Magnus R. Hestenes. Multiplier and gradient methods. Journal of Optimization Theory and Applications, 4(5):303–320, 1969.
[20] Magnus R. Hestenes. Multiplier and gradient methods. In L.A. Zadeh et al., editors, Computing Methods in Optimization Problems 2. Academic Press, New York, 1969.
[21] M.J.D. Powell. A method for nonlinear constraints in minimization problems. In R. Fletcher, editor, Optimization. Academic Press, London, 1969.
[22] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
| 5016 |@word middle:2 version:3 seems:1 open:2 simulation:3 decomposition:1 jacob:1 schoellig:1 solid:2 harder:1 reduction:1 initial:15 configuration:9 kinodynamic:1 daniel:2 ours:2 existing:1 current:1 com:4 yet:1 chu:1 must:2 written:1 john:2 numerical:4 shape:1 update:7 n0:16 alone:1 half:1 deadlock:1 une:1 plane:6 xk:1 desktop:1 short:1 core:10 wolfram:1 recherche:1 provides:1 node:27 preference:2 hyperplanes:1 mathematical:1 along:4 symposium:2 specialize:1 veit:1 warehouse:1 x0:7 inter:1 hardness:1 expected:1 andrea:2 automatique:1 planning:26 behavior:1 multi:3 antipodal:1 ming:1 decomposed:1 bertrand:1 automatically:1 cpu:1 solver:1 increasing:2 begin:1 xx:1 notation:1 elser:1 mass:1 lowest:1 what:1 minimizes:2 disneyresearch:4 whilst:1 finding:3 guarantee:2 certainty:5 every:3 act:1 tie:1 exactly:4 k2:3 schwartz:1 unit:1 appear:1 producing:1 t1:9 dropped:1 local:17 limit:1 despite:1 parallelize:2 path:3 might:3 dynamically:1 suggests:1 challenging:1 micha:1 unique:1 practical:2 revue:1 block:4 implement:1 postpone:1 x3:10 communicated:2 xr:9 powell:1 empirical:1 java:2 boyd:1 pre:2 regular:1 interior:1 judged:1 nb:2 twodimensional:1 applying:1 writing:1 scheduling:1 jbento:1 optimize:2 restriction:1 center:1 maximizing:1 primitive:2 flexibly:1 convex:4 emily:1 formulate:1 rectangular:1 decomposable:2 correcting:3 m2:2 avoidance:7 mora:3 classic:1 coordinate:1 autonomous:2 updated:2 suppose:1 magazine:1 programming:4 distinguishing:1 us:2 velocity:29 crossing:1 expensive:2 particularly:1 satisfying:1 element:1 trend:1 ci0:2 cooperative:1 role:1 steven:1 preprint:1 solved:3 ensures:1 connected:2 sharir:1 environment:3 convexity:1 complexity:4 broken:1 dynamic:5 depend:1 solving:5 segment:4 localization:1 bipartite:6 efficiency:1 f2:1 completely:1 eric:1 easily:1 joint:6 collide:2 siggraph:1 hopcroft:1 represented:1 distinct:2 fast:2 describe:6 informatique:1 london:1 tell:1 choosing:1 crowd:1 whose:1 supplementary:8 solve:9 widely:1 larger:1 otherwise:6 
enlarges:1 federico:1 g1:3 jointly:1 bento:2 final:11 propose:1 fr:5 neighboring:2 realization:1 description:2 scalability:3 convergence:16 requirement:1 optimum:1 produce:1 incremental:1 object:2 help:1 derive:1 spent:1 bth:1 op:1 minor:1 progress:1 solves:2 c:1 come:2 indicate:2 direction:4 radius:13 kb:1 material:8 virtual:1 explains:1 require:2 exchange:2 f1:2 generalization:1 wall:9 extension:1 around:2 considered:1 magnus:2 normal:1 fletcher:1 mapping:1 claim:1 vary:1 consecutive:5 smallest:1 xk2:1 achieves:1 erationelle:1 weighted:2 minimization:1 always:3 conf1:5 rather:3 modified:1 pn:2 avoid:7 parameters1:1 mobile:3 command:2 derived:2 focus:2 improvement:1 greatly:1 contrast:1 kim:1 hestenes:2 minimizers:17 typically:2 integrated:1 going:2 overall:1 among:2 flexible:2 arg:4 denoted:3 dual:1 plan:1 miqp:5 special:3 field:1 equal:2 f3:2 sampling:2 represents:1 prioritizes:1 future:3 parametrizes:1 t2:7 minimized:1 report:1 hint:1 few:2 franc:1 intelligent:1 randomly:2 simultaneously:2 mover:1 replaced:1 raffaello:2 connects:1 cplex:1 n1:4 lavalle:1 centralized:2 message:22 highly:1 navigation:2 pradeep:1 light:1 xb:1 xrk:1 edge:2 necessary:1 preferential:1 decoupled:2 traversing:1 indexed:2 reif:1 desired:1 circle:2 minimal:1 uncertain:1 siegwart:2 instance:1 increased:1 soft:1 obstacle:24 formulates:1 maximization:1 cost:27 deviation:1 hundred:2 burgard:1 satish:1 zvi:1 connect:1 kn:2 kxi:2 considerably:1 combined:1 twelve:1 sensitivity:2 forbid:1 international:7 randomized:1 hugh:1 probabilistic:1 receiving:2 jos:2 nk2:6 connecting:1 travelled:1 concrete:1 quickly:1 squared:1 eurographics:1 frazzoli:1 choose:1 possibly:1 guy:1 priority:1 leading:2 account:4 potential:1 exclude:1 de:2 star:1 sec:6 includes:2 coefficient:3 automation:2 satisfy:1 depends:2 vi:4 piece:1 performed:3 break:15 later:1 vehicle:1 red:2 start:2 parallel:1 maren:1 contribution:1 minimize:4 efficiently:2 yield:1 spaced:2 ckx:1 basically:1 wurman:1 trajectory:50 straight:1 
The Power of Asymmetry in Binary Hashing
Behnam Neyshabur
Payman Yadollahpour
Yury Makarychev
Toyota Technological Institute at Chicago
[btavakoli,pyadolla,yury]@ttic.edu
Ruslan Salakhutdinov
Departments of Statistics and Computer Science
University of Toronto
[email protected]
Nathan Srebro
Toyota Technological Institute at Chicago
and Technion, Haifa, Israel
[email protected]
Abstract
When approximating binary similarity using the Hamming distance between short binary hashes, we show that even if the similarity is symmetric, we can have shorter and more accurate hashes by using two distinct code maps, i.e. by approximating the similarity between x and x′ as the Hamming distance between f(x) and g(x′), for two distinct binary codes f, g, rather than as the Hamming distance between f(x) and f(x′).
1 Introduction
Encoding high-dimensional objects using short binary hashes can be useful for fast approximate
similarity computations and nearest neighbor searches. Calculating the hamming distance between
two short binary strings is an extremely cheap computational operation, and the communication cost
of sending such hash strings for lookup on a server (e.g. sending hashes of all features or patches in
an image taken on a mobile device) is low. Furthermore, it is also possible to quickly look up nearby
hash strings in populated hash tables. Indeed, it only takes a fraction of a second to retrieve a shortlist
of similar items from a corpus containing billions of data points, which is important in image, video,
audio, and document retrieval tasks [11, 9, 10, 13]. Moreover, compact binary codes are remarkably
storage efficient, and allow one to store massive datasets in memory. It is therefore desirable to find
short binary hashes that correspond well to some target notion of similarity. Pioneering work on
Locality Sensitive Hashing used random linear thresholds for obtaining bits of the hash [1]. Later
work suggested learning hash functions attuned to the distribution of the data [15, 11, 5, 7, 3].
More recent work focuses on learning hash functions so as to optimize agreement with the target
similarity measure on specific datasets [14, 8, 9, 6]. It is important to obtain accurate and short
hashes: the computational and communication costs scale linearly with the length of the hash, and,
more importantly, the memory cost of the hash table can scale exponentially with the length.
In all the above-mentioned approaches, the similarity S(x, x′) between two objects is approximated by
the Hamming distance between the outputs of the same hash function, i.e. between f(x) and f(x′),
for some f : X → {±1}^k. The emphasis here is that the same hash function is applied to both x and x′
(in methods like LSH multiple hashes might be used to boost accuracy, but the comparison is still
between outputs of the same function).
The only exception we are aware of is where a single mapping of objects to fractional vectors
f̃(x) ∈ [−1, 1]^k is used, its thresholding f(x) = sign(f̃(x)) ∈ {±1}^k is used in the database,
and the similarity between x and x′ is approximated using ⟨f(x), f̃(x′)⟩. This has become known
as "asymmetric hashing" [2, 4], but even with such a-symmetry, both mappings are based on the
same fractional mapping f̃(·). That is, the asymmetry is in that one side of the comparison gets
thresholded while the other is fractional, but not in the actual mapping.
In this paper, we propose using two distinct mappings f(x), g(x) ∈ {±1}^k and approximating the
similarity S(x, x′) by the Hamming distance between f(x) and g(x′). We refer to such hashing
schemes as "asymmetric". Our main result is that even if the target similarity function is symmetric and "well behaved" (e.g., even if it is based on Euclidean distances between objects), using
asymmetric binary hashes can be much more powerful, and allow better approximation of the target similarity with shorter code lengths. In particular, we show extreme examples of collections
of points in Euclidean space, where the neighborhood similarity S(x, x′) can be realized using an
asymmetric binary hash (based on a pair of distinct functions) of length O(r) bits, but where a symmetric hash (based on a single function) would require at least Ω(2^r) bits. Although actual data is
not as extreme, our experimental results on real data sets demonstrate significant benefits from using
asymmetric binary hashes.
Asymmetric hashes can be used in almost all places where symmetric hashes are typically used,
usually without any additional storage or computational cost. Consider the typical application of
storing hash vectors for all objects in a database, and then calculating similarities to queries by
computing the hash of the query and its hamming distance to the stored database hashes. Using
an asymmetric hash means using different hash functions for the database and for the query. This
neither increases the size of the database representation, nor the computational or communication
cost of populating the database or performing a query, as the exact same operations are required.
In fact, when hashing the entire database, asymmetric hashes provide even more opportunity for
improvement. We argue that using two different hash functions to encode database objects and
queries allows for much more flexibility in choosing the database hash. Unlike the query hash,
which has to be stored compactly and efficiently evaluated on queries as they appear, if the database
is fixed, an arbitrary mapping of database objects to bit strings may be used. We demonstrate that
this can indeed increase similarity accuracy while reducing the bit length required.
2 Minimum Code Lengths and the Power of Asymmetry
Let S : X × X → {±1} be a binary similarity function over a set of objects X, where we can
interpret S(x, x′) to mean that x and x′ are "similar" or "dissimilar", or to indicate whether they are
"neighbors". A symmetric binary coding of X is a mapping f : X → {±1}^k, where k is the bit-length of the code. We are interested in constructing codes such that the Hamming distance between
f(x) and f(x′) corresponds to the similarity S(x, x′). That is, for some threshold θ ∈ ℝ, S(x, x′) ≈
sign(⟨f(x), f(x′)⟩ − θ). Although discussing the Hamming distance, it is more convenient for us
to work with the inner product ⟨u, v⟩, which is equivalent to the Hamming distance d_h(u, v) since
⟨u, v⟩ = k − 2 d_h(u, v) for u, v ∈ {±1}^k.
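This identity is easy to verify numerically; the following snippet is our own illustration, not code from the paper:

```python
import numpy as np

# For u, v in {-1, +1}^k, <u, v> = k - 2 * d_h(u, v): each agreeing coordinate
# contributes +1 to the inner product, each disagreeing coordinate contributes -1.
rng = np.random.default_rng(0)
k = 16
u = rng.choice([-1, 1], size=k)
v = rng.choice([-1, 1], size=k)

d_h = int(np.sum(u != v))   # Hamming distance
inner = int(u @ v)          # inner product

assert inner == k - 2 * d_h
```

Because of this one-to-one correspondence, thresholding the inner product at θ is the same as thresholding the Hamming distance at (k − θ)/2.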
In this section, we will consider the problem of capturing a given similarity using an arbitrary binary
code. That is, we are given the entire similarity mapping S, e.g. as a matrix S ∈ {±1}^{n×n} over
a finite domain X = {x_1, ..., x_n} of n objects, with S_ij = S(x_i, x_j). We ask for an encoding
u_i = f(x_i) ∈ {±1}^k of each object x_i ∈ X, and a threshold θ, such that S_ij = sign(⟨u_i, u_j⟩ − θ),
or at least such that equality holds for as many pairs (i, j) as possible. It is important to emphasize
that the goal here is purely to approximate the given matrix S using a short binary code; there is no
out-of-sample generalization (yet).
We now ask: Can allowing an asymmetric coding enable approximating a symmetric similarity
matrix S with a shorter code length?
Denoting by U ∈ {±1}^{k×n} the matrix whose columns contain the codewords u_i, the minimal
binary code length that allows exactly representing S is then given by the following matrix factorization problem:

    k_s(S) = min_{k, U, θ} k   s.t.   U ∈ {±1}^{k×n},  θ ∈ ℝ,
             Y ≜ U⊤U − θ 1_n,  ∀ij: S_ij Y_ij > 0                        (1)

where 1_n is an n × n matrix of ones.
We begin demonstrating the power of asymmetry by considering an asymmetric variant of the above
problem. That is, even if S is symmetric, we allow associating with each object x_i two distinct
binary codewords, u_i ∈ {±1}^k and v_i ∈ {±1}^k (we can think of this as having two arbitrary
mappings u_i = f(x_i) and v_i = g(x_i)), such that S_ij = sign(⟨u_i, v_j⟩ − θ). The minimal asymmetric
binary code length is then given by:

    k_a(S) = min_{k, U, V, θ} k   s.t.   U, V ∈ {±1}^{k×n},  θ ∈ ℝ,
             Y ≜ U⊤V − θ 1_n,  ∀ij: S_ij Y_ij > 0                        (2)
Writing the binary coding problems as matrix factorization problems is useful for understanding
the power we can get by asymmetry: even if S is symmetric, and even if we seek a symmetric Y,
insisting on writing Y as a square of a binary matrix might be a tough constraint. This is captured
in the following theorem, which establishes that there could be an exponential gap between the
minimal asymmetric binary code length and the minimal symmetric code length, even if the matrix
S is symmetric and very well behaved:

Theorem 1. For any r, there exists a set of n = 2^r points in Euclidean space, with similarity matrix

    S_ij = 1 if ‖x_i − x_j‖ ≤ 1,   S_ij = −1 if ‖x_i − x_j‖ > 1,

such that k_a(S) ≤ 2r but k_s(S) ≥ 2^r / 2.
Proof. Let I1 = {1, ..., n/2} and I2 = {n/2 + 1, ..., n}. Consider the matrix G defined by
G_ii = 1/2, G_ij = −1/(2n) if i, j ∈ I1 or i, j ∈ I2, and G_ij = 1/(2n) otherwise. Matrix G is
diagonally dominant. By the Gershgorin circle theorem, G is positive definite. Therefore, there exist
vectors x_1, ..., x_n such that ⟨x_i, x_j⟩ = G_ij (for every i and j). Define

    S_ij = 1 if ‖x_i − x_j‖ ≤ 1,   S_ij = −1 if ‖x_i − x_j‖ > 1.

Note that if i = j then S_ij = 1; if i ≠ j and (i, j) ∈ I1 × I1 ∪ I2 × I2 then ‖x_i − x_j‖² =
G_ii + G_jj − 2G_ij = 1 + 1/n > 1 and therefore S_ij = −1. Finally, if i ≠ j and (i, j) ∈ I1 × I2 ∪ I2 × I1
then ‖x_i − x_j‖² = G_ii + G_jj − 2G_ij = 1 − 1/n < 1 and therefore S_ij = 1. We show that
k_a(S) ≤ 2r. Let B be an r × n matrix whose column vectors are the vertices of the cube {±1}^r
(in any order); let C be an r × n matrix defined by C_ij = 1 if j ∈ I1 and C_ij = −1 if j ∈ I2. Let

    U = [B; C]   and   V = [B; −C]

(stacking vertically). For Y = U⊤V − θ 1_n with threshold θ = −1, we have that Y_ij ≥ 1
if S_ij = 1 and Y_ij ≤ −1 if S_ij = −1. Therefore, k_a(S) ≤ 2r.
Now we show that k_s := k_s(S) ≥ n/2. Consider Y, U and θ as in (1). Let Y′ = U⊤U. Note
that Y′_ij ∈ [−k_s, k_s] and thus θ ∈ [−k_s + 1, k_s − 1]. Let q = (1, ..., 1, −1, ..., −1)⊤ (n/2 ones
followed by n/2 minus ones). We have

    0 ≤ q⊤ Y′ q = Σ_{i=1}^n Y′_ii + Σ_{i,j: S_ij=−1} Y′_ij − Σ_{i,j: S_ij=1, i≠j} Y′_ij
                ≤ Σ_{i=1}^n k_s + Σ_{i,j: S_ij=−1} (θ − 1) − Σ_{i,j: S_ij=1, i≠j} (θ + 1)
                = n k_s + (0.5 n² − n)(θ − 1) − 0.5 n² (θ + 1)
                = n k_s − n² − n(θ − 1)
                ≤ 2 n k_s − n².

We conclude that k_s ≥ n/2.
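The asymmetric half of the proof can be checked numerically for small r. The script below is our own construction check (variable names are ours): it builds B, C, U, V as in the proof and confirms that S_ij Y_ij ≥ 1 for all pairs, i.e. that a 2r-bit asymmetric code realizes S:

```python
import itertools
import numpy as np

r = 4
n = 2 ** r

# Columns of B: all vertices of the cube {-1, +1}^r (any order works).
B = np.array(list(itertools.product([-1, 1], repeat=r))).T        # r x n
# C: +1 on the first half of the columns (I1), -1 on the second half (I2).
C = np.ones((r, n), dtype=int)
C[:, n // 2:] = -1

U = np.vstack([B, C])     # 2r x n
V = np.vstack([B, -C])    # 2r x n

theta = -1
Y = U.T @ V - theta       # Y_ij = <u_i, v_j> + 1

# Target similarity: +1 on the diagonal and across the two halves, -1 within a half.
same_half = np.zeros((n, n), dtype=bool)
same_half[:n // 2, :n // 2] = True
same_half[n // 2:, n // 2:] = True
S = np.where(same_half & ~np.eye(n, dtype=bool), -1, 1)

assert np.all(S * Y >= 1)   # S_ij * Y_ij > 0 everywhere, so k_a(S) <= 2r
```

The check passes because within a half, distinct cube vertices satisfy ⟨b_i, b_j⟩ ≤ r − 2 while the C-block contributes −r, and across halves the C-block contributes +r.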
The construction of Theorem 1 shows that there exist data sets for which an asymmetric binary hash
might be much shorter than a symmetric hash. This is an important observation, as it demonstrates
that asymmetric hashes could be much more powerful, and should prompt us to consider them
instead of symmetric hashes. The precise construction of Theorem 1 is of course rather extreme (in
fact, the most extreme construction possible) and we would not expect actual data sets to have this
exact structure, but we will show later significant gaps also on real data sets.
Figure 1: Number of bits required for approximating two similarity matrices (as a function of average precision). Left: uniform data in the 10-dimensional hypercube, similarity represents a thresholded Euclidean
distance, set such that 30% of the similarities are positive. Right: Semantic similarity of a subset of LabelMe
images, thresholded such that 5% of the similarities are positive.
3 Approximate Binary Codes
As we turn to real data sets, we also need to depart from seeking a binary coding that exactly
captures the similarity matrix. Rather, we are usually satisfied with merely approximating S, and
for any fixed code length k we seek the (symmetric or asymmetric) k-bit code that "best captures" the
similarity matrix S. This is captured by the following optimization problem:

    min_{U,V,θ} L(Y; S) ≜ β Σ_{i,j: S_ij=1} ℓ(Y_ij) + (1 − β) Σ_{i,j: S_ij=−1} ℓ(−Y_ij)
    s.t. U, V ∈ {±1}^{k×n},  θ ∈ ℝ,  Y ≜ U⊤V − θ 1_n                     (3)

where ℓ(z) = 1_{z≤0} is the zero-one error and β is a parameter that allows us to weight positive
and negative errors differently. Such weighting can compensate for S_ij being imbalanced (typically
many more pairs of points are non-similar rather than similar), and allows us to obtain different
balances between precision and recall.
The optimization problem (3) is a discrete, discontinuous and highly non-convex problem. In our
experiments, we replace the zero-one loss ℓ(·) with a continuous loss and perform local search
by greedily updating single bits so as to improve this objective. Although the resulting objective
(let alone the discrete optimization problem) is still not convex even if ℓ(z) is convex, we found it
beneficial to use a loss function that is not flat on z < 0, so as to encourage moving towards the
correct sign. In our experiments, we used the square root of the logistic loss, ℓ(z) = log^{1/2}(1 + e^{−z}).
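The surrogate objective can be sketched in a few lines. The function below is our own illustration of (3) with the square-root-logistic loss; the name `surrogate_loss` and the toy data are ours, not the authors':

```python
import numpy as np

def surrogate_loss(U, V, theta, S, beta=0.7):
    """Continuous surrogate of (3): zero-one loss replaced by sqrt-logistic."""
    Y = U.T @ V - theta
    ell = lambda z: np.sqrt(np.log1p(np.exp(-z)))   # l(z) = log(1 + e^{-z})^{1/2}
    pos = S == 1
    return beta * ell(Y[pos]).sum() + (1 - beta) * ell(-Y[~pos]).sum()

rng = np.random.default_rng(0)
k, n = 8, 20
U = rng.choice([-1, 1], size=(k, n))
V = rng.choice([-1, 1], size=(k, n))
S = np.where(U.T @ V > 0, 1, -1)          # a toy target similarity
print(surrogate_loss(U, V, theta=0.0, S=S))
```

Since ℓ keeps growing as z becomes more negative, a greedy bit flip that moves a badly misclassified Y_ij toward the correct sign is rewarded even when the sign does not change yet.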
Before moving on to out-of-sample generalization, we briefly report on the number of bits needed
empirically to find good approximations of actual similarity matrices with symmetric and asymmetric codes. We experimented with several data sets, attempting to fit them with both symmetric and
asymmetric codes, and then calculating average precision by varying the threshold θ (while keeping
U and V fixed). Results for two similarity matrices, one based on Euclidean distances between
points uniformly distributed in a hypercube, and the other based on semantic similarity between
images, are shown in Figure 1.
4 Out of Sample Generalization: Learning a Mapping
So far we focused on learning binary codes over a fixed set of objects by associating an arbitrary
code word with each object and completely ignoring the input representation of the objects x_i.
We discussed only how well binary hashing can approximate the similarity, but did not consider
generalizing to additional new objects. However, in most applications, we would like to be able to
have such an out-of-sample generalization. That is, we would like to learn a mapping f : X →
{±1}^k over an infinite domain X using only a finite training set of objects, and then apply the
mapping to obtain binary codes f(x) for future objects to be encountered, such that S(x, x′) ≈
sign(⟨f(x), f(x′)⟩ − θ). Thus, the mapping f : X → {±1}^k is usually limited to some constrained
parametric class, both so we could represent and evaluate it efficiently on new objects, and to ensure
good generalization. For example, when X = ℝ^d, we can consider linear threshold mappings
f_W(x) = sign(Wx), where W ∈ ℝ^{k×d} and sign(·) operates elementwise, as in Minimal Loss
Hashing [8]. Or, we could also consider more complex classes, such as multilayer networks [11, 9].
We already saw that asymmetric binary codes can allow for better approximations using shorter
codes, so it is natural to seek asymmetric codes here as well. That is, instead of learning a single
parametric map f(x) we can learn a pair of maps f : X → {±1}^k and g : X → {±1}^k, both
constrained to some parametric class, and a threshold θ, such that S(x, x′) ≈ sign(⟨f(x), g(x′)⟩ −
θ). This has the potential of allowing for better approximation of the similarity, and thus better overall
accuracy with shorter codes (despite possibly slightly harder generalization due to the increase in
the number of parameters).
In fact, in a typical application where a database of objects is hashed for similarity search over
future queries, asymmetry allows us to go even further. Consider the following setup: We are given
n objects x_1, ..., x_n ∈ X from some infinite domain X and the similarities S(x_i, x_j) between
these objects. Our goal is to hash these objects using short binary codes which would allow us to
quickly compute approximate similarities between these objects (the "database") and future objects
x (the "query"). That is, we would like to generate and store compact binary codes for objects in a
database. Then, given a new query object, we would like to efficiently compute a compact binary
code for a given query and retrieve similar items in the database very fast by finding binary codes
in the database that are within small Hamming distance from the query binary code. Recall that it
is important to ensure that the bit length of the hashes is small, as short codes allow for very fast
Hamming distance calculations and low communication costs if the codes need to be sent remotely.
More importantly, if we would like to store the database in a hash table allowing immediate lookup,
the size of the hash table is exponential in the code length.
The symmetric binary hashing approach (e.g. [8]) would be to find a single parametric mapping
f : X → {±1}^k such that S(x, x_i) ≈ sign(⟨f(x), f(x_i)⟩ − θ) for future queries x and database
objects x_i, calculate f(x_i) for all database objects x_i, and store these hashes (perhaps in a hash table
allowing for fast retrieval of codes within a short Hamming distance). The asymmetric approach
described above would be to find two parametric mappings f : X → {±1}^k and g : X → {±1}^k
such that S(x, x_i) ≈ sign(⟨f(x), g(x_i)⟩ − θ), and then calculate and store g(x_i).
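A minimal sketch of this asymmetric retrieval setup, with random (untrained) parameters just to show the data flow; all names are our own, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 16, 32, 100

Wq = rng.normal(size=(k, d))                 # query-side hash (parametric)
V = rng.choice([-1, 1], size=(k, n))         # database-side codewords, stored once

def query(x, shortlist=5):
    fx = np.where(Wq @ x >= 0, 1, -1)        # f(x) in {-1, +1}^k
    hamming = (k - fx @ V) // 2              # d_h(f(x), v_i) via the inner product
    return np.argsort(hamming)[:shortlist]   # indices of nearest database items

x = rng.normal(size=d)
print(query(x))
```

Note that nothing in the query path changes relative to the symmetric case: the database stores k-bit codes and the lookup is a Hamming-distance ranking, so the asymmetry is free at retrieval time.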
But if the database is fixed, we can go further. There is actually no need for g(·) to be in a constrained
parametric class, as we do not need to generalize g(·) to future objects, nor do we have to efficiently
calculate it on-the-fly nor communicate g(x) to the database. Hence, we can consider allowing the
database hash function g(·) to be an arbitrary mapping. That is, we aim to find a simple parametric
mapping f : X → {±1}^k and n arbitrary codewords v_1, ..., v_n ∈ {±1}^k for each x_1, ..., x_n
in the database, such that S(x, x_i) ≈ sign(⟨f(x), v_i⟩ − θ) for future queries x and for the objects
x_1, ..., x_n in the database. This form of asymmetry can allow us greater approximation power,
and thus better accuracy with shorter codes, at no additional computational or storage cost.
In Section 6 we evaluate empirically both of the above asymmetric strategies and demonstrate their
benefits. But before doing so, in the next section, we discuss a local-search approach for finding the
mappings f, g, or the mapping f and the codes v_1, ..., v_n.
5 Optimization
We focus on x ∈ X ⊂ ℝ^d and linear threshold hash maps of the form f(x) = sign(Wx), where
W ∈ ℝ^{k×d}. Given training points x_1, ..., x_n, we consider the two models discussed above:

LIN:LIN  We learn two linear threshold functions f(x) = sign(W_q x) and g(x) = sign(W_d x).
I.e. we need to find the parameters W_q, W_d ∈ ℝ^{k×d}.

LIN:V  We learn a single linear threshold function f(x) = sign(W_q x) and n codewords
v_1, ..., v_n ∈ {±1}^k. I.e. we need to find W_q ∈ ℝ^{k×d}, as well as V ∈ {±1}^{k×n} (where the v_i
are the columns of V).

In either case we denote u_i = f(x_i), and in LIN:LIN also v_i = g(x_i), and learn by attempting to
minimize the objective in (3), where ℓ(·) is again a continuous loss function such as the square
root of the logistic. That is, we learn by optimizing the problem (3) with the additional constraint
U = sign(W_q X), and possibly also V = sign(W_d X) (for LIN:LIN), where X = [x_1 ... x_n] ∈ ℝ^{d×n}.
We optimize these problems by alternately updating rows of W_q and either rows of W_d (for
LIN:LIN) or of V (for LIN:V). To understand these updates, let us first return to (3) (with unconstrained U, V), and consider updating a row u^(t) ∈ {±1}^n of U. Denote
    Y^(t) = U⊤V − θ 1_n − u^(t)⊤ v^(t),

the prediction matrix with component t subtracted away. It is easy to verify that we can write:

    L(U⊤V − θ 1_n; S) = C − u^(t) M v^(t)⊤                               (4)

where C = (1/2)(L(Y^(t) + 1_n; S) + L(Y^(t) − 1_n; S)) does not depend on u^(t) and v^(t), and M ∈ ℝ^{n×n}
also does not depend on u^(t), v^(t) and is given by:

    M_ij = (β_ij / 2) ( ℓ(S_ij (Y^(t)_ij − 1)) − ℓ(S_ij (Y^(t)_ij + 1)) ),

with β_ij = β or β_ij = (1 − β) depending on S_ij. This implies that we can optimize over the entire
row u^(t) concurrently by maximizing u^(t) M v^(t)⊤, and so the optimum (conditioned on θ, V and all
other rows of U) is given by

    u^(t) = sign(M v^(t)).                                               (5)
Symmetrically, we can optimize over the row v^(t) conditioned on θ, U and the rest of V, or in the
case of LIN:V, conditioned on θ, W_q and the rest of V.
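As a sanity check of update (5), the snippet below (our own, not the authors' code) verifies on random data that u^(t) = sign(M v^(t)) attains the maximum of u^(t) M v^(t)⊤ over {±1}^n:

```python
import numpy as np

def update_row(M, v_t):
    """Optimal row update from (5): maximize u . (M v_t) over u in {-1,+1}^n."""
    u_t = np.sign(M @ v_t)
    u_t[u_t == 0] = 1          # break ties toward +1
    return u_t

rng = np.random.default_rng(1)
n = 10
M = rng.normal(size=(n, n))
v_t = rng.choice([-1, 1], size=n)
u_t = update_row(M, v_t)

# Each coordinate of u contributes u_i * (M v_t)_i independently, so matching
# signs is optimal; check against random candidate rows.
best = u_t @ (M @ v_t)
for _ in range(200):
    cand = rng.choice([-1, 1], size=n)
    assert cand @ (M @ v_t) <= best + 1e-9
```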
Similarly, optimizing over a row w^(t) of W_q amounts to optimizing:

    arg max_{w^(t) ∈ ℝ^d} sign(w^(t)⊤ X) M v^(t)⊤
        = arg max_{w^(t) ∈ ℝ^d} Σ_i ⟨M_i, v^(t)⟩ sign(⟨w^(t), x_i⟩).     (6)

This is a weighted zero-one-loss binary classification problem, with targets sign(⟨M_i, v^(t)⟩) and
weights |⟨M_i, v^(t)⟩|. We approximate it as a weighted logistic regression problem, and at each
update iteration attempt to improve the objective using a small number (e.g. 10) of epochs of stochastic
gradient descent on the logistic loss. For LIN:LIN, we also symmetrically update rows of W_d.
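A sketch of that reduction, assuming plain SGD on the weighted logistic loss; the learning rate, epoch count, and function name are our own choices, not the paper's:

```python
import numpy as np

def update_w_row(X, M, v, w, lr=0.1, epochs=10):
    """Update one row w of Wq as weighted logistic regression, per (6).

    X: d x n data, M: n x n, v: length-n codeword row, w: length-d weight row.
    """
    scores = M @ v                          # <M_i, v> for each database item i
    labels = np.where(scores >= 0, 1.0, -1.0)
    weights = np.abs(scores)
    n = X.shape[1]
    for _ in range(epochs):
        for i in np.random.permutation(n):
            # gradient of weights[i] * log(1 + exp(-labels[i] * <w, x_i>))
            z = labels[i] * (w @ X[:, i])
            w = w + lr * weights[i] * labels[i] * X[:, i] / (1.0 + np.exp(z))
    return w
```

Items with a large |⟨M_i, v⟩| dominate the update, mirroring the weighted zero-one objective in (6).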
When optimizing the model for some bit-length k, we initialize to the optimal (k − 1)-length model.
We initialize the new bit either randomly, or by thresholding the rank-one projection of M (for
unconstrained U, V), or the rank-one projection after projecting the columns of M (for LIN:V) or
both rows and columns of M (for LIN:LIN) to the column space of X. We take the initialization
(random, or rank-one based) that yields a lower objective value.
6 Empirical Evaluation
In order to empirically evaluate the benefits of asymmetry in hashing, we replicate the experiments
of [8], which were in turn based on [5], on six datasets using learned (symmetric) linear threshold
codes. These datasets include: LabelMe and Peekaboom, collections of images represented as
512D GIST features [13]; Photo-tourism, a database of image patches represented as 128D SIFT
features [12]; MNIST, a collection of 785D greyscale handwritten images; and Nursery, which contains
8D features. Similar to [8, 5], we also constructed a synthetic 10D Uniform dataset, containing
4000 points sampled uniformly from a 10D hypercube. We used 1000 points for training and 3000 for
testing.
For each dataset, we find the Euclidean distance at which each point has, on average, 50 neighbours.
This defines our ground-truth similarity in terms of neighbours and non-neighbours. So for each
dataset, we are given a set of n points x_1, ..., x_n, represented as vectors in X = ℝ^d, and the binary
similarities S(x_i, x_j) between the points, with +1 corresponding to x_i and x_j being neighbours and
−1 otherwise. Based on these n training points, [8] present a sophisticated optimization approach
for learning a thresholded linear hash function of the form f(x) = sign(Wx), where W ∈ ℝ^{k×d}.
This hash function is then applied and f(x_1), ..., f(x_n) are stored in the database. [8] evaluate
the quality of the hash by considering an independent set of test points and comparing S(x, x_i) to
sign(⟨f(x), f(x_i)⟩ − θ) on the test points x and the database objects (i.e. training points) x_i.
In our experiments, we followed the same protocol, but with the two asymmetric variants LIN:LIN
and LIN:V, using the optimization method discussed in Sec. 5. In order to obtain different balances
between precision and recall, we should vary β in (3), obtaining different codes for each value of
Figure 2: Average Precision (AP) of points retrieved using Hamming distance as a function of code length for six datasets. Six curves represent: LSH, BRE, KSH, MLH, and two variants of our method, asymmetric LIN:LIN and asymmetric LIN:V. (Best viewed in color.)
Figure 3: Code length required as a function of Average Precision (AP) for three datasets.
β. However, as in the experiments of [8], we actually learn a code (i.e. mappings f(·) and g(·), or
a mapping f(·) and matrix V) using a fixed value of β = 0.7, and then only vary the threshold θ to
obtain the precision-recall curve.
In all of our experiments, in addition to Minimal Loss Hashing (MLH), we also compare our approach to three other widely used methods: Kernel-Based Supervised Hashing (KSH) of [6], Binary
Reconstructive Embedding (BRE) of [5], and Locality-Sensitive Hashing (LSH) of [1].¹
In our first set of experiments, we test performance of the asymmetric hash codes as a function of
the bit length. Figure 2 displays Average Precision (AP) of data points retrieved using Hamming
distance as a function of code length. These results are similar to ones reported by [8], where MLH
yields higher precision compared to BRE and LSH. Observe that for all six datasets both variants
of our method, asymmetric LIN:LIN and asymmetric LIN:V, consistently outperform all other
methods for different binary code lengths. The gap is particularly large for short codes. For example,
for the LabelMe dataset, MLH and KSH with 16 bits achieve AP of 0.52 and 0.54 respectively,
whereas LIN:V already achieves AP of 0.54 with only 8 bits. Figure 3 shows that similar performance
gains appear for a number of other datasets. We also note that across all datasets LIN:V improves upon
LIN:LIN for short-sized codes. These results clearly show that an asymmetric binary hash can be
much more compact than a symmetric hash.
¹ We used the BRE, KSH and MLH implementations available from the original authors. For each method,
we followed the instructions provided by the authors. More specifically, we set the number of points for each
hash function in BRE to 50 and the number of anchors in KSH to 300 (the default values). For MLH, we learn
the threshold and shrinkage parameters by cross-validation; other parameters are initialized to the values
suggested in the package.
Figure 4: Precision-Recall curves for LabelMe and MNIST datasets using 16 and 64 binary codes. (Best
viewed in color.)
Figure 5: Left: Precision-Recall curves for the Semantic 22K LabelMe dataset. Right: Percentage of 50 ground-truth neighbours as a function of the number of retrieved images. (Best viewed in color.)
Next, we show, in Figure 4, the full Precision-Recall curves for two datasets, LabelMe and MNIST,
and for two specific code lengths: 16 and 64 bits. The performance of LIN:LIN and LIN:V is almost
uniformly superior to that of the MLH, KSH and BRE methods. We observed similar behavior also for
the four other datasets across various different code lengths.
Results on the previous six datasets show that asymmetric binary codes can significantly outperform
other state-of-the-art methods on relatively small-scale datasets. We now consider a much larger
LabelMe dataset [13], called Semantic 22K LabelMe. It contains 20,019 training images and 2,000
test images, where each image is represented by a 512D GIST descriptor. The dataset also provides a
semantic similarity S(x, x′) between two images based on semantic content (object label overlap in
two images). As argued by [8], hash functions learned using semantic labels should be more useful
for content-based image retrieval compared to Euclidean distances. Figure 5 shows that LIN:V with
64 bits substantially outperforms MLH and KSH with 64 bits.
7 Summary
The main point we would like to make is that when considering binary hashes in order to approximate similarity, even if the similarity measure is entirely symmetric and "well behaved", much power
can be gained by considering asymmetric codes. We substantiate this claim by both a theoretical
analysis of the possible power of asymmetric codes, and by showing, in a fairly direct experimental
replication, that asymmetric codes outperform state-of-the-art results obtained for symmetric codes.
The optimization approach we use is very crude. However, even using this crude approach, we could
find asymmetric codes that outperformed well-optimized symmetric codes. It should certainly be
possible to develop much better, and more well-founded, training and optimization procedures.
Although we demonstrated our results in a specific setting using linear threshold codes, we believe
the power of asymmetry is far more widely applicable in binary hashing, and view the experiments
here as merely a demonstration of this power. Using asymmetric codes instead of symmetric codes
could be much more powerful, and allow for shorter and more accurate codes, and is usually straightforward and does not require any additional computational, communication or significant additional
memory resources when using the code. We would therefore encourage the use of such asymmetric
codes (with two distinct hash mappings) wherever binary hashing is used to approximate similarity.
Acknowledgments
This research was partially supported by NSF CAREER award CCF-1150062 and NSF grant IIS-1302662.
References
[1] M. Datar, N. Immorlica, P. Indyk, and V.S. Mirrokni. Locality-sensitive hashing scheme based
on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational
geometry, pages 253-262. ACM, 2004.
[2] W. Dong and M. Charikar. Asymmetric distance estimation with sketches for similarity search
in high-dimensional spaces. SIGIR, 2008.
[3] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean
approach to learning binary codes for large-scale image retrieval. TPAMI, 2012.
[4] A. Gordo and F. Perronnin. Asymmetric distances for binary embeddings. CVPR, 2011.
[5] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. NIPS, 2009.
[6] W. Liu, R. Ji J. Wang, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. CVPR,
2012.
[7] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. ICML, 2011.
[8] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. ICML, 2011.
[9] M. Norouzi, D. J. Fleet, and R. Salakhutdinov. Hamming distance metric learning. NIPS,
2012.
[10] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels.
NIPS, 2009.
[11] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate
Reasoning, 2009.
[12] N. Snavely, S. M. Seitz, and R.Szeliski. Photo tourism: Exploring photo collections in 3d. In
Proc. SIGGRAPH, 2006.
[13] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition.
CVPR, 2008.
[14] J. Wang, S. Kumar, and S. Chang. Sequential projection learning for hashing with compact
codes. ICML, 2010.
[15] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. NIPS, 2008.
9
| 5017 |@word kulis:1 briefly:1 replicate:1 instruction:1 hu:2 seitz:1 seek:3 nks:3 minus:1 harder:1 liu:2 contains:2 denoting:1 document:1 outperforms:1 ka:4 wd:5 comparing:1 yet:1 chicago:2 cheap:1 gist:2 update:3 hash:55 alone:1 device:1 item:2 payman:1 short:11 shortlist:1 provides:1 toronto:2 five:1 constructed:1 direct:1 become:1 symposium:1 replication:1 x0:20 indeed:2 behavior:1 nor:3 salakhutdinov:3 actual:4 considering:4 begin:1 provided:1 moreover:1 israel:1 string:4 substantially:1 finding:2 every:1 xd:1 exactly:2 demonstrates:1 grant:1 appear:2 positive:4 before:2 local:2 despite:1 encoding:2 jiang:1 datar:1 ap:5 might:3 emphasis:1 initialization:1 k:10 factorization:2 limited:1 acknowledgment:1 testing:1 definite:1 procedure:1 empirical:1 remotely:1 significantly:1 convenient:1 projection:3 word:1 get:2 storage:3 writing:2 optimize:4 equivalent:1 map:4 demonstrated:1 maximizing:1 go:2 straightforward:1 convex:3 focused:1 sigir:1 importantly:2 retrieve:2 embedding:1 notion:1 variation:1 target:5 construction:3 massive:1 exact:2 agreement:1 approximated:2 particularly:1 updating:3 recognition:1 asymmetric:38 database:28 observed:1 fly:1 wang:3 capture:2 calculate:3 technological:2 mentioned:1 ui:5 depend:2 purely:1 upon:1 completely:1 compactly:1 siggraph:1 differently:1 represented:4 various:1 distinct:6 fast:4 reconstructive:2 query:14 neighborhood:1 choosing:1 whose:2 widely:2 larger:1 cvpr:3 otherwise:2 statistic:1 think:1 indyk:1 tpami:1 propose:1 product:1 flexibility:1 achieve:1 billion:1 asymmetry:10 optimum:1 darrell:1 object:32 depending:1 develop:1 gong:1 ij:5 nearest:1 c:1 indicate:1 implies:1 discontinuous:1 correct:1 peekaboom:3 stochastic:1 enable:1 require:2 argued:1 generalization:6 yij:8 exploring:1 hold:1 ground:2 makarychev:1 mapping:23 claim:1 gordo:2 vary:2 achieves:1 torralba:2 ruslan:1 estimation:1 outperformed:1 applicable:1 proc:1 label:2 sensitive:4 saw:1 establishes:1 weighted:2 concurrently:1 clearly:1 aim:1 rather:4 
shrinkage:1 mobile:1 varying:1 encode:1 focus:2 improvement:1 consistently:1 rank:3 greedily:1 perronnin:2 typically:2 entire:3 interested:1 i1:7 overall:1 arg:2 classification:1 constrained:4 tourism:3 initialize:2 art:2 cube:1 fairly:1 aware:1 having:1 represents:1 look:1 icml:3 future:6 report:1 randomly:1 neighbour:4 geometry:1 attempt:1 highly:1 evaluation:1 certainly:1 extreme:4 accurate:3 encourage:2 shorter:8 euclidean:7 initialized:1 haifa:1 circle:1 theoretical:1 minimal:7 column:6 cost:7 vertex:1 subset:1 uniform:4 technion:1 stored:3 reported:1 kxi:6 synthetic:1 international:1 dong:1 quickly:2 again:1 satisfied:1 containing:2 possibly:2 return:1 potential:1 lookup:2 coding:4 sec:1 yury:2 vi:7 later:2 root:2 view:1 doing:1 hf:7 square:3 accuracy:4 descriptor:1 efficiently:4 correspond:1 yield:2 generalize:1 handwritten:1 norouzi:2 populating:1 gershgorin:1 proof:1 mi:3 hamming:16 sampled:1 gain:1 dataset:7 ask:2 recall:13 color:3 fractional:3 improves:1 bre:16 sophisticated:1 actually:2 hashing:21 higher:1 supervised:2 wei:2 evaluated:1 furthermore:1 sketch:1 gjj:2 defines:1 logistic:4 quality:1 perhaps:1 behaved:3 believe:1 contain:1 verify:1 ccf:1 equality:1 hence:1 symmetric:25 i2:7 semantic:8 substantiate:1 procrustean:1 demonstrate:3 reasoning:1 image:16 lazebnik:2 superior:1 empirically:3 ji:1 exponentially:1 discussed:3 elementwise:1 interpret:1 refer:1 significant:3 rd:6 unconstrained:1 populated:1 i6:2 similarly:1 lsh:14 hxi:1 moving:2 hashed:1 similarity:42 stable:1 dominant:1 imbalanced:1 recent:1 retrieved:4 optimizing:4 store:5 server:1 binary:49 discussing:1 captured:2 minimum:1 additional:6 greater:1 multiple:1 desirable:1 full:1 calculation:1 cross:1 compensate:1 retrieval:4 lin:44 award:1 prediction:1 variant:3 regression:1 multilayer:1 metric:1 iteration:1 represent:2 kernel:3 addition:1 remarkably:1 whereas:1 rest:2 unlike:1 gii:3 sent:1 tough:1 symmetrically:2 easy:1 embeddings:2 xj:11 fit:1 associating:2 inner:1 shift:1 fleet:2 
whether:1 six:3 useful:3 amount:1 generate:1 outperform:3 exist:1 percentage:1 nsf:2 sign:23 nursery:2 discrete:2 write:1 four:1 threshold:14 demonstrating:1 yadollahpour:1 neither:1 thresholded:4 v1:3 graph:1 merely:2 fraction:1 raginsky:1 package:1 powerful:3 communicate:1 place:1 almost:2 vn:3 patch:2 bit:33 capturing:1 entirely:1 followed:3 display:1 encountered:1 annual:1 constraint:2 flat:1 nearby:1 nathan:1 extremely:1 min:3 kumar:2 performing:1 attempting:2 relatively:1 department:1 charikar:1 beneficial:1 slightly:1 across:2 rsalakhu:1 wherever:1 projecting:1 invariant:1 sij:22 taken:1 resource:1 turn:2 discus:1 needed:1 mlh:23 sending:2 photo:4 available:1 operation:2 neyshabur:1 apply:1 observe:1 away:1 spectral:1 subtracted:1 original:1 ensure:2 include:1 opportunity:1 calculating:3 uj:1 approximating:7 hypercube:2 seeking:1 objective:5 already:2 realized:1 depart:1 codewords:4 parametric:7 strategy:1 snavely:1 mirrokni:1 gradient:1 distance:23 argue:1 code:68 length:23 balance:2 minimizing:1 demonstration:1 setup:1 cij:2 greyscale:1 negative:1 implementation:1 perform:1 allowing:5 observation:1 datasets:14 finite:2 descent:1 immediate:1 hinton:1 communication:5 precise:1 rn:2 arbitrary:6 prompt:1 ttic:2 pair:4 required:7 optimized:1 learned:2 boost:1 nip:4 able:1 suggested:2 usually:4 pioneering:1 max:2 memory:3 video:1 power:9 overlap:1 natural:1 representing:1 scheme:2 improve:2 log1:1 epoch:1 understanding:1 nati:1 loss:10 expect:1 srebro:1 validation:1 attuned:1 thresholding:2 storing:1 row:9 course:1 summary:1 diagonally:1 supported:1 keeping:1 side:1 allow:8 understand:1 institute:2 neighbor:3 szeliski:1 benefit:3 distributed:1 curve:5 default:1 xn:9 author:2 collection:4 founded:1 far:2 approximate:9 compact:6 emphasize:1 anchor:1 corpus:1 conclude:1 xi:26 fergus:2 alternatively:1 search:5 continuous:2 iterative:1 table:5 learn:8 career:1 ignoring:1 obtaining:2 symmetry:1 complex:1 constructing:1 domain:3 vj:1 protocol:1 did:1 main:2 linearly:1 
n2:4 x1:8 precision:28 exponential:2 crude:2 toyota:2 weighting:1 theorem:5 rk:7 specific:3 sift:1 showing:1 behnam:1 experimented:1 exists:2 mnist:6 quantization:1 sequential:1 gained:1 hui:2 conditioned:3 gap:3 locality:4 generalizing:1 twentieth:1 partially:1 chang:3 mij:1 corresponds:1 truth:2 dh:2 insisting:1 acm:1 goal:2 ksh:22 viewed:3 sized:1 towards:1 labelme:12 replace:1 content:2 fw:1 typical:2 infinite:2 reducing:1 uniformly:3 operates:1 specifically:1 called:1 gij:5 experimental:2 exception:1 wq:8 immorlica:1 dissimilar:1 evaluate:4 audio:1 |
4,440 | 5,018 | Learning to Prune in Metric and Non-Metric Spaces
Leonid Boytsov
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]

Bilegsaikhan Naidan
Norwegian University of Science and Technology
Trondheim, Norway
[email protected]
Abstract
Our focus is on approximate nearest neighbor retrieval in metric and non-metric
spaces. We employ a VP-tree and explore two simple yet effective learning-to-prune approaches: density estimation through sampling and "stretching" of the
triangle inequality. Both methods are evaluated using data sets with metric (Euclidean) and non-metric (KL-divergence and Itakura-Saito) distance functions.
Conditions on spaces where the VP-tree is applicable are discussed. The VP-tree
with a learned pruner is compared against the recently proposed state-of-the-art
approaches: the bbtree, the multi-probe locality sensitive hashing (LSH), and permutation methods. Our method was competitive against state-of-the-art methods
and, in most cases, was more efficient for the same rank approximation quality.
1 Introduction
Similarity search algorithms are essential to multimedia retrieval, computational biology, and statistical machine learning. Resemblance between objects x and y is typically expressed in the form
of a distance function d(x, y), where smaller values indicate less dissimilarity. In our work we
use the Euclidean distance (L2), the KL-divergence (Σ_i x_i log(x_i/y_i)), and the Itakura-Saito distance
(Σ_i [x_i/y_i - log(x_i/y_i) - 1]). KL-divergence is commonly used in text analysis, image classification,
and machine learning [6]. Both KL-divergence and the Itakura-Saito distance belong to a class of
distances called Bregman divergences.
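For reference, the three distance functions can be computed directly from their definitions (a minimal sketch; the vectors are assumed strictly positive for the two Bregman divergences):

```python
import numpy as np

def l2(x, y):
    # Euclidean distance.
    return float(np.sqrt(np.sum((x - y) ** 2)))

def kl_div(x, y):
    # KL-divergence: sum_i x_i * log(x_i / y_i); defined for positive vectors.
    return float(np.sum(x * np.log(x / y)))

def itakura_saito(x, y):
    # Itakura-Saito distance: sum_i (x_i / y_i - log(x_i / y_i) - 1).
    r = x / y
    return float(np.sum(r - np.log(r) - 1.0))

x = np.array([0.7, 0.3])
y = np.array([0.5, 0.5])
```

Unlike L2, the two Bregman divergences are not symmetric, which is why left and right NN-queries must be distinguished below.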
Our interest is in the nearest neighbor (NN) search, i.e., we aim to retrieve the object o that is closest
to the query q. For the KL-divergence and other non-symmetric distances two types of NN-queries
are defined. The left NN-query returns the object o that minimizes the distance d(o, q), while the
right NN-query finds o that minimizes d(q, o).
The distance function can be computationally expensive. There was a considerable effort to reduce computational costs through approximating the distance function, projecting data into a low-dimensional space, and/or applying a hierarchical space decomposition. In the case of the hierarchical space decomposition, a retrieval process is a recursion that employs an "oracle" procedure. At
each step of the recursion, retrieval can continue in one or more partitions. The oracle allows one
to prune partitions without directly comparing the query against data points in these partitions. To
this end, the oracle assesses the query and estimates which partitions may contain an answer and,
therefore, should be recursively analyzed. A pruning algorithm is essentially a binary classifier. In
metric spaces, one can use the classifier based on the triangle inequality. In non-metric spaces, a
classifier can be learned from data.
There are numerous data structures that speedup the NN-search by creating hierarchies of partitions
at index time, most notably the VP-tree [28, 31] and the KD-tree [4]. A comprehensive review of
these approaches can be found in books by Zezula et al. [32] and Samet [27]. As dimensionality
1
increases, the filtering efficiency of space-partitioning methods decreases rapidly, which is known
as the ?curse of dimensionality? [30]. This happens because in high-dimensional spaces histograms
of distances and 1-Lipschitz function values become concentrated [25]. The negative effect can be
partially offset by creating overlapping partitions (see, e.g., [21]) and, thus, trading index size for
retrieval time. The approximate NN-queries are less affected by the curse of the dimensionality, because it is possible to reduce retrieval time at the cost of missing some relevant answers [18, 9, 25].
Low-dimensional data sets embedded into a high-dimensional space do not exhibit high concentration of distances, i.e., their intrinsic dimensionality is low. In metric spaces, it was proposed to
compute the intrinsic dimensionality as the half of the squared signal to noise ratio (for the distance
distribution) [10].
A well-known approximate NN-search method is the locality sensitive hashing (LSH) [18, 17]. It is
based on the idea of random projections [18, 20]. There is also an extension of the LSH for symmetric non-metric distances [23]. The LSH employs several hash functions: It is likely that close objects
have the same hash values, while distant objects have different hash values. In the classic LSH index, the
probability of finding an element in one hash table is small and, consequently, many hash tables
are to be created during indexing. To reduce space requirements, Lv et al. proposed a multi-probe
version of the LSH, which can query multiple buckets of the same hash table [22]. Performance of
the LSH depends on the choice of parameters, which can be tuned to fit the distribution of data [11].
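A single hash table of this scheme can be sketched as follows (an illustrative random-projection construction in the spirit of p-stable LSH for L2; all parameter values here are hypothetical, not the tuned configuration used later in the experiments):

```python
import numpy as np

class L2LSHTable:
    """One hash table of p-stable LSH for L2 (illustrative sketch)."""
    def __init__(self, dim, n_funcs=4, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((n_funcs, dim))  # Gaussian projections
        self.b = rng.uniform(0.0, w, n_funcs)         # random offsets
        self.w = w                                    # quantization width
        self.buckets = {}

    def _key(self, x):
        # Concatenation of quantized projections forms the bucket key.
        return tuple(np.floor((self.A @ x + self.b) / self.w).astype(int))

    def add(self, idx, x):
        self.buckets.setdefault(self._key(x), []).append(idx)

    def candidates(self, q):
        # Points colliding with the query in this table; a multi-probe
        # variant would also inspect neighboring buckets, reducing the
        # number of tables needed.
        return self.buckets.get(self._key(q), [])

table = L2LSHTable(dim=8)
pts = [np.full(8, 0.1), np.full(8, 0.1) + 1e-3, np.full(8, 50.0)]
for i, p in enumerate(pts):
    table.add(i, p)
```

In a full index, several such tables (with independent projections) are queried and their candidate lists merged before the true distances are computed.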
For approximate searching it was demonstrated that an early termination strategy could rely on information about distances from typical queries to their respective nearest neighbors [33, 1]. Amato et
al. [1] showed that density estimates can be used to approximate a pruning function in metric spaces.
They relied on a hierarchical decomposition method (an M-tree) and proposed to visit partitions in
the order defined by density estimates. Chávez and Navarro [9] proposed to relax triangle-inequality
based lower bounds for distances to potential nearest neighbors. The approach, which they dubbed
stretching of the triangle inequality, involves multiplying an exact bound by a constant greater than one.
Few methods were designed to work in non-metric spaces. One common indexing approach involves
mapping the data to a low-dimensional Euclidean space. The goal is to find the mapping without
large distortions of the original similarity measure [19, 16]. Jacobs et al. [19] review various projection methods and argue that such a coercion is often against the nature of a similarity measure,
which can be, e.g., intrinsically non-symmetric. A mapping can be found using machine learning
methods. This can be done either separately for each data point [12, 24] or by computing one global
model [3]. There are also a number of approaches, where machine learning is used to estimate
optimal parameters of classic search methods [7]. Vermorel [29] applied VP-trees to searching in
undisclosed non-metric spaces without trying to learn a pruning function. Like Amato et al. [1], he
proposed to visit partitions in the order defined by density estimates and employed the same early
termination method as Zezula et al. [33].
Cayton [6] proposed a Bregman ball tree (bbtree), which is an exact search method for Bregman
divergences. The bbtree divides data into two clusters (each covered by a Bregman ball) and recursively repeats this procedure for each cluster until the number of data points in a cluster falls below
a threshold (a bucket size). At search time, the method relies on properties of Bregman divergences
to compute the shortest distances to covering balls. This is an expensive iterative procedure that
may require several computations of direct and inverse gradients, as well as of several distances.
Additionally, Cayton [6] employed an early termination method: The algorithm can be told to stop
after processing a pre-specified number of buckets. The resulting method is an approximate search
procedure. Zhang et al. [34] proposed an exact search method based on estimating the maximum
distance to a bounding rectangle, but it works with left queries only. The most efficient variant of
this method relies on an optimization technique applicable only to certain decomposable Bregman
divergences (a decomposable distance is a sum of values computed separately for each coordinate).
Chávez et al. [8] as well as Amato and Savino [2] independently proposed permutation-based search
methods. These approximate methods do not involve learning, but, nevertheless, are applicable to
non-metric spaces. At index time, k pivots are selected. For every data point, we create a list, called
a permutation, where pivots are sorted in the order of increasing distances from the data point.
At query time, a rank correlation (e.g., Spearman?s) is computed between the permutation of the
query and permutations of data points. Candidate points, which have sufficiently small correlation
values, are then compared directly with the query (by computing the original distance function).
One can sequentially scan the list of permutations and compute the rank correlation between the
permutation of the query and the permutation of every data point [8]. Data points are then sorted
by rank-correlation values. This approach can be improved by incremental sorting [14], storing
permutations as inverted files [2], or prefix trees [13].
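The core of a permutation method, computing a pivot permutation and comparing two permutations by a rank-correlation-style distance, can be sketched as follows (we use the Spearman footrule for brevity; the pivots and points are hypothetical random vectors):

```python
import numpy as np

def pivot_permutation(point, pivots, dist):
    """ranks[i] = position of pivot i when pivots are sorted by
    increasing distance from the point."""
    order = np.argsort([dist(point, p) for p in pivots])
    ranks = np.empty(len(pivots), dtype=int)
    ranks[order] = np.arange(len(pivots))
    return ranks

def spearman_footrule(r1, r2):
    # A simple rank-correlation distance between two permutations;
    # smaller values indicate more similar permutations.
    return int(np.sum(np.abs(r1 - r2)))

euclid = lambda a, b: float(np.linalg.norm(a - b))
rng = np.random.default_rng(0)
pivots = [rng.standard_normal(4) for _ in range(16)]
u = rng.standard_normal(4)
v = u + 0.01 * rng.standard_normal(4)  # a point very close to u
w = rng.standard_normal(4) * 10.0      # a far-away point
```

Close points tend to rank the pivots similarly, so a small footrule value marks a candidate worth checking with the original distance function.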
In this work we experiment with two approaches to learning a pruning function of the VP-tree,
which to our knowledge was not attempted previously. We compare the resulting method, which
can be applied to both metric and non-metric spaces, with the following state-of-the-art methods:
the multi-probe LSH, permutation methods, and the bbtree.
2 Proposed Method

2.1 Classic VP-tree
In the VP-tree (also known as a ball tree) the space is partitioned with respect to a (usually randomly)
chosen pivot π [28, 31]. Assume that we have computed distances from all points to the pivot π and
R is a median of these distances. The sphere centered at π with the radius R divides the space
into two partitions, each of which contains approximately half of all points. Points inside the pivotcentered sphere are placed into the left subtree, while points outside the pivot-centered sphere are
placed into the right subtree (points on the border may be placed arbitrarily). The search algorithm
proceeds recursively. When the number of data points is below a certain threshold (the bucket size),
these data points are stored as a single bucket. The obtained hierarchical partition is represented by
the binary tree, where buckets are leaves.
The NN-search is a recursive traversal procedure that
starts from the root of the tree and iteratively updates
the distance r to the closest object found. When it
reaches a bucket (i.e., a leaf), bucket elements are
searched sequentially. Each internal node stores the
pivot π and the radius R. In a metric space with
the distance d(x, y), we use the triangle inequality
to prune the search space. We visit:
- only the left subtree if d(π, q) < R - r;
- only the right subtree if d(π, q) > R + r;
- both subtrees if R - r ≤ d(π, q) ≤ R + r.

[Figure 1: Three types of query balls in the VP-tree. The black circle (centered at the pivot π) is the sphere that divides the space.]
In the third case, we first visit the partition that contains q. These three cases are illustrated in Fig. 1. Let D_{π,R}(x) = |R - x|. Then we need to visit
both partitions if and only if r ≥ D_{π,R}(d(π, q)). If r < D_{π,R}(d(π, q)), we visit only the partition
containing the query point. In this case, we prune the other partition. Pruning is a classification task
with three classes, where the prediction function is defined through D_{π,R}(x). The only argument of
this function is the distance between the pivot and the query, i.e., d(π, q). The function value is equal
to the maximum radius of the query ball that fits inside the partition containing the query (see the
red and the blue sample balls in Fig. 1).
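The construction and the pruning rule above can be summarized in a short sketch (a simplified, exact metric-space version in Python; the actual implementation described later is the library's C++ code):

```python
import math
import random

def build_vptree(points, dist, bucket_size=4, seed=0):
    rnd = random.Random(seed)

    def build(pts):
        if len(pts) <= bucket_size:
            return {"bucket": pts}
        pivot = rnd.choice(pts)
        d = [dist(pivot, p) for p in pts]
        R = sorted(d)[len(d) // 2]  # median distance to the pivot
        left = [p for p, dp in zip(pts, d) if dp < R]
        right = [p for p, dp in zip(pts, d) if dp >= R]
        if not left or not right:   # degenerate split (e.g., duplicates)
            return {"bucket": pts}
        return {"pivot": pivot, "R": R,
                "left": build(left), "right": build(right)}

    return build(list(points))

def nn_search(node, q, dist, best=(math.inf, None)):
    if "bucket" in node:
        for p in node["bucket"]:
            d = dist(q, p)
            if d < best[0]:
                best = (d, p)
        return best
    dq = dist(node["pivot"], q)
    near, far = ((node["left"], node["right"]) if dq < node["R"]
                 else (node["right"], node["left"]))
    best = nn_search(near, q, dist, best)
    # Triangle-inequality pruning: visit the far partition only if the
    # query ball of radius r = best[0] crosses the dividing sphere,
    # i.e., only if r >= |R - d(pivot, q)|.
    if best[0] >= abs(node["R"] - dq):
        best = nn_search(far, q, dist, best)
    return best
```

Replacing the exact rule |R - x| with a learned decision function is precisely the modification explored in the next section.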
2.2 Approximating D_{π,R}(x) with a Piece-wise Linear Function
In Section 2 of the supplemental materials, we describe a straightforward sampling algorithm to
learn the decision function D_{π,R}(x) for every pivot π. This method turned out to be inferior to
most state-of-the-art approaches. It is, nevertheless, instructive to examine the decision functions
D_{π,R}(x) learned by sampling for the Euclidean distance and KL-divergence (see Table 1 for details
on data sets).
Each point in Fig. 2a-2c is a value of the decision function obtained by sampling. Blue curves are
fit to these points. For the Euclidean data (Fig. 2a), D_{π,R}(x) resembles a piece-wise linear function
approximately equal to |R - x|. For the KL-divergence data (Fig. 2b and 2c), D_{π,R}(x) looks like a
U-shape and a hockey-stick curve, respectively. Yet, most data points concentrate around the median
(denoted by a dashed red line). In this area, a piece-wise linear approximation of D_{π,R}(x) could
[Figure 2 here: three scatter panels, (a) Colors, L2; (b) RCV-8, KL-divergence; (c) RCV-16, gen. KL-divergence, each plotting the maximum distance to the query against the distance to the pivot.]
Figure 2: The empirically obtained decision function D_{π,R}(x). Each point is a value of the function
learned by sampling (see Section 2 of the supplemental materials). Blue curves are fit to these points.
The red dashed line denotes a median distance R from data set points to the pivot π.
still be reasonable. Formally, we define the decision function as:

    D_{π,R}(x) = α_left  · |x - R|,  if x ≤ R
                 α_right · |x - R|,  if x ≥ R        (1)
Once we obtain the values of α_left and α_right that permit near exact searching, we can induce more
aggressive pruning by increasing α_left and/or α_right, thus exploring trade-offs between retrieval
efficiency and effectiveness. This is similar to stretching of the triangle inequality proposed by
Chávez and Navarro [9].
Optimal α_left and α_right are determined using a grid search. To this end, we index a small subset of
the data points and seek to obtain parameters that give the shortest retrieval time at a specified recall
threshold. The grid search is initialized by values a and b. Then, recall values and retrieval times for
all α_left = a·θ^(i/m-0.5) and α_right = b·θ^(j/m-0.5) are obtained (1 ≤ i, j ≤ m). The values of m and
θ are chosen so that: (1) the grid step is reasonably small (i.e., θ^(1/m) is close to one); (2) the search
space is manageable (i.e., m is not large).
If the obtained recall values are considerably larger than a specified threshold, the procedure repeats
the grid search using larger values of a and b. Similarly, if the recall is not sufficient, the values
of a and b are decreased and the grid search is repeated. One can see that the perfect recall can be
achieved with α_left = 0 and α_right = 0. In this case, no pruning is done and the data set is searched
sequentially. Values of α_left = ∞ and α_right = ∞ represent an (almost) zero recall, because one
of the partitions is always pruned.
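The grid schedule and the selection rule can be sketched as follows (the symbols θ, alpha_left, and alpha_right follow the notation above; `evaluate`, which would index a sample and measure recall and retrieval time, is a hypothetical stand-in):

```python
def grid_candidates(a, b, m=7, theta=8.0):
    """All candidate (alpha_left, alpha_right) pairs of the multiplicative
    grid a*theta^(i/m - 0.5), b*theta^(j/m - 0.5), for 1 <= i, j <= m."""
    lefts = [a * theta ** (i / m - 0.5) for i in range(1, m + 1)]
    rights = [b * theta ** (j / m - 0.5) for j in range(1, m + 1)]
    return [(al, ar) for al in lefts for ar in rights]

def pick_best(candidates, evaluate, recall_threshold):
    """Among candidates meeting the recall threshold, pick the fastest.
    `evaluate` maps a candidate to a (recall, retrieval_time) pair."""
    scored = [(c, evaluate(c)) for c in candidates]
    feasible = [(c, rt) for c, rt in scored if rt[0] >= recall_threshold]
    if not feasible:
        return None  # caller should restart with smaller a and b
    return min(feasible, key=lambda item: item[1][1])[0]
```

The restart logic (enlarging a and b when recall overshoots the threshold, shrinking them when it falls short) wraps around these two helpers.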
2.3 Applicability Conditions
It is possible to apply the classic VP-tree algorithm only to data sets such that D_{π,R}(d(π, q)) > 0
when d(π, q) ≠ R. In a relaxed version of this applicability condition, we require that
D_{π,R}(d(π, q)) > 0 for almost all queries and a large subset of data points. More formally:
Property 1. For any pivot π, probability β, and distance x ≠ R, there exists a radius r > 0
such that, if two randomly selected points q (a potential query) and u (a potential nearest neighbor)
satisfy d(π, q) = x and d(u, q) ≤ r, then both u and q belong to the same partition (defined by π
and R) with a probability at least β.
Property 1, which is true for all metric spaces due to the triangle inequality, holds in the case of
the KL-divergence and data points u sampled randomly and uniformly from the simplex {x : x_i ≥ 0, Σ_i x_i = 1}. The proof, which is given in Section 1 of the supplemental materials, can be trivially
extended to other non-negative distance functions d(x, y) ≥ 0 (e.g., to the Itakura-Saito distance)
that satisfy (additional compactness requirements may be required): (1) d(x, y) = 0 ⇔ x = y; (2)
applicable to a wide class of non-metric spaces.
4
Table 1: Description of the data sets

Name           d(x, y)      Data set size  Dimensionality              Source
Colors         L2           1.1 x 10^5     112                         Metric Space Library
RCV-i          KL-div, L2   0.5 x 10^6     i ∈ {8, 16, 32, 128, 256}   Cayton [6]
SIFT-signat.   KL-div, L2   1 x 10^4       1111                        Cayton [6]
Uniform        L2           0.5 x 10^6     64                          Sampled from U^64[0, 1]

3 Experiments
We run experiments on a Linux server equipped with Intel Core i7 2600 (3.40 GHz, 8192 KB of
L3 CPU cache) and 16 GB of DDR3 RAM (transfer rate is 20GB/sec). The software (including
scripts that can be used to reproduce our results) is available online, as a part of the Non-Metric
Space Library2 [5]. The code was written in C++, compiled using GNU C++ 4.7 (optimization
flag -Ofast), and executed in a single thread. SIMD instructions were enabled using the flags -msse2
-msse4.1 -mssse3.
All distance and rank correlation functions are highly optimized and employ SIMD instructions.
Vector elements were single-precision numbers. For the KL-divergence, though, we also implemented a slower version, which computes logarithms on-line, i.e., for each distance computation.
The faster version computes logarithms of vector elements off-line, i.e., during indexing, and stores them
with the vectors. Additionally, we need to compute logarithms of query vector elements, but this is
done only once per query. The optimized implementation is about 30x times faster than the slower
one.
The data sets are described in Table 1. Each data set is randomly divided into two parts. The
smaller part (containing 1,000 elements) is used as queries, while the larger part is indexed. This
procedure is repeated 5 times (for each data set) and results are aggregated using a classic fixed-effect model [15]. Improvement in efficiency due to indexing is measured as a reduction in retrieval
time compared to a sequential, i.e., exhaustive, search. The effectiveness was measured using a
simple rank error metric proposed by Cayton [6]. It is equal to the number of NN-points closer to
the query than the nearest point returned by the search method. This metric is appropriate mostly for
1-NN queries. We present results only for left queries, but we also verified that in the case of right
queries the VP-tree provides similar effectiveness/efficiency trade-offs. We ran benchmarks for L2 ,
the KL-divergence,3 and the Itakura-Saito distance. Implemented methods included:
- The novel search algorithm based on the VP-tree and a piece-wise linear approximation for
  D_{π,R}(x) as described in Section 2.2. The parameters of the grid search algorithm were:
  m = 7 and θ = 8.
- The permutation method with incremental sorting [14]. The near-optimal performance was
  obtained by using 16 pivots.
- The permutation prefix index, where permutation profiles are stored in a prefix tree of
  limited depth [13]. We used 16 pivots and the maximal prefix length 4 (again selected for
  best performance).
- The bbtree [6], which is designed for Bregman divergences, and, thus, was not used with L2.
- The multi-probe LSH, which is designed to work only for L2. The implementation employs
  the LSHKit,4 which is embedded in the Non-Metric Space Library. The best-performing
  configuration that we could find used 10 probes and 50 hash tables. The remaining parameters were selected automatically using the cost model proposed by Dong et al. [11].
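All of these methods are scored with the rank-error metric described above, which can be computed directly from the distances to the query (a minimal sketch):

```python
def rank_error(returned_dist, all_dists):
    """Cayton-style rank error: the number of data set points strictly
    closer to the query than the point a method returned (0 for an
    exact nearest neighbor)."""
    return sum(1 for d in all_dists if d < returned_dist)
```

A rank error of 2, for instance, means the method returned the third nearest neighbor instead of the first.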
2 https://github.com/searchivarius/NonMetricSpaceLib
3 In the case of SIFT signatures, we use generalized KL-divergence (similarly to Cayton).
4 Downloaded from http://lshkit.sourceforge.net/
[Figure 3 here: six panels, Uniform (L2), RCV-16 (L2), RCV-128 (L2), Colors (L2), RCV-256 (L2), and SIFT signatures (L2), each plotting improvement in efficiency (log. scale) against the number of points closer (log. scale) for the multi-probe LSH, the pref. index, the vp-tree, and the permutation method.]
Figure 3: Performance of NN-search for L2
[Figure 4 here: six panels, RCV-16, RCV-256, and SIFT signatures under the KL-divergence and the Itakura-Saito distance, each plotting improvement in efficiency (log. scale) against the number of points closer (log. scale) for the pref. index, the bbtree, the vp-tree, and the permutation method.]
Figure 4: Performance of NN-search for the KL-divergence and Itakura-Saito distance
For the bbtree and the VP-tree, vectors in the same bucket were stored in contiguous chunks of memory (allowing for about 1.5-2x reduction in retrieval times). It is typically more efficient to search
elements of a small bucket sequentially, rather than using an index. A near-optimal performance
was obtained with 50 elements in a bucket. The same optimization approach was also used for both
permutation methods.
Several parameters were manually selected to achieve various effectiveness/efficiency trade-offs.
They included: the minimal number/percentage of candidates in permutation methods, the desired
recall in the multi-probe LSH and in the VP-tree, as well as the maximum number of processed
buckets in the bbtree.

Table 2: Improvement in efficiency and retrieval time (ms) for the bbtree without early termination

                     RCV-16       RCV-32       RCV-128      RCV-256       SIFT sign.
                     impr.  time  impr.  time  impr.  time  impr.  time   impr.  time
Slow KL-divergence   15.7   8     6.7    36    1.6    613   1.1    1700   0.9    164
Fast KL-divergence   4.6    2.5   1.9    9.6   0.5    108   0.4    274    0.4    18
The results for L2 are given in Fig. 3. Even though a representational dimensionality of the Uniform
data set is only 64, it has the highest intrinsic dimensionality among all sets in Table 1 (according to
the definition of Chávez et al. [10]). Thus, for the Uniform data set, no method achieved more than
a 10x speedup over sequential searching without substantial quality degradation. For instance, for
the VP-tree, a 160x speedup was only possible when a retrieved object was a 40-th nearest neighbor
(on average) instead of the first one. The multi-probe LSH can be twice as fast as the VP-tree at the
expense of having a 4.7x larger index. All the remaining data sets have low or moderate intrinsic
dimensionality (smaller than eight). For example, the SIFT signatures have the representational
dimensionality of 1111, but the intrinsic dimensionality is only four. For data sets with low and
moderate intrinsic dimensionality, the VP-tree is faster than the other methods most of the time. For
the data sets Colors and RCV-16 there is a two orders of magnitude difference.
The results for the KL-divergence and Itakura-Saito distance are summarized in Fig. 4. The bbtree is never substantially faster than the VP-tree, while being up to an order of magnitude slower
for RCV-16 and RCV-256 in the case of Itakura-Saito distance. Similar to results in L2 , in most
cases, the VP-tree is at least as fast as other methods. Yet, for the SIFT signatures data set and the
Itakura-Saito distance, permutation methods can be twice as fast.
Additional analysis has shown that the VP-tree is a good rank-approximation method, but it is not
necessarily the best approach in terms of recall. When the VP-tree misses the nearest neighbor, it
often returns the second nearest or the third nearest neighbor instead. However, when other examined methods miss the nearest neighbor, they frequently return elements that are far from the true
result. For example, the multi-probe LSH may return a true nearest neighbor 50% of the time, and
50% of the time it would return the 100-th nearest neighbor. This observation about the LSH is in
line with previous findings [26].
Finally, we measured improvement in efficiency (over exhaustive search) for the bbtree, where the
early termination algorithm was disabled. This was done using both the slow and the fast implementation of the KL-divergence. The results are given in Table 2. Improvements in efficiency for the case
of the slower KL-divergence (reported in the first row) are consistent with those reported by Cayton
[6]. The second row shows improvements in efficiency for the case of the faster KL-divergence and
these improvements are substantially smaller than those reported in the first row, despite the fact
that using the faster KL-divergence greatly reduces retrieval times. The reason is that the pruning
algorithm of the bbtree is quite expensive. It involves computations of logarithms/exponents for
coordinates of unknown vectors, and, thus, these computations cannot be deferred to index time.
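The gap between the slow and the fast KL-divergence implementations can be illustrated with a minimal sketch (illustrative code, not the library's implementation): since KL(x‖y) = Σᵢ xᵢ log xᵢ − Σᵢ xᵢ log yᵢ, the element-wise logarithms of an indexed vector can be computed once at index time, reducing the query-time cost to a dot product.

```python
import math

def kl_slow(x, y):
    # Naive KL-divergence: computes a logarithm for every coordinate at query time.
    return sum(xi * math.log(xi / yi) for xi, yi in zip(x, y))

def index_entry(y):
    # At index time, store the vector together with its element-wise logarithms.
    return (y, [math.log(yi) for yi in y])

def kl_fast(x, log_x_dot_x, entry):
    # KL(x || y) = sum_i x_i log x_i - sum_i x_i log y_i.
    # The first term depends only on the query and is computed once per query;
    # the second term is a plain dot product with the precomputed log y.
    y, log_y = entry
    return log_x_dot_x - sum(xi * lyi for xi, lyi in zip(x, log_y))

x = [0.2, 0.3, 0.5]
y = [0.1, 0.4, 0.5]
log_x_dot_x = sum(xi * math.log(xi) for xi in x)
entry = index_entry(y)
assert abs(kl_slow(x, y) - kl_fast(x, log_x_dot_x, entry)) < 1e-12
```

Note that the early-termination pruning of the bbtree still needs per-node logarithm/exponent computations on unknown intermediate vectors, which is why (as discussed above) that part cannot be deferred to index time.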
4 Discussion and conclusions
We evaluated two simple yet effective learning-to-prune methods and showed that the resulting approach was competitive against state-of-the-art methods in both metric and non-metric spaces. In
most cases, this method provided better trade-offs between rank approximation quality and retrieval
speed. For datasets with low or moderate intrinsic dimensionality, the VP-tree could be one to two orders of magnitude faster than other methods (for the same rank approximation quality). We discussed
applicability of our method (a VP-tree with the learned pruner) and proved a theorem supporting the
point of view that our method can be applicable to a class of non-metric distances, which includes
the KL-divergence. We also showed that a simple trick of pre-computing logarithms at index time
substantially improved performance of existing methods (e.g., bbtree) for the studied distances.
It should be possible to improve over basic learning-to-prune methods (employed in this work)
using: (1) a better pivot-selection strategy [31]; (2) a more sophisticated sampling strategy; (3) a
more accurate (non-linear) approximation for the decision function D_{π,R}(x) (see Section 2.1).
5 Acknowledgements
We thank Lawrence Cayton for providing the data sets, the bbtree code, and answering our questions;
Anna Belova for checking the proof of Property 1 (supplemental materials) and editing the paper.
References
[1] G. Amato, F. Rabitti, P. Savino, and P. Zezula. Region proximity in metric spaces and its use for approximate similarity search. ACM Trans. Inf. Syst., 21(2):192–227, Apr. 2003.
[2] G. Amato and P. Savino. Approximate similarity search in metric spaces using inverted files. In Proceedings of the 3rd international conference on Scalable information systems, InfoScale '08, pages 28:1–28:10, ICST, Brussels, Belgium, 2008. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).
[3] V. Athitsos, J. Alon, S. Sclaroff, and G. Kollios. BoostMap: A method for efficient approximate similarity rankings. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II-268–II-275, June–July 2004.
[4] J. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, 1975.
[5] L. Boytsov and B. Naidan. Engineering efficient and effective Non-Metric Space Library. In N. Brisaboa, O. Pedreira, and P. Zezula, editors, Similarity Search and Applications, volume 8199 of Lecture Notes in Computer Science, pages 280–293. Springer Berlin Heidelberg, 2013.
[6] L. Cayton. Fast nearest neighbor retrieval for Bregman divergences. In Proceedings of the 25th international conference on Machine learning, ICML '08, pages 112–119, New York, NY, USA, 2008. ACM.
[7] L. Cayton and S. Dasgupta. A learning framework for nearest neighbor search. Advances in Neural Information Processing Systems, 20, 2007.
[8] E. Chávez, K. Figueroa, and G. Navarro. Effective proximity retrieval by ordering permutations. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(9):1647–1658, Sept. 2008.
[9] E. Chávez and G. Navarro. Probabilistic proximity search: Fighting the curse of dimensionality in metric spaces. Information Processing Letters, 85(1):39–46, 2003.
[10] E. Chávez, G. Navarro, R. Baeza-Yates, and J. L. Marroquin. Searching in metric spaces. ACM Computing Surveys, 33(3):273–321, 2001.
[11] W. Dong, Z. Wang, W. Josephson, M. Charikar, and K. Li. Modeling LSH for performance tuning. In Proceedings of the 17th ACM conference on Information and knowledge management, CIKM '08, pages 669–678, New York, NY, USA, 2008. ACM.
[12] O. Edsberg and M. L. Hetland. Indexing inexact proximity search with distance regression in pivot space. In Proceedings of the Third International Conference on SImilarity Search and APplications, SISAP '10, pages 51–58, New York, NY, USA, 2010. ACM.
[13] A. Esuli. Use of permutation prefixes for efficient and scalable approximate similarity search. Inf. Process. Manage., 48(5):889–902, Sept. 2012.
[14] E. Gonzalez, K. Figueroa, and G. Navarro. Effective proximity retrieval by ordering permutations. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(9):1647–1658, 2008.
[15] L. V. Hedges and J. L. Vevea. Fixed- and random-effects models in meta-analysis. Psychological Methods, 3(4):486–504, 1998.
[16] G. Hjaltason and H. Samet. Properties of embedding methods for similarity searching in metric spaces. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(5):530–549, 2003.
[17] P. Indyk. Nearest neighbors in high-dimensional spaces. In J. E. Goodman and J. O'Rourke, editors, Handbook of discrete and computational geometry, pages 877–892. Chapman and Hall/CRC, 2004.
[18] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the thirtieth annual ACM symposium on Theory of computing, STOC '98, pages 604–613, New York, NY, USA, 1998. ACM.
[19] D. Jacobs, D. Weinshall, and Y. Gdalyahu. Classification with nonmetric distances: Image retrieval and class representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(6):583–600, 2000.
[20] E. Kushilevitz, R. Ostrovsky, and Y. Rabani. Efficient search for approximate nearest neighbor in high dimensional spaces. In Proceedings of the 30th annual ACM symposium on Theory of computing, STOC '98, pages 614–623, New York, NY, USA, 1998. ACM.
[21] H. Lejsek, F. Ásmundsson, B. Jónsson, and L. Amsaleg. NV-Tree: An efficient disk-based index for approximate search in very large high-dimensional collections. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):869–883, May 2009.
[22] Q. Lv, W. Josephson, Z. Wang, M. Charikar, and K. Li. Multi-probe LSH: efficient indexing for high-dimensional similarity search. In Proceedings of the 33rd international conference on Very large data bases, VLDB '07, pages 950–961. VLDB Endowment, 2007.
[23] Y. Mu and S. Yan. Non-metric locality-sensitive hashing. In AAAI, 2010.
[24] T. Murakami, K. Takahashi, S. Serita, and Y. Fujii. Versatile probability-based indexing for approximate similarity search. In Proceedings of the Fourth International Conference on SImilarity Search and APplications, SISAP '11, pages 51–58, New York, NY, USA, 2011. ACM.
[25] V. Pestov. Indexability, concentration, and VC theory. Journal of Discrete Algorithms, 13(0):2–18, 2012. Best Papers from the 3rd International Conference on Similarity Search and Applications (SISAP 2010).
[26] P. Ram, D. Lee, H. Ouyang, and A. G. Gray. Rank-approximate nearest neighbor search: Retaining meaning and speed in high dimensions. In Advances in Neural Information Processing Systems, pages 1536–1544, 2009.
[27] H. Samet. Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann Publishers Inc., 2005.
[28] J. Uhlmann. Satisfying general proximity similarity queries with metric trees. Information Processing Letters, 40:175–179, 1991.
[29] J. Vermorel. Near neighbor search in metric and nonmetric space, 2005. http://hal.archives-ouvertes.fr/docs/00/03/04/85/PDF/densitree.pdf, last accessed on Nov 1st 2012.
[30] R. Weber, H. J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In Proceedings of the 24th International Conference on Very Large Data Bases, pages 194–205. Morgan Kaufmann, August 1998.
[31] P. N. Yianilos. Data structures and algorithms for nearest neighbor search in general metric spaces. In Proceedings of the fourth annual ACM-SIAM Symposium on Discrete algorithms, SODA '93, pages 311–321, Philadelphia, PA, USA, 1993. Society for Industrial and Applied Mathematics.
[32] P. Zezula, G. Amato, V. Dohnal, and M. Batko. Similarity Search: The Metric Space Approach (Advances in Database Systems). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005.
[33] P. Zezula, P. Savino, G. Amato, and F. Rabitti. Approximate similarity retrieval with M-trees. The VLDB Journal, 7(4):275–293, Dec. 1998.
[34] Z. Zhang, B. C. Ooi, S. Parthasarathy, and A. K. H. Tung. Similarity search on Bregman divergence: towards non-metric indexing. Proc. VLDB Endow., 2(1):13–24, Aug. 2009.
A Deep Architecture for Matching Short Texts
Hang Li
Noah's Ark Lab
Huawei Technologies Co. Ltd.
Sha Tin, Hong Kong
[email protected]
Zhengdong Lu
Noah's Ark Lab
Huawei Technologies Co. Ltd.
Sha Tin, Hong Kong
[email protected]
Abstract
Many machine learning problems can be interpreted as learning for matching two
types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner
product in a certain feature space, while the modeling effort focuses on mapping of
objects from the original space to the feature space. This schema, although proven
successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching
relations between two objects from heterogeneous domains. More specifically, we
apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture
naturally combines the localness and hierarchy intrinsic to the natural language
problems, and therefore greatly improves upon the state-of-the-art models.
1 Introduction
Many machine learning problems can be interpreted as matching two objects, e.g., images and
captions in automatic captioning [11, 14], users and products in recommender systems, queries
and retrieved documents in information retrieval. It is different from the usual notion of similarity
since it is usually defined between objects from two different domains (e.g., texts and images), and
it is usually associated with a particular purpose. The degree of matching is typically modeled as an
inner-product of two representing feature vectors for objects x and y in a Hilbert space H,
match(x, y) = ⟨Φ_Y(x), Φ_X(y)⟩_H    (1)
while the modeling effort boils down to finding the mapping from the original inputs to the feature
vectors. Linear models of this direction include the Partial Least Square (PLS) [19, 20], Canonical
Correlation Analysis (CCA) [7], and their large margin variants [1]. In addition, there is also limited
effort on finding the nonlinear mappings for that [3, 18].
In this paper, we focus on a rather difficult task of matching a given short text and candidate responses. Examples include retrieving answers for a given question and automatically commenting
on a given tweet. This inner-product based schema, although proven effective on tasks like information retrieval, is often incapable of modeling the matching between complicated objects. First,
representing structured objects like text as compact and meaningful vectors can be difficult; Second,
inner-product cannot sufficiently take into account the complicated interaction between components
within the objects, often in a rather nonlinear manner.
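The inner-product schema of Eq. (1) can be sketched in a few lines, with toy linear maps standing in for Φ_X and Φ_Y (the matrices below are illustrative placeholders, not part of the paper):

```python
def match_score(x, y, phi_x, phi_y):
    """Inner-product matching: map both objects into a shared feature
    space and take the dot product of their images (cf. Eq. (1))."""
    fx = phi_x(x)
    fy = phi_y(y)
    return sum(a * b for a, b in zip(fx, fy))

def apply_matrix(W, v):
    # Plain matrix-vector product, used here as a toy linear feature map.
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

# Illustrative maps from 2-d inputs into a shared 3-d feature space.
Wx = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Wy = [[0.5, 0.5], [1.0, 0.0], [0.0, 1.0]]

score = match_score([1.0, 2.0], [2.0, 1.0],
                    lambda x: apply_matrix(Wx, x),
                    lambda y: apply_matrix(Wy, y))
assert abs(score - 8.5) < 1e-12
```

The whole modeling effort in this schema sits inside the two maps; the final combination is always a fixed dot product, which is exactly the limitation the rest of the paper addresses.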
In this paper, we attack the problem of matching short texts from a brand new angle. Instead of
representing the text objects in each domain as semantically meaningful vectors, we directly model
object-object interactions with a deep architecture. This new architecture allows us to explicitly
capture the natural nonlinearity and the hierarchical structure in matching two structured objects.
2 Model Overview
We start with the bilinear model. Assume we can represent objects in domain X and Y with vectors x ∈ R^{D_x} and y ∈ R^{D_y}. The bilinear matching model decides the score for any pair (x, y) as

match(x, y) = x⊤Ay = Σ_{m=1}^{D_x} Σ_{n=1}^{D_y} A_{nm} x_m y_n,    (2)

with a pre-determined A. From a different angle, each element product x_n y_m in the above sum can be viewed as a micro and local decision about the matching level of x and y. The outer-product matrix M = xy⊤ specifies the space of element-wise interaction between objects x and y. The final decision is made considering all the local decisions, while in the bilinear case match(x, y) = Σ_{nm} A_{nm} M_{nm}, it simply sums all the local decisions with a weight specified by A, as illustrated in Figure 1.

Figure 1: Architecture for linear matching.
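The equivalence between the bilinear form of Eq. (2) and the weighted sum over the outer-product matrix M can be checked directly (toy values, illustrative only; A is stored with the y-index first to match the A_{nm} x_m y_n convention above):

```python
def bilinear_match(x, y, A):
    # match(x, y) = x^T A y = sum over m, n of A[n][m] * x[m] * y[n]  (Eq. (2)).
    return sum(A[n][m] * x[m] * y[n]
               for m in range(len(x)) for n in range(len(y)))

def outer_product_view(x, y, A):
    # Equivalent view: every entry of M = x y^T is a "micro decision",
    # and A just assigns each of them a fixed weight.
    M = [[xm * yn for xm in x] for yn in y]
    return sum(A[n][m] * M[n][m]
               for m in range(len(x)) for n in range(len(y)))

x, y = [1.0, 2.0], [3.0, 0.5, 1.0]
A = [[0.1, 0.0],
     [0.2, 0.3],
     [0.0, 0.1]]
assert abs(bilinear_match(x, y, A) - outer_product_view(x, y, A)) < 1e-9
assert abs(bilinear_match(x, y, A) - 0.9) < 1e-9
```

The deep architecture developed below keeps the same space of element-wise interactions but replaces this single weighted sum with a hierarchy of local, nonlinear decisions.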
2.1 From Linear to Deep
This simple summarization strategy can be extended to a deep architecture to explore the nonlinearity and hierarchy in matching short texts. Unlike tasks like text classification, we need to work
on a pair of text objects to be matched, which we refer to as parallel texts, borrowed from machine
translation. This new architecture is mainly based on the following two intuitions:
Localness: there is a salient local structure in the semantic space of parallel text objects to be
matched, which can be roughly captured via the co-occurrence pattern of words across the objects.
This localness however should not prevent two ?distant? components from correlating with each
other on a higher level, hence calls for the hierarchical characteristic of our model;
Hierarchy: the decision making for matching has different levels of abstraction. The local decisions, capturing the interaction between semantically close words, will be combined later layer-bylayer to form the final and global decision on matching.
2.2 Localness
The localness of the text matching problem can be best described using an analogy with the patches in images, as illustrated in Figure 2.

Figure 2: Image patches vs. parallel-text patches ("image patch" vs. "text patch").

Loosely speaking, a patch for parallel texts defines the set of interacting pairs of words from the two text objects. Like the coordinate of an image patch, we can use (Ω_{x,p}, Ω_{y,p}) to specify the range of the patch, with Ω_{x,p} and Ω_{y,p} each specifying a subset of terms in X and Y respectively. Like the patches of images, the patches defined here are meant to capture the segments of rich inherent structure. But unlike the naturally formed rectangular patches of images, the
patches defined here do not come with a pre-given spatial continuity. It is so since in texts, the
nearness of words are not naturally given as location of pixels in images, but instead needs to be
discovered from the co-occurrence patterns of the matched texts. As shown later in Section 3, we
actually do that with a method resembling bilingual topic modeling, which nicely captures the cooccurrence of the words within-domain and cross-domain simultaneously. The basic intuitions here
are, 1) when the words co-occur frequently across the domains (e.g., fever?antibiotics), they
are likely to have strong interaction in determining the matching score, and 2) when the words cooccur frequently in the same domain (e.g., {Hawaii,vacation}), they are likely to collaborate in
making the matching decision. For example, modeling the matching between the word ?Hawaii?
in question (likely to be a travel-related question) and the word ?RAM? in answer (likely an answer
to a computer-related question) is probably useless, judging from their co-occurrence pattern in
Question-Answer pairs. In other words, our architecture models only "local" pairwise relations on a low level with patches, while describing the interaction between semantically distant terms on higher levels in the hierarchy.
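The cross-domain co-occurrence intuition behind patch discovery can be sketched as a simple counting procedure over paired texts (toy data below; the paper actually uses bilingual topic modeling rather than raw counts):

```python
from collections import Counter

def cross_cooccurrence(pairs):
    """Count how often a word from domain X appears in a text whose paired
    text from domain Y contains a given word. Frequently co-occurring
    cross-domain pairs (e.g. fever / antibiotics) are candidates for
    belonging to the same local patch."""
    counts = Counter()
    for x_text, y_text in pairs:
        for xw in set(x_text.split()):
            for yw in set(y_text.split()):
                counts[(xw, yw)] += 1
    return counts

pairs = [("fever cough", "take antibiotics rest"),
         ("fever headache", "antibiotics and fluids"),
         ("hawaii vacation", "book a hotel early")]
counts = cross_cooccurrence(pairs)
assert counts[("fever", "antibiotics")] == 2
assert counts[("fever", "hotel")] == 0
```

Pairs like ("Hawaii", "RAM") would receive near-zero counts in such statistics, which is exactly why the architecture declines to model their interaction at the lowest level.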
2.3 Hierarchy
Once the local decisions on patches are made (most of them are NULL for a particular short text pair), they will be sent to the next layer, where the lower-level decisions are further combined to form more composite decisions, which in turn will be sent to still higher levels. This process runs until it reaches the final decision. Figure 3 gives an illustrative example on hierarchical decision making. As it shows, the local decision on patch "SIGHTSEEING IN PARIS" and "SIGHTSEEING IN BERLIN" can be combined to form a higher level decision on patch for "SIGHTSEEING", which in turn can be combined with decisions on patches like "HOTEL" and "TRANSPORTATION" to form an even higher level decision on "TRAVEL". Note that one low-level topic does not exclusively belong to a higher-level one. For example, the "WEATHER" patch may belong to higher level patches "TRAVEL" and "AGRICULTURE" at the same time.

Figure 3: An example of decision hierarchy.

Quite intuitively, this decision composition mechanism is also local and varies with the "locations". For example, when combining "SIGHTSEEING IN PARIS" and "SIGHTSEEING IN BERLIN", it is more like an OR logic since it only takes one of them to be positive. A more complicated strategy is often needed in, for example, a decision on "TRAVELING", which often takes more than one element, like "SIGHTSEEING", "HOTEL", "TRANSPORTATION", or "WEATHER", but not necessarily all of them. The
particular strategy taken by a local decision composition unit is fully encoded in the weights of the corresponding neuron through
s_p(x, y) = f(w_p⊤ φ_p(x, y)),    (3)
where f is the activation function. As stated in [12], a simple nonlinear function (such as sigmoid) with
proper weights is capable of realizing basic logics such as AND and OR. Here we decide the hierarchical architecture of the decision making, but leave the exact mechanism for decision combination
(encoded in the weights) to the learning algorithm later.
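As an illustration of how a single sigmoid unit of the form in Eq. (3) can realize soft AND/OR logic over lower-level decisions, consider the following sketch (the weights and the explicit bias term are illustrative hand-picked values, not learned ones):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def local_decision(phi, w, b):
    # Eq. (3): s_p(x, y) = f(w_p^T phi_p(x, y)); a sigmoid unit with
    # suitable weights can approximate logical AND / OR over its inputs.
    return sigmoid(sum(wi * pi for wi, pi in zip(w, phi)) + b)

# Soft OR over two lower-level decisions (e.g. "sightseeing in Paris",
# "sightseeing in Berlin"): large positive weights, threshold below one input.
w_or, b_or = [10.0, 10.0], -5.0
assert local_decision([1.0, 0.0], w_or, b_or) > 0.9

# Soft AND: threshold placed between one and two active inputs.
w_and, b_and = [10.0, 10.0], -15.0
assert local_decision([1.0, 0.0], w_and, b_and) < 0.1
assert local_decision([1.0, 1.0], w_and, b_and) > 0.9
```

In the full model these combination strategies are not hand-coded; they emerge from training the weights of each composition unit.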
3 The Construction of Deep Architecture
The process for constructing the deep architecture for matching consists of two steps. First, we
define parallel text patches with different resolutions using bilingual topic models. Second, we
construct a layered directed acyclic graph (DAG) describing the hierarchy of the topics, based on
which we further construct the topology of the deep neural network.
3.1 Topic Modeling for Parallel Texts
This step is to discover parallel text segments for meaningful co-occurrence patterns of words in
both domains. Although more sophisticated methods may exist for capturing this relationship, we
take an approach similar to the multi-lingual pLSI proposed in [10], and simply put the words
from parallel texts together to a joint document, while using a different virtual vocabulary for each
domain to avoid any mixing up. For example, the word hotel appearing in domain X is treated as
a different word as hotel in domain Y. For modeling tool, we use latent Dirichlet allocation (LDA)
with Gibbs sampling [2] on all the training data. Notice that by using topic modeling, we allow the
overlapping sets of words, which is advantageous over non-overlapping clustering of words, since
we may expect some words (e.g., hotel and price) to appear in multiple segments. Table 1 gives
two example parallel-topics learned from a traveling-related Question-Answer corpus (see Section
5 for more details). As we can see intuitively, in the same topic, a word in domain X co-occurs
frequently not only with words in the same domain, but also with those in domain Y. We fit the
same corpus with L topic models with decreasing resolutions¹, with the series of learned topic sets denoted as H = {T_1, …, T_ℓ, …, T_L}, with ℓ indexing the topic resolution.

¹ Topic resolution is controlled mainly by the number of topics, i.e., a topic model with 100 topics is considered to be of lower resolution (or more general) than the one with 500 topics.
Topic Label: SPECIAL PRODUCT
  Question: local delicacy, special product, snack food, quality, tasty, …
  Answer: tofu, speciality, aroma, duck, sweet, game, cuisine, sticky rice, dumpling, mushroom, traditional, …
Topic Label: TRANSPORTATION
  Question: route, arrangement, location, arrive, train station, fare, …
  Answer: distance, safety, spending, gateway, air ticket, pass, traffic control, highway, metropolis, tunnel, …

Table 1: Examples of parallel topics. Originally in Chinese, translated into English by the authors.
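The joint-document construction described in Section 3.1 can be sketched as follows: each parallel pair becomes one document, and the two vocabularies are kept disjoint by prefixing every word with its domain (the prefix scheme here is an illustrative choice):

```python
def joint_document(x_text, y_text):
    """Merge a parallel text pair into one 'document' for topic modeling,
    with a separate virtual vocabulary per domain: 'hotel' in a question
    and 'hotel' in an answer become distinct tokens, so the topic model
    can capture within-domain and cross-domain co-occurrence at once."""
    return (["X_" + w for w in x_text.split()] +
            ["Y_" + w for w in y_text.split()])

doc = joint_document("cheap hotel in paris", "hotel near the station")
assert "X_hotel" in doc and "Y_hotel" in doc
assert "hotel" not in doc
```

Running a standard LDA implementation over such documents then yields topics whose word lists split naturally into an X-side and a Y-side, as in Table 1.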
3.2 Getting Matching Architecture
With the set of topics H, the architecture of the deep matching model can then be obtained in the
following three steps. First, we trim the words (in both domains X and Y) with the low probability
for each topic in T_ℓ ∈ H, and the remaining words in each topic specify a patch p. With a slight abuse of symbols, we still use H to denote the patch sets with different resolutions. Second, based on the patches specified in H, we construct a layered DAG G by assigning each patch with resolution ℓ to a number of patches with resolution ℓ−1 based on the word overlapping between patches, as illustrated in Figure 4 (left panel). If a patch p in layer ℓ−1 is assigned to patch p′ in layer ℓ, we denote this relation as p → p′². Third, based on G, we can construct the architecture of the patch-induced layers of the neural network. More specifically, each patch p in layer ℓ will be transformed into K_ℓ neurons in the (ℓ−1)-th hidden layer in the neural network, and the K_ℓ neurons are connected to the neurons in the ℓ-th layer corresponding to patch p′ iff p → p′. In other words, we determine the sparsity-pattern of the weights, but leave the values of the weights to the later learning phase. Using the image analogy, the neurons corresponding to patch p are referred to as filters. Figure 4 illustrates the process of transforming patches in layer ℓ−1 (specific topics) and layer ℓ (general topics) into two layers in neural network with K_ℓ = 2.

Figure 4: An illustration of constructing the deep architecture from hierarchical patches (left panel: patches; right panel: neural network).
The overall structure is illustrated in Figure 5. The input layer is a two-dimensional interaction space, which connects to the first patch-induced layer p-layerI followed by the second patch-induced layer p-layerII. The connections to p-layerI and p-layerII have pre-specified sparsity patterns. Following p-layerII is a committee layer (c-layer), with full connections from p-layerII. With an input (x, y), we first get the local matching decisions on p-layerI, associated with patches in the interaction space. Those local decisions will be sent to the corresponding neurons in p-layerII to get the first round of fusion. The outputs of p-layerII are then sent to c-layer for further decision composition. Finally the logistic regression unit in the output layer summarizes the decisions on c-layer to get the final matching score s(x, y). This architecture is referred to as DeepMatch in the remainder of the paper.
Figure 5: An illustration of the deep architecture for matching decisions.
² In the assignment, we make sure each patch in layer ℓ is assigned to at least m_ℓ patches in layer ℓ−1.
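A minimal sketch of how the patch DAG could induce the sparse connectivity between two patch-induced layers (the patch names, assignment map, and layout below are illustrative, not from the paper):

```python
def connectivity_mask(child_patches, parent_patches, assign, k):
    """Build the 0/1 sparsity pattern between two patch-induced layers:
    the k filters of child patch p connect to the k filters of parent
    patch q iff p -> q in the patch DAG. `assign` maps each child patch
    to the set of its parents. The weight values themselves are learned
    later; only this mask is fixed at construction time."""
    n_in = len(child_patches) * k
    n_out = len(parent_patches) * k
    mask = [[0] * n_in for _ in range(n_out)]
    for i, p in enumerate(child_patches):
        for q in assign[p]:
            j = parent_patches.index(q)
            for a in range(k):
                for b in range(k):
                    mask[j * k + a][i * k + b] = 1
    return mask

children = ["paris", "berlin", "hotel"]
parents = ["sightseeing", "travel"]
assign = {"paris": {"sightseeing"}, "berlin": {"sightseeing"}, "hotel": {"travel"}}
mask = connectivity_mask(children, parents, assign, k=2)
# "paris" filters feed only the "sightseeing" filters, never "travel".
assert mask[0][0] == 1 and mask[2][0] == 0
```

In practice such a mask would simply be multiplied element-wise into the weight matrix (and its gradient) during training, which keeps the turned-off connections at zero.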
3.3 Sparsity
The final constructed neural network has two types of sparsity. The first type of sparsity is enforced
through architecture, since most of the connections between neurons in adjacent layers are turned
off in construction. In our experiments, only about 2% of parameters are allowed to be nonzero.
The second type of sparsity is from the characteristics of the texts. For most object pairs in our
experiment, only a small percentage of neurons in the lower layers are active (see Section 5 for more
details). This is mainly due to two factors, 1) the input parallel texts are very short (usually < 100
words), and 2) the patches are well designed to give a compact and sparse representation of each of
the texts, as described in Section 3.1.
To understand the second type of sparsity, let us start with the following definition:
Definition 3.1. An input pair (x, y) overlaps with patch p, iff x ∩ px ≠ ∅ and y ∩ py ≠ ∅, where px and py are respectively the word indices of patch p in domain X and Y.

We also define the following indicator function overlap((x, y), p) := ‖px ⊙ x‖₀ ∧ ‖py ⊙ y‖₀. The
proposed architecture only allows neurons associated with patches overlapped with the input to have
nonzero output. More specifically, the output of neurons associated with patch p is
s_p(x, y) = a_p(x, y) · overlap((x, y), p)   (4)

to ensure that s_p(x, y) ≠ 0 only when there is non-empty cross-talking of x and y within patch p, where a_p(x, y) is the activation of the neuron before this rule is enforced. It is not hard to see that,
for any input (x, y), when we track any upwards path of decisions from input to a higher level, there
is no nonzero matching vote until we reach a patch that contains terms from both x and y. This view is
particularly useful in parameter tuning with back-propagation: the supervision signal can only get
down to a patch p when it overlaps with input (x, y). It is easy to show from the definition that, once the supervision signal stops at one patch p, it will not get past p and propagate to p's children, even if those children have other ancestors. This indicates that when using stochastic gradient descent,
the updating of weights usually only involves a very small number of neurons, and therefore can be
very efficient.
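To make the two sparsity mechanisms concrete, the overlap indicator of Definition 3.1 and the gating rule of Equation (4) can be sketched for bag-of-words inputs as follows. This is a minimal sketch with toy vectors; the patch masks and word counts are hypothetical, not the learned patches from the paper.

```python
import numpy as np

def overlap(x, y, px, py):
    """Indicator from Definition 3.1: nonzero only if the input pair
    (x, y) shares at least one word with the patch in each domain.
    x, y are bag-of-words count vectors; px, py are 0/1 patch masks."""
    return int(np.count_nonzero(px * x) > 0 and np.count_nonzero(py * y) > 0)

def gated_output(a_p, x, y, px, py):
    """Equation (4): s_p(x, y) = a_p(x, y) * overlap((x, y), p)."""
    return a_p * overlap(x, y, px, py)

# Tiny vocabulary of 6 words in each domain.
x  = np.array([1, 0, 2, 0, 0, 0])   # query text
y  = np.array([0, 1, 0, 0, 0, 0])   # response text
px = np.array([1, 0, 1, 0, 0, 0])   # patch word mask in domain X
py = np.array([0, 1, 0, 0, 1, 0])   # patch word mask in domain Y

print(gated_output(0.7, x, y, px, py))                # 0.7: patch overlaps both texts
print(gated_output(0.7, x, np.zeros(6), px, py))      # 0.0: no cross-talk, neuron stays inactive
```

Neurons whose patches fail the overlap test output exactly zero, which is why both the forward pass and the back-propagation updates touch only a small fraction of the network for each short-text input.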
3.4 Local Decision Models
In the hidden layers p-layerI, p-layerII, and c-layer, we allow two types of neurons, corresponding to two activation functions: 1) linear f_lin(t) = t, and 2) sigmoid f_sig(t) = (1 + e^{−t})^{−1}. In the first layer, each patch p for (x, y) takes the value of the interaction matrix M_p = x_p y_p^T, and the
k-th local decision on p is given by a_p^{(k)}(x, y) = f_p^{(k)}( Σ_{n,m} A^{(k)}_{p,nm} M_{p,nm} + b_p^{(k)} ), with weight given by A^{(k)} and the activation function f_p^{(k)} ∈ {f_lin, f_sig}. With a low-rank constraint on A^{(k)} to reduce the complexity, we essentially have

a_p^{(k)}(x, y) = f_p^{(k)}( x_p^T L_{x,p}^{(k)} (L_{y,p}^{(k)})^T y_p + b_p^{(k)} ),   k = 1, · · · , K1,   (5)

where L_{x,p}^{(k)} ∈ R^{|px|×Dp}, L_{y,p}^{(k)} ∈ R^{|py|×Dp}, with the latent dimension Dp. As indicated in Figure 5,
the two-dimensional structure is lost after leaving the input layer, while the local structure is kept in
the second patch-induced layer p-layerII. Basically, a neuron in layer p-layerII processes the low-level decisions from layer p-layerI that are assigned to it:
a_p^{(k)}(x, y) = f_p^{(k)}( w_{p,k}^T Φ_p(x, y) ),   k = 1, · · · , K2,   (6)

where Φ_p(x, y) lists all the lower-level decisions assigned to unit p:

Φ_p(x, y) = [· · · , s_{p′}^{(1)}(x, y), s_{p′}^{(2)}(x, y), · · · , s_{p′}^{(K1)}(x, y), · · · ],   ∀ p′ ⊆ p, p′ ∈ T1,
which contains all the decisions on patches in layer p-layerI subsumed by p. The local decision
models in the committee layer c-layer are the same as in p-layerII, except that they are fully
connected to neurons in the previous layer.
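Putting Equations (5) and (6) together, a single patch's local decision stack can be sketched as below. This is a toy sketch with randomly initialized parameters standing in for the trained model; the dimensions (|px| = 5, |py| = 4, Dp = 2, K1 = 3) are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def local_decision(x_p, y_p, L_x, L_y, b, f=sigmoid):
    """Equation (5): a_p(x, y) = f( x_p^T L_x (L_y)^T y_p + b ),
    a bilinear form with low-rank factors L_x, L_y."""
    return f(x_p @ L_x @ L_y.T @ y_p + b)

def fused_decision(phi_p, w, b, f=sigmoid):
    """Equation (6): a p-layerII neuron fuses the lower-level decisions Phi_p."""
    return f(w @ phi_p + b)

px_size, py_size, Dp, K1 = 5, 4, 2, 3
x_p = rng.random(px_size)   # words of x falling inside the patch
y_p = rng.random(py_size)   # words of y falling inside the patch

# K1 filters on the same patch, each with its own low-rank factors.
phi = np.array([
    local_decision(x_p, y_p,
                   rng.standard_normal((px_size, Dp)),
                   rng.standard_normal((py_size, Dp)),
                   0.0)
    for _ in range(K1)
])
s = fused_decision(phi, rng.standard_normal(K1), 0.0)
print(phi.shape, 0.0 < s < 1.0)
```

The low-rank factorization replaces the full |px| × |py| weight matrix A^{(k)} with (|px| + |py|) · Dp parameters per filter, which is what keeps the first layer tractable for large vocabularies.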
4 Learning
We divide the parameters, denoted W, into three sets: 1) the low-rank bilinear model for mapping from input patches to p-layerI, namely L_{x,p}^{(k)}, L_{y,p}^{(k)}, and offset b_p^{(k)} for all p ∈ P and filter index 1 ≤ k ≤ K1, 2) the parameters for connections between patch-induced neurons, i.e., the weights between p-layerI and p-layerII, denoted (w_p^{(k)}, b_p^{(k)}) for associated patch p and filter index 1 ≤ k ≤ K2, and 3) the weights for the committee layer (c-layer) and after, denoted as wc.
We employ a discriminative training strategy with a large margin objective. Suppose that we are given the following triples (x, y+, y−) from the oracle, with x (∈ X) matched with y+ better than with y− (both ∈ Y). We have the following ranking-based loss as objective:

L(W, Dtrn) = Σ_{(xi, yi+, yi−) ∈ Dtrn} eW(xi, yi+, yi−) + R(W),   (7)

where R(W) is the regularization term, and eW(xi, yi+, yi−) is the error for triple (xi, yi+, yi−), given by the following large margin form:

ei = eW(xi, yi+, yi−) = max(0, m + s(xi, yi−) − s(xi, yi+)),

with 0 < m < 1 controlling the margin in training. In the experiments, we use m = 0.1.
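For concreteness, the large margin objective of Equation (7) can be sketched as follows. The score function and triples are hypothetical stand-ins for the trained matching model; m = 0.1 as in the experiments.

```python
def hinge_triple_error(s_pos, s_neg, m=0.1):
    """e_i = max(0, m + s(x, y-) - s(x, y+)) for one triple."""
    return max(0.0, m + s_neg - s_pos)

def ranking_loss(triples, score, m=0.1, reg=0.0):
    """Equation (7): sum of per-triple errors plus a regularization term."""
    return sum(hinge_triple_error(score(x, yp), score(x, yn), m)
               for x, yp, yn in triples) + reg

# Toy scores: the first triple is ranked correctly with margin to spare,
# the second is ranked correctly but inside the margin.
score = lambda x, y: {"good": 0.9, "bad": 0.2, "close": 0.85}[y]
triples = [("q1", "good", "bad"), ("q2", "good", "close")]
print(ranking_loss(triples, score))   # ≈ 0.05: only the second triple is penalized
```

Note that the loss is zero whenever the positive response beats the negative by at least the margin m, so well-separated triples contribute no gradient.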
4.1 Back-Propagation
All three sets of parameters are updated through back-propagation (BP). The updating of the weights in the hidden layers is almost the same as that for a conventional Multi-layer Perceptron (MLP), with two slight differences: 1) we have a different input model and two types of activation function, and 2) we could gain some efficiency by leveraging the sparsity pattern of the neural network, but the advantage diminishes quickly after the first two layers.
This sparsity however greatly reduces the number of parameters for the first two layers, and hence the time on updating them. From Equations (4)-(6), the sub-gradient of L_{x,p}^{(k)} w.r.t. the empirical error e is

∂e/∂L_{x,p}^{(k)} = Σ_i [ (∂ei / ∂s_p^{(k)}(xi, yi+)) (∂s_p^{(k)}(xi, yi+) / ∂pot_p^{(k)}(xi, yi+)) x_{i,p} (y_{i,p}+)^T L_{y,p}^{(k)} · overlap((xi, yi+), p)
  − (∂ei / ∂s_p^{(k)}(xi, yi−)) (∂s_p^{(k)}(xi, yi−) / ∂pot_p^{(k)}(xi, yi−)) x_{i,p} (y_{i,p}−)^T L_{y,p}^{(k)} · overlap((xi, yi−), p) ],   (8)

where i indexes the training instances, and

pot_p^{(k)}(x, y) = x_p^T L_{x,p}^{(k)} (L_{y,p}^{(k)})^T y_p + b_p^{(k)}

stands for the potential value for s_p^{(k)}. The gradient for L_{y,p}^{(k)} is given in a slightly different way. For
the weights between p-layerI and p-layerII, the gradient can also benefit from the sparsity in
activation.
We use stochastic sub-gradient descent with mini-batches [9], each of which consists of 50 randomly generated triples (x, y+, y−), where (x, y+) is the original pair, and y− is a randomly selected
response. With this type of optimization, most of the patches in p-layerI and p-layerII get zero
inputs, and therefore remain inactive by definition during the prediction as well as updating process.
On the tasks we have tried, only about 2% of parameters are allowed to be nonzero for weights
among the patch-induced layers. Moreover, during stochastic gradient descent, only about 5% of
neurons in p-layerI and p-layerII are active on average for each training instance, indicating
that the designed architecture has greatly reduced the essential capacity of the model.
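The mini-batch construction described above can be sketched as follows. The pairs are synthetic placeholders; the batch size of 50 matches the setup in the text.

```python
import random

def make_triples(pairs, batch_size=50, rng=random.Random(0)):
    """Build (x, y+, y-) triples: y- is a randomly selected response
    drawn from the pool, kept only if it differs from the true response,
    as in the stochastic sub-gradient setup."""
    triples = []
    responses = [y for _, y in pairs]
    while len(triples) < batch_size:
        x, y_pos = rng.choice(pairs)
        y_neg = rng.choice(responses)
        if y_neg != y_pos:                 # keep only genuine negatives
            triples.append((x, y_pos, y_neg))
    return triples

pairs = [(f"q{i}", f"a{i}") for i in range(100)]
batch = make_triples(pairs)
print(len(batch), all(yp != yn for _, yp, yn in batch))
```

Because each triple touches only the patches overlapping (x, y+) or (x, y−), most neurons receive zero input on any given batch, which is the source of the 5% activation rate quoted above.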
5 Experiments
We compare our deep matching model to the inner-product based models, ranging from variants of bilinear models to nonlinear mappings for Φ_X(·) and Φ_Y(·). For bilinear models, we consider only the low-rank models with Φ_X(x) = Px^T x and Φ_Y(y) = Py^T y, which gives

match(x, y) = ⟨Px^T x, Py^T y⟩ = x^T Px Py^T y.

With different kinds of constraints on Px and Py, we get different models. More specifically, with 1) orthonormality constraints Px^T Py = I_{d×d}, we get partial least squares (PLS) [19], and with 2) ℓ2 and ℓ1 based constraints put on rows or columns, we get Regularized Mapping to Latent Space (RMLS)
[20]. For nonlinear models, we use a modified version of the Siamese architecture [3], which uses
two different neural networks for mapping objects in the two domains to the same d-dimensional
latent space, where inner product can be used as a measure of matching and is trained with a similar
large margin objective. Different from the original model in [3], we allow different parameters for
mapping to handle the domain heterogeneity. Please note here that we omit the nonlinear model for
shared representation [13, 18, 17] since they are essentially also inner product based models (when
used for matching) and not designed to deal with short texts with large vocabulary.
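The low-rank bilinear baseline can be sketched in a few lines; random projections stand in here for the learned Px and Py, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def bilinear_match(x, y, Px, Py):
    """match(x, y) = <Px^T x, Py^T y> = x^T Px Py^T y."""
    return float(x @ Px @ Py.T @ y)

vocab_x, vocab_y, d = 50, 40, 8
Px = rng.standard_normal((vocab_x, d))
Py = rng.standard_normal((vocab_y, d))
x = rng.random(vocab_x)
y = rng.random(vocab_y)

# The two forms of the score agree: map both sides into the shared
# d-dimensional latent space, then take the inner product.
lhs = float(np.dot(Px.T @ x, Py.T @ y))
print(np.isclose(lhs, bilinear_match(x, y, Px, Py)))   # True
```

Whatever the constraints on Px and Py, the final score is a single inner product in a shared latent space, which is exactly the limitation the patch-based architecture is designed to overcome.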
5.1 Data Sets
We use the learned matching function for retrieving response texts y for a given query text x, which
will be ranked purely based on the matching scores. We consider the following two data sets:
Question-Answer: This data set contains around 20,000 traveling-related (Question, Answer) pairs collected from Baidu Zhidao (zhidao.baidu.com) and Soso Wenwen (wenwen.soso.com), two famous Chinese community QA Web sites. The vocabulary size is 52,315.
Weibo-Comments: This data set contains half a million (Weibo, comment) pairs collected from Sina Weibo (weibo.com), a Chinese Twitter-like microblog service. The task is to find the appropriate responses (e.g., comments) to given Weibo posts. This task is significantly harder than the Question-Answer task since the Weibo data are usually shorter, more informal, and harder to capture with bag-of-words. The vocabulary sizes for tweets and comments are both 48,724.
On both data sets, we generate (x, y+, y−) triples, with y− being randomly selected. The data are randomly split into training and testing sets, and the parameters of all models (including the learned patches for DeepMatch) are learned on the training data. The hyper parameters (e.g., the
latent dimensions of low-rank models and the regularization coefficients) are tuned on a validation
set (as part of the training set). We use NDCG@1 and NDCG@6 [8] on random pool with size 6
(one positive + five negative) to measure the performance of different matching models.
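Under this evaluation protocol (one positive among six candidates, binary relevance), NDCG@k reduces to a simple form; a minimal sketch:

```python
import math

def ndcg_at_k(ranked_relevance, k):
    """NDCG@k for binary relevance. With a single positive, the ideal
    DCG is 1 and the score is 1/log2(1 + rank of the positive) if it
    appears in the top k, else 0."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_relevance[:k]))
    ideal = sorted(ranked_relevance, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Pool of 6 (one positive + five negatives); positive ranked third.
print(ndcg_at_k([0, 0, 1, 0, 0, 0], 1))   # 0.0: positive not at rank 1
print(ndcg_at_k([0, 0, 1, 0, 0, 0], 6))   # 0.5 = 1/log2(4)
```

With one positive in a pool of six, a random ranking gives an expected NDCG@1 of 1/6 ≈ 0.167, which matches the Random Guess row of Table 2.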
5.2 Performance Comparison
The retrieval performances of all four models are reported in Table 2. Of the two data sets, the Question-Answer data set is relatively easy, with all four matching models improving upon random guessing. As another observation, we get a significant gain in performance by introducing nonlinearity in the mapping function, but all the inner-product based matching models are outperformed by the proposed DeepMatch by a large margin on this data set. The story is slightly different on the Weibo-Response data set, which is significantly more challenging than the Q-A task in that it relies more on the content of the texts and is harder to capture with bag-of-words. This difficulty can hardly be handled by inner-product based methods, even with the nonlinear mappings of the Siamese Network. In contrast, DeepMatch still manages to perform significantly better than all other models.
To further understand the performances of the different matching models, we also compare the generalization ability of the two nonlinear models. We find that the Siamese Network can achieve over 90% correct pairwise comparisons on the training set with small regularization, but generalizes relatively poorly on the test set with all the configurations we tried. This is not surprising since the Siamese Network has the same level of parameters (varying with the number of hidden units) as DeepMatch. We argue that our model has a better generalization property than the Siamese architecture at similar model complexity.
                      Question-Answer        Weibo-Response
                      nDCG@1   nDCG@6        nDCG@1   nDCG@6
Random Guess           0.167    0.550         0.167    0.550
PLS                    0.285    0.662         0.171    0.587
RMLS                   0.282    0.659         0.165    0.553
Siamese Network        0.357    0.735         0.175    0.574
DeepMatch              0.723    0.856         0.336    0.665

Table 2: The retrieval performance of matching models on the Q-A and Weibo data sets.
5.3 Model Selection
We tested different variants of the current DeepMatch architecture, with results reported in Figure 6. There are two ways to increase the depth of the proposed method: adding patch-induced layers and adding committee layers. As shown in Figure 6 (left and middle panels), the performance of DeepMatch stops increasing in either way once the overall depth goes beyond 6, while the training gets significantly slower with each added hidden layer. The number of neurons associated with each patch (Figure 6, right panel) follows a similar story: the performance flattens out after the number of neurons per patch reaches 3, again with training time and memory increased significantly. As another observation about the architecture, DeepMatch with both linear and sigmoid activation functions in the hidden layers yields slightly but consistently better performance than with only the sigmoid function. Our conjecture is that linear neurons provide shortcuts for low-level matching decisions to high-level composition units, and therefore facilitate the informative low-level units in determining the final matching score.
Figure 6: Choices of architecture for DeepMatch (left panel: size of patch-induced layers; middle panel: size of committee layer(s); right panel: number of filters per patch). For the left and middle panels, the numbers in parentheses stand for the number of neurons in each layer.
6 Related Work
Our model is apparently a special case of the learning-to-match models, for which much effort has gone into designing a bilinear form [1, 19, 7]. As we discussed earlier, this kind of model cannot sufficiently capture the rich and nonlinear structure of matching complicated objects. In order to introduce more modeling flexibility, there has been some work on replacing Φ(·) in Equation (1) with a nonlinear mapping, e.g., with neural networks [3] or implicitly through kernelization [6]. Another similar
thread of work is the recent advances of deep learning models on multi-modal input [13, 17]. These models essentially find a joint representation of inputs in two different domains, and hence can be used to predict the other side. Those deep learning models however do not give a direct matching function, and cannot handle short texts with a large vocabulary.
Our work is in a sense related to the sum-product network (SPN)[4, 5, 15], especially the work in
[4] that learns the deep architecture from clustering in the feature space for the image completion
task. However, it is difficult to determine a regular architecture like SPN for short texts, since the
structure of the matching task for short texts is not as well-defined as that for images. We therefore
adopt a more traditional MLP-like architecture in this paper.
Our work is conceptually close to the dynamic pooling algorithm recently proposed by Socher et al. [16] for paraphrase identification, which is essentially a special case of matching between two homogeneous domains. Similar to our model, their proposed model also constructs a neural network on the interaction space of two objects (sentences in their case), and outputs the measure of semantic similarity between them. The major differences are three-fold: 1) their model relies on a predefined compact vectorial representation of short text, and therefore the similarity metric is not much more than a sum over the local decisions, 2) the nature of dynamic pooling allows no space for exploring more complicated structure in the interaction space, and 3) we do not exploit the syntactic structure in the current model, although the proposed architecture has the flexibility for that.
7 Conclusion and Future Work
We proposed a novel deep architecture for matching problems, inspired partially by the long thread of work on deep learning. The proposed architecture can sufficiently explore the nonlinearity and hierarchy in the matching process, and has been empirically shown to be superior to various inner-product based matching models on real-world data sets.
8
References
[1] B. Bai, J. Weston, D. Grangier, R. Collobert, K. Sadamasa, Y. Qi, O. Chapelle, and K. Weinberger. Supervised semantic indexing. In CIKM'09, pages 187–196, 2009.
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[3] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proc. of Computer Vision and Pattern Recognition Conference. IEEE Press, 2005.
[4] A. Dennis and D. Ventura. Learning the architecture of sum-product networks using clustering on variables. In Advances in Neural Information Processing Systems 25.
[5] R. Gens and P. Domingos. Discriminative learning of sum-product networks. In NIPS, pages 3248–3256, 2012.
[6] D. Grangier and S. Bengio. A discriminative kernel-based model to rank images from text queries. IEEE Transactions on PAMI, 30(8):1371–1384, 2008.
[7] D. Hardoon and J. Shawe-Taylor. KCCA for different level precision in content-based image retrieval. In Proceedings of the Third International Workshop on Content-Based Multimedia Indexing, 2003.
[8] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In SIGIR, pages 41–48, 2000.
[9] Y. LeCun, L. Bottou, G. Orr, and K. Müller. Efficient backprop. In G. Orr and K. Müller, editors, Neural Networks: Tricks of the Trade. Springer, 1998.
[10] M. Littman, S. Dumais, and T. Landauer. Automatic cross-language information retrieval using latent semantic indexing. In Cross-Language Information Retrieval, chapter 5, pages 51–62, 1998.
[11] A. K. Menon and C. Elkan. Link prediction via matrix factorization. In Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II, ECML PKDD'11, pages 437–452, 2011.
[12] M. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, 1987.
[13] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. Multimodal deep learning. In International Conference on Machine Learning (ICML), Bellevue, USA, June 2011.
[14] V. Ordonez, G. Kulkarni, and T. L. Berg. Im2Text: Describing images using 1 million captioned photographs. In Neural Information Processing Systems (NIPS), 2011.
[15] H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In UAI, pages 337–346, 2011.
[16] R. Socher, E. Huang, J. Pennington, A. Ng, and C. Manning. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in NIPS 24, 2011.
[17] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep boltzmann machines. In NIPS, pages 2231–2239, 2012.
[18] B. Wang, X. Wang, C. Sun, B. Liu, and L. Sun. Modeling semantic relevance for question-answer pairs in web social communities. In ACL, pages 1230–1238, 2010.
[19] W. Wu, H. Li, and J. Xu. Learning query and document similarities from click-through bipartite graph with metadata. In Proceedings of the Sixth ACM International Conference on WSDM, pages 687–696, 2013.
[20] W. Wu, Z. Lu, and H. Li. Regularized mapping to latent structures and its application to web search. Technical report.
using Hybrid Neural Network and Signal
Processing Models
Padhraic Smyth, Jeff Mellstrom
Jet Propulsion Laboratory 238-420
California Institute of Technology
Pasadena, CA 91109
Abstract
We describe in this paper a novel application of neural networks to system
health monitoring of a large antenna for deep space communications. The
paper outlines our approach to building a monitoring system using hybrid
signal processing and neural network techniques, including autoregressive
modelling, pattern recognition, and Hidden Markov models. We discuss
several problems which are somewhat generic in applications of this kind
- in particular we address the problem of detecting classes which were
not present in the training data. Experimental results indicate that the
proposed system is sufficiently reliable for practical implementation.
1 Background: The Deep Space Network
The Deep Space Network (DSN) (designed and operated by the Jet Propulsion Laboratory (JPL) for the National Aeronautics and Space Administration (NASA)) is
unique in terms of providing end-to-end telecommunication capabilities between
earth and various interplanetary spacecraft throughout the solar system. The
ground component of the DSN consists of three ground station complexes located
in California, Spain and Australia, giving full 24-hour coverage for deep space communications. Since spacecraft are always severely limited in terms of available
transmitter power (for example, each of the Voyager spacecraft only use 20 watts
to transmit signals back to earth), all subsystems of the end-to-end communications link (radio telemetry, coding, receivers, amplifiers) tend to be pushed to the
absolute limits of performance. The large steerable ground antennas (70m and 34m
dishes) represent critical potential single points of failure in the network. In particular there is only a single 70m antenna at each complex because of the large cost
and calibration effort involved in constructing and operating a steerable antenna
of that size - the entire structure (including pedestal support) weighs over 8,000
tons.
The antenna pointing systems consist of azimuth and elevation axes drives which
respond to computer-generated trajectory commands to steer the antenna in realtime. Pointing accuracy requirements for the antenna are such that there is little
tolerance for component degradation. Achieving the necessary degree of positional
accuracy is rendered difficult by various non-linearities in the gear and motor elements and environmental disturbances such as gusts of wind affecting the antenna
dish structure. Off-beam pointing can result in rapid fall-off in signal-to-noise ratios
and consequent potential loss of irrecoverable scientific data from the spacecraft.
The pointing systems are a complex mix of electro-mechanical and hydraulic components. A faulty component will manifest itself indirectly via a change in the characteristics of observed sensor readings in the pointing control loop. Because of the
non-linearity and feedback present, direct causal relationships between fault conditions and observed symptoms can be difficult to establish - this makes manual fault
diagnosis a slow and expensive process. In addition, if a pointing problem occurs
while a spacecraft is being tracked, the antenna is often shut-down to prevent any
potential damage to the structure, and the track is transferred to another antenna
if possible. Hence, at present, diagnosis often occurs after the fact, where the original fault conditions may be difficult to replicate. An obvious strategy is to design
an on-line automated monitoring system. Conventional control-theoretic models
for fault detection are impractical due to the difficulties in constructing accurate
models for such a non-linear system - an alternative is to learn the symptom-fault
mapping directly from training data, the approach we follow here.
2 Fault Classification over Time

2.1 Data Collection and Feature Extraction
The observable data consists of various sensor readings (in the form of sampled
time series) which can be monitored while the antenna is in tracking mode. The
approach we take is to estimate the state of the system at discrete intervals in time.
A feature vector f of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(ω_i | f), 1 ≤ i ≤ m.
Finally, a hidden Markov model is used to take advantage of temporal context and
estimate class probabilities conditioned on recent past history. This hierarchical
pattern of information flow, where the time series data is transformed and mapped
into a categorical representation (the fault classes) and integrated over time to
enable robust decision-making, is quite generic to systems which must passively
sense and monitor their environment in real-time.
Experimental data was gathered from a new antenna at a research ground-station
at the Goldstone DSN complex in California. We introduced hardware faults in a
Fault Diagnosis of Antenna Pointing Systems
controlled manner by switching faulty components in and out of the control loop.
Obtaining data in this manner is an expensive and time-consuming procedure since
the antenna is not currently instrumented for sensor data acquisition and is located
in a remote location of the Mojave Desert in Southern California. Sensor variables
monitored included wind speed, motor currents, tachometer voltages, estimated
antenna position, and so forth, under three separate fault conditions (plus normal
conditions).
The time series data was segmented into windows of 4 seconds duration (200 samples) to allow reasonably accurate estimates of the various features. The features consisted of order statistics (such as the range) and moments (such as the variance) of particular sensor channels. In addition we also applied an autoregressive-exogenous (ARX) modelling technique to the motor current data, where the ARX coefficients are estimated on each individual 4-second window of data. The autoregressive representation is particularly useful for discriminative purposes (Eggers and Khuon, 1990).
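The windowing step described above can be sketched as follows (assuming NumPy; the window length matches the paper's 200-sample windows, but the particular feature choices below are illustrative rather than the exact feature set used in the experiments):

```python
import numpy as np

def window_features(series, window=200):
    """Split a sensor time series into fixed-length windows and compute
    simple order-statistic and moment features for each window."""
    n_windows = len(series) // window
    feats = []
    for i in range(n_windows):
        w = series[i * window:(i + 1) * window]
        feats.append([w.max() - w.min(),   # range (an order statistic)
                      w.var(),             # second moment about the mean
                      w.mean()])
        # An ARX / autoregressive fit per window could append lag
        # coefficients here, e.g. via np.linalg.lstsq on lagged values.
    return np.array(feats)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                  # stand-in for one sensor channel
F = window_features(x)
print(F.shape)  # (5, 3): five 200-sample windows, three features each
```

Each row of the resulting array plays the role of one feature vector f presented to the pattern recognition component.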
2.2 State Estimation with a Hidden Markov Model
If one applies a simple feed-forward network model to estimate the class probabilities
at each discrete time instant t, the fact that faults are typically correlated over time
is ignored. Rather than modelling the temporal dependence of features, p(f(t) | f(t-1), ..., f(0)), a simpler approach is to model temporal dependence via the class
variable using a Hidden Markov Model (HMM). The m classes comprise the Markov
model states. Components of the Markov transition matrix A (of dimension m x m)
are specified subjectively rather than estimated from the data, since there is no
reliable database of fault-transition information available at the component level
from which to estimate these numbers. The hidden component of the HMM model
arises from the fact that one cannot observe the states directly, but only indirectly
via a stochastic mapping from states to symptoms (the features). For the results
reported in this paper, the state probability estimates at time t are calculated using
all the information available up to that point in time. The probability state vector
is denoted by p(s(t)). The probability estimate of state i at time t can be calculated
recursively via the standard HMM equations:
u(t) = A p(s(t-1))   and   p(s_i(t)) = u_i(t) y_i(t) / Σ_{j=1}^m u_j(t) y_j(t)
where the estimates are initialised by a prior probability vector p(s(0)), the u_i(t)
are the components of u(t), 1 ≤ i ≤ m, and the y_i(t) are the likelihoods p(f | w_i)
produced by the particular classifier being used (which can be estimated to within
a normalising constant by p(w_i | f) / p(w_i)).
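The recursion above can be sketched in a few lines (the transition matrix and likelihood values below are made up purely for illustration; in the system described, A is specified subjectively and the y_i(t) come from the classifier):

```python
import numpy as np

def hmm_filter_step(p_prev, A, y):
    """One step of the recursive state estimate:
    u(t) = A @ p(s(t-1)),  p(s_i(t)) = u_i(t) y_i(t) / sum_j u_j(t) y_j(t)."""
    u = A @ p_prev
    post = u * y
    return post / post.sum()

# Illustrative 3-state example: a "sticky" transition matrix and
# classifier likelihoods y_i(t) = p(f | w_i) that favour state 1.
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
p = np.array([1/3, 1/3, 1/3])          # prior p(s(0))
for _ in range(5):
    p = hmm_filter_step(p, A, y=np.array([0.2, 0.7, 0.1]))
print(p.argmax())  # state 1 dominates after a few steps
```

The sticky diagonal of A is what produces the smoothing effect discussed below: a single noisy likelihood vector cannot instantly flip the state estimate.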
2.3 Classification Results
We compare a feedforward multi-layer perceptron model (single hidden layer with 12
sigmoidal units, trained using the squared error objective function and a conjugate-gradient version of backpropagation) and a simple maximum-likelihood Gaussian
classifier (with an assumed diagonal covariance matrix, variances estimated from
the data), both with and without the HMM component. Table 1 summarizes the
Figure 1: Stabilising effect of Markov component (estimated probability of the true class, Normal, plotted against time in seconds for the neural+Markov and neural-only models).
overall classification accuracies obtained for each of the models - these results are
for models trained on data collected in early 1991 (450 windows) which were then
field-tested in real-time at the antenna site in November 1991 (596 windows). There
were 12 features used in this particular experiment, including both ARX and time-domain features. Clearly, the neural-Markov model is the best model in the sense
that no samples at all were misclassified - it is significantly better than the simple
Gaussian classifier. Without the Markov component, the neural model still classified
quite well (0.84% error rate). However all of its errors were false alarms (the classifier
decision was a fault label, when in reality conditions were normal) which are highly
undesirable from an operational viewpoint - in this context, the Markov model
has significant practical benefit. Figure 1 demonstrates the stabilising effect of the
Markov model over time. The vertical axis corresponds to the probability estimate
of the model for the true class. Note the large fluctuations and general uncertainty
in the neural output (due to the inherent noise in the feature data) compared to
the stability when temporal context is modelled.
Table 1: Classification results for different models

3 Detecting novel classes
While the neural model described above exhibits excellent performance in terms
of discrimination, there is another aspect to classifier performance which we must
consider for applications of this nature: how will the classifier respond if presented
with data from a class which was not included in the training set? Ideally, one
would like the model to detect this situation. For fault diagnosis the chance that
one will encounter such novel classes under operational conditions is quite high since
there is little hope of having an exhaustive library of faults to train on.
Figure 2: Data from a novel class C (schematic scatter of training data from classes A and B in a two-dimensional feature space, with a novel input C lying far from both clusters).

In general, whether one uses a neural network, decision tree or other classification
method, there are few guarantees about the extrapolation behaviour of the trained
classification model. Consider Figure 2, where point C is far away from the "A"s
and "B"s on which the model is trained. The response of the trained model to
point C may be somewhat arbitrary, since it may lie on either side of a decision
boundary depending on a variety of factors such as initial conditions for the training
algorithm, objective function used, particular training data, and so forth. One might
hope that for a feedforward multi-layer perceptron, novel input vectors would lead
to low response for all outputs. However, if units with non-local response functions
are used in the model (such as the commonly used sigmoid function), the tendency
of training algorithms such as backpropagation is to generate mappings which have
a large response for at least one of the classes as the attributes take on values which
extend well beyond the range of the training data values. Leonard and Kramer
(1990) discuss this particular problem of poor extrapolation in the context of fault
diagnosis of a chemical plant. The underlying problem lies in the basic nature of
discriminative models which focus on estimating decision boundaries based on the
differences between classes. In contrast, if one wants to detect data from novel
classes, one must have a generative model for each known class, namely one which
specifies how the data is generated for these classes. Hence, in a probabilistic
framework, one seeks estimates of the probability density function of the data given
a particular class, f(x | w_i), from which one can in turn use Bayes' rule for prediction:

p(w_i | x) = f(x | w_i) p(w_i) / Σ_{j=1}^m f(x | w_j) p(w_j)    (1)
4 Kernel Density Estimation
Unless one assumes a particular parametric form for f(x | w_i), then it must be somehow estimated from the data. Let us ignore the multi-class nature of the problem
temporarily and simply look at a single-class case. We focus here on the use of
kernel-based methods (Silverman, 1986). Consider the 1-dimensional case of estimating the density f(x) given samples {x_i}, 1 ≤ i ≤ N. The idea is simple enough:
we obtain an estimate f̂(x), where x is the point at which we wish to know the
density, by summing the contributions of the kernel K((x - x_i)/h) (where h is the
bandwidth of the estimator, and K(·) is the kernel function) over all the samples
671
672
Smyth and Mellstrom
and normalizing such that the estimate is itself a density, i.e.,

f̂(x) = (1/(Nh)) Σ_{i=1}^N K((x - x_i)/h)    (2)
The estimate f̂(x) directly inherits the properties of K(·), hence it is common to
choose the kernel shape itself to be some well-known smooth function, such as a
Gaussian. For the multi-dimensional case, the product kernel is commonly used:
f̂(x) = (1/(N h_1 ··· h_d)) Σ_{i=1}^N Π_{k=1}^d K((x^k - x_i^k)/h_k)    (3)
where x^k denotes the component in dimension k of vector x, and the h_k represent
different bandwidths in each dimension.
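A minimal sketch of the product-kernel estimate of Equation (3), with a Gaussian kernel and hand-picked bandwidths chosen purely for illustration:

```python
import numpy as np

def gauss_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def kde_product(x, samples, h):
    """Product-kernel estimate f_hat(x): average over the N samples of the
    product over dimensions of K((x^k - x_i^k)/h_k) / h_k."""
    x, samples, h = np.atleast_1d(x), np.atleast_2d(samples), np.atleast_1d(h)
    contrib = gauss_kernel((x - samples) / h) / h   # N x d kernel terms
    return contrib.prod(axis=1).mean()

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))             # samples from a 2-D standard normal
est = kde_product(np.zeros(2), data, h=[0.3, 0.3])
true = 1.0 / (2 * np.pi)                     # true density at the origin, about 0.159
print(f"estimate {est:.3f} vs true {true:.3f}")
```

With a few hundred samples the estimate lands close to the true density; in practice the bandwidths would be set by cross-validation, as discussed next.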
Various studies have shown that the quality of the estimate is typically much more
sensitive to the choice of the bandwidth h than it is to the kernel shape K(.) (Izenmann, 1991). Cross-validation techniques are usually the best method to estimate
the bandwidths from the data, although this can be computationally intensive and
the resulting estimates can have a high variance across particular data sets. A significant disadvantage of kernel models is the fact that all training data points must be
stored and a distance measure between a new point and each of the stored points
must be calculated for each class prediction. Another less obvious disadvantage
is the lack of empirical results and experience with using these models for real-world applications - in particular there is a dearth of results for high-dimensional
problems. In this context we now outline a kernel approximation model which is
considerably simpler both to train and implement than the full kernel model.
5 Kernel Approximation using Mixture Densities

5.1 Generating a kernel approximation
An obvious simplification to the full kernel model is to replace clusters of data
points by representative centroids, to be referred to as the centroid kernel model.
Intuitively, the sum of the responses from a number of kernels is approximated by
a single kernel of appropriate width. Omohundro (1992) has proposed algorithms
for bottom-up merging of data points for problems of this nature. Here, however,
we describe a top-down approach by observing that the kernel estimate is itself a
special case of a mixture density. The underlying density is assumed to be a linear
combination of L mixture components, i.e.,

f(x) = Σ_{i=1}^L α_i f_i(x)    (4)
where the α_i are the mixing proportions. The full kernel estimate is itself a special
case of a mixture model with α_i = 1/N and f_i(x) = K(x). Hence, our centroid
kernel model can also be treated as a mixture model but now the parameters of the
mixture model (the mixing proportions or weights, and the widths and locations of
the centroid kernels) must be estimated from the data. There is a well-known and fast statistical procedure known as the EM (Expectation-Maximisation) algorithm for iteratively calculating these parameters, given some initial estimates (e.g., Redner and Walker, 1984). Hence, the procedure for generating a centroid kernel model is straightforward: divide the training data into homogeneous subsets according to class labels and then fit a mixture model with L components to each class using the EM procedure (initialisation can be based on randomly selected prototypes).

Figure 3: Likelihoods of kernel versus sigmoidal model on novel data (two panels, for the centroid kernel model and the sigmoidal model, each showing the log-likelihood of the unknown class under the normal-class hypothesis on test data against time in seconds, with lower and upper 1-sigma boundaries).
Prediction of class labels then follows directly from Bayes' rule (Equation (1)). Note
that there is a strong similarity between mixture/kernel models and Radial Basis
Function (RBF) networks. However, unlike the RBF models, we do not train the
output layer of our network in order to improve discriminative performance as this
would potentially destroy the desired probability estimation properties of the model.
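The procedure of fitting a per-class mixture with EM and predicting via Bayes' rule can be sketched as follows (a minimal isotropic-Gaussian EM in NumPy; the toy two-class data and the component count L are illustrative assumptions, not the paper's actual data):

```python
import numpy as np

def fit_mixture(X, L, iters=25, seed=0):
    """Fit an L-component isotropic Gaussian mixture to X with EM: a minimal
    'centroid kernel' model (weights, centroid locations, per-component widths)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X[rng.choice(N, L, replace=False)]       # init at random prototypes
    var = np.full(L, X.var())
    w = np.full(L, 1.0 / L)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = p(component j | x_i)
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)            # N x L
        logp = np.log(w) - 0.5 * d2 / var - 0.5 * d * np.log(2 * np.pi * var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, centroids, and widths
        nk = r.sum(0) + 1e-12
        w = nk / N
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = np.maximum((r * d2).sum(0) / (d * nk), 1e-3)
    return w, mu, var

def log_density(X, params):
    """log f(x | class) under the fitted mixture, via log-sum-exp."""
    w, mu, var = params
    d = X.shape[1]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    logp = np.log(w) - 0.5 * d2 / var - 0.5 * d * np.log(2 * np.pi * var)
    m = logp.max(1, keepdims=True)
    return (m + np.log(np.exp(logp - m).sum(1, keepdims=True))).ravel()

# Two toy classes; with equal priors, Bayes' rule (Equation (1)) reduces to
# comparing per-class log-densities.
rng = np.random.default_rng(2)
A = rng.normal(0, 1, size=(200, 2))
B = rng.normal(5, 1, size=(200, 2))
pa, pb = fit_mixture(A, L=3), fit_mixture(B, L=3)
x = np.array([[4.8, 5.1]])
pred = 'A' if log_density(x, pa)[0] > log_density(x, pb)[0] else 'B'
print(pred)  # 'B'
```

Unlike an RBF network trained discriminatively, the output here is never re-trained: the per-class densities are kept intact, which is exactly what enables the novelty detection discussed in the next section.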
5.2 Experimental results on detecting novel classes
In Figure 3 we plot the log-likelihoods, log f(x | w_i), as a function of time, for both a
centroid kernel model (Gaussian kernel, L = 5) and the single-hidden-layer sigmoidal
network described in Section 2.2. Both of these models have been trained on only 3
of the original 4 classes (the discriminative performance of the models was roughly
equivalent), excluding one of the known classes. The inputs {x_i} to the models are
data from this fourth class. The dashed lines indicate the μ ± σ boundaries on the
log-likelihood for the normal class as calculated on the training data - this tells
us the typical response of each model for class "normal" (note that the absolute
values are irrelevant since the likelihoods have not been normalised via Bayes rule).
For both models, the maximum response for the novel data came from the normal
class. For the sigmoidal model, the novel response was actually greater than that
on the training data - the network is very confident in its erroneous decision that
the novel data belongs to class normal. Hence, in practice, the presence of a novel
class would be completely masked. On the other hand, for the kernel model, the
measured response on the novel data is significantly lower than that obtained on
the training data. The classifier can directly calculate that it is highly unlikely that
this new data belongs to any of the 3 classes on which the model was trained. In
practice, for a centroid kernel model, the training data will almost certainly fit the
model better than a new set of test data, even data from the same class. Hence,
it is a matter of calibration to determine appropriate levels at which new data is
deemed sufficiently unlikely to come from any of the known classes. Nonetheless,
the main point is that a local kernel representation facilitates such detection, in
contrast to models with global response functions (such as sigmoids).
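The calibration idea above, flagging an input as novel when its best per-class log-likelihood falls below a band derived from the training data, can be sketched like this (single-Gaussian class models and a 2-sigma band are simplifying assumptions made here for illustration):

```python
import numpy as np

def fit_gaussian(X):
    """Per-class generative model: mean and diagonal variance."""
    return X.mean(0), X.var(0) + 1e-9

def log_lik(x, params):
    mu, var = params
    return float((-0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var))).sum())

rng = np.random.default_rng(3)
train = {'A': rng.normal(0, 1, (300, 2)), 'B': rng.normal(4, 1, (300, 2))}
models = {c: fit_gaussian(X) for c, X in train.items()}

# Calibrate a per-class band from the training data: mean +/- 2 sigma of the
# training log-likelihoods, analogous to the dashed boundaries in Figure 3.
bands = {}
for c, X in train.items():
    ll = np.array([log_lik(x, models[c]) for x in X])
    bands[c] = (ll.mean() - 2 * ll.std(), ll.mean() + 2 * ll.std())

def classify_or_reject(x):
    scores = {c: log_lik(x, m) for c, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= bands[best][0] else 'novel'

print(classify_or_reject(np.array([0.1, -0.2])))   # 'A'
print(classify_or_reject(np.array([20.0, -20.0]))) # 'novel'
```

A discriminative model with global response functions offers no analogue of this test, since its outputs are not comparable to any training-time likelihood level.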
In general, one does not expect a generative model which is not trained discriminatively to be fully competitive in terms of classification performance with discriminative models - on-going research involves developing hybrid discriminative-generative classifiers. In addition, on-line learning of novel classes once they have been detected is an interesting and important problem for applications of this nature. An initial version of the system we have described in this paper is currently
undergoing test and evaluation for implementation at DSN antenna sites.
Acknowledgements
The research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration and was supported in part by DARPA under
grant number AFOSR-90-0199.
References
M. Eggers and T. Khuon, 'Neural network data fusion concepts and application,' in Proceedings of 1990 IJCNN, San Diego, vol. II, pp. 7-16, 1990.
M. A. Kramer and J. A. Leonard, 'Diagnosis using backpropagation neural networks - analysis and criticism,' Computers in Chemical Engineering, vol. 14, no. 12, pp. 1323-1338, 1990.
B. Silverman, Density Estimation for Statistics and Data Analysis, New York: Chapman and Hall, 1986.
A. J. Izenmann, 'Recent developments in nonparametric density estimation,' J. Amer. Stat. Assoc., vol. 86, pp. 205-224, March 1991.
S. Omohundro, 'Model-merging for improved generalization,' in this volume.
R. A. Redner and H. F. Walker, 'Mixture densities, maximum likelihood, and the EM algorithm,' SIAM Review, vol. 26, no. 2, pp. 195-239, April 1984.
On the Representational Efficiency of Restricted
Boltzmann Machines
James Martens*    Arkadev Chattopadhyay+    Toniann Pitassi*    Richard Zemel*

* Department of Computer Science, University of Toronto
{jmartens,toni,zemel}@cs.toronto.edu

+ School of Technology & Computer Science, Tata Institute of Fundamental Research
[email protected]
Abstract
This paper examines the question: What kinds of distributions can be efficiently
represented by Restricted Boltzmann Machines (RBMs)? We characterize the
RBM's unnormalized log-likelihood function as a type of neural network, and
through a series of simulation results relate these networks to ones whose representational properties are better understood. We show the surprising result that
RBMs can efficiently capture any distribution whose density depends on the number of 1's in their input. We also provide the first known example of a particular
type of distribution that provably cannot be efficiently represented by an RBM, assuming a realistic exponential upper bound on the weights. By formally demonstrating that a relatively simple distribution cannot be represented efficiently by
an RBM our results provide a new rigorous justification for the use of potentially
more expressive generative models, such as deeper ones.
1 Introduction
Standard Restricted Boltzmann Machines (RBMs) are a type of Markov Random Field (MRF) characterized by a bipartite dependency structure between a group of binary visible units x ∈ {0, 1}^n
and binary hidden units h ∈ {0, 1}^m. Their energy function is given by:

E_θ(x, h) = -x^T W h - c^T x - b^T h

where W ∈ R^{n×m} is the matrix of weights, c ∈ R^n and b ∈ R^m are vectors that store the
input and hidden biases (respectively) and together these are referred to as the RBM's parameters
θ = {W, c, b}. The energy function specifies the probability distribution over the joint space (x, h)
via the Boltzmann distribution p(x, h) = (1/Z_θ) exp(-E_θ(x, h)), with the partition function Z_θ given
by Σ_{x,h} exp(-E_θ(x, h)). Based on this definition, the probability for any subset of variables can
be obtained by conditioning and marginalization, although this can only be done efficiently up to a
multiplicative constant due to the intractability of the RBM's partition function (Long and Servedio,
2010).
RBMs have been widely applied to various modeling tasks, both as generative models (e.g. Salakhutdinov and Murray, 2008; Hinton, 2000; Courville et al., 2011; Marlin et al., 2010; Tang and
Sutskever, 2011), and for pre-training feed-forward neural nets in a layer-wise fashion (Hinton and
Salakhutdinov, 2006). This method has led to many new applications in general machine learning
problems including object recognition and dimensionality reduction. While promising for practical
applications, the scope and basic properties of these statistical models have only begun to be studied.
As with any statistical model, it is important to understand the expressive power of RBMs, both
to gain insight into the range of problems where they can be successfully applied, and to provide
justification for the use of potentially more expressive generative models. In particular, we are
interested in the question of how large the number of hidden units m must be in order to capture a
particular distribution to arbitrarily high accuracy. The question of size is of practical interest, since
very large models will be computationally more demanding (or totally impractical), and will tend to
overfit a lot more during training.
1
It was shown by Freund and Haussler (1994), and later by Le Roux and Bengio (2008) that for
binary-valued x, any distribution over x can be realized (up to an approximation error which vanishes exponentially quickly in the magnitude of the parameters) by an RBM, as long as m is allowed
to grow exponentially fast in input dimension (n). Intuitively, this construction works by instantiating, for each of the up to 2^n possible values of x that have support, a single hidden unit which turns
on only for that particular value of x (with overwhelming probability), so that the corresponding
probability mass can be individually set by manipulating that unit's bias parameter. An improvement to this result was obtained by Montufar and Ay (2011); however this construction still requires
that m grow exponentially fast in n.
Recently, Montufar et al. (2011) generalized the construction used by Le Roux and Bengio (2008)
so that each hidden unit turns on for, and assigns probability mass to, not just a single x, but a
'cubical set' of possible x's, which is defined as a subset of {0, 1}^n where some entries of x are
fixed/determined, and the rest are free. By combining such hidden units that are each specialized to
a particular cubic set, they showed that any k-component mixture of product distributions over the
free variables of mutually disjoint cubic sets can be approximated arbitrarily well by an RBM with
m = k hidden units.
Unfortunately, families of distributions that are of this specialized form (for some m = k bounded by
a polynomial function of n) constitute only a very limited subset of all distributions that have some
kind of meaningful/interesting structure. For example, this result would not allow us to efficiently
construct simple distributions where the mass is a function of Σ_i x_i (e.g., for p(x) ∝ PARITY(x)).
In terms of what kinds of distributions provably cannot be efficiently represented by RBMs, even
less is known. Cueto et al. (2009) characterized the distributions that can be realized by a RBM with
k parameters as residing within a manifold inside the entire space of distributions on {0, 1}^n whose
dimension depends on k. For sub-exponential k this implies the existence of distributions which
cannot be represented. However, this kind of result gives us no indication of what these hard-to-represent distributions might look like, leaving the possibility that they might all be structureless or
otherwise uninteresting.
In this paper we first develop some tools and simulation results which relate RBMs to certain easierto-analyze approximations, and to neural networks with 1 hidden layer of threshold units, for which
many results about representational efficiency are already known (Maass, 1992; Maass et al., 1994;
Hajnal et al., 1993). This opens the door to a range of potentially relevant complexity results, some
of which we apply in this paper.
Next, we present a construction that shows how RBMs with m = n^2 + 1 can produce arbitrarily
good approximations to any distribution where the mass is a symmetric function of the inputs (that
is, it depends on Σ_i x_i). One example of such a function is the (in)famous PARITY function, which
was shown to be hard to compute in the perceptron model by the classic Minsky and Papert book
from 1968. This distribution is highly non-smooth and has exponentially many modes.
Having ruled out distributions with symmetric mass functions as candidates for ones that are hard for
RBMs to represent, we provide a concrete example of one whose mass computation involves only
one additional operation vs computing PARITY, and yet whose representation by an RBM provably
requires m to grow exponentially with n (assuming an exponential upper bound on the size of the
RBM's weights). Because this distribution is particularly simple, it can be viewed as a special case
of many other more complex types of distributions, and thus our results speak to the hardness of
representing those distributions with RBMs as well.
Our results provide a fine delineation between what is 'easy' for RBMs to represent, and what is
'hard'. Perhaps more importantly, they demonstrate that the distributions that cannot be efficiently
represented by RBMs can have a relatively basic structure, and are not simply random in appearance
as one might hope given the previous results. This provides perhaps the first completely rigorous
justification for the use of deeper generative models such as Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009), and contrastive backpropagation networks (Hinton et al., 2006) over standard
RBMs.
The rest of the paper is organized as follows. Section 2 characterizes the unnormalized log-likelihood as a type of neural network (called an 'RBM network') and shows how this type is related
to single hidden layer neural networks of threshold neurons, and to an easier-to-analyze approximation (which we call a 'hardplus RBM network'). Section 3 describes an m = n^2 + 1 construction for
distributions whose mass is a function of Σ_i x_i, and in Section 4 we present an exponential lower
bound on m for a slightly more complicated class of explicit distributions. Note that all proofs can
be found in the Appendix.
2
Figure 1: Left: An illustration of a basic RBM network with n = 3 and m = 5. The hidden biases are omitted
to avoid clutter. Right: A plot comparing the soft and hard activation functions.
2 RBM networks

2.1 Free energy function
In an RBM, the (negative) unnormalized log probability of x, after h has been marginalized out,
is known as the free energy. Denoted by F_θ(x), the free energy satisfies the property that p(x) =
exp(-F_θ(x))/Z_θ where Z_θ is the usual partition function.

It is well known (see Appendix A.1 for a derivation) that, due to the bipartite structure of RBMs,
computing F is tractable and has a particularly nice form:

F_θ(x) = -c^T x - Σ_j log(1 + exp(x^T [W]_j + b_j))    (1)
where [W]_j is the j-th column of W.
Because the free energy completely determines the log probability of x, it fully characterizes an
RBM's distribution. So studying what kinds of distributions an RBM can represent amounts to
studying the kinds of functions that can be realized by the free energy function for some setting of
θ.
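Equation (1) can be checked numerically: the softplus form of the free energy must agree with brute-force marginalization over h for a small RBM (the random parameters below are illustrative):

```python
import numpy as np
from itertools import product

def free_energy(x, W, b, c):
    """F(x) = -c^T x - sum_j log(1 + exp(x^T [W]_j + b_j))  (Equation (1))."""
    return -c @ x - np.logaddexp(0, x @ W + b).sum()

def brute_force(x, W, b, c):
    """-log sum_h exp(-E(x, h)), enumerating all 2^m hidden configurations."""
    m = W.shape[1]
    total = 0.0
    for h in product([0, 1], repeat=m):
        h = np.array(h)
        total += np.exp(x @ W @ h + c @ x + b @ h)
    return -np.log(total)

rng = np.random.default_rng(4)
n, m = 4, 3
W, b, c = rng.normal(size=(n, m)), rng.normal(size=m), rng.normal(size=n)
x = rng.integers(0, 2, size=n).astype(float)
print(np.isclose(free_energy(x, W, b, c), brute_force(x, W, b, c)))  # True
```

The agreement holds because the sum over h factorizes into a product of (1 + exp(x^T [W]_j + b_j)) terms, one per hidden unit, which is exactly what the softplus sum computes in log space.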
2.2 RBM networks
The form of an RBM's free energy function can be expressed as a standard feed-forward neural
network, or equivalently, a real-valued circuit, where instead of using hidden units with the usual
sigmoidal activation functions, we have m 'neurons' (a term we will use to avoid confusion with
the original meaning of a 'unit' in the context of RBMs) that use the softplus activation function:

soft(y) = log(1 + exp(y))

Note that at the cost of increasing m by one (which does not matter asymptotically) and introducing
an arbitrarily small approximation error, we can assume that the visible biases (c) of an RBM are
all zero. To see this, note that up to an additive constant, we can very closely approximate c^T x by
soft(K + c^T x) ≈ K + c^T x for a suitably large value of K (i.e., K ≫ ||c||_1 ≥ max_x(c^T x)).
Proposition 11 in the Appendix quantifies the very rapid convergence of this approximation as K
increases.
These observations motivate the following definition of an RBM network, which computes functions with the same form as the negative free energy function of an RBM (assumed to have c = 0), or equivalently the log probability (negative energy) function of an RBM. RBM networks are illustrated in Figure 1.
Definition 1. An RBM network with parameters W, b is defined as a neural network with one hidden layer containing m softplus neurons and weights and biases given by W and b, so that each neuron j's output is soft(x⊤[W]_j + b_j). The output layer contains one neuron whose weights and bias are given by 1 ≡ [1 1 ... 1]⊤ and the scalar B, respectively.
For convenience, we include the bias constant B so that RBM networks shift their output by an additive constant (which does not affect the probability distribution implied by the RBM network, since any additive constant is canceled out by log Z in the full log probability).
2.3 Hardplus RBM networks
A function which is somewhat easier to analyze than the softplus function is the so-called hardplus function (aka "plus" or "rectification"), defined by:

hard(y) = max(0, y)

As their names suggest, the softplus function can be viewed as a smooth approximation of the hardplus, as illustrated in Figure 1. We define a hardplus RBM network in the obvious way: as an RBM network with the softplus activation functions of the hidden neurons replaced with hardplus functions.
The strategy we use to prove many of the results in this paper is to first establish them for hardplus
RBM networks, and then show how they can be adapted to the standard softplus case via simulation
results given in the following section.
2.4 Hardplus RBM networks versus (Softplus) RBM networks
In this section we present some approximate simulation results which relate hardplus and standard
(softplus) RBM networks.
The first result formalizes the simple observation that for large input magnitudes, the softplus and
hardplus functions behave very similarly (see Figure 1, and Proposition 11 in the Appendix).
Lemma 2. Suppose we have a softplus and a hardplus RBM network with identical sizes and parameters. If, for each possible input x ∈ {0, 1}^n, the magnitude of the input to each neuron is bounded from below by C, then the two networks compute the same real-valued function, up to an error (measured by |·|) which is bounded by m exp(−C).
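The per-neuron bound behind Lemma 2 is easy to check numerically; the sketch below (with an arbitrary illustrative choice of C = 5 and a dense grid of inputs) confirms that |soft(y) − hard(y)| ≤ exp(−C) whenever |y| ≥ C:

```python
import math

def soft(y):
    return math.log1p(math.exp(y))

def hard(y):
    return max(0.0, y)

C = 5.0  # arbitrary bound on input magnitude (illustrative choice)
# Scan a grid of inputs with |y| >= C and record the worst-case gap.
worst = max(abs(soft(y) - hard(y))
            for y in [k / 100.0 for k in range(-1000, 1001)]
            if abs(y) >= C)
assert worst <= math.exp(-C)  # matches the per-neuron exp(-C) bound
```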
The next result demonstrates how to approximately simulate an RBM network with a hardplus RBM network while incurring an approximation error which shrinks as the number of neurons increases. The basic idea is to simulate individual softplus neurons with groups of hardplus neurons that compute what amounts to a piece-wise linear approximation of the smooth region of a softplus function.
Theorem 3. Suppose we have a (softplus) RBM network with m hidden neurons with parameters bounded in magnitude by C. Let p > 0. Then there exists a hardplus RBM network with ≤ 2m²p log(mp) + m hidden neurons and with parameters bounded in magnitude by C which computes the same function, up to an approximation error of 1/p.
Note that if p and m are polynomial functions of n, then the simulation produces hardplus RBM
networks whose size is also polynomial in n.
2.5 Thresholded Networks and Boolean Functions
Many relevant results and proof techniques concerning the properties of neural networks focus on
the case where the output is thresholded to compute a Boolean function (i.e. a binary classification).
In this section we define some key concepts regarding output thresholding, and present some basic
propositions that demonstrate how hardness results for computing Boolean functions via thresholding yield analogous hardness results for computing certain real-valued functions.
We say that a real-valued function g represents a Boolean function f with margin γ if for all x, g satisfies |g(x)| ≥ γ and thresh(g(x)) = f(x), where thresh is the 0/1-valued threshold function defined by:

thresh(a) = 1 if a ≥ 0,  0 if a < 0
We define a thresholded neural network (a distinct concept from a "threshold network", which is a neural network with hidden neurons whose activation function is thresh) to be a neural network whose output is a single real value, which is followed by an application of the threshold function. Such a network will be said to compute a given Boolean function f with margin γ (similar to the concept of "separation" from Maass et al. (1994)) if the real-valued input g to the final threshold represents f according to the above definition.
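A minimal sketch of this definition in code (the helper names and the toy OR example are our illustrative choices, not from the paper):

```python
import itertools

def thresh(a):
    # 0/1-valued threshold function from the definition above.
    return 1 if a >= 0 else 0

def represents_with_margin(g, f, gamma, n):
    # g represents f with margin gamma iff, for every Boolean input x,
    # |g(x)| >= gamma and thresh(g(x)) == f(x).
    return all(abs(g(x)) >= gamma and thresh(g(x)) == f(x)
               for x in itertools.product([0, 1], repeat=n))

# Toy example: g(x) = 2*sum(x) - 1 represents OR with margin 1.
f_or = lambda x: 1 if any(x) else 0
g = lambda x: 2 * sum(x) - 1
assert represents_with_margin(g, f_or, 1.0, n=3)
```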
While the output of a thresholded RBM network does not correspond to the log probability of an
RBM, the following observation spells out how we can use thresholded RBM networks to establish
lower bounds on the size of an RBM network required to compute certain simple functions (i.e.,
real-valued functions that represent certain Boolean functions):
Proposition 4. If an RBM network of size m can compute a real-valued function g which represents f with margin γ, then there exists a thresholded RBM network that computes f with margin γ.
This statement clearly holds if we replace each instance of "RBM network" with "hardplus RBM network" above.
Using Theorem 3 we can prove a more interesting result which states that any lower bound result
for thresholded hardplus RBMs implies a somewhat weaker lower bound result for standard RBM
networks:
Proposition 5. If an RBM network of size m with parameters bounded in magnitude by C computes a function which represents a Boolean function f with margin γ, then there exists a thresholded hardplus RBM network of size ≤ 4m² log(2m/γ)/γ + m with parameters bounded in magnitude by C (C can be ∞) that computes f(x) with margin γ/2.
This proposition implies that any exponential lower bound on the size of a thresholded hardplus RBM network will yield an exponential lower bound for (softplus) RBM networks that compute functions of the given form, provided that the margin γ is bounded from below by some function of the form 1/poly(n).
Intuitively, if f is a Boolean function and no RBM network of size m can compute a real-valued function that represents f (with a margin γ), this means that no RBM of size m can represent any distribution where the log probability of each member of {x | f(x) = 1} is at least 2γ higher than that of each member of {x | f(x) = 0}. In other words, RBMs of this size cannot generate any distribution where the two "classes" implied by f are separated in log probability by more than 2γ.
2.6 RBM networks versus standard neural networks
Viewing the RBM log probability function through the formalism of neural networks (or real-valued
circuits) allows us to make use of known results for general neural networks, and helps highlight
important differences between what an RBM can effectively ?compute? (via its log probability) and
what a standard neural network can compute.
There is a rich literature studying the complexity of various forms of neural networks, with diverse
classes of activation functions, e.g., Maass (1992); Maass et al. (1994); Hajnal et al. (1993). RBM
networks are distinguished from these, primarily because they have a single hidden layer and because
the upper level weights are constrained to be 1.
For some activation functions this restriction may not be significant, but for soft/hard-plus neurons, whose output is always positive, it makes particular computations much more awkward (or perhaps impossible) to express efficiently. Intuitively, the j-th softplus neuron acts as a "feature detector", which, when "activated" by an input such that x⊤w_j + b_j ≫ 0, can only contribute positively to the log probability of x, according to an (asymptotically) affine function of x given by that neuron's input.
For example, it is easy to design an RBM network that can (approximately) output 1 for the all-zeros input x = 0 and 0 otherwise (i.e., have a single hidden neuron with weights −M·1 for a large M and bias b such that soft(b) = 1), but it is not immediately obvious how an RBM network could efficiently compute (or approximate) the function which is 1 on all inputs except x = 0, and 0 otherwise (it turns out that a non-obvious construction exists for m = n). By comparison, standard threshold networks require only 1 hidden neuron to compute such a function.
In fact, it is easy to show¹ that without the constraint on upper level weights, an RBM network would be, up to a linear factor, at least as efficient at representing real-valued functions as a neural network with 1 hidden layer of threshold neurons. From this, and from Theorem 4.1 of Maass et al. (1994), it follows that a thresholded RBM network is, up to a polynomial increase in size, at least as efficient at computing Boolean functions as 1-hidden-layer neural networks with any "sigmoid-like" activation function², and polynomially bounded weights.
¹ To see this, note that we could use 2 softplus neurons to simulate a single neuron with a "sigmoid-like" activation function (i.e., by setting the weights that connect them to the output neuron to have opposite signs). Then, by increasing the size of the weights so the sigmoid saturates in both directions for all inputs, we could simulate a threshold function arbitrarily well, thus allowing the network to compute any function computable by a one hidden layer threshold network while using only twice as many neurons.
² This is a broad class and includes the standard logistic sigmoid. See Maass et al. (1994) for a precise technical definition.
Figure 2: Left: The functions computed by the 5 building-blocks as constructed by Theorem 7 when applied to the PARITY function for n = 5. Right: The total output of the hardplus RBM network constructed in Theorem 7. The dotted lines indicate the target 0 and 1 values. Note: for purposes of illustration we have extended the function outputs over all real values of X in the obvious way.
2.7 Simulating hardplus RBM networks by a one-hidden-layer threshold network
Here we provide a natural simulation of hardplus RBM networks by threshold networks with one
hidden layer. Because this is an efficient (polynomial) and exact simulation, it implies that a hardplus
RBM network can be no more powerful than a threshold network with one hidden layer, for which
several lower bound results are already known.
Theorem 6. Let f be a real-valued function computed by a hardplus RBM network of size m.
Then f can be computed by a single hidden layer threshold network, of size mn. Furthermore, if
the weights of the RBM network have magnitude at most C, then the weights of the corresponding
threshold network have magnitude at most (n + 1)C.
3 n² + 1-sized RBM networks can compute any symmetric function
In this section we present perhaps the most surprising result of this paper: a construction of an n²-sized RBM network (or hardplus RBM network) for computing any given symmetric function of x. Here, a symmetric function is defined as any real-valued function whose output depends only on the number of 1-bits in the input x. This quantity is denoted X ≡ Σ_i x_i. A well-known example of a symmetric function is PARITY.
Symmetric functions are already known³ to be computable by single hidden layer threshold networks (Hajnal et al., 1993) with m = n. Meanwhile (qualified) exponential lower bounds on m exist for functions which are only slightly more complicated (Hajnal et al., 1993; Forster, 2002).
Given that hardplus RBM networks appear to be strictly less expressive than such threshold networks
(as discussed in Section 2.6), it is surprising that they can nonetheless efficiently compute functions
that test the limits of what those networks can compute efficiently.
Theorem 7. Let f : {0, 1}^n → ℝ be a symmetric function defined by f(x) = t_k for Σ_i x_i = k. Then (i) there exists a hardplus RBM network, of size n² + 1, and with weights polynomial in n and t_0, ..., t_n, that computes f exactly, and (ii) for every ε > 0 there is a softplus RBM network of size n² + 1, and with weights polynomial in n, t_0, ..., t_n and log(1/ε), that computes f within an additive error ε.
The high level idea of this construction is as follows. Our hardplus RBM network consists of n "building blocks", each composed of n hardplus neurons, plus one additional hardplus neuron, for a total size of m = n² + 1. Each of these building blocks is designed to compute a function of the form:

max(0, αX(e − X))

for parameters α > 0 and e > 0. This function, examples of which are illustrated in Figure 2, is quadratic from X = 0 to X = e and is 0 otherwise.
The main technical challenge is then to choose the parameters of these building blocks so that the sum of n of these "rectified quadratics", plus the output of the extra hardplus neuron (which handles the X = 0 case), yields a function that matches f, up to an additive constant (which we then fix by setting the bias B of the output neuron). This would be easy if we could compute more general rectified quadratics of the form max(0, α(X − g)(e − X)), since we could just take g = k − 1/2 and e = k + 1/2 for each possible value k of X. But the requirement that g = 0 makes this more difficult, since significant overlap between the non-zero regions of these functions will be unavoidable. Further complicating the situation is the fact that we cannot exploit linear cancellations due to the restriction on the RBM network's second layer weights. Figure 2 depicts an example of the solution to this problem as given in our proof of Theorem 7.

³ The construction in Hajnal et al. (1993) is only given for Boolean-valued symmetric functions but can be generalized easily.
Note that this construction is considerably more complex than the well-known construction used for computing symmetric functions with 1 hidden layer threshold networks (Hajnal et al., 1993). While we cannot prove that ours is the most efficient possible construction for RBM networks, we can prove that a construction directly analogous to the one used for 1 hidden layer threshold networks, where each individual neuron computes a symmetric function, cannot possibly work for RBM networks.
To see this, first observe that any neuron that computes a symmetric function must compute a function of the form g(αX + b), where g is the activation function and α is some scalar. Then noting that both soft(y) and hard(y) are convex functions of y, and that the composition of an affine function and a convex function is convex, we have that each neuron computes a convex function of X. Then because the positive sum of convex functions is convex, the output of the RBM network (which is the unweighted sum of the output of its neurons, plus a constant) is itself convex in X. Thus the symmetric functions computable by such RBM networks must be convex in X, a severe restriction which rules out most examples.
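The convexity argument can be illustrated numerically. The sketch below (with arbitrary hypothetical neuron parameters of our choosing) sums softplus neurons whose inputs are affine in X and checks that the output is convex in X via non-negative second differences:

```python
import math

def soft(y):
    return math.log1p(math.exp(y))

# Each neuron computing a symmetric function contributes soft(alpha*X + b);
# the (alpha, b) pairs and the output bias B below are illustrative values.
neurons = [(1.5, -2.0), (-0.7, 1.0), (0.3, 0.5)]
B = 0.25

def output(X):
    # Unweighted sum of neuron outputs plus a constant, as in an RBM network.
    return B + sum(soft(a * X + b) for a, b in neurons)

# Convexity in X: all second differences over X = 0..10 are non-negative.
vals = [output(X) for X in range(0, 11)]
second_diffs = [vals[i + 1] - 2 * vals[i] + vals[i - 1] for i in range(1, 10)]
assert all(d >= -1e-12 for d in second_diffs)
```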
4 Lower bounds on the size of RBM networks for certain functions

4.1 Existential results
In this section we prove a result which establishes the existence of functions that cannot be computed by RBM networks unless they are exponentially large.
Instead of identifying non-representable distributions as lying in the complement of some low-dimensional manifold (as was done previously), we will establish the existence of Boolean functions which cannot be represented with a sufficiently large margin by the output of any sub-exponentially large RBM network. However, this result, like previous such existential results, will say nothing about what these Boolean functions actually look like.
To prove this result, we will make use of Proposition 5 and a classical result of Muroga (1971) which
allows us to discretize the incoming weights of a threshold neuron (without changing the function
it computes), thus allowing us to bound the number of possible Boolean functions computable by
1-layer threshold networks of size m.
Theorem 8. Let F_{m,γ,n} represent the set of those Boolean functions on {0, 1}^n that can be computed by a thresholded RBM network of size m with margin γ. Then, there exists a fixed number K such that

|F_{m,γ,n}| ≤ 2^{K·poly(s, m, n)}, where s(m, γ, n) = (4m²n/γ) log(2m/γ) + m.

In particular, when m² ≤ γ2^{λn} for any constant λ < 1/2, the ratio of the size of the set F_{m,γ,n} to the total number of Boolean functions on {0, 1}^n (which is 2^{2^n}) rapidly converges to zero with n.
4.2 Qualified lower bound results for the IP function
While interesting, existential results such as the one above do not give us a clear picture of what a particular hard-to-compute function for RBM networks might look like. Perhaps these functions will resemble purely random maps without any interesting structure. Perhaps they will consist only of functions that require exponential time to compute on a Turing machine, or even worse, ones that are non-computable. In such cases, not being able to compute such functions would not constitute a meaningful limitation on the expressive efficiency of RBM networks.
In this sub-section we present strong evidence that this is not the case by exhibiting a simple Boolean
function that provably requires exponentially many neurons to be computed by a thresholded RBM
network, provided that the margin is not allowed to be exponentially smaller than the weights. Prior
to these results, there was no formal separation between the kinds of unnormalized log-likelihoods
realizable by polynomially sized RBMs, and the class of functions computable efficiently by almost
any reasonable model of computation, such as arbitrarily deep Boolean circuits.
The Boolean function we will consider is the well-known "inner product mod 2" function, denoted IP(x), which is defined as the parity of the inner product of the first half of x with the second half (we assume for convenience that n is even). This function can be thought of as a strictly harder to compute version of PARITY (since PARITY is trivially reducible to it), which, as we saw in Section 3, can be efficiently computed by a thresholded RBM network (indeed, an RBM network can efficiently compute any possible real-valued representation of PARITY). Intuitively, IP(x) should be harder than PARITY, since it involves an extra "stage" or "layer" of sequential computation, and our formal results with RBMs agree with this intuition.
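For concreteness, IP can be computed as follows (a straightforward sketch; the function name is our choice):

```python
def inner_product_mod_2(x):
    # Parity of the inner product of the first half of x with the second half.
    n = len(x)
    assert n % 2 == 0  # n is assumed even, as in the text
    half = n // 2
    return sum(x[i] * x[half + i] for i in range(half)) % 2

# <1,0>.<1,1> = 1, so the parity is 1; <1,1>.<1,1> = 2, so the parity is 0.
assert inner_product_mod_2([1, 0, 1, 1]) == 1
assert inner_product_mod_2([1, 1, 1, 1]) == 0
```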
There are many computational problems that IP can be reduced to, so showing that RBM networks cannot compute IP proves that RBMs cannot efficiently model a wide range of distributions whose unnormalized log-likelihoods are sufficiently complex in a computational sense. Examples of such log-likelihoods include ones given by the multiplication of binary-represented integers, or the evaluation of the connectivity of an encoded graph. For other examples, see Corollary 3.5 of Hajnal et al. (1993).
Using the simulation of hardplus RBM networks by 1 hidden layer threshold networks (Theorem
6), and Proposition 5, and an existing result about the hardness of computing IP by 1 hidden layer
thresholded networks of bounded weights due to Hajnal et al. (1993), we can prove the following
basic result:
Theorem 9. If

m < min{ √(2^{n/3}/C), 2^{n/6}·√(γ/(4C log(2/γ))), 2^{n/9}·(γ/(4C))^{1/3} }

then no RBM network of size m, whose weights are bounded in magnitude by C, can compute a function which represents n-dimensional IP with margin γ. In particular, for C and 1/γ bounded by polynomials in n, for n sufficiently large, this condition is satisfied whenever m < 2^{(1/9−ε)n} for some ε > 0.
Translating the definitions, this result says the following about the limitations of efficient representation by RBMs: unless either the weights or the number of units of an RBM are exponentially large in n, an RBM cannot capture any distribution that has the property that the x's such that IP(x) = 1 are significantly more probable than the remaining x's.
While the above theorem is easy to prove from known results and the simulation/hardness results
given in previous sections, by generalizing the techniques used in Hajnal et al. (1993), we can (with
much more effort) derive a stronger result. This gives an improved bound on m and lets us partially
relax the magnitude bound on parameters so that they can be arbitrarily negative:
Theorem 10. If

m < (γ / (2 max{log 2, nC + log 2})) · 2^{n/4},

then no RBM network of size m, whose weights are upper bounded in value by C, can compute a function which represents n-dimensional IP with margin γ. In particular, for C and 1/γ bounded by polynomials in n, for n sufficiently large, this condition is satisfied whenever m < 2^{(1/4−ε)n} for some ε > 0.
The general theorem we use to prove this second result (Theorem 17 in the Appendix) requires only that the neural network have 1 hidden layer of neurons with activation functions that are monotonic and contribute to the top neuron (after multiplication by the outgoing weight) a quantity which can be bounded by a certain exponentially growing function of n (that also depends on γ). Thus this technique can be applied to produce lower bounds for much more general types of neural networks, and may be independently interesting.
5 Conclusions and Future Work
In this paper we significantly advanced the theoretical understanding of the representational efficiency of RBMs. We treated the RBM's unnormalized log-likelihood as a neural network, which allowed us to relate an RBM's representational efficiency to that of threshold networks, which are much better understood. We showed that, quite surprisingly, RBMs can efficiently represent distributions that are given by symmetric functions such as PARITY, but cannot efficiently represent distributions which are slightly more complicated, assuming an exponential bound on the weights. This provides rigorous justification for the use of potentially more expressive/deeper generative models.
Going forward, some promising research directions and open problems include characterizing the
expressive power of Deep Boltzmann Machines and more general Boltzmann machines, and proving
an exponential lower bound for some specific distribution without any qualifications on the weights.
Acknowledgments
This research was supported by NSERC. JM is supported by a Google Fellowship; AC by a Ramanujan Fellowship of the DST, India.
References
Aaron Courville, James Bergstra, and Yoshua Bengio. Unsupervised models of images by spike-and-slab RBMs. In Proceedings of the 28th International Conference on Machine Learning, pages 952–960, 2011.
María Angélica Cueto, Jason Morton, and Bernd Sturmfels. Geometry of the Restricted Boltzmann Machine. arXiv:0908.4425v1, 2009.
J. Forster. A linear lower bound on the unbounded error probabilistic communication complexity. J. Comput. Syst. Sci., 65(4):612–625, 2002.
Yoav Freund and David Haussler. Unsupervised learning of distributions on binary vectors using two layer networks, 1994.
A. Hajnal, W. Maass, P. Pudlák, M. Szegedy, and G. Turán. Threshold circuits of bounded depth. J. Comput. System. Sci., 46:129–154, 1993.
G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. ISSN 1095-9203.
Geoffrey Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:2002, 2000.
Geoffrey E. Hinton, Simon Osindero, Max Welling, and Yee Whye Teh. Unsupervised discovery of nonlinear structure using contrastive backpropagation. Cognitive Science, 30(4):725–731, 2006.
Nicolas Le Roux and Yoshua Bengio. Representational power of Restricted Boltzmann Machines and deep belief networks. Neural Computation, 20(6):1631–1649, 2008.
Philip Long and Rocco Servedio. Restricted Boltzmann Machines are hard to approximately evaluate or simulate. In Proceedings of the 27th International Conference on Machine Learning, pages 952–960, 2010.
Wolfgang Maass. Bounds for the computational power and learning complexity of analog neural nets (extended abstract). In Proc. of the 25th ACM Symp. Theory of Computing, pages 335–344, 1992.
Wolfgang Maass, Georg Schnitger, and Eduardo D. Sontag. A comparison of the computational power of sigmoid and boolean threshold circuits. In Theoretical Advances in Neural Computation and Learning, pages 127–151. Kluwer, 1994.
Benjamin M. Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for Restricted Boltzmann Machine learning. Journal of Machine Learning Research - Proceedings Track, 9:509–516, 2010.
G. Montufar, J. Rauh, and N. Ay. Expressive power and approximation errors of Restricted Boltzmann Machines. In Advances in Neural Information Processing Systems, 2011.
Guido Montufar and Nihat Ay. Refinements of universal approximation results for deep belief networks and Restricted Boltzmann Machines. Neural Comput., 23(5):1306–1319, May 2011.
Saburo Muroga. Threshold logic and its applications. Wiley, 1971.
Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. Journal of Machine Learning Research - Proceedings Track, 5:448–455, 2009.
Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of Deep Belief Networks. In Andrew McCallum and Sam Roweis, editors, Proceedings of the 25th Annual International Conference on Machine Learning (ICML 2008), pages 872–879. Omnipress, 2008.
Yichuan Tang and Ilya Sutskever. Data normalization in the learning of Restricted Boltzmann Machines. Technical Report UTML-TR-11-2, Department of Computer Science, University of Toronto, 2011.
A Appendix

A.1 Free-energy derivation
The following is a derivation of the well-known formula for the free energy of an RBM. This tractable form is made possible by the bipartite interaction structure of the RBM's units:

p(x) = (1/Z_θ) Σ_h exp(x⊤Wh + c⊤x + b⊤h)
     = (1/Z_θ) exp(c⊤x) Π_j Σ_{h_j ∈ {0,1}} exp(x⊤[W]_j h_j + b_j h_j)
     = (1/Z_θ) exp(c⊤x + Σ_j log Σ_{h_j ∈ {0,1}} exp(x⊤[W]_j h_j + b_j h_j))
     = (1/Z_θ) exp(c⊤x + Σ_j log[1 + exp(x⊤[W]_j + b_j)])
     = (1/Z_θ) exp(−F_θ(x))

A.2 Proofs for Section 2.4
We begin with a useful technical result:
Proposition 11. For arbitrary y ∈ ℝ the following basic facts for the softplus function hold:

y − soft(y) = −soft(−y)
soft(y) ≤ exp(y)

Proof. The first fact follows from:

y − soft(y) = log(exp(y)) − log(1 + exp(y)) = log(exp(y)/(1 + exp(y)))
            = log(1/(exp(−y) + 1)) = −log(1 + exp(−y)) = −soft(−y)

To prove the second fact, we will show that the function f(y) = exp(y) − soft(y) is positive. Note that f tends to 0 as y goes to −∞, since both exp(y) and soft(y) do. It remains to show that f is monotonically increasing, which we establish by showing that its derivative is positive:

f′(y) = exp(y) − 1/(1 + exp(−y)) > 0
  ⟺ (exp(y)(1 + exp(−y)) − 1)/(1 + exp(−y)) > 0
  ⟺ exp(y) + 1 − 1 > 0 ⟺ exp(y) > 0
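Both facts are easy to confirm numerically (a quick sketch over a grid of our choosing):

```python
import math

def soft(y):
    return math.log1p(math.exp(y))

for y in [k / 10.0 for k in range(-50, 51)]:
    # Fact 1: y - soft(y) = -soft(-y)
    assert abs((y - soft(y)) - (-soft(-y))) < 1e-12
    # Fact 2: soft(y) <= exp(y)
    assert soft(y) <= math.exp(y) + 1e-15
```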
Proof of Lemma 2. Consider a single neuron in the RBM network and the corresponding neuron in the hardplus RBM network, whose net inputs are given by y = w⊤x + b.
For each x, there are two cases for y. If y ≥ 0, we have by hypothesis that y ≥ C, and so:

|hard(y) − soft(y)| = |y − soft(y)| = |−soft(−y)| = soft(−y) ≤ exp(−y) ≤ exp(−C)

And if y < 0, we have by hypothesis that y ≤ −C, and so:

|hard(y) − soft(y)| = |0 − soft(y)| = soft(y) ≤ exp(y) ≤ exp(−C)

Thus, each corresponding pair of neurons computes the same function up to an error bounded by exp(−C). From this it is easy to show that the entire circuits compute the same function, up to an error bounded by m exp(−C), as required.
Proof of Theorem 3. Suppose we have a softplus RBM network with a number of hidden neurons
given by m. To simulate this with a hardplus RBM network we will replace each neuron with a group
of hardplus neurons with weights and biases chosen so that the sum of their outputs approximates the
output of the original softplus neuron, to within a maximum error of 1/p where p is some constant
> 0.
First we describe the construction for the simulation of a single softplus neurons by a group of
hardplus neurons.
Let g be a positive integer and a > 0. We will define these more precisely later, but for what follows
their precise value is not important.
At a high level, this construction works by approximating soft(y), where y is the input to the neuron,
by a piece-wise linear function expressed as the sum of a number of hardplus functions, whose
?corners? all lie inside [?a, a]. Outside this range of values, we use the fact that soft(y) converges
exponentially fast (in a) to 0 on the left, and y on the right (which can both be trivially computed by
hardplus functions).
Formally, for i = 1, 2, ..., g, g + 1, let:
qi = (i ? 1)
2a
?a
g
For i = 1, 2, ..., g, let:
?i =
soft(qi+1 ) ? soft(qi )
qi+1 ? qi
and also let ?0 = 0 and ?g+1 = 1. Finally, for i = 1, 2, ..., g, g + 1, let:
?i = ?i ? ?i?1
With these definitions it is straightforward to show that 1 ? ?i > 0, ?i > ?i?1 and consequently
0 < ?i < 1 for each i. It is also easy to show that qi > qi?1 , q0 = ?a and qg+1 = a.
For i = 1, 2, ..., g, g + 1, we will set the weight vector wi and bias bi of the i-th hardplus neuron in
our group so that the neuron outputs hard(?i (y ? qi )). This is accomplished by taking wi = ?i w
and bi = ?i (b ? qi ), where w and b (without the subscripts), are the weight vector and bias of the
original softplus neuron.
Note that since |?i | ? 1 we have that the weights of these hard neurons are smaller in magnitude
than the weights of the original soft neuron and thus bounded by C as required.
The total output (sum) for this group is:
T (y) =
g+1
X
hard(?i (y ? qi ))
i=1
We will now bound the approximation error |T (y) ? soft(y)| of our single neuron simulation.
Note that for a given y we have that the i-th hardplus neuron in the group has a non-negative input
iff y ? qi . Thus for y < ?a all of the neurons have a negative input. And for y ? ?a , if we take
j to be the largest index i s.t. qi ? y, then each neuron from i = 1 to i = j will have positive input
and each neuron from i = j + 1 to i = g + 1 will have negative input.
Consider the case that y < ?a. Since the input to each neuron is negative, they each output 0 and
thus T (y) = 0. This results in an approximation error ? exp(?a):
|T (y) ? soft(y)| = |0 ? soft(y)| = soft(y) < soft(?a) ? exp(?a)
11
Next, consider the case that y ? ?a, and let j be as given above. In such a case we have:
T (y) =
g+1
X
hard(?i (y ? qi )) =
i=1
j
X
?i (y ? qi ) + 0
i=1
j
X
=
(?i ? ?i?1 )(y ? qi )
i=1
=y
j
X
(?i ? ?i?1 ) ?
i=1
j
X
(?i ? ?i?1 )qi
i=1
= y?j ? y?0 ? ?j qj +
j?1
X
?i (qi+1 ? qi ) + ?0 q1
i=1
= ?j (y ? qj ) +
j?1
X
(soft(qi+1 ) ? soft(qi ))
i=1
= ?j (y ? qj ) + soft(qj ) ? soft(q1 )
For y ≤ a we note that s_j(y − q_j) + soft(q_j) is a secant approximation to soft(y), generated by the
secant from q_j to q_{j+1}, and upper-bounds soft(y) for y ∈ [q_j, q_{j+1}]. Thus a crude bound on the error
is soft(q_{j+1}) − soft(q_j), which only makes use of the fact that soft(y) is monotonic. Then because
the slope (derivative) of soft(y) is σ(y) = 1/(1 + exp(−y)) < 1, we can further (crudely) bound
this by q_{j+1} − q_j. Thus the approximation error at such y's may be bounded as:

|T(y) − soft(y)| = |(s_j(y − q_j) + soft(q_j) − soft(q_1)) − soft(y)|
                 ≤ max{ |s_j(y − q_j) + soft(q_j) − soft(y)|, soft(q_1) }
                 ≤ max{ q_{j+1} − q_j, exp(−a) } = max{ 2a/g, exp(−a) }

where we have also used soft(q_1) = soft(−a) ≤ exp(−a).

For the case y > a, we have q_i ≤ y for all i, so the largest index j such that q_j ≤ y is j = g + 1. So
s_j(y − q_j) + soft(q_j) − soft(q_1) = y − a + soft(a) − soft(−a) = y. Thus the approximation error
at such y's is:

|y − soft(y)| = |−soft(−y)| = soft(−y) ≤ soft(−a) ≤ exp(−a)
Having covered all cases for y we conclude that the general approximation error for a single softplus
neuron satisfies the following bound:

|T(y) − soft(y)| ≤ max{ 2a/g, exp(−a) }

For a softplus RBM network with m neurons, our hardplus RBM network constructed by replacing
each neuron with a group of hardplus neurons as described above will require a total of m(g + 1)
neurons, and have an approximation error bounded by the sum of the individual approximation
errors, which is itself bounded by:

m · max{ 2a/g, exp(−a) }

Taking a = log(mp) and g = ⌈2mpa⌉, this gives:

m · max{ 2a/⌈2mpa⌉, 1/(mp) } ≤ m · max{ 2a/(2mpa), 1/(mp) } = max{ 1/p, 1/p } = 1/p

Thus we see that with m(g + 1) = m(⌈2mp log(mp)⌉ + 1) ≤ 2m²p log(mp) + m neurons we
can produce a hardplus RBM network which approximates the output of our softplus RBM network
with error bounded by 1/p.
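To sanity-check the bound above, here is a small numerical sketch (not part of the paper; the evenly spaced knots q_i on [−a, a] and the secant slopes s_i are our own rendering of the construction) that assembles the g + 1 hardplus neurons and measures the worst-case deviation from softplus:

```python
import math

def soft(y):
    # numerically stable softplus: log(1 + exp(y))
    return math.log1p(math.exp(-abs(y))) + max(y, 0.0)

def hard(z):
    # hardplus, i.e. max(0, z)
    return max(0.0, z)

a, g = 5.0, 50
# knots q_1 = -a, ..., q_{g+1} = a, evenly spaced with gap 2a/g
q = [-a + 2.0 * a * i / g for i in range(g + 1)]
# secant slopes of soft on [q_i, q_{i+1}]; s_0 = 0 and s_{g+1} = 1
s = [0.0] + [(soft(q[i + 1]) - soft(q[i])) / (q[i + 1] - q[i]) for i in range(g)] + [1.0]

def T(y):
    # group of g+1 hardplus neurons; neuron i has slope s_i - s_{i-1} and knot q_i
    return sum(hard((s[i] - s[i - 1]) * (y - q[i - 1])) for i in range(1, g + 2))

bound = max(2.0 * a / g, math.exp(-a))
grid = [-3.0 * a + 6.0 * a * k / 2000 for k in range(2001)]
max_err = max(abs(T(y) - soft(y)) for y in grid)
```

With a = 5 and g = 50 the measured error is dominated by the soft(−a) ≤ exp(−a) term and sits well inside the max{2a/g, exp(−a)} bound.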
Remark 12. Note that the construction used in the above lemma is likely far from optimal, as the
placement of the q_i's could be done more carefully. Also, the error bound we proved is crude and
does not make strong use of the properties of the softplus function. Nonetheless, it seems good
enough for our purposes.
A.3 Proofs for Section 2.5
Proof of Proposition 5. Suppose that there is an RBM network of size m, with weights bounded in
magnitude by C, that computes a function g which represents f with margin γ.

Then taking p = 2/γ and applying Theorem 3 we have that there exists a hardplus RBM network
of size 4m² log(2m/γ)/γ + m which computes a function g′ s.t. |g(x) − g′(x)| ≤ 1/p = γ/2 for
all x.

Note that f(x) = 1 ⇒ thresh(g(x)) = 1 ⇒ g(x) ≥ γ ⇒ g′(x) ≥ γ − γ/2 = γ/2, and similarly,
f(x) = 0 ⇒ thresh(g(x)) = 0 ⇒ g(x) ≤ −γ ⇒ g′(x) ≤ −γ + γ/2 = −γ/2. Thus we conclude
that g′ represents f with margin γ/2.
A.4 Proofs for Section 2.7
Proof of Theorem 6. Let f be a Boolean function on n variables computed by a size m hardplus RBM
network with parameters (W, b, d). We will first construct a three layer hybrid Boolean/threshold
circuit/network where the output gate is a simple weighted sum, the middle layer consists of AND
gates, and the bottom hidden layer consists of threshold neurons. There will be n·m AND gates, one
for every i ∈ [n] and j ∈ [m]. The (i, j)-th AND gate will have inputs: (1) x_i and (2) (x⊤[W]_j ≥ b_j).
The weights going from the (i, j)-th AND gate to the output will be given by [W]_{i,j}. It is not hard to
see that our three layer network computes the same Boolean function as the original hardplus RBM
network.

In order to obtain a single hidden layer threshold network, we replace each sub-network rooted at
an AND gate of the middle layer by a single threshold neuron. Consider a general sub-network
consisting of an AND of: (1) a variable x_j and (2) a threshold neuron computing (Σ_{i=1}^n a_i x_i ≥ b).
Let Q be some number greater than the sum of all the a_i's. We replace this sub-network by a single
threshold gate that computes (Σ_{i=1}^n a_i x_i + Q x_j ≥ b + Q). Note that if the input x is such that
Σ_i a_i x_i ≥ b and x_j = 1, then Σ_i a_i x_i + Q x_j will be at least b + Q, so the threshold gate will
output 1. In all other cases, the threshold will output zero. (If Σ_i a_i x_i < b, then even if x_j = 1,
the sum will still be less than Q + b. Similarly, if x_j = 0, then since Σ_i a_i x_i is never greater than
Σ_i a_i, the total sum will be less than Q, which can be chosen so that Q ≤ (n + 1)C.)
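The AND-absorption step can be checked exhaustively on small inputs. In this sketch (our own; for robustness with negative weights we take Q = Σ|a_i| + |b| + 1, slightly larger than the proof strictly requires) the merged threshold gate agrees with the original AND sub-network on every input:

```python
import itertools
import random

def threshold(z):
    return 1 if z >= 0 else 0

def original_subnetwork(a, b, j, x):
    # AND of the variable x_j and the threshold neuron [sum_i a_i x_i >= b]
    return x[j] & threshold(sum(ai * xi for ai, xi in zip(a, x)) - b)

def merged_gate(a, b, j, x):
    # single threshold gate [sum_i a_i x_i + Q*x_j >= b + Q]
    # Q = sum|a_i| + |b| + 1 is a safely large choice (an assumption of this
    # sketch) so the replacement also works when weights or bias are negative
    Q = sum(abs(ai) for ai in a) + abs(b) + 1
    return threshold(sum(ai * xi for ai, xi in zip(a, x)) + Q * x[j] - (b + Q))

random.seed(0)
n = 4
agree = True
for _ in range(100):
    a = [random.randint(-3, 3) for _ in range(n)]
    b = random.randint(-3, 3)
    j = random.randrange(n)
    for x in itertools.product([0, 1], repeat=n):
        agree = agree and (original_subnetwork(a, b, j, x) == merged_gate(a, b, j, x))
```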
A.5 Proof of Theorem 7
Proof. We will first describe how to construct a hardplus RBM network which satisfies the properties
required for part (i). It will be composed of n special groups of hardplus neurons (which are defined
and discussed below), and one additional neuron we call the "zero-neuron", which will be defined later.

Definition 13. A "building block" is a group of n hardplus neurons, parameterized by the scalars α
and e, where the weight vector w ∈ Rⁿ between the i-th neuron in the group and the input layer is
given by w_i = M − α and w_j = −α for j ≠ i, and the bias is given by b = αe − M, where M
is a constant chosen so that M > αe.

For a given x, the input to the i-th neuron of a particular building block is given by:

Σ_{j=1}^n w_j x_j + b = w_i x_i + Σ_{j≠i} w_j x_j + b
                      = (M − α)x_i − α(X − x_i) + αe − M
                      = α(e − X) − M(1 − x_i)
When x_i = 0, this is α(e − X) − M < 0, and so the neuron will output 0 (by definition of the
hardplus function). On the other hand, when x_i = 1, the input to the neuron will be α(e − X) and
thus the output will be max(0, α(e − X)).

In general, we have that the output will be given by:

x_i max(0, α(e − X))

From this it follows that the combined output from the neurons in the building block is:

Σ_{i=1}^n x_i max(0, α(e − X)) = max(0, α(e − X)) Σ_{i=1}^n x_i
                               = max(0, α(e − X)) X = max(0, αX(e − X))

Note that whenever X is positive, the output is a concave quadratic function in X, with zeros at
X = 0 and X = e, and maximized at X = e/2, with value αe²/4.
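The building-block identity can be verified directly. The following sketch (ours, with arbitrary illustrative values of α, e and n) sums the n hardplus neurons of a single block and compares against max(0, αX(e − X)):

```python
import itertools

def hard(z):
    # hardplus, i.e. max(0, z)
    return max(0.0, z)

def block_output(x, alpha, e, M):
    # one "building block": n hardplus neurons; neuron i has weights
    # w_i = M - alpha, w_j = -alpha for j != i, and bias b = alpha*e - M
    n = len(x)
    total = 0.0
    for i in range(n):
        inp = sum(((M - alpha) if j == i else -alpha) * x[j] for j in range(n)) + alpha * e - M
        total += hard(inp)
    return total

alpha, e, n = 1.5, 3.2, 5   # arbitrary illustrative values
M = alpha * e + 1.0         # any M > alpha*e works
block_ok = True
for x in itertools.product([0, 1], repeat=n):
    X = sum(x)
    block_ok = block_ok and abs(block_output(list(x), alpha, e, M) - max(0.0, alpha * X * (e - X))) < 1e-9
```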
Next we show how the parameters of the n building blocks used in our construction can be set to
produce a hardplus RBM network with the desired output.

First, define d to be any number greater than or equal to 2n² Σ_j |t_j|.

Indexing the building blocks by j for 1 ≤ j ≤ n, we define their respective parameters α_j, e_j as
follows:

α_n = (t_n + d)/n²,   e_n = 2n,

and for 1 ≤ j ≤ n − 1:

α_j = (t_j + d)/j² − (t_{j+1} + d)/(j + 1)²,
e_j = (2/α_j) [ (t_j + d)/j − (t_{j+1} + d)/(j + 1) ]

where we have assumed that α_j ≠ 0 (which will be established, along with some other properties of
these definitions, in the next claim).
Claim 1. For all j, 1 ≤ j ≤ n, (i) α_j > 0, and (ii) for all j, 1 ≤ j ≤ n − 1, j ≤ e_j ≤ j + 1.

Proof of Claim 1. Part (i): For j = n, by definition we know that α_n = (t_n + d)/n². For d ≥
2n² Σ_j |t_j| > |t_n|, the numerator will be positive and therefore α_n will be positive.

For j < n, we have:

α_j > 0  ⇔  (t_j + d)/j² > (t_{j+1} + d)/(j + 1)²
         ⇔  (j + 1)²(t_j + d) > j²(t_{j+1} + d)
         ⇔  d((j + 1)² − j²) > j² t_{j+1} − (j + 1)² t_j
         ⇔  d > (j² t_{j+1} − (j + 1)² t_j)/(2j + 1)

The right side of the above inequality is less than or equal to (j + 1)²(|t_{j+1}| + |t_j|)/(2j + 1) ≤
(j + 1)(|t_{j+1}| + |t_j|), which is strictly upper bounded by 2n² Σ_j |t_j|, and thus by d. So it follows
that α_j > 0 as needed.
Part (ii): For the lower bound we have:

j ≤ e_j = (2/α_j) [ (t_j + d)/j − (t_{j+1} + d)/(j + 1) ]
  ⇔  j α_j ≤ 2 [ (t_j + d)/j − (t_{j+1} + d)/(j + 1) ]
  ⇔  (t_j + d)/j − j(t_{j+1} + d)/(j + 1)² ≤ 2(t_j + d)/j − 2(t_{j+1} + d)/(j + 1)
  ⇔  2(t_{j+1} + d)/(j + 1) − j(t_{j+1} + d)/(j + 1)² ≤ (t_j + d)/j
  ⇔  2(t_{j+1} + d)j(j + 1) ≤ (t_{j+1} + d)j² + (t_j + d)(j + 1)²
  ⇔  d(j² − 2j(j + 1) + (j + 1)²) ≥ −j² t_{j+1} + 2j(j + 1)t_{j+1} − (j + 1)² t_j
  ⇔  d ≥ −j² t_{j+1} + 2j(j + 1)t_{j+1} − (j + 1)² t_j

where we have used j² − 2j(j + 1) + (j + 1)² = (j − (j + 1))² = 1² = 1 at the last line. Thus it
suffices to make d large enough to ensure that j ≤ e_j. For our choice of d, this will be true.
For the upper bound we have:

e_j = (2/α_j) [ (t_j + d)/j − (t_{j+1} + d)/(j + 1) ] ≤ j + 1
  ⇔  2 [ (t_j + d)/j − (t_{j+1} + d)/(j + 1) ] ≤ (j + 1)α_j = (j + 1)(t_j + d)/j² − (t_{j+1} + d)/(j + 1)
  ⇔  2(t_j + d)/j − (t_{j+1} + d)/(j + 1) ≤ (j + 1)(t_j + d)/j²
  ⇔  2(t_j + d)j(j + 1) ≤ (t_{j+1} + d)j² + (j + 1)²(t_j + d)
  ⇔  −j²(d + t_{j+1}) + 2j(j + 1)(d + t_j) ≤ (j + 1)²(d + t_j)
  ⇔  d(j² − 2j(j + 1) + (j + 1)²) ≥ −j² t_{j+1} + 2j(j + 1)t_j − (j + 1)² t_j
  ⇔  d ≥ −j² t_{j+1} + 2j(j + 1)t_j − (j + 1)² t_j

where we have used j² − 2j(j + 1) + (j + 1)² = 1 at the last line. Again, for our choice of d the
above inequality is satisfied.
Finally, define M to be any number greater than max(t_0 + d, max_i{α_i e_i}).

In addition to the n building blocks, our hardplus RBM will include an additional unit that we will call
the zero-neuron, which handles x = 0. The zero-neuron will have weights w defined by w_i = −M
for each i, and b = t_0 + d.

Finally, the output bias B of our hardplus RBM network will be set to −d.

The total output of the network is simply the sum of the outputs of the n different building blocks,
the zero-neuron, and the constant bias −d.

To show part (i) of the theorem we want to prove that for all k, whenever X = k, our circuit outputs
the value t_k.
We make the following definitions:

a_k ≡ −Σ_{j=k}^n α_j
b_k ≡ Σ_{j=k}^n α_j e_j
Claim 2.

a_k = −(t_k + d)/k²,   b_k = 2(t_k + d)/k,   b_k = −2k a_k

This claim is self-evidently true by examining the basic definitions of α_j and e_j and realizing that a_k
and b_k are telescoping sums.
Given these facts, we can prove the following:

Claim 3. For all k, 1 ≤ k ≤ n, when X = k the sum of the outputs of all the n building blocks is
given by t_k + d.

Proof of Claim 3. For X = n, the (α_n, e_n)-block computes max(0, α_n X(e_n − X)) =
max(0, −α_n X² + α_n e_n X). By the definition of e_n, n ≤ e_n, and thus when X ≤ n, α_n X(e_n − X) ≥
0. For all other building blocks (α_j, e_j), j < n, since e_j ≤ j + 1, this block outputs zero since
α_j X(e_j − X) is less than or equal to zero. Thus the sum of all of the building blocks when X = n
is just the output of the (α_n, e_n)-block, which is

α_n n(e_n − n) = −α_n n² + α_n e_n n = −(t_n + d) + 2(t_n + d) = t_n + d

as desired.

For X = k, 1 ≤ k < n, the argument is similar. For all building blocks j ≥ k, by Claim 1 we know
that e_j ≥ j and therefore this block on X = k is nonnegative and therefore contributes to the sum.
On the other hand, for all building blocks j < k, by Claim 1 we know that e_j ≤ j + 1 and therefore
this outputs 0 and so does not contribute to the sum.

Thus the sum of all of the building blocks is equal to the sum of the non-zero regions of the building
blocks j for j ≥ k. Since each of these is a quadratic function of X, it can be written as a single quadratic
polynomial of the form a_k X² + b_k X, where a_k and b_k are defined as before.

Plugging in the above expressions for a_k and b_k from Claim 2, we see that the value of this polynomial at X = k is:

a_k k² + b_k k = −((t_k + d)/k²) k² + (2(t_k + d)/k) k = −(t_k + d) + 2(t_k + d) = t_k + d
Finally, it remains to ensure that our hardplus RBM network outputs t_0 for X = 0. Note that the
sum of the outputs of all n building blocks and the output bias is −d at X = 0. To correct this, we
set the incoming weights and the bias of the zero-neuron according to w_i = −M for each i, and
b = t_0 + d. When X = 0, this neuron will output t_0 + d, making the total output of the network
−d + t_0 + d = t_0 as needed. Furthermore, note that the addition of the zero-neuron does not affect
the output of the network when X = k > 0 because the zero-neuron outputs 0 on all of these inputs
as long as M ≥ t_0 + d.
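Putting the whole construction together, the following sketch (our own rendering, following the definitions of d, α_j, e_j and M above, with an arbitrary target sequence t) checks that the assembled network outputs t_k whenever exactly k inputs are on:

```python
def hard(z):
    # hardplus, i.e. max(0, z)
    return max(0.0, z)

def build(t):
    # t = [t_0, ..., t_n]: desired outputs for X = 0, ..., n
    n = len(t) - 1
    d = 2 * n**2 * sum(abs(v) for v in t)          # d >= 2 n^2 sum_j |t_j|
    alpha, e = [0.0] * (n + 1), [0.0] * (n + 1)
    alpha[n] = (t[n] + d) / n**2
    e[n] = 2 * n
    for j in range(1, n):
        alpha[j] = (t[j] + d) / j**2 - (t[j + 1] + d) / (j + 1)**2
        e[j] = (2.0 / alpha[j]) * ((t[j] + d) / j - (t[j + 1] + d) / (j + 1))
    M = max([t[0] + d] + [alpha[j] * e[j] for j in range(1, n + 1)]) + 1.0
    return n, d, alpha, e, M

def output(x, net, t0):
    n, d, alpha, e, M = net
    total = -d                                     # output bias B = -d
    for j in range(1, n + 1):                      # the n building blocks
        for i in range(n):
            inp = sum(((M - alpha[j]) if k == i else -alpha[j]) * x[k]
                      for k in range(n)) + alpha[j] * e[j] - M
            total += hard(inp)
    total += hard(-M * sum(x) + t0 + d)            # the zero-neuron
    return total

t = [3.0, -1.0, 4.0, -1.0, 5.0]                    # arbitrary targets, n = 4
net = build(t)
results = []
for k in range(len(t)):
    x = [1] * k + [0] * (len(t) - 1 - k)           # an input with X = k
    results.append(output(x, net, t[0]))
```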
This completes the proof of part (i) of the theorem and it remains to prove part (ii).

Observe that the size of the weights grows linearly in M and d, which follows directly from their
definitions. And note that the magnitude of the input to each neuron is lower bounded by a positive
linear function of M and d (a non-trivial fact which we will prove below). From these two observations
it follows that to achieve the condition that the magnitude of the input to each neuron is greater than
C(n) for some function C of n, the weights need to grow linearly with C. Noting that the error bound
condition (n² + 1) exp(−C) ≤ ε in Lemma 2 can be rewritten as C ≥ log(n² + 1) + log(1/ε),
part (ii) of the theorem then follows.

There are two cases where a hardplus neuron in building block j has a negative input. Either the
input is α_j(e_j − X) − M, or it is α_j(e_j − X) for X ≥ j + 1. In the first case it is clear that as M
grows the net input becomes more negative, since e_j doesn't depend on M at all.
The second case requires more work. First note that from its definition, e_j can be rewritten as
2((j + 1)a_{j+1} − j a_j)/α_j. Then for any X ≥ j + 1 and j ≤ n − 1 we have:

α_j(e_j − X) ≤ α_j(e_j − (j + 1))
  = α_j (2((j + 1)a_{j+1} − j a_j)/α_j − (j + 1))
  = 2(j + 1)a_{j+1} − 2j a_j − (j + 1)α_j
  = 2(j + 1)a_{j+1} − 2j a_j − (j + 1)(a_{j+1} − a_j)
  = (j + 1)a_{j+1} − 2j a_j + (j + 1)a_j
  = −(d + t_{j+1})/(j + 1) + 2(d + t_j)/j − (j + 1)(d + t_j)/j²
  = (−j²(d + t_{j+1}) + 2j(j + 1)(d + t_j) − (j + 1)²(d + t_j)) / (j²(j + 1))
  = (−(j² − 2j(j + 1) + (j + 1)²)d − j² t_{j+1} + 2j(j + 1)t_j − (j + 1)² t_j) / (j²(j + 1))
  = (−(j − (j + 1))² d − j² t_{j+1} + 2j(j + 1)t_j − (j + 1)² t_j) / (j²(j + 1))
  = (−d − j² t_{j+1} + 2j(j + 1)t_j − (j + 1)² t_j) / (j²(j + 1))
  = −d/(j²(j + 1)) + (−j² t_{j+1} + 2j(j + 1)t_j − (j + 1)² t_j)/(j²(j + 1))

So we see that as d increases, this bound guarantees that α_j(e_j − X) becomes more negative for
each X ≥ j + 1. Also note that for the special zero-neuron, for X ≥ 1 the net input will be
−MX + t_0 + d ≤ −M + t_0 + d, which will shrink as M grows.
For neurons belonging to building block j which have a positive valued input, we have that X < e_j.
Note that for any X ≤ j and j < n we have:

α_j(e_j − X) ≥ α_j(e_j − j) = α_j (2((j + 1)a_{j+1} − j a_j)/α_j − j)
  = 2(j + 1)a_{j+1} − 2j a_j − j α_j
  = 2(j + 1)a_{j+1} − 2j a_j − j(a_{j+1} − a_j)
  = 2(j + 1)a_{j+1} − j a_j − j a_{j+1}
  = −2(d + t_{j+1})/(j + 1) + (d + t_j)/j + j(d + t_{j+1})/(j + 1)²
  = (−2j(j + 1)(d + t_{j+1}) + (j + 1)²(d + t_j) + j²(d + t_{j+1})) / (j(j + 1)²)
  = (((j + 1)² − 2j(j + 1) + j²)d + (j + 1)² t_j − 2j(j + 1)t_{j+1} + j² t_{j+1}) / (j(j + 1)²)
  = ((j + 1 − j)² d + (j + 1)² t_j − 2j(j + 1)t_{j+1} + j² t_{j+1}) / (j(j + 1)²)
  = (d + (j + 1)² t_j − 2j(j + 1)t_{j+1} + j² t_{j+1}) / (j(j + 1)²)
  = d/(j(j + 1)²) + ((j + 1)² t_j − 2j(j + 1)t_{j+1} + j² t_{j+1})/(j(j + 1)²)

And for the case j = n, we have for X ≤ j that:

α_j(e_j − X) ≥ α_j(e_j − j) = ((d + t_n)/n²)(2n − n) = d/n + t_n/n

So in all cases we see that as d increases, this bound guarantees that α_j(e_j − X) grows linearly.
Also note that for the special zero-neuron, the net input will be t_0 + d for X = 0, which will grow
linearly as d increases.
A.6 Proofs for Section 4

A.6.1 Proof of Theorem 8
We first state some basic facts which we need.

Fact 14 (Muroga (1971)). Let f : {0, 1}ⁿ → {0, 1} be a Boolean function computed by a threshold
neuron with arbitrary real incoming weights and bias. There exists a constant K and another
threshold neuron computing f, all of whose incoming weights and bias are integers with magnitude
at most 2^{Kn log n}.

A direct consequence of the above fact is the following fact, by now folklore, whose simple proof
we present for the sake of completeness.

Fact 15. Let F_n be the set of all Boolean functions on {0, 1}ⁿ. For each s ≥ 1, let F_{s,n} be the
subset of such Boolean functions that are computable by threshold networks with one hidden layer
with at most s neurons. Then, there exists a constant K such that

|F_{s,n}| ≤ 2^{K(n²s log n + s² log s)}.
Proof. Let s be the number of hidden neurons in our threshold network. By using Fact 14 repeatedly
for each of the hidden neurons, we obtain another threshold network, still having s hidden units, computing the same Boolean function, such that the incoming weights and biases of all hidden neurons
are bounded by 2^{Kn log n}. Finally, applying Fact 14 to the output neuron, we convert it to a threshold
gate with parameters bounded by 2^{Ks log s}. Henceforth, we count only the total number of Boolean
functions that can be computed by such threshold networks with integer weights. We do this by
establishing a simple upper bound on the total number of distinct such networks. Clearly, there are
at most 2^{Kn² log n} ways to choose the incoming weights of a given neuron in the hidden layer. There
are s incoming weights to choose for the output threshold, each of which is an integer of magnitude
at most 2^{Ks log s}. Combining these observations, there are at most 2^{Ksn² log n} · 2^{Ks² log s} distinct
networks. Hence, the total number of distinct Boolean functions that can be computed is at most
2^{K(n²s log n + s² log s)}.
With these basic facts in hand, we prove below Theorem 8 using Proposition 5 and Theorem 6.
Proof of Theorem 8. Consider any thresholded RBM network with m hidden units that is computing
an n-dimensional Boolean function with margin γ. Using Proposition 5, we can obtain a thresholded
hardplus RBM network of size 4m²/γ · log(2m/γ) + m that computes the same Boolean function
as the thresholded original RBM network. Applying Theorem 6 and thresholding the output, we
obtain a network with 1 hidden layer of threshold units which computes the same Boolean function.
This argument shows that the set of Boolean functions computed by
thresholded RBM networks of m hidden units and margin γ is a subset of the Boolean functions
computed by 1-hidden-layer threshold networks of size 4m²n/γ · log(2m/γ) + mn. Hence, invoking
Fact 15 establishes our theorem.
A.6.2 Proof of Theorem 9
Note that the theorems from Hajnal et al. (1993) assume integer weights, but this hypothesis can
be easily removed from their Theorem 3.6. In particular, Theorem 3.6 assumes nothing about the
lower weights, and as we will see, the integrality assumption on the top level weights can be easily
replaced with a margin condition.

First note that their Lemma 3.3 only uses the integrality of the upper weights to establish that the
margin must be ≥ 1. Otherwise it is easy to see that with a margin γ, Lemma 3.3 implies that
a threshold neuron in a thresholded network of size m is a (2γ/Λ)-discriminator (Λ is the sum of the
absolute values of the 2nd-level weights in their notation). Then Theorem 3.6's proof gives m ≥
γ 2^{(1/3−ε)n} for sufficiently large n (instead of just m ≥ 2^{(1/3−ε)n}). A more precise bound that they
implicitly prove in Theorem 3.6 is m ≥ 6γ 2^{n/3}/C.
Thus we have the following fact adapted from Hajnal et al. (1993):

Fact 16. For a neural network of size m with a single hidden layer of threshold neurons and weights
bounded by C that computes a function which represents IP with margin γ, we have m ≥ 6γ 2^{n/3}/C.

Proof of Theorem 9. By Proposition 5 it suffices to show that no thresholded hardplus RBM network
of size ≤ 4m² log(2m/γ)/γ + m with parameters bounded by C can compute IP with margin γ/2.

Well, suppose by contradiction that such a thresholded RBM network exists. Then by Theorem 6
there exists a single hidden layer threshold network of size ≤ 4m²n log(2m/γ)/γ + mn, with weights
bounded in magnitude by (n + 1)C, that computes the same function, i.e. one which represents IP
with margin γ/2.

Applying the above Fact we have 4m²n log(2m/γ)/γ + mn ≥ 3γ 2^{n/3}/((n + 1)C).

It is simple to check that this bound is violated if m is bounded as in the statement of this theorem.
A.6.3 Proof of Theorem 10
We prove a more general result here from which we easily derive Theorem 10 as a special case.
To state this general result, we introduce some simple notions. Let h : R → R be an activation
function. We say h is monotone if it satisfies the following: either h(x) ≤ h(y) for all x < y, or
h(x) ≥ h(y) for all x < y. Let ℓ : {0, 1}ⁿ → R be an inner function. An (h, ℓ)-gate/neuron G_{h,ℓ}
is just one that is obtained by composing h and ℓ in the natural way, i.e. G_{h,ℓ}(x) = h(ℓ(x)). We
notate ‖(h, ℓ)‖_∞ = max_{x∈{0,1}ⁿ} |G_{h,ℓ}(x)|.

We assume for the discussion here that the number of input variables (or observables) is even and is
divided into two halves, called x and y, each being a Boolean string of n bits. In this language, the inner
product Boolean function, denoted by IP(x, y), is just defined as x₁y₁ + ··· + x_n y_n (mod 2).
We call an inner function of a neuron/gate (x, y)-separable if it can be expressed as g(x) + f(y).
For instance, all affine inner functions are (x, y)-separable. Finally, given a set of activation functions H and a set of inner functions I, an (H, I)-network is one each of whose hidden units is a
neuron of the form G_{h,ℓ} for some h ∈ H and ℓ ∈ I. Let ‖(H, I)‖_∞ = sup{ ‖(h, ℓ)‖_∞ : h ∈ H, ℓ ∈ I }.

Theorem 17. Let H be any set of monotone activation functions and I be a set of (x, y)-separable
inner functions. Then, every (H, I)-network with one layer of m hidden units computing IP with a
margin of γ must satisfy the following:

m ≥ (γ / (2‖(H, I)‖_∞)) · 2^{n/4}.
In order to prove Theorem 17, it would be convenient to consider the following 1/−1 valued function:
(−1)^{IP(x,y)} = (−1)^{x₁y₁+···+x_n y_n}. Please note that when IP evaluates to 0, (−1)^{IP} evaluates to 1, and
when IP evaluates to 1, (−1)^{IP} evaluates to −1.

We also consider a matrix M_n with entries in {1, −1} which has 2ⁿ rows and 2ⁿ columns. Each
row of M_n is indexed by a unique Boolean string in {0, 1}ⁿ. The columns of the matrix are also
indexed similarly. The entry M_n[x, y] is just the 1/−1 value of (−1)^{IP(x,y)}. We need the following
fact that is a special case of the classical result of Lindsey.

Lemma 18 (Chor and Goldreich, 1988). The magnitude of the sum of elements in every r × s sub-matrix of M_n is at most √(rs 2ⁿ).
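Lemma 18 is easy to probe empirically for small n. This sketch (ours) builds M₄ and checks the √(rs·2ⁿ) bound on a few hundred random sub-matrices:

```python
import itertools
import math
import random

n = 4
pts = list(itertools.product([0, 1], repeat=n))
# M_n[x, y] = (-1)^{IP(x, y)} where IP(x, y) = <x, y> mod 2
Mn = [[(-1) ** (sum(xi * yi for xi, yi in zip(x, y)) % 2) for y in pts] for x in pts]

random.seed(1)
lindsey_ok = True
for _ in range(500):
    r = random.randint(1, 2**n)
    s = random.randint(1, 2**n)
    rows = random.sample(range(2**n), r)
    cols = random.sample(range(2**n), s)
    total = sum(Mn[i][j] for i in rows for j in cols)
    # Lemma 18: |sum over an r x s sub-matrix| <= sqrt(r * s * 2^n)
    lindsey_ok = lindsey_ok and abs(total) <= math.sqrt(r * s * 2**n) + 1e-9
```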
We use Lemma 18 to prove the following key fact about monotone activation functions:

Lemma 19. Let G_{h,ℓ} be any neuron with a monotone activation function h and an inner function ℓ that
is (x, y)-separable. Then,

|E_{x,y}[G_{h,ℓ}(x, y)(−1)^{IP(x,y)}]| ≤ ‖(h, ℓ)‖_∞ · 2^{−Ω(n)}.   (2)
Proof. Let ℓ(x, y) = g(x) + f(y) and let 0 < μ < 1 be some constant specified later. Define a
total order ≼_g on {0, 1}ⁿ by setting x ≼_g x′ whenever g(x) ≤ g(x′), breaking ties according to the
lexicographic ordering. We divide {0, 1}ⁿ into t = 2^{(1−μ)n} groups of equal size as follows: the first
group contains the first 2^{μn} elements in the order specified by ≼_g, the second group has the next
2^{μn} elements, and so on. The i-th such group is denoted by X_i for i ≤ 2^{(1−μ)n}. Likewise, we define
the total order ≼_f and use it to define equal sized blocks Y₁, . . . , Y_{2^{(1−μ)n}}.

The way we estimate the LHS of (2) is to pair points in the block (X_i, Y_j) with (X_{i+1}, Y_{j+1})
in the following manner: wlog assume that the activation function h is non-decreasing. Then,
G_{h,ℓ}(x, y) ≤ G_{h,ℓ}(x′, y′) for each (x, y) ∈ (X_i, Y_j) and (x′, y′) ∈ (X_{i+1}, Y_{j+1}). Further, applying
Lemma 18, we will argue that the total number of points in (X_i, Y_j) at which the product in the
LHS evaluates negative (positive) is very close to the number of points in (X_{i+1}, Y_{j+1}) at which
the product evaluates to positive (negative). Moreover, by assumption, the composed function (h, ℓ)
does not take very large values in our domain. These observations will be used to
show that the points in blocks that are diagonally across each other will almost cancel each other's
contribution to the LHS. There are too few uncancelled blocks and hence the sum in the LHS will
be small. Forthwith the details.
Let P⁺_{i,j} = {(x, y) ∈ (X_i, Y_j) | (−1)^{IP(x,y)} = 1} and P⁻_{i,j} = {(x, y) ∈ (X_i, Y_j) | (−1)^{IP(x,y)} = −1}.
Let t = 2^{(1−μ)n}. Let h_{i,j} be the max value that the gate takes on points in (X_i, Y_j). Note that the
non-decreasing assumption on h implies that h_{i,j} ≤ h_{i+1,j+1}. Using this observation, we get the
following:

E_{x,y}[G_{h,ℓ}(x, y)(−1)^{IP(x,y)}] ≤ (1/4ⁿ) Σ_{(i,j)<t} h_{i,j} (|P⁺_{i,j}| − |P⁻_{i+1,j+1}|) + (1/4ⁿ) Σ_{i=t or j=t} h_{i,j} |P_{i,j}|   (3)

We apply Lemma 18 to conclude that | |P⁺_{i+1,j+1}| − |P⁻_{i,j}| | is at most 2 · 2^{(μ+1/2)n}. Thus, we get

RHS of (3) ≤ ‖(h, ℓ)‖_∞ (2 · 2^{−(μ−1/2)n} + 4 · 2^{−(1−μ)n}).   (4)

Thus, setting μ = 3/4 gives us the bound that the RHS above is arbitrarily close to ‖(h, ℓ)‖_∞ · 2^{−n/4}.
Similarly, pairing things slightly differently, we get

E_{x,y}[G_{h,ℓ}(x, y)(−1)^{IP(x,y)}] ≥ −(1/4ⁿ) Σ_{(i,j)<t} h_{i+1,j+1} (|P⁺_{i+1,j+1}| − |P⁻_{i,j}|) − (1/4ⁿ) Σ_{i=t or j=t} |h_{i,j}| · |P_{i,j}|   (5)

Again, similar conditions and settings of μ imply that the RHS of (5) is no smaller than −‖(h, ℓ)‖_∞ ·
2^{−n/4}, thus proving our lemma.
We are now ready to prove Theorem 17.

Proof of Theorem 17. Let C be any (H, I)-network having m hidden units, G_{h₁,ℓ₁}, . . . , G_{h_m,ℓ_m},
where each h_i ∈ H and each ℓ_i ∈ I is (x, y)-separable. Further, let the output threshold gate
be such that whenever the sum is at least b, C outputs 1, and whenever it is at most a, C outputs
−1. Then, let f be the sum total of the functions feeding into the top threshold gate of C. Define
t = f − (a + b)/2. Hence,

E_{x,y}[f(x, y)(−1)^{IP(x,y)}] = E_{x,y}[t(x, y)(−1)^{IP(x,y)}] + ((a + b)/2) E_{x,y}[(−1)^{IP(x,y)}]
                             ≥ (b − a)/2 + ((a + b)/2) E_{x,y}[(−1)^{IP(x,y)}].

Thus, it follows easily that

E_{x,y}[f(x, y)(−1)^{IP(x,y)}] ≥ (b − a)/2 − |(a + b)/2| · 2^{−n}.   (6)

On the other hand, by linearity of expectation and applying Lemma 19, we get

E_{x,y}[f(x, y)(−1)^{IP(x,y)}] ≤ Σ_{j=1}^m |E_{x,y}[G_{h_j,ℓ_j}(x, y)(−1)^{IP(x,y)}]| ≤ m · ‖(H, I)‖_∞ · 2^{−n/4}.   (7)

Comparing (6) and (7), observing that each of |a| and |b| is at most m‖(H, I)‖_∞, and recalling that
γ = (b − a), our desired bound on m follows.
Proof of Theorem 10. The proof follows quite simply by noting that the set of activation functions in
this case is just the singleton set containing only the monotone function soft(y) = log(1 + exp(y)). The
set of inner functions are all affine functions with each coefficient having value at most C. As the
affine functions are (x, y)-separable, we can apply Theorem 17. We do so by noting ‖(H, I)‖_∞ ≤
log(1 + exp(nC)) ≤ max{log 2, nC + log 2}. That yields our result.

Remark 20. It is also interesting to note that Theorem 17 appears to be tight in the sense that
none of the hypotheses can be removed. That is, for neurons with general non-monotonic activation
functions, or for neurons with monotonic activation functions whose output magnitude violates the
aforementioned bounds, there are example networks that can efficiently compute any real-valued
function. Thus, to improve this result (e.g. removing the weight bounds) it appears one would need
to use a stronger property of the particular activation function than monotonicity.
21
| 5020 |@word nihat:1 middle:2 version:1 polynomial:11 stronger:2 seems:1 nd:1 suitably:1 open:2 simulation:11 contrastive:3 q1:6 invoking:1 tr:1 harder:2 reduction:1 series:1 contains:2 ours:1 ghj:1 existing:1 freitas:1 comparing:2 surprising:3 activation:21 yet:1 schnitger:1 must:5 written:1 fn:1 realistic:1 visible:2 partition:3 hajnal:12 additive:5 show1:1 utml:1 plot:1 designed:1 v:1 generative:5 half:3 mccallum:1 ith:1 realizing:1 provides:2 completeness:1 contribute:3 toronto:3 sigmoidal:1 unbounded:1 along:1 constructed:3 direct:1 pairing:1 prove:20 consists:3 symp:1 inside:2 manner:1 introduce:1 x0:5 indeed:1 hardness:5 rapid:1 growing:1 salakhutdinov:6 decreasing:2 overwhelming:1 delineation:1 jm:1 totally:1 increasing:3 provided:2 begin:1 bounded:32 notation:1 circuit:8 mass:7 moreover:1 becomes:2 what:13 linearity:1 kind:7 string:2 lindsey:1 marlin:2 impractical:1 eduardo:1 formalizes:1 guarantee:2 quantitative:1 every:4 act:1 concave:1 exactly:1 rm:1 demonstrates:1 k2:2 unit:20 appear:1 yn:2 positive:13 t1:1 understood:2 before:2 qualification:1 tends:1 limit:1 consequence:1 ak:8 establishing:1 subscript:1 approximately:3 might:4 plus:5 twice:1 studied:1 k:4 limited:1 range:4 bi:2 practical:2 acknowledgment:1 unique:1 yj:8 block:30 backpropagation:2 secant:2 universal:1 maxx:2 thought:1 significantly:2 convenient:1 pre:1 word:1 suggest:1 get:4 cannot:15 convenience:2 close:2 context:1 impossible:1 applying:6 yee:1 restriction:3 function2:1 map:1 marten:1 ramanujan:1 go:1 straightforward:1 independently:1 convex:8 roux:3 identifying:1 assigns:1 immediately:1 m2:10 examines:1 insight:1 haussler:2 importantly:1 rule:1 iain:1 contradiction:1 classic:1 handle:2 proving:2 notion:1 justification:4 analogous:2 construction:17 suppose:5 target:1 speak:1 exact:1 guido:1 us:1 hypothesis:3 element:3 recognition:1 approximated:1 particularly:2 cueto:2 bottom:1 reducible:1 capture:3 region:3 wj:3 montufar:4 ordering:1 tur:1 removed:2 intuition:1 vanishes:1 
benjamin:1 complexity:4 motivate:1 depend:1 tight:1 purely:1 bipartite:3 efficiency:5 exit:1 completely:2 observables:1 easily:5 joint:1 goldreich:1 differently:1 represented:8 various:2 derivation:3 separated:1 distinct:4 fast:3 describe:2 zemel:2 kevin:1 outside:1 whose:21 encoded:1 widely:1 valued:19 quite:2 loglikelihood:1 say:4 otherwise:5 relax:1 itself:2 final:1 ip:32 indication:1 evidently:1 net:6 lowdimensional:1 interaction:1 product:6 j2:5 relevant:2 combining:2 rapidly:1 iff:1 achieve:1 representational:6 roweis:1 sutskever:2 convergence:1 requirement:1 produce:5 converges:2 object:1 help:1 tk:11 develop:1 derive:2 ac:1 andrew:1 measured:1 school:1 strong:2 c:1 resemble:1 implies:6 indicate:1 involves:2 exhibiting:1 direction:2 closely:1 correct:1 nando:1 viewing:1 translating:1 violates:1 rauh:1 require:2 feeding:1 fix:1 suffices:2 proposition:13 probable:1 strictly:3 hold:2 lying:1 sufficiently:5 residing:1 exp:50 scope:1 bj:7 slab:1 claim:10 omitted:1 purpose:2 ruslan:2 proc:1 saw:1 individually:1 largest:2 successfully:1 tool:1 establishes:2 suprisingly:1 hope:1 weighted:1 clearly:2 lexicographic:1 always:1 avoid:2 hj:6 pn:1 ej:26 corollary:1 morton:1 focus:1 improvement:1 maria:1 likelihood:5 check:1 aka:1 rigorous:3 realizable:1 sense:2 entire:2 hidden:50 manipulating:1 going:2 interested:1 provably:4 canceled:1 classification:1 aforementioned:1 denoted:5 constrained:1 special:6 field:1 construct:3 equal:6 having:6 never:1 identical:1 represents:10 broad:1 look:3 unsupervised:3 icml:1 muroga:3 cancel:1 future:1 yoshua:2 report:1 richard:1 primarily:1 few:1 composed:3 divergence:1 individual:3 replaced:2 minsky:1 geometry:1 consisting:1 recalling:1 interest:1 possibility:1 highly:1 evaluation:1 severe:1 mixture:1 activated:1 tj:93 lh:4 respective:1 unless:1 indexed:2 divide:1 re:1 ruled:1 desired:3 theoretical:2 instance:2 column:3 modeling:1 soft:60 boolean:36 formalism:1 yoav:1 cost:1 introducing:1 subset:5 entry:3 uninteresting:1 examining:1 
Distributed Representations of Words and Phrases
and their Compositionality
Tomas Mikolov
Google Inc.
Mountain View
[email protected]
Ilya Sutskever
Google Inc.
Mountain View
[email protected]
Kai Chen
Google Inc.
Mountain View
[email protected]
Jeffrey Dean
Google Inc.
Mountain View
[email protected]
Greg Corrado
Google Inc.
Mountain View
[email protected]
Abstract
The recently introduced continuous Skip-gram model is an efficient method for
learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present
several extensions that improve both the quality of the vectors and the training
speed. By subsampling of the frequent words we obtain significant speedup and
also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling.
An inherent limitation of word representations is their indifference to word order
and their inability to represent idiomatic phrases. For example, the meanings of
"Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated
by this example, we present a simple method for finding phrases in text, and show
that learning good vector representations for millions of phrases is possible.
1 Introduction
Distributed representations of words in a vector space help learning algorithms to achieve better
performance in natural language processing tasks by grouping similar words. One of the earliest use
of word representations dates back to 1986 due to Rumelhart, Hinton, and Williams [13]. This idea
has since been applied to statistical language modeling with considerable success [1]. The follow
up work includes applications to automatic speech recognition and machine translation [14, 7], and
a wide range of NLP tasks [2, 20, 15, 3, 18, 19, 9].
Recently, Mikolov et al. [8] introduced the Skip-gram model, an efficient method for learning high-quality vector representations of words from large amounts of unstructured text data. Unlike most
of the previously used neural network architectures for learning word vectors, training of the Skip-gram model (see Figure 1) does not involve dense matrix multiplications. This makes the training
extremely efficient: an optimized single-machine implementation can train on more than 100 billion
words in one day.
The word representations computed using neural networks are very interesting because the learned
vectors explicitly encode many linguistic regularities and patterns. Somewhat surprisingly, many of
these patterns can be represented as linear translations. For example, the result of a vector calculation vec("Madrid") - vec("Spain") + vec("France") is closer to vec("Paris") than to any other word
vector [9, 8].
Figure 1: The Skip-gram model architecture. The training objective is to learn word vector representations
that are good at predicting the nearby words.
In this paper we present several extensions of the original Skip-gram model. We show that subsampling of frequent words during training results in a significant speedup (around 2x - 10x), and
improves accuracy of the representations of less frequent words. In addition, we present a simplified variant of Noise Contrastive Estimation (NCE) [4] for training the Skip-gram model that results
in faster training and better vector representations for frequent words, compared to more complex
hierarchical softmax that was used in the prior work [8].
Word representations are limited by their inability to represent idiomatic phrases that are not compositions of the individual words. For example, "Boston Globe" is a newspaper, and so it is not a
natural combination of the meanings of "Boston" and "Globe". Therefore, using vectors to represent the whole phrases makes the Skip-gram model considerably more expressive. Other techniques
that aim to represent meaning of sentences by composing the word vectors, such as the recursive
autoencoders [15], would also benefit from using phrase vectors instead of the word vectors.
The extension from word based to phrase based models is relatively simple. First we identify a large
number of phrases using a data-driven approach, and then we treat the phrases as individual tokens
during the training. To evaluate the quality of the phrase vectors, we developed a test set of analogical reasoning tasks that contains both words and phrases. A typical analogy pair from our test set is
"Montreal":"Montreal Canadiens"::"Toronto":"Toronto Maple Leafs". It is considered to have been
answered correctly if the nearest representation to vec("Montreal Canadiens") - vec("Montreal") +
vec("Toronto") is vec("Toronto Maple Leafs").
Finally, we describe another interesting property of the Skip-gram model. We found that simple
vector addition can often produce meaningful results. For example, vec("Russia") + vec("river") is
close to vec("Volga River"), and vec("Germany") + vec("capital") is close to vec("Berlin"). This
compositionality suggests that a non-obvious degree of language understanding can be obtained by
using basic mathematical operations on the word vector representations.
2 The Skip-gram Model
The training objective of the Skip-gram model is to find word representations that are useful for
predicting the surrounding words in a sentence or a document. More formally, given a sequence of
training words w1 , w2 , w3 , . . . , wT , the objective of the Skip-gram model is to maximize the average
log probability
$$\frac{1}{T} \sum_{t=1}^{T} \; \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t) \qquad (1)$$
where c is the size of the training context (which can be a function of the center word wt ). Larger
c results in more training examples and thus can lead to a higher accuracy, at the expense of the
training time. The basic Skip-gram formulation defines p(wt+j |wt ) using the softmax function:
$$p(w_O \mid w_I) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_{w}}^{\top} v_{w_I}\right)} \qquad (2)$$
where v_w and v'_w are the "input" and "output" vector representations of w, and W is the number of words in the vocabulary. This formulation is impractical because the cost of computing ∇ log p(w_O | w_I) is proportional to W, which is often large (10^5 to 10^7 terms).
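As a concrete sketch of Equation (2), the following toy computation (made-up 3-word vocabulary and 2-dimensional vectors, not the paper's implementation) evaluates the full-softmax probability of an output word given an input word:

```python
import math

def softmax_prob(w_o, w_i, out_vecs, in_vecs):
    """P(w_O | w_I) from Equation (2): a softmax over the inner products
    of every output vector v'_w with the input vector v_{w_I}."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    denom = sum(math.exp(dot(v, in_vecs[w_i])) for v in out_vecs)
    return math.exp(dot(out_vecs[w_o], in_vecs[w_i])) / denom

# Toy 3-word vocabulary with 2-dimensional vectors (illustrative values only).
in_vecs  = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out_vecs = [[0.2, 0.1], [0.9, 0.3], [0.1, 0.8]]

# Distribution over all output words given input word 0; sums to 1.
probs = [softmax_prob(w, 0, out_vecs, in_vecs) for w in range(3)]
```

The denominator sums over the entire vocabulary, which is exactly why this formulation costs O(W) per prediction and motivates the approximations in the next sections.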
2.1 Hierarchical Softmax
A computationally efficient approximation of the full softmax is the hierarchical softmax. In the
context of neural network language models, it was first introduced by Morin and Bengio [12]. The
main advantage is that instead of evaluating W output nodes in the neural network to obtain the
probability distribution, it is needed to evaluate only about log2 (W ) nodes.
The hierarchical softmax uses a binary tree representation of the output layer with the W words as
its leaves and, for each node, explicitly represents the relative probabilities of its child nodes. These
define a random walk that assigns probabilities to words.
More precisely, each word w can be reached by an appropriate path from the root of the tree. Let
n(w, j) be the j-th node on the path from the root to w, and let L(w) be the length of this path, so
n(w, 1) = root and n(w, L(w)) = w. In addition, for any inner node n, let ch(n) be an arbitrary
fixed child of n and let [[x]] be 1 if x is true and -1 otherwise. Then the hierarchical softmax defines
p(wO |wI ) as follows:
$$p(w \mid w_I) = \prod_{j=1}^{L(w)-1} \sigma\left( [[\, n(w, j+1) = \mathrm{ch}(n(w, j)) \,]] \cdot {v'_{n(w,j)}}^{\top} v_{w_I} \right) \qquad (3)$$

where σ(x) = 1/(1 + exp(−x)). It can be verified that \sum_{w=1}^{W} p(w | w_I) = 1. This implies that the
cost of computing log p(w_O | w_I) and ∇ log p(w_O | w_I) is proportional to L(w_O), which on average
is no greater than log W . Also, unlike the standard softmax formulation of the Skip-gram which
assigns two representations v_w and v'_w to each word w, the hierarchical softmax formulation has
one representation v_w for each word w and one representation v'_n for every inner node n of the
binary tree.
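The product in Equation (3) can be sketched on a toy tree as follows (four leaves, made-up node vectors; the sign encodes whether the path continues to the designated child ch(n)). This is an illustrative toy, not the paper's code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hs_prob(path, in_vec, node_vecs):
    """Equation (3): word probability as a product of sigmoids along its
    root-to-leaf path. `path` is a list of (inner_node_index, sign) pairs,
    with sign = +1 if the next node on the path is ch(n), else -1."""
    p = 1.0
    for node, sign in path:
        score = sum(a * b for a, b in zip(node_vecs[node], in_vec))
        p *= sigmoid(sign * score)
    return p

# Complete binary tree over 4 leaves: inner nodes 0 (root), 1, 2.
node_vecs = [[0.3, -0.2], [0.5, 0.1], [-0.4, 0.6]]  # toy values
in_vec = [1.0, 0.5]
paths = [
    [(0, +1), (1, +1)], [(0, +1), (1, -1)],
    [(0, -1), (2, +1)], [(0, -1), (2, -1)],
]
# Leaf probabilities sum to 1 because sigma(x) + sigma(-x) = 1 at each node.
probs = [hs_prob(p, in_vec, node_vecs) for p in paths]
```

Each evaluation touches only the L(w) nodes on one path, which is the log2(W) cost claimed above.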
The structure of the tree used by the hierarchical softmax has a considerable effect on the performance. Mnih and Hinton explored a number of methods for constructing the tree structure and the
effect on both the training time and the resulting model accuracy [10]. In our work we use a binary
Huffman tree, as it assigns short codes to the frequent words which results in fast training. It has
been observed before that grouping words together by their frequency works well as a very simple
speedup technique for the neural network based language models [5, 8].
2.2 Negative Sampling
An alternative to the hierarchical softmax is Noise Contrastive Estimation (NCE), which was introduced by Gutmann and Hyvarinen [4] and applied to language modeling by Mnih and Teh [11].
NCE posits that a good model should be able to differentiate data from noise by means of logistic
regression. This is similar to hinge loss used by Collobert and Weston [2] who trained the models
by ranking the data above noise.
While NCE can be shown to approximately maximize the log probability of the softmax, the Skip-gram model is only concerned with learning high-quality vector representations, so we are free to
simplify NCE as long as the vector representations retain their quality. We define Negative sampling
(NEG) by the objective
$$\log \sigma\left({v'_{w_O}}^{\top} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)} \left[ \log \sigma\left(-{v'_{w_i}}^{\top} v_{w_I}\right) \right] \qquad (4)$$
[Figure 2: "Country and Capital Vectors Projected by PCA" — a scatter plot of country vectors (China, Russia, Japan, Turkey, Poland, Germany, France, Italy, Spain, Portugal, Greece) and their capital-city vectors (Beijing, Moscow, Tokyo, Ankara, Warsaw, Berlin, Paris, Rome, Madrid, Lisbon, Athens), with both axes spanning roughly -2 to 2.]
Figure 2: Two-dimensional PCA projection of the 1000-dimensional Skip-gram vectors of countries and their
capital cities. The figure illustrates ability of the model to automatically organize concepts and learn implicitly
the relationships between them, as during the training we did not provide any supervised information about
what a capital city means.
which is used to replace every log P (wO |wI ) term in the Skip-gram objective. Thus the task is to
distinguish the target word wO from draws from the noise distribution Pn (w) using logistic regression, where there are k negative samples for each data sample. Our experiments indicate that values
of k in the range 5-20 are useful for small training datasets, while for large datasets the k can be as
small as 2-5. The main difference between the Negative sampling and NCE is that NCE needs both
samples and the numerical probabilities of the noise distribution, while Negative sampling uses only
samples. And while NCE approximately maximizes the log probability of the softmax, this property
is not important for our application.
Both NCE and NEG have the noise distribution Pn (w) as a free parameter. We investigated a number
of choices for P_n(w) and found that the unigram distribution U(w) raised to the 3/4 power (i.e.,
U(w)^{3/4}/Z) outperformed significantly the unigram and the uniform distributions, for both NCE
and NEG on every task we tried including language modeling (not reported here).
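Equation (4) and the U(w)^{3/4} noise distribution can be sketched together as below. The vectors, counts, and vocabulary are made up for illustration; this is a one-example objective evaluation, not the paper's training loop:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_objective(w_o, w_i, k, in_vecs, out_vecs, noise_weights, rng):
    """Equation (4): log sigma(v'_{w_O} . v_{w_I}) plus k words drawn from
    the noise distribution P_n(w), each adding log sigma(-v'_w . v_{w_I})."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    obj = math.log(sigmoid(dot(out_vecs[w_o], in_vecs[w_i])))
    words = list(range(len(noise_weights)))
    for _ in range(k):
        w = rng.choices(words, weights=noise_weights)[0]
        obj += math.log(sigmoid(-dot(out_vecs[w], in_vecs[w_i])))
    return obj

# Unigram counts raised to the 3/4 power (unnormalized weights are fine here).
counts = [100, 10, 5, 1]
noise_weights = [c ** 0.75 for c in counts]

in_vecs  = [[0.5, 0.1], [0.2, 0.4], [-0.3, 0.6], [0.1, -0.2]]  # toy values
out_vecs = [[0.4, 0.3], [-0.1, 0.5], [0.6, -0.2], [0.2, 0.2]]
obj = neg_objective(w_o=1, w_i=0, k=5, in_vecs=in_vecs, out_vecs=out_vecs,
                    noise_weights=noise_weights, rng=random.Random(0))
```

During training this objective would be maximized with respect to the vectors; each term is a log of a sigmoid, so the objective is always negative and approaches zero as the model separates data from noise.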
2.3 Subsampling of Frequent Words
In very large corpora, the most frequent words can easily occur hundreds of millions of times (e.g.,
"in", "the", and "a"). Such words usually provide less information value than the rare words. For
example, while the Skip-gram model benefits from observing the co-occurrences of "France" and
"Paris", it benefits much less from observing the frequent co-occurrences of "France" and "the", as
nearly every word co-occurs frequently within a sentence with "the". This idea can also be applied
in the opposite direction; the vector representations of frequent words do not change significantly
after training on several million examples.
To counter the imbalance between the rare and frequent words, we used a simple subsampling approach: each word wi in the training set is discarded with probability computed by the formula
$$P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} \qquad (5)$$
Method       Time [min]   Syntactic [%]   Semantic [%]   Total accuracy [%]
NEG-5            38            63              54                59
NEG-15           97            63              58                61
HS-Huffman       41            53              40                47
NCE-5            38            60              45                53
The following results use 10^-5 subsampling:
NEG-5            14            61              58                60
NEG-15           36            61              61                61
HS-Huffman       21            52              59                55
Table 1: Accuracy of various Skip-gram 300-dimensional models on the analogical reasoning task
as defined in [8]. NEG-k stands for Negative Sampling with k negative samples for each positive
sample; NCE stands for Noise Contrastive Estimation and HS-Huffman stands for the Hierarchical
Softmax with the frequency-based Huffman codes.
where f(w_i) is the frequency of word w_i and t is a chosen threshold, typically around 10^-5.
We chose this subsampling formula because it aggressively subsamples words whose frequency
is greater than t while preserving the ranking of the frequencies. Although this subsampling formula was chosen heuristically, we found it to work well in practice. It accelerates learning and even
significantly improves the accuracy of the learned vectors of the rare words, as will be shown in the
following sections.
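Equation (5) amounts to keeping each occurrence of a word with probability sqrt(t / f(w)), and always keeping words rarer than the threshold. A minimal sketch (word frequencies are made up):

```python
import math

def keep_probability(freq, t=1e-5):
    """Complement of the discard probability in Equation (5): a word with
    corpus frequency f(w) > t is kept with probability sqrt(t / f(w));
    words at or below the threshold t are always kept."""
    if freq <= t:
        return 1.0
    return math.sqrt(t / freq)

# Frequencies as fractions of the corpus: "the" is very common, "aardvark" rare.
freqs = {"the": 0.05, "france": 1e-4, "aardvark": 1e-7}
keep = {w: keep_probability(f) for w, f in freqs.items()}
```

With t = 10^-5, the most frequent words are subsampled aggressively (here "the" is kept only about 1.4% of the time) while the ranking of frequencies among kept words is preserved.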
3 Empirical Results
In this section we evaluate the Hierarchical Softmax (HS), Noise Contrastive Estimation, Negative
Sampling, and subsampling of the training words. We used the analogical reasoning task1 introduced
by Mikolov et al. [8]. The task consists of analogies such as "Germany" : "Berlin" :: "France" : ?,
which are solved by finding a vector x such that vec(x) is closest to vec("Berlin") - vec("Germany")
+ vec("France") according to the cosine distance (we discard the input words from the search). This
specific example is considered to have been answered correctly if x is "Paris". The task has two
broad categories: the syntactic analogies (such as "quick" : "quickly" :: "slow" : "slowly") and the
semantic analogies, such as the country to capital city relationship.
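The evaluation procedure just described can be sketched as follows, using tiny hand-made vectors in place of trained embeddings (the vocabulary and values are invented for illustration):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def solve_analogy(a, b, c, vectors):
    """Answer 'a : b :: c : ?' by finding the word x maximizing
    cos(vec(x), vec(b) - vec(a) + vec(c)), excluding the input words."""
    target = [vb - va + vc for va, vb, vc in
              zip(vectors[a], vectors[b], vectors[c])]
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

# Toy vectors in which capitals sit at country + a shared "capital" offset.
vectors = {
    "germany": [1.0, 0.0], "berlin": [1.0, 1.0],
    "france":  [0.0, 1.0], "paris":  [0.0, 2.0],
    "fish":    [-1.0, -1.0],
}
answer = solve_analogy("germany", "berlin", "france", vectors)
```

In real evaluations the candidate set is the full vocabulary, so this search is a nearest-neighbour query over hundreds of thousands of vectors.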
For training the Skip-gram models, we have used a large dataset consisting of various news articles
(an internal Google dataset with one billion words). We discarded from the vocabulary all words
that occurred less than 5 times in the training data, which resulted in a vocabulary of size 692K.
The performance of various Skip-gram models on the word analogy test set is reported in Table 1.
The table shows that Negative Sampling outperforms the Hierarchical Softmax on the analogical
reasoning task, and has even slightly better performance than the Noise Contrastive Estimation. The
subsampling of the frequent words improves the training speed several times and makes the word
representations significantly more accurate.
It can be argued that the linearity of the skip-gram model makes its vectors more suitable for such
linear analogical reasoning, but the results of Mikolov et al. [8] also show that the vectors learned
by the standard sigmoidal recurrent neural networks (which are highly non-linear) improve on this
task significantly as the amount of the training data increases, suggesting that non-linear models also
have a preference for a linear structure of the word representations.
4 Learning Phrases
As discussed earlier, many phrases have a meaning that is not a simple composition of the meanings of its individual words. To learn vector representation for phrases, we first find words that
appear frequently together, and infrequently in other contexts. For example, "New York Times" and
"Toronto Maple Leafs" are replaced by unique tokens in the training data, while a bigram "this is"
will remain unchanged.
1
code.google.com/p/word2vec/source/browse/trunk/questions-words.txt
Newspapers:
  New York : New York Times            Baltimore : Baltimore Sun
  San Jose : San Jose Mercury News     Cincinnati : Cincinnati Enquirer
NHL Teams:
  Boston : Boston Bruins               Montreal : Montreal Canadiens
  Phoenix : Phoenix Coyotes            Nashville : Nashville Predators
NBA Teams:
  Detroit : Detroit Pistons            Toronto : Toronto Raptors
  Oakland : Golden State Warriors      Memphis : Memphis Grizzlies
Airlines:
  Austria : Austrian Airlines          Spain : Spainair
  Belgium : Brussels Airlines          Greece : Aegean Airlines
Company executives:
  Steve Ballmer : Microsoft            Larry Page : Google
  Samuel J. Palmisano : IBM            Werner Vogels : Amazon
Table 2: Examples of the analogical reasoning task for phrases (the full test set has 3218 examples).
The goal is to compute the fourth phrase using the first three. Our best model achieved an accuracy
of 72% on this dataset.
This way, we can form many reasonable phrases without greatly increasing the size of the vocabulary; in theory, we can train the Skip-gram model using all n-grams, but that would be too memory
intensive. Many techniques have been previously developed to identify phrases in the text; however,
it is out of scope of our work to compare them. We decided to use a simple data-driven approach,
where phrases are formed based on the unigram and bigram counts, using
$$\mathrm{score}(w_i, w_j) = \frac{\mathrm{count}(w_i w_j) - \delta}{\mathrm{count}(w_i) \times \mathrm{count}(w_j)} \qquad (6)$$

The δ is used as a discounting coefficient and prevents too many phrases consisting of very infrequent words from being formed. The bigrams with score above the chosen threshold are then used as
phrases. Typically, we run 2-4 passes over the training data with decreasing threshold value, allowing longer phrases that consist of several words to be formed. We evaluate the quality of the phrase
representations using a new analogical reasoning task that involves phrases. Table 2 shows examples
of the five categories of analogies used in this task. This dataset is publicly available on the web2 .
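A single pass of the phrase-formation score in Equation (6) can be sketched as below. The counts, the discount δ = 5, and the threshold are illustrative choices, not the values used to build the paper's corpus:

```python
def phrase_scores(unigrams, bigrams, delta=5):
    """Equation (6): score each bigram by
    (count(w_i w_j) - delta) / (count(w_i) * count(w_j)).
    The delta discount suppresses very infrequent word pairs."""
    return {
        (w1, w2): (c - delta) / (unigrams[w1] * unigrams[w2])
        for (w1, w2), c in bigrams.items()
    }

# Illustrative counts: "new york" co-occurs far more than chance, "this is" not.
unigrams = {"new": 100, "york": 60, "this": 500, "is": 800}
bigrams = {("new", "york"): 55, ("this", "is"): 40}

scores = phrase_scores(unigrams, bigrams)
threshold = 1e-4  # hypothetical; the paper lowers it over successive passes
phrases = [bigram for bigram, s in scores.items() if s > threshold]
```

Running several passes with a decreasing threshold, as described above, lets already-merged tokens (e.g. "new_york") combine again into longer phrases.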
4.1 Phrase Skip-Gram Results
Starting with the same news data as in the previous experiments, we first constructed the phrase
based training corpus and then we trained several Skip-gram models using different hyperparameters. As before, we used vector dimensionality 300 and context size 5. This setting already
achieves good performance on the phrase dataset, and allowed us to quickly compare the Negative
Sampling and the Hierarchical Softmax, both with and without subsampling of the frequent tokens.
The results are summarized in Table 3.
The results show that while Negative Sampling achieves a respectable accuracy even with k = 5,
using k = 15 achieves considerably better performance. Surprisingly, while we found the Hierarchical Softmax to achieve lower performance when trained without subsampling, it became the best
performing method when we downsampled the frequent words. This shows that the subsampling
can result in faster training and can also improve accuracy, at least in some cases.
2
code.google.com/p/word2vec/source/browse/trunk/questions-phrases.txt
Method       Dimensionality   No subsampling [%]   10^-5 subsampling [%]
NEG-5             300                24                     27
NEG-15            300                27                     42
HS-Huffman        300                19                     47
Table 3: Accuracies of the Skip-gram models on the phrase analogy dataset. The models were
trained on approximately one billion words from the news dataset.
Query           NEG-15 with 10^-5 subsampling   HS with 10^-5 subsampling
Vasco de Gama   Lingsugur                       Italian explorer
Lake Baikal     Great Rift Valley               Aral Sea
Alan Bean       Rebbeca Naomi                   moonwalker
Ionian Sea      Ruegen                          Ionian Islands
chess master    chess grandmaster               Garry Kasparov
Table 4: Examples of the closest entities to the given short phrases, using two different models.
Czech + currency   Vietnam + capital   German + airlines        Russian + river   French + actress
koruna             Hanoi               airline Lufthansa        Moscow            Juliette Binoche
Check crown        Ho Chi Minh City    carrier Lufthansa        Volga River       Vanessa Paradis
Polish zolty       Viet Nam            flag carrier Lufthansa   upriver           Charlotte Gainsbourg
CTK                Vietnamese          Lufthansa                Russia            Cecile De
Table 5: Vector compositionality using element-wise addition. Four closest tokens to the sum of two
vectors are shown, using the best Skip-gram model.
To maximize the accuracy on the phrase analogy task, we increased the amount of the training data
by using a dataset with about 33 billion words. We used the hierarchical softmax, dimensionality
of 1000, and the entire sentence for the context. This resulted in a model that reached an accuracy
of 72%. We achieved lower accuracy 66% when we reduced the size of the training dataset to 6B
words, which suggests that the large amount of the training data is crucial.
To gain further insight into how different the representations learned by different models are, we did
inspect manually the nearest neighbours of infrequent phrases using various models. In Table 4, we
show a sample of such comparison. Consistently with the previous results, it seems that the best
representations of phrases are learned by a model with the hierarchical softmax and subsampling.
5 Additive Compositionality
We demonstrated that the word and phrase representations learned by the Skip-gram model exhibit
a linear structure that makes it possible to perform precise analogical reasoning using simple vector
arithmetics. Interestingly, we found that the Skip-gram representations exhibit another kind of linear
structure that makes it possible to meaningfully combine words by an element-wise addition of their
vector representations. This phenomenon is illustrated in Table 5.
The additive property of the vectors can be explained by inspecting the training objective. The word
vectors are in a linear relationship with the inputs to the softmax nonlinearity. As the word vectors
are trained to predict the surrounding words in the sentence, the vectors can be seen as representing
the distribution of the context in which a word appears. These values are related logarithmically
to the probabilities computed by the output layer, so the sum of two word vectors is related to the
product of the two context distributions. The product works here as the AND function: words that
are assigned high probabilities by both word vectors will have high probability, and the other words
will have low probability. Thus, if "Volga River" appears frequently in the same sentence together
with the words "Russian" and "river", the sum of these two word vectors will result in such a feature
vector that is close to the vector of "Volga River".
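The element-wise addition described in this section can be sketched with hand-made vectors (the vocabulary and values are invented so that the phrase vector sits near the sum, mimicking the trained behaviour):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def nearest_to_sum(w1, w2, vectors):
    """Nearest word (by cosine similarity) to the element-wise sum
    vec(w1) + vec(w2), excluding the two input words."""
    s = [a + b for a, b in zip(vectors[w1], vectors[w2])]
    candidates = [w for w in vectors if w not in (w1, w2)]
    return max(candidates, key=lambda w: cosine(vectors[w], s))

# Made-up vectors in which "volga river" lies near "russian" + "river".
vectors = {
    "russian":     [1.0, 0.0, 0.0],
    "river":       [0.0, 1.0, 0.0],
    "volga river": [0.7, 0.7, 0.1],
    "berlin":      [0.0, 0.0, 1.0],
}
result = nearest_to_sum("russian", "river", vectors)
```

Because the word vectors relate logarithmically to context probabilities, the sum behaves like an AND over contexts, which is why the nearest neighbour of the sum tends to be a phrase that co-occurs with both inputs.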
6 Comparison to Published Word Representations
Many authors who previously worked on the neural network based representations of words have
published their resulting models for further use and comparison: amongst the most well known authors are Collobert and Weston [2], Turian et al. [17], and Mnih and Hinton [10]. We downloaded
their word vectors from the web3 . Mikolov et al. [8] have already evaluated these word representations on the word analogy task, where the Skip-gram models achieved the best performance with a
huge margin.
3
http://metaoptimize.com/projects/wordreprs/
Closest tokens to the query words Redmond, Havel, ninjutsu, graffiti, capitulate:

Collobert (50d) (2 months):
  Redmond: conyers, lubbock, keene
  Havel: plauen, dzerzhinsky, osterreich
  ninjutsu: reiki, kohona, karate
  graffiti: cheesecake, gossip, dioramas
  capitulate: abdicate, accede, rearm
Turian (200d) (few weeks):
  Redmond: McCarthy, Alston, Cousins
  Havel: Jewell, Arzu, Ovitz
  ninjutsu: (empty)
  graffiti: gunfire, emotion, impunity
  capitulate: (empty)
Mnih (100d) (7 days):
  Redmond: Podhurst, Harlang, Agarwal
  Havel: Pontiff, Pinochet, Rodionov
  ninjutsu: (empty)
  graffiti: anaesthetics, monkeys, Jews
  capitulate: Mavericks, planning, hesitated
Skip-Phrase (1000d, 1 day):
  Redmond: Redmond Wash., Redmond Washington, Microsoft
  Havel: Vaclav Havel, president Vaclav Havel, Velvet Revolution
  ninjutsu: ninja, martial arts, swordsmanship
  graffiti: spray paint, grafitti, taggers
  capitulate: capitulation, capitulated, capitulating
Table 6: Examples of the closest tokens given various well known models and the Skip-gram model
trained on phrases using over 30 billion training words. An empty cell means that the word was not
in the vocabulary.
To give more insight into the difference of the quality of the learned vectors, we provide empirical
comparison by showing the nearest neighbours of infrequent words in Table 6. These examples show
that the big Skip-gram model trained on a large corpus visibly outperforms all the other models in
the quality of the learned representations. This can be attributed in part to the fact that this model
has been trained on about 30 billion words, which is about two to three orders of magnitude more
data than the typical size used in the prior work. Interestingly, although the training set is much
larger, the training time of the Skip-gram model is just a fraction of the time complexity required by
the previous model architectures.
7 Conclusion
This work has several key contributions. We show how to train distributed representations of words
and phrases with the Skip-gram model and demonstrate that these representations exhibit linear
structure that makes precise analogical reasoning possible. The techniques introduced in this paper
can be used also for training the continuous bag-of-words model introduced in [8].
We successfully trained models on several orders of magnitude more data than the previously published models, thanks to the computationally efficient model architecture. This results in a great
improvement in the quality of the learned word and phrase representations, especially for the rare
entities. We also found that the subsampling of the frequent words results in both faster training
and significantly better representations of uncommon words. Another contribution of our paper is
the Negative sampling algorithm, which is an extremely simple training method that learns accurate
representations especially for frequent words.
The choice of the training algorithm and the hyper-parameter selection is a task specific decision,
as we found that different problems have different optimal hyperparameter configurations. In our
experiments, the most crucial decisions that affect the performance are the choice of the model
architecture, the size of the vectors, the subsampling rate, and the size of the training window.
A very interesting result of this work is that the word vectors can be somewhat meaningfully combined using just simple vector addition. Another approach for learning representations of phrases
presented in this paper is to simply represent the phrases with a single token. Combination of these
two approaches gives a powerful yet simple way how to represent longer pieces of text, while having minimal computational complexity. Our work can thus be seen as complementary to the existing
approach that attempts to represent phrases using recursive matrix-vector operations [16].
We made the code for training the word and phrase vectors based on the techniques described in this
paper available as an open-source project4 .
4
code.google.com/p/word2vec
References
[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137-1155, 2003.
[2] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160-167. ACM, 2008.
[3] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pages 513-520, 2011.
[4] Michael U. Gutmann and Aapo Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. The Journal of Machine Learning Research, 13:307-361, 2012.
[5] Tomas Mikolov, Stefan Kombrink, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 5528-5531. IEEE, 2011.
[6] Tomas Mikolov, Anoop Deoras, Daniel Povey, Lukas Burget, and Jan Cernocky. Strategies for training large scale neural network language models. In Proc. Automatic Speech Recognition and Understanding, 2011.
[7] Tomas Mikolov. Statistical Language Models Based on Neural Networks. PhD thesis, Brno University of Technology, 2012.
[8] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. ICLR Workshop, 2013.
[9] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In Proceedings of NAACL HLT, 2013.
[10] Andriy Mnih and Geoffrey E. Hinton. A scalable hierarchical distributed language model. Advances in Neural Information Processing Systems, 21:1081-1088, 2009.
[11] Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426, 2012.
[12] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, pages 246-252, 2005.
[13] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986.
[14] Holger Schwenk. Continuous space language models. Computer Speech and Language, vol. 21, 2007.
[15] Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christopher D. Manning. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 26th International Conference on Machine Learning (ICML), volume 2, 2011.
[16] Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2012.
[17] Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394. Association for Computational Linguistics, 2010.
[18] Peter D. Turney and Patrick Pantel. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141-188, 2010.
[19] Peter D. Turney. Distributional semantics beyond words: Supervised learning of analogy and paraphrase. Transactions of the Association for Computational Linguistics (TACL), 353-366, 2013.
[20] Jason Weston, Samy Bengio, and Nicolas Usunier. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2764-2770. AAAI Press, 2011.
relationship:4 expense:1 ilyasu:1 negative:13 implementation:1 twenty:1 perform:1 teh:2 imbalance:1 allowing:1 inspect:1 datasets:2 discarded:2 minh:1 cernocky:2 hinton:4 precise:3 team:2 rome:1 arbitrary:1 paraphrase:1 canada:2 compositionality:5 introduced:7 david:1 pair:1 paris:4 required:1 janvin:1 optimized:1 sentence:6 acoustic:1 learned:9 czech:1 able:1 redmond:3 beyond:1 usually:1 pattern:2 including:1 memory:1 tau:1 power:1 suitable:1 natural:7 lisbon:1 explorer:1 predicting:2 representing:1 improve:3 technology:1 martial:1 text:4 prior:2 understanding:2 poland:1 garry:1 multiplication:1 relative:1 loss:1 gama:1 interesting:3 limitation:1 proportional:2 analogy:10 geoffrey:3 executive:1 downloaded:1 degree:1 article:1 bordes:1 translation:2 ibm:1 kombrink:1 token:6 surprisingly:2 free:2 viet:1 wide:1 vwi:4 lukas:2 distributed:5 benefit:3 vocabulary:6 gram:32 evaluating:1 stand:3 author:2 made:1 projected:1 simplified:1 san:2 hyvarinen:1 newspaper:2 transaction:1 implicitly:1 grizzly:1 vaclav:2 corpus:3 naomi:1 continuous:4 search:1 table:13 learn:4 nature:1 composing:1 nicolas:1 investigated:1 complex:1 constructing:1 domain:1 did:2 dense:1 main:2 whole:1 noise:11 hyperparameters:1 turian:3 big:1 child:2 allowed:1 complementary:1 madrid:2 gossip:1 slow:1 hanoi:1 learns:1 formula:3 unigram:3 specific:2 revolution:1 showing:1 explored:1 glorot:1 grouping:2 workshop:2 frederic:1 socher:2 phd:2 wash:1 magnitude:2 illustrates:1 margin:1 chen:2 boston:4 simply:1 prevents:1 indifference:1 khudanpur:1 ch:2 acm:1 weston:4 goal:1 month:1 jeff:1 replace:1 considerable:2 change:1 typical:2 wt:6 flag:1 called:1 total:1 aegean:1 meaningful:1 turney:2 formally:1 internal:1 inability:2 anoop:1 evaluate:4 phenomenon:1 |
4,445 | 5,022 | Stochastic Ratio Matching of RBMs for Sparse
High-Dimensional Inputs
Yann N. Dauphin, Yoshua Bengio
Département d'informatique et de recherche opérationnelle
Université de Montréal
Montréal, QC H3C 3J7
[email protected]
[email protected]
Abstract
Sparse high-dimensional data vectors are common in many application domains
where a very large number of rarely non-zero features can be devised. Unfortunately, this creates a computational bottleneck for unsupervised feature learning
algorithms such as those based on auto-encoders and RBMs, because they involve
a reconstruction step where the whole input vector is predicted from the current
feature values. An algorithm was recently developed to successfully handle the
case of auto-encoders, based on an importance sampling scheme stochastically
selecting which input elements to actually reconstruct during training for each
particular example. To generalize this idea to RBMs, we propose a stochastic
ratio-matching algorithm that inherits all the computational advantages and unbiasedness of the importance sampling scheme. We show that stochastic ratio
matching is a good estimator, allowing the approach to beat the state-of-the-art
on two bag-of-word text classification benchmarks (20 Newsgroups and RCV1),
while keeping computational cost linear in the number of non-zeros.
1 Introduction
Unsupervised feature learning algorithms have recently attracted much attention, with the promise of
letting the data guide the discovery of good representations. In particular, unsupervised feature learning is an important component of many Deep Learning algorithms (Bengio, 2009), such as those
based on auto-encoders (Bengio et al., 2007) and Restricted Boltzmann Machines or RBMs (Hinton
et al., 2006). Deep Learning of representations involves the discovery of several levels of representation, with some algorithms able to exploit unlabeled examples and unsupervised or semi-supervised
learning.
Whereas Deep Learning has mostly been applied to computer vision and speech recognition, an important set of application areas involve high-dimensional sparse input vectors, for example in some
Natural Language Processing tasks (such as the text categorization tasks tackled here), as well as in
information retrieval and other web-related applications where a very large number of rarely nonzero features can be devised. We would like learning algorithms whose computational requirements
grow with the number of non-zeros in the input but not with the total number of features. Unfortunately, auto-encoders and RBMs are computationally inconvenient when it comes to handling such
high-dimensional sparse input vectors, because they require a form of reconstruction of the input
vector, for all the elements of the input vector, even the ones that were zero.
In Section 2, we recapitulate the Reconstruction Sampling algorithm (Dauphin et al., 2011) that was
proposed to handle that problem in the case of auto-encoder variants. The basic idea is to use an
importance sampling scheme to stochastically select a subset of the input elements to reconstruct,
and importance weights to obtain an unbiased estimator of the reconstruction error gradient.
In this paper, we are interested in extending these ideas to the realm of RBMs. In Section 3 we
briefly review the basics of RBMs and the Gibbs chain involved in training them. Ratio matching (Hyvärinen, 2007) is an inductive principle and training criterion that can be applied to train
RBMs but does not require a Gibbs chain. In Section 4, we present and justify a novel algorithm
based on ratio matching in order to achieve our objective of taking advantage of highly sparse inputs.
The new algorithm is called Stochastic Ratio Matching or SRM. In Section 6 we present a wide array
of experimental results demonstrating the successful application of Stochastic Ratio Matching, both
in terms of computational performance (flat growth of computation as the number of non-zeros is increased, linear speedup with respect to regular training) and in terms of generalization performance:
the state-of-the-art on two text classification benchmarks is achieved and surpassed. An interesting
and unexpected result is that we find the biased version of the algorithm (without reweighting) to
yield more discriminant features.
2 Reconstruction Sampling
An auto-encoder learns an encoder function f mapping inputs x to features h = f(x), and a decoding or reconstruction function g such that g(f(x)) ≈ x for training examples x. See Bengio et al. (2012) for a review. In particular, with the denoising auto-encoder, x is stochastically corrupted into x̃ (e.g. by flipping some bits) and trained to make g(f(x̃)) ≈ x. To avoid the expensive reconstruction g(h) when the input is very high-dimensional, Dauphin et al. (2011) propose that for each
example, a small random subset of the input elements be selected for which gi (h) and the associated
reconstruction error is computed. To make the corresponding estimator of reconstruction error (and
its gradient) unbiased, they propose to use an importance weighting scheme whereby the loss on the
i-th input is weighted by the inverse of the probability that it be selected. To reduce the variance of
the estimator, they propose to always reconstruct the i-th input if it was one of the non-zeros in x
or in x̃, and to choose uniformly at random an equal number of zero elements. They show that the
unbiased estimator yields the expected linear speedup in training time compared to the deterministic
gradient computation, while maintaining good performance for unsupervised feature learning. We
would like to extend similar ideas to RBMs.
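To make the scheme concrete, here is a minimal sketch of such an importance-weighted reconstruction loss (our own illustration under simplifying assumptions, not the authors' code): it always includes the non-zero coordinates and reweights a small sample of the zero coordinates by the inverse of their inclusion probability, which keeps the estimator unbiased.

```python
import random

def sampled_reconstruction_loss(x, x_hat, k=None, rng=random):
    """Unbiased estimate of sum_i (x_i - x_hat_i)^2 for a sparse binary x.

    Always includes the indices where x_i != 0; samples k of the zero indices
    uniformly and reweights them by the inverse inclusion probability.
    """
    d = len(x)
    nonzeros = [i for i in range(d) if x[i] != 0]
    zeros = [i for i in range(d) if x[i] == 0]
    if k is None:
        k = len(nonzeros)          # match the number of non-zeros, as in the paper
    k = min(k, len(zeros))
    sampled_zeros = rng.sample(zeros, k)
    loss = sum((x[i] - x_hat[i]) ** 2 for i in nonzeros)    # weight 1
    w = len(zeros) / k             # inverse inclusion probability of a zero index
    loss += w * sum((x[i] - x_hat[i]) ** 2 for i in sampled_zeros)
    return loss
```

Averaged over many draws this matches the full reconstruction loss, while each draw touches only O(p + k) of the d coordinates.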
3 Restricted Boltzmann Machines
A restricted Boltzmann machine (RBM) is an undirected graphical model with binary variables (Hinton et al., 2006): observed variables x and hidden variables h. In this model, the hidden variables
help uncover higher order correlations in the data.
The energy takes the form
-E(x, h) = h^T W x + b^T h + c^T x

with parameters \theta = (W, b, c).

The RBM can be trained by following the gradient of the negative log-likelihood

-\frac{\partial \log P(x)}{\partial \theta} = \mathbb{E}_{\text{data}}\!\left[\frac{\partial F(x)}{\partial \theta}\right] - \mathbb{E}_{\text{model}}\!\left[\frac{\partial F(x)}{\partial \theta}\right]
where F (x) is the free energy (unnormalized log-probability associated with P (x)). However, this
gradient is intractable because the second expectation is combinatorial. Stochastic Maximum Likelihood or SML (Younes, 1999; Tieleman, 2008) estimates this expectation using sample averages
taken from a persistent MCMC chain (Tieleman, 2008). Starting from x_t, a step in this chain is taken by sampling h_t ~ P(h|x_t), then we have x_{t+1} ~ P(x|h_t). SML-k is the variant where k is
the number of steps between parameter updates, with SML-1 being the simplest and most common
choice, although better results (at greater computational expense) can be achieved with more steps.
Training the RBM using SML-1 is on the order of O(dn) where d is the dimension of the input
variables and n is the number of hidden variables. In the case of high-dimensional sparse vectors
with p non-zeros, SML does not take advantage of the sparsity. More precisely, sampling P (h|x)
(inference) can take advantage of sparsity and costs O(pn) computations while ?reconstruction?,
i.e., sampling from P (x|h) requires O(dn) computations. Thus scaling to larger input sizes n yields
a linear increase in training time even if the number of non-zeros p in the input remains constant.
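To make this asymmetry concrete, the following toy SML-1 Gibbs step (an illustration with dense NumPy arrays; the names are ours, not the paper's code) shows where the costs arise: sampling h given a sparse x only needs the columns of W at the non-zero inputs, while sampling x given h must produce all d input probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_step(x, W, b, c):
    """One SML-1 Gibbs step for a binary RBM with n hidden and d visible units."""
    # P(h|x): with a sparse x, W @ x only touches columns where x_i != 0 -> O(pn)
    h = (rng.random(W.shape[0]) < sigmoid(W @ x + b)).astype(float)
    # P(x|h): all d visible units must be predicted ("reconstruction") -> O(dn)
    x_new = (rng.random(W.shape[1]) < sigmoid(W.T @ h + c)).astype(float)
    return h, x_new

n, d = 4, 10
W = rng.normal(size=(n, d)); b = np.zeros(n); c = np.zeros(d)
x = np.zeros(d); x[[1, 7]] = 1.0       # sparse input: p = 2 non-zeros
h, x_new = gibbs_step(x, W, b, c)
```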
4 Ratio Matching
Ratio matching (Hyvärinen, 2007) is an estimation method for statistical models where the normalization constant is not known. It is similar to score matching (Hyvärinen, 2005) but applied on
discrete data whereas score matching is limited to continuous inputs, and both are computationally
simple and yield consistent estimators. The use of Ratio Matching in RBMs is of particular interest
because their normalization constant is computationally intractable.
The core idea of ratio matching is to match ratios of probabilities between the data and the model.
Thus Hyvärinen (2007) proposes to minimize the following objective function

\tilde{J}_{RM}(x) = \sum_{i=1}^{d} \left[ g\!\left(\frac{P_x(x)}{P_x(\bar{x}^i)}\right) - g\!\left(\frac{P(x)}{P(\bar{x}^i)}\right) \right]^2 + \left[ g\!\left(\frac{P_x(\bar{x}^i)}{P_x(x)}\right) - g\!\left(\frac{P(\bar{x}^i)}{P(x)}\right) \right]^2     (1)

where P_x is the true probability distribution, P the distribution defined by the model, g(x) = 1/(1+x) is an activation function and \bar{x}^i = (x_1, x_2, ..., 1 - x_i, ..., x_d). In this form, we can see the similarity between score matching and ratio matching. The normalization constant is canceled because P(x)/P(\bar{x}^i) = e^{-F(x)}/e^{-F(\bar{x}^i)}, however this objective requires access to the true distribution P_x which is rarely available.
Hyvärinen (2007) shows that the Ratio Matching (RM) objective can be simplified to

J_{RM}(x) = \sum_{i=1}^{d} g^2\!\left(\frac{P(x)}{P(\bar{x}^i)}\right)     (2)

which does not require knowledge of the true distribution P_x. This objective can be described as ensuring that the training example x has the highest probability in the neighborhood of points at Hamming distance 1.

We propose to rewrite Eq. 2 in a form reminiscent of auto-encoders:

J_{RM}(x) = \sum_{i=1}^{d} \left(x_i - P(x_i = 1 \mid x_{-i})\right)^2.     (3)
This will be useful for reasoning about this estimator. The main difference with auto-encoders is
that each input variable is predicted by excluding it from the input.
Applying Equation 2 to the RBM we obtain J_{RM}(x) = \sum_{i=1}^{d} \sigma(F(x) - F(\bar{x}^i))^2. The gradients have the familiar form

\frac{\partial J_{RM}(x)}{\partial \theta} = \sum_{i=1}^{d} 2\xi_i \left( \frac{\partial F(x)}{\partial \theta} - \frac{\partial F(\bar{x}^i)}{\partial \theta} \right)     (4)

with \xi_i = \sigma(F(x) - F(\bar{x}^i))^2 - \sigma(F(x) - F(\bar{x}^i))^3.

A naive implementation of this objective is O(d^2 n) because it requires d computations of the free energy per example. This is much more expensive than SML as noted by Marlin et al. (2010). Thankfully, as we argue here, it is possible to greatly reduce this complexity by reusing computation and taking advantage of the parametrization of RBMs. This can be done by saving the results of the computations \beta = c^T x and \gamma_j = \sum_i W_{ji} x_i + b_j when computing F(x). The computation of F(\bar{x}^i) can be reduced to O(n) with the formula -F(\bar{x}^i) = \beta - (2x_i - 1)c_i + \sum_j \log(1 + e^{\gamma_j - (2x_i - 1)W_{ji}}).
This implementation is O(dn) which is the same complexity as SML. However, like SML, RM does
not take advantage of sparsity in the input.
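As a concrete check of this caching trick, the sketch below (our own minimal NumPy illustration for a small dense RBM; variable names are ours) computes all d flipped free energies F(x̄^i) by reusing β = cᵀx and γ_j = Σ_i W_ji x_i + b_j, and verifies the result against direct evaluation of the standard RBM free energy F(x) = −cᵀx − Σ_j log(1 + e^{γ_j}).

```python
import numpy as np

def free_energy(x, W, b, c):
    """F(x) = -c^T x - sum_j log(1 + exp((W x + b)_j))."""
    gamma = W @ x + b
    return -(c @ x) - np.sum(np.log1p(np.exp(gamma)))

def all_flipped_free_energies(x, W, b, c):
    """F(x-bar^i) for every single-bit flip, reusing beta and gamma."""
    beta = c @ x                       # beta = c^T x
    gamma = W @ x + b                  # gamma_j = sum_i W_ji x_i + b_j
    s = 2 * x - 1                      # +1 where x_i = 1, -1 where x_i = 0
    # -F(x-bar^i) = beta - (2x_i - 1) c_i + sum_j log(1 + exp(gamma_j - (2x_i - 1) W_ji))
    shifted = gamma[None, :] - s[:, None] * W.T    # d x n matrix of shifted gammas
    return -(beta - s * c + np.sum(np.log1p(np.exp(shifted)), axis=1))

rng = np.random.default_rng(0)
d, n = 8, 5
W = rng.normal(size=(n, d)); b = rng.normal(size=n); c = rng.normal(size=d)
x = (rng.random(d) < 0.3).astype(float)
fast = all_flipped_free_energies(x, W, b, c)
```

Each row of `shifted` costs O(n), so the full sweep over the d flips is O(dn) instead of the naive O(d^2 n).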
5 Stochastic Ratio Matching
We propose Stochastic Ratio Matching (SRM) as a more efficient form of ratio matching for highdimensional sparse distributions. The ratio matching objective requires the summation of d terms
in O(n). The basic idea of SRM is to estimate this sum using a very small fraction of the terms,
randomly chosen. If we rewrite the ratio matching objective as an expectation over a discrete distribution
J_{RM}(x) = d \sum_{i=1}^{d} \frac{1}{d} \, g^2\!\left(\frac{P(x)}{P(\bar{x}^i)}\right) = d \, \mathbb{E}\!\left[ g^2\!\left(\frac{P(x)}{P(\bar{x}^i)}\right) \right]     (5)
we can use Monte Carlo methods to estimate JRM without computing all the terms in Equation
2. However, in practice this estimator has a high variance. Thus it is a poor estimator, especially
if we want to use very few Monte Carlo samples. The solution proposed for SRM is to use an
Importance Sampling scheme to obtain a lower variance estimator of JRM . Combining Monte
Carlo with importance sampling, we obtain the SRM objective
J_{SRM}(x) = \sum_{i=1}^{d} \frac{\zeta_i}{\mathbb{E}[\zeta_i]} \, g^2\!\left(\frac{P(x)}{P(\bar{x}^i)}\right)     (6)
where \zeta_i ~ P(\zeta_i = 1 \mid x) is the so-called proposal distribution of our importance sampling scheme. The proposal distribution determines which terms will be used to estimate the objective since only the terms where \zeta_i = 1 are non-zero. J_{SRM}(x) is an unbiased estimator of J_{RM}(x), i.e.,

\mathbb{E}[J_{SRM}(x)] = \sum_{i=1}^{d} \frac{\mathbb{E}[\zeta_i]}{\mathbb{E}[\zeta_i]} \, g^2\!\left(\frac{P(x)}{P(\bar{x}^i)}\right) = J_{RM}(x)
The intuition behind importance sampling is that the variance of the estimator can be reduced by
focusing sampling on the largest terms of the expectation. More precisely, it is possible to show
that the variance of the estimator is minimized when P(\zeta_i = 1 \mid x) \propto g^2(P(x)/P(\bar{x}^i)). Thus we would like the probability P(\zeta_i = 1 \mid x) to reflect how large the error (x_i - P(x_i = 1 \mid x_{-i}))^2 will be. The challenge is finding a good approximation for (x_i - P(x_i = 1 \mid x_{-i}))^2 and to define a proposal distribution that is efficient to sample from.
Following Dauphin et al. (2011), we propose such a distribution for high-dimensional sparse distributions. In these types of distributions the marginals Px (xi = 1) are very small. They can
easily be learned by the biases c of the model, and may even be initialized very close to their
optimal value. Once the marginals are learned, the model will likely only make wrong predictions when P_x(x_i = 1 \mid x_{-i}) differs significantly from P_x(x_i = 1). If x_i = 0 then the error (0 - P(x_i = 1 \mid x_{-i}))^2 is likely small because the model has a high bias towards P(x_i = 0).
Conversely, the error will be high when xi = 1. In other words, the model will mostly make errors
for terms where xi = 1 and a small number of dimensions where xi = 0. We can use this to define
the heuristic proposal distribution
P(\zeta_i = 1 \mid x) = \begin{cases} 1 & \text{if } x_i = 1 \\ p \,/\, \left(d - \sum_j 1_{x_j > 0}\right) & \text{otherwise} \end{cases}     (7)

where p is the average number of non-zeros in the data. The idea is to always sample the terms where x_i = 1 and a subset of k of the (d - \sum_j 1_{x_j > 0}) remaining terms where x_i = 0. Note that if we sampled the \zeta_i independently, we would get E[k] = p.
However, instead of sampling those \zeta_i bits independently, we find that much smaller variance is obtained by sampling a number of zeros k that is constant for all examples, i.e., k = p. A random k can cause very significant variance in the gradients and this makes stochastic gradient descent more difficult. In our experiments we set k = p = E[\sum_j 1_{x_j > 0}], which is a small number by definition of these sparse distributions, and guarantees that computation costs will remain constant as n increases
for a fixed number of non-zeros. The computational cost of SRM per training example is O(pn),
as opposed to O(dn) for RM. While simple, we find that this heuristic proposal distribution works
well in practice, as shown below.
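The steps above can be sketched as follows (our own minimal illustration, not the authors' code): draw the SRM mask ζ by keeping every non-zero of x and k = p uniformly chosen zeros, and attach the importance weights 1/E[ζ_i] that make the weighted objective unbiased.

```python
import random

def sample_proposal(x, p, rng=random):
    """Draw the SRM mask zeta and the importance weights 1 / E[zeta_i]:
    every non-zero of x is kept (weight 1), plus k = p zeros chosen uniformly
    (weight #zeros / k); all other terms are dropped."""
    d = len(x)
    zeros = [i for i in range(d) if x[i] == 0]
    k = min(p, len(zeros))
    chosen = set(rng.sample(zeros, k))
    mask = [1 if (x[i] != 0 or i in chosen) else 0 for i in range(d)]
    weights = [1.0 if x[i] != 0 else (len(zeros) / k if i in chosen else 0.0)
               for i in range(d)]
    return mask, weights

x = [1, 0, 1, 0, 0, 0]
mask, weights = sample_proposal(x, p=2, rng=random.Random(1))
```

The unbiased objective of Equation 6 is then the sum of weights[i] times the i-th squared-ratio term over the masked indices; dropping the weights (treating them all as 1) recovers the biased variant of Equation 8.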
4
For comparison, we also perform experiments with a biased version of Equation 6
J_{BiasedSRM}(x) = \sum_{i=1}^{d} \zeta_i \, g^2\!\left(\frac{P(x)}{P(\bar{x}^i)}\right).     (8)
This will allow us to gauge the effectiveness of our importance weights for unbiasing the objective.
The biased objective can be thought of as down-weighting the ratios where x_i = 0 by a factor of \mathbb{E}[\zeta_i].
SRM is related to previous work (Dahl et al., 2012) on applying RBMs to high-dimensional sparse
inputs, more precisely multinomial observations, e.g., one K-ary multinomial for each word in an
n-gram window. A careful choice of Metropolis-Hastings transitions replaces Gibbs transitions and
allows handling large vocabularies. In comparison, SRM is geared towards general sparse vectors
and involves an extremely simple procedure without MCMC.
6 Experimental Results
In this section, we demonstrate the effectiveness of SRM for training RBMs. Additionally, we show
that RBMs are useful feature extractors for topic classification.
Datasets We have performed experiments with the Reuters Corpus Volume I (RCV1) and 20
Newsgroups (20 NG). RCV1 is a benchmark for document classification of over 800,000 news wire
stories (Lewis et al., 2004). The documents are represented as bag-of-words vectors with 47,236
dimensions. The training set contains 23,149 documents and the test set has 781,265. While there
are 3 types of labels for the documents, we focus on the task of predicting the topic. There are a
set of 103 non-mutually exclusive topics for a document. We report the performance using the F1.0
measure for comparison with the state of the art. 20 Newsgroups is a collection of Usenet posts composing a training set of 11,269 examples and 7505 test examples. The bag-of-words vectors contain
61,188 dimensions. The postings are to be classified into one of 20 categories. We use the by-date
train/test split which ensures that the training set contains postings preceding the test examples in
time. Following Larochelle et al. (2012), we report the classification error and for a fair comparison
we use the same preprocessing.¹
Methodology We compare the different estimation methods for the RBM based on the loglikelihoods they achieve. To do this we use Annealed Importance Sampling or AIS (Salakhutdinov and Murray, 2008). For all models we average 100 AIS runs with 10,000 uniformly spaced
reverse temperatures ?k . We compare RBMs trained with ratio matching, stochastic ratio matching
and biased stochastic ratio matching. We include experiments with RBMs trained with SML-1 for
comparison.
Additionally, we provide experiments to motivate the use of high-dimensional RBMs in NLP. We
use the RBM to pretrain the hidden layers of a feed-forward neural network (Hinton et al., 2006).
This acts as a regularization for the network and it helps optimization by initializing the network
close to a good local minimum (Erhan et al., 2010).
The hyper-parameters are cross-validated on a validation set consisting of 5% of the training set. In
our experiments with AIS, we use the validation log-likelihood as the objective. For classification,
we use the discriminative performance on the validation set. The hyper-parameters are found using
random search (Bergstra and Bengio, 2012) with 64 trials per set of experiments. The learning
rate for the RBMs is sampled from 10^{-[0,3]}, the number of hidden units from [500, 2000] and the number of training epochs from [5, 20]. The learning rate for the MLP is sampled from 10^{-[2,0]}. It
is trained for 32 epochs using early-stopping based on the validation set. We regularize the MLP by
dropping out 50% of the hidden units during training (Hinton et al., 2012). We adapt the learning
rate dynamically by multiplying it by 0.95 when the validation error increases.
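The learning-rate back-off just described amounts to a one-line multiplicative rule; a minimal sketch (our own illustration, with a hypothetical helper name):

```python
def adapt_learning_rate(lr, val_errors, factor=0.95):
    """Multiply the learning rate by `factor` when validation error increased."""
    if len(val_errors) >= 2 and val_errors[-1] > val_errors[-2]:
        return lr * factor
    return lr

lr = 0.1
lr = adapt_learning_rate(lr, [0.30, 0.28])   # error improved: unchanged
lr = adapt_learning_rate(lr, [0.28, 0.29])   # error got worse: decayed by 0.95
```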
All experiments are run on a cluster of double quad-core Intel Xeon E5345 running at 2.33Ghz with
2GB of RAM.
Table 1: Log-probabilities estimated by AIS for the RBMs trained with the different estimation
methods. With a fixed budget of epochs, SRM achieves likelihoods on the test set comparable with
RM and SML-1.
Estimates            log Ẑ      log(Ẑ ± σ̂)          Avg. log-prob. (Train / Test)
RCV1   Biased SRM    1084.96    1079.66, 1085.65     -758.73 / -793.20
       SRM            325.26     325.24, 325.27      -139.79 / -151.30
       RM             499.88     499.48, 500.17      -119.98 / -147.32
       SML-1          323.33     320.69, 323.99      -138.90 / -153.50
20 NG  Biased SRM    1723.94    1718.65, 1724.63     -960.34 / -1018.73
       SRM            546.52     546.55, 546.49      -178.39 / -190.72
       RM             975.42     975.62, 975.18      -159.92 / -185.61
       SML-1          612.15     611.68, 612.46      -173.56 / -188.82

6.1 Using SRM to train RBMs
We can measure the effectiveness of SRM by comparing it with various estimation methods for
the RBM. As the RBM is a generative model, we must compare these methods based on the loglikelihoods they achieve. Note that Dauphin et al. (2011) relies on the classification error because
there is no accepted performance measure for DAEs. As both RM and SML scale badly with input
dimension, we restrict the dimension of the dataset to the p = 1,000 most frequent words. We will
describe experiments with all dimensions in the next section.
As seen in Table 1, SRM is a good estimator for training RBMs and is a good approximation of RM.
We see that with the same budget of epochs SRM achieves log-likelihoods comparable with RM
on both datasets. The striking difference of more than 500 nats with Biased SRM shows that the
importance weights successfully unbias the estimator. Interestingly, we observe that RM is able to
learn better generative models than SML-1 for both datasets. This is similar to Marlin et al. (2010)
where Pseudolikelihood achieves better log-likelihood than SML on a subset of 20 newsgroups. We
observe this is an optimization problem since the training log-likelihood is also higher than RM.
One explanation is that SML-1 might experience mixing problems (Bengio et al., 2013).
Figure 1: Average speedup in the calculation of gradients by using the SRM objective compared to
RM. The speed-up is linear and reaches up to 2 orders of magnitude.
Figure 1 shows that as expected SRM achieves a linear speed-up compared to RM, reaching speedups of 2 orders of magnitude. In fact, we observed that the computation time of the gradients for RM
scale linearly with the size of the input while the computation time of SRM remains fairly constant
because the number of non-zeros varies little. This is an important property of SRM which makes it
suitable for very large scale inputs.
¹ http://qwone.com/~jason/20Newsgroups/20news-bydate-matlab.tgz
Figure 2: Average norm of the gradients for the terms in Equation 2 where xi = 1 and xi = 0.
Confirming the hypothesis for the proposal distribution the terms where xi = 1 are 2 orders of
magnitude larger.
The importance sampling scheme of SRM (Equation 7) relies on the hypothesis that terms where
xi = 1 produce a larger gradient than terms where xi = 0. We can verify this by monitoring the
average gradients during learning on RCV1. Figure 2 demonstrates that the average gradients for
the terms where xi = 1 is 2 orders of magnitudes larger than those where xi = 0. This confirms the
hypothesis underlying the sampling scheme of SRM.
6.2 Using RBMs as feature extractors for NLP
Having established that SRM is an efficient unbiased estimator of RM, we turn to the task of using
RBMs not as generative models but as feature extractors. We find that keeping the bias in SRM is
helpful for classification. This is similar to the known result that contrastive divergence, which is
biased, yields better classification results than persistent contrastive divergence, which is unbiased.
The bias increases the weight of non-zeros features. The superior performance of the biased objective suggests that the non-zero features contain more information about the classification task. In
other words, for these tasks it?s more important to focus on what is there than what is not there.
Table 2: Classification results on RCV1 with all 47,236 dimensions. The DBN trained with SRM
achieves state-of-the-art performance.
Model                      Test set F1
Rocchio                    0.693
k-NN                       0.765
SVM                        0.816
SDA-MLP (Rec. Sampling)    0.831
RBM-MLP (Unbiased SRM)     0.816
RBM-MLP (Biased SRM)       0.829
DBN-MLP (Biased SRM)       0.836
On RCV1, we train our models on all 47,236 dimensions. The RBM trained with SRM improves
on the state-of-the-art (Lewis et al., 2004), as shown in Table 2. The total training time for this
RBM using SRM is 57 minutes. We also train a Deep Belief Net (DBN) by stacking an RBM
trained with SML on top of the RBMs learned with SRM. This type of 2-layer deep architecture is
able to significantly improve the performance on that task (Table 2). In particular the DBN does
significantly better than a stack of denoising auto-encoders we trained using biased reconstruction
sampling (Dauphin et al., 2011), which appears as SDA-MLP (Rec. Sampling) in Table 2.
We apply RBMs trained with SRM on 20 newsgroups with all 61,188 dimensions. We see in Table
3 that this approach improves the previous state-of-the-art by over 1% (Larochelle et al., 2012),
beating non-pretrained MLPs and SVMs by close to 10 %. This result is closely followed by the
DAE trained with reconstruction sampling which in our experiments reaches 20.6% test error. The
Table 3: Classification results on 20 Newsgroups with all 61,188 dimensions. Prior results from
(Larochelle et al., 2012). The RBM trained with SRM achieves state-of-the-art results.
Model                      Test set error
SVM                        32.8 %
MLP                        28.2 %
RBM                        24.9 %
HDRBM                      21.9 %
DAE-MLP (Rec. Sampling)    20.6 %
RBM-MLP (Biased SRM)       20.5 %
simpler RBM trained by SRM is able to beat the more powerful HD-RBM model because it uses all
the 61,188 dimensions.
7 Conclusion
We have proposed a very simple algorithm called Stochastic Ratio Matching (SRM) to take advantage of sparsity in high-dimensional data when training discrete RBMs. It can be used to estimate
gradients in O(np) computation where p is the number of non-zeros, yielding linear speedup against
the O(nd) of Ratio Matching (RM) where d is the input size. It does so while providing an unbiased
estimator of the ratio matching gradient. Using this efficient estimator we train RBMs as features
extractors and achieve state-of-the-art results on 2 text classification benchmarks.
References

Bengio, Y. (2009). Learning deep architectures for AI. Now Publishers.

Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In NIPS'2006.

Bengio, Y., Courville, A., and Vincent, P. (2012). Representation learning: A review and new perspectives. Technical report, arXiv:1206.5538.

Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2013). Better mixing via deep representations. In ICML'13.

Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13, 281–305.

Dahl, G., Adams, R., and Larochelle, H. (2012). Training restricted Boltzmann machines on word observations. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 679–686, New York, NY, USA. Omnipress.

Dauphin, Y., Glorot, X., and Bengio, Y. (2011). Large-scale learning of embeddings with reconstruction sampling. In ICML'11.

Erhan, D., Bengio, Y., Courville, A., Manzagol, P.-A., Vincent, P., and Bengio, S. (2010). Why does unsupervised pre-training help deep learning? JMLR, 11, 625–660.

Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.

Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. Technical report, arXiv:1207.0580.

Hyvärinen, A. (2005). Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6, 695–709.

Hyvärinen, A. (2007). Some extensions of score matching. Computational Statistics and Data Analysis, 51, 2499–2512.

Larochelle, H., Mandel, M. I., Pascanu, R., and Bengio, Y. (2012). Learning algorithms for the classification restricted Boltzmann machine. Journal of Machine Learning Research, 13, 643–669.

Lewis, D. D., Yang, Y., Rose, T. G., and Li, F. (2004). RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5, 361–397.

Marlin, B., Swersky, K., Chen, B., and de Freitas, N. (2010). Inductive principles for restricted Boltzmann machine learning. In AISTATS, volume 9, pages 509–516.

Salakhutdinov, R. and Murray, I. (2008). On the quantitative analysis of deep belief networks. In ICML 2008, volume 25, pages 872–879.

Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML'2008, pages 1064–1071.

Younes, L. (1999). On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates. Stochastics and Stochastic Reports, 65(3), 177–228.
Generalized Denoising Auto-Encoders as Generative
Models
Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent
Département d'informatique et recherche opérationnelle, Université de Montréal
Abstract
Recent work has shown how denoising and contractive autoencoders implicitly
capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the
data is continuous-valued. This has led to various proposals for sampling from
this implicitly learned density function, using Langevin and Metropolis-Hastings
MCMC. However, it remained unclear how to connect the training procedure
of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification, which is only valid in the limit of small corruption noise. We propose here
a different attack on the problem, which deals with all these issues: arbitrary (but
noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood),
handling both discrete and continuous-valued variables, and removing the bias due
to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).
1 Introduction
Auto-encoders learn an encoder function from input to representation and a decoder function back
from representation to input space, such that the reconstruction (composition of encoder and decoder) is good for training examples. Regularized auto-encoders also involve some form of regularization that prevents the auto-encoder from simply learning the identity function, so that reconstruction error will be low at training examples (and hopefully at test examples) but high in general.
Different variants of auto-encoders and sparse coding have been, along with RBMs, among the
most successful building blocks in recent research in deep learning (Bengio et al., 2013b). Whereas
the usefulness of auto-encoder variants as feature learners for supervised learning can directly be
assessed by performing supervised learning experiments with unsupervised pre-training, what has
remained until recently rather unclear is the interpretation of these algorithms in the context of
pure unsupervised learning, as devices to capture the salient structure of the input data distribution.
Whereas the answer is clear for RBMs, it is less obvious for regularized auto-encoders. Do they
completely characterize the input distribution or only some aspect of it? For example, clustering
algorithms such as k-means only capture the modes of the distribution, while manifold learning
algorithms characterize the low-dimensional regions where the density concentrates.
Some of the first ideas about the probabilistic interpretation of auto-encoders were proposed by Ranzato et al. (2008): they were viewed as approximating an energy function through the reconstruction
error, i.e., being trained to have low reconstruction error at the training examples and high reconstruction error elsewhere (through the regularizer, e.g., sparsity or otherwise, which prevents the
auto-encoder from learning the identity function). An important breakthrough then came, yielding
a first formal probabilistic interpretation of regularized auto-encoders as models of the input distribution, with the work of Vincent (2011). This work showed that some denoising auto-encoders
(DAEs) correspond to a Gaussian RBM and that minimizing the denoising reconstruction error (as a
squared error) estimates the energy function through a regularized form of score matching, with the
regularization disappearing as the amount of corruption noise goes to 0, and then converging to the
same solution as score matching (Hyvärinen, 2005). This connection and its generalization to other
energy functions, giving rise to the general denoising score matching training criterion, is discussed
in several other papers (Kingma and LeCun, 2010; Swersky et al., 2011; Alain and Bengio, 2013).
Another breakthrough has been the development of an empirically successful sampling algorithm
for contractive auto-encoders (Rifai et al., 2012), which basically involves composing encoding, decoding, and noise addition steps. This algorithm is motivated by the observation that the Jacobian
matrix (of derivatives) of the encoding function provides an estimator of a local Gaussian approximation of the density, i.e., the leading singular vectors of that matrix span the tangent plane of the
manifold near which the data density concentrates. However, a formal justification for this algorithm
remains an open problem.
The last step in this development (Alain and Bengio, 2013) generalized the result from Vincent
(2011) by showing that when a DAE (or a contractive auto-encoder with the contraction on the whole
encode/decode reconstruction function) is trained with small Gaussian corruption and squared error
loss, it estimates the score (derivative of the log-density) of the underlying data-generating distribution, which is proportional to the difference between reconstruction and input. This result does
not depend on the parametrization of the auto-encoder, but suffers from the following limitations: it
applies to one kind of corruption (Gaussian), only to continuous-valued inputs, only for one kind of
loss (squared error), and it becomes valid only in the limit of small noise (even though in practice,
best results are obtained with large noise levels, comparable to the range of the input).
What we propose here is a different probabilistic interpretation of DAEs, which is valid for any data
type, any corruption process (so long as it has broad enough support), and any reconstruction loss
(so long as we can view it as a log-likelihood).
The basic idea is that if we corrupt observed random variable X into X̃ using conditional distribution C(X̃|X), we are really training the DAE to estimate the reverse conditional P(X|X̃). Combining this estimator with the known C(X̃|X), we show that we can recover a consistent estimator of P(X) through a Markov chain that alternates between sampling from P(X|X̃) and sampling from C(X̃|X), i.e., encode/decode, sample from the reconstruction distribution model P(X|X̃), apply the stochastic corruption procedure C(X̃|X), and iterate.
This theoretical result is validated through experiments on artificial data in a non-parametric setting
and experiments on real data in a parametric setting (with neural net DAEs). We find that we can
improve the sampling behavior by using the model itself to define the corruption process, yielding a
training procedure that has some surface similarity to the contrastive divergence algorithm (Hinton,
1999; Hinton et al., 2006).
Algorithm 1 THE GENERALIZED DENOISING AUTO-ENCODER TRAINING ALGORITHM requires
a training set or training distribution D of examples X, a given corruption process C(X̃|X) from
which one can sample, and with which one trains a conditional distribution Pθ(X|X̃) from which
one can sample.
repeat
• sample training example X ∼ D
• sample corrupted input X̃ ∼ C(X̃|X)
• use (X, X̃) as an additional training example towards minimizing the expected value of
-log Pθ(X|X̃), e.g., by a gradient step with respect to θ.
until convergence of training (e.g., as measured by early stopping on out-of-sample negative log-likelihood)
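As an illustrative sketch (not part of the paper's original presentation), Algorithm 1 can be instantiated non-parametrically on a toy discrete problem, with Pθ(X|X̃) estimated by counting, the same kind of multinomial maximum-likelihood estimator used in Section 4. The uniform data distribution and keep-or-resample corruption are assumptions made only for this example:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10       # X takes values in {0, ..., K-1} (illustrative toy setup)
FLIP = 0.5   # corruption: with prob. FLIP, replace X by a uniform value

def corrupt(x):
    """Sample X_tilde ~ C(X_tilde | X): keep x or resample uniformly."""
    return int(rng.integers(K)) if rng.random() < FLIP else x

# Algorithm 1 with a counting (maximum-likelihood) estimator of P(X | X_tilde)
counts = np.zeros((K, K))            # counts[x_tilde, x]
for _ in range(50_000):
    x = int(rng.integers(K))         # sample training example X ~ D (uniform here)
    x_tilde = corrupt(x)             # sample corrupted input X_tilde ~ C(. | X)
    counts[x_tilde, x] += 1          # the "training step" is just a count update

cond = counts / counts.sum(axis=1, keepdims=True)  # rows: P(X | X_tilde = xt)
```

With this corruption the learned conditional concentrates on X = X̃, as Bayes' rule predicts.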
2 Generalizing Denoising Auto-Encoders

2.1 Definition and Training
Let P(X) be the data-generating distribution over observed random variable X. Let C be a given corruption process that stochastically maps an X to a X̃ through conditional distribution C(X̃|X). The training data for the generalized denoising auto-encoder is a set of pairs (X, X̃) with X ∼ P(X) and X̃ ∼ C(X̃|X). The DAE is trained to predict X given X̃ through a learned conditional distribution Pθ(X|X̃), by choosing this conditional distribution within some family of distributions indexed by θ, not necessarily a neural net. The training procedure for the DAE can generally be formulated as learning to predict X given X̃ by possibly regularized maximum likelihood, i.e., the generalization performance that this training criterion attempts to minimize is

L(θ) = -E[log Pθ(X|X̃)]    (1)

where the expectation is taken over the joint data-generating distribution

P(X, X̃) = P(X) C(X̃|X).
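To make criterion (1) concrete, one can evaluate L(θ) exactly in a small discrete example and check that the true reverse conditional obtained by Bayes' rule achieves a lower expected negative log-likelihood than a perturbed competitor (a consequence of Gibbs' inequality). The particular P(X) and corruption below are illustrative assumptions:

```python
import numpy as np

K = 5
P = np.full(K, 1.0 / K)                 # toy data distribution P(X)
C = 0.5 * np.eye(K) + 0.5 / K           # C[x, xt] = C(X_tilde = xt | X = x)
joint = P[:, None] * C                  # joint[x, xt] = P(X) C(X_tilde | X)

def loss(cond):
    """L(theta) = -E[log P_theta(X | X_tilde)], expectation under the joint.
    cond[xt, x] = P_theta(X = x | X_tilde = xt)."""
    return -(joint * np.log(cond.T)).sum()

# True reverse conditional by Bayes' rule, and a blurred competitor
post = (joint / joint.sum(axis=0)).T    # post[xt, x] = P(X = x | X_tilde = xt)
blurred = 0.5 * post + 0.5 / K          # still a valid conditional

best, worse = loss(post), loss(blurred)
```

Here `best < worse`: the expected loss is a cross-entropy, minimized by the true posterior.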
2.2 Sampling

We define the following pseudo-Gibbs Markov chain associated with Pθ:

Xt ∼ Pθ(X | X̃t-1)    (2)
X̃t ∼ C(X̃ | Xt)    (3)

which can be initialized from an arbitrary choice X0. This is the process by which we are going to generate samples Xt according to the model implicitly learned by choosing θ. We define T(Xt|Xt-1) the transition operator that defines a conditional distribution for Xt given Xt-1, independently of t, so that the sequence of Xt's forms a homogeneous Markov chain. If the asymptotic marginal distribution of the Xt's exists, we call this distribution π(X), and we show below that it consistently estimates P(X).
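The chain of Eqs. (2)-(3) can be simulated directly. In the sketch below (uniform toy P(X) and keep-or-resample corruption, both illustrative assumptions) the exact reverse conditional is available in closed form, and the empirical marginal of the Xt's approaches P(X):

```python
import numpy as np

rng = np.random.default_rng(1)
K, FLIP = 10, 0.5

# Exact reverse conditional for uniform P(X) and a corruption that keeps X
# with prob. 1 - FLIP and resamples uniformly otherwise (by Bayes' rule):
# P(X = x | X_tilde = xt) = 0.55 if x == xt else 0.05
cond = 0.5 * np.eye(K) + 0.05

def corrupt(x):
    return int(rng.integers(K)) if rng.random() < FLIP else x

x_tilde = 0                                    # arbitrary X_tilde_0
samples = []
for _ in range(20_000):
    x = int(rng.choice(K, p=cond[x_tilde]))    # Eq. (2)
    x_tilde = corrupt(x)                       # Eq. (3)
    samples.append(x)

freq = np.bincount(samples, minlength=K) / len(samples)
```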
Note that the above chain is not a proper Gibbs chain in general because there is no guarantee that Pθ(X|X̃t-1) and C(X̃|Xt) are consistent with a unique joint distribution. In that respect, the situation is similar to the sampling procedure for dependency networks (Heckerman et al., 2000), in that the pairs (Xt, X̃t-1) are not guaranteed to have the same asymptotic distribution as the pairs (Xt, X̃t) as t → ∞. As a follow-up to the results in the next section, it is shown in Bengio et al. (2013a) that dependency networks can be cast into the same framework (which is that of Generative Stochastic Networks), and that if the Markov chain is ergodic, then its stationary distribution will define a joint distribution between the random variables (here that would be X and X̃), even if the conditionals are not consistent with it.
2.3 Consistency

Normally we only have access to a finite number n of training examples, but as n → ∞, the empirical training distribution approaches the data-generating distribution. To compensate for the finite training set, we generally introduce a (possibly data-dependent) regularizer Ω, and the actual training criterion is a sum over n training examples (X, X̃),

Ln(θ) = (1/n) Σ_{X∼P(X), X̃∼C(X̃|X)} [λn Ω(θ, X, X̃) - log Pθ(X|X̃)]    (4)

where we allow the regularization coefficient λn to be chosen according to the number of training examples n, with λn → 0 as n → ∞. With λn → 0 we get that Ln → L (i.e., converges to the generalization error, Eq. 1), so consistent estimators of P(X|X̃) stay consistent. We define θn to be the minimizer of Ln(θ) when given n training examples.
We define Tn to be the transition operator Tn(Xt|Xt-1) = ∫ Pθn(Xt|X̃) C(X̃|Xt-1) dX̃ associated with θn (the parameter obtained by minimizing the training criterion with n examples), and define πn to be the asymptotic distribution of the Markov chain generated by Tn (if it exists). We also define T to be the operator of the Markov chain associated with the learned model as n → ∞.
Theorem 1. If Pθn(X|X̃) is a consistent estimator of the true conditional distribution P(X|X̃) and Tn defines an ergodic Markov chain, then as the number of examples n → ∞, the asymptotic distribution πn(X) of the generated samples converges to the data-generating distribution P(X).
Proof. If Tn is ergodic, then the Markov chain converges to a πn. Based on our definition of the "true" joint, P(X, X̃) = P(X)C(X̃|X), one obtains a conditional P(X|X̃) ∝ P(X)C(X̃|X). This conditional, along with P(X̃|X) = C(X̃|X), can be used to define a proper Gibbs chain where one alternatively samples from P(X̃|X) and from P(X|X̃). Let T be the corresponding "true" transition operator, which maps the t-th sample X to the t+1-th in that chain. That is, T(Xt|Xt-1) = ∫ P(Xt|X̃) C(X̃|Xt-1) dX̃. T produces P(X) as asymptotic marginal distribution over X (as we consider more samples from the chain) simply because P(X) is the marginal distribution of the joint P(X)C(X̃|X) to which the chain converges. By hypothesis we have that Pθn(X|X̃) → P(X|X̃) as n → ∞. Note that Tn is defined exactly as T but with P(Xt|X̃) replaced by Pθn(X|X̃). Hence Tn → T as n → ∞.

Now let us convert the convergence of Tn to T into the convergence of πn(X) to P(X). We will exploit the fact that for the 2-norm, matrix M and unit vector v, ||Mv||2 ≤ sup_{||x||2=1} ||Mx||2 = ||M||2. Consider M = T - Tn and v the principal eigenvector of T, which, by the Perron-Frobenius theorem, corresponds to the asymptotic distribution P(X). Since Tn → T, ||T - Tn||2 → 0. Hence ||(T - Tn)v||2 ≤ ||T - Tn||2 → 0, which implies that Tn v → T v = v, where the last equality comes from the Perron-Frobenius theorem (the leading eigenvalue is 1). Since Tn v → v, it implies that v becomes the leading eigenvector of Tn, i.e., the asymptotic distribution of the Markov chain, πn(X), converges to the true data-generating distribution, P(X), as n → ∞.
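The operator-perturbation argument above can be checked numerically on a small ergodic chain: as a transition matrix Tn approaches T, its stationary distribution approaches that of T. The 3x3 matrix and the mixing perturbation below are arbitrary illustrative choices, not from the paper:

```python
import numpy as np

def stationary(T, iters=500):
    """Stationary distribution of a row-stochastic matrix via power iteration."""
    v = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(iters):
        v = v @ T
    return v

# An arbitrary small ergodic "true" transition operator T
T = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
pi = stationary(T)

# Perturbed operators T_n -> T (mixing T with a fixed stochastic matrix E):
# as ||T - T_n|| -> 0, the stationary distribution converges to pi
E = np.full((3, 3), 1.0 / 3.0)
gaps = [np.abs(stationary((1 - eps) * T + eps * E) - pi).max()
        for eps in (0.3, 0.1, 0.01)]
```

The gap shrinks roughly linearly with the size of the perturbation, as the theorem's argument suggests.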
Hence the asymptotic sampling distribution associated with the Markov chain defined by Tn (i.e., the model) implicitly defines the distribution πn(X) learned by the DAE over the observed variable X. Furthermore, that estimator of P(X) is consistent so long as our (regularized) maximum likelihood estimator of the conditional Pθ(X|X̃) is also consistent. We now provide sufficient conditions for the ergodicity of the chain operator (i.e., to apply Theorem 1).
Corollary 1. If Pθ(X|X̃) is a consistent estimator of the true conditional distribution P(X|X̃), and both the data-generating distribution and denoising model are contained in and non-zero in a finite-volume region V (i.e., ∀X̃, ∀X ∉ V, P(X) = 0 and Pθ(X|X̃) = 0, and ∀X̃, ∀X ∈ V, P(X) > 0, Pθ(X|X̃) > 0, C(X̃|X) > 0), and these statements remain true in the limit of n → ∞, then the asymptotic distribution πn(X) of the generated samples converges to the data-generating distribution P(X).
Proof. To obtain the existence of a stationary distribution, it is sufficient to have irreducibility (every value reachable from every other value), aperiodicity (no cycle where only paths through the cycle allow to return to some value), and recurrence (probability 1 of returning eventually). These conditions can be generalized to the continuous case, where we obtain ergodic Harris chains rather than ergodic Markov chains. If Pθ(X|X̃) > 0 and C(X̃|X) > 0 (for X ∈ V), then Tn(Xt|Xt-1) > 0 as well, because

Tn(Xt|Xt-1) = ∫ Pθ(Xt|X̃) C(X̃|Xt-1) dX̃

This positivity of the transition operator guarantees that one can jump from any point in V to any other point in one step, thus yielding irreducibility and aperiodicity. To obtain recurrence (preventing the chain from diverging to infinity), we rely on the assumption that the domain V is bounded. Note that although Tn(Xt|Xt-1) > 0 could be true for any finite n, we need this condition to hold for n → ∞ as well, to obtain the consistency result of Theorem 1. By assuming this positivity (Boltzmann distribution) holds for the data-generating distribution, we make sure that πn does not converge to a distribution which puts 0's anywhere in V. Having satisfied all the conditions for the existence of a stationary distribution for Tn as n → ∞, we can apply Theorem 1 and obtain its conclusion.
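For a discrete chain the positivity condition is easy to verify, since the composed operator is just a matrix product over the intermediate X̃; the strictly positive C and Pθ below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
K = 5
C = 0.5 * np.eye(K) + 0.5 / K        # C[x, xt] = C(X_tilde = xt | X = x), all > 0
cond = rng.random((K, K)) + 0.1      # any strictly positive P(X | X_tilde)
cond /= cond.sum(axis=1, keepdims=True)

# T_n(x_t | x_{t-1}) = sum over xt of C(xt | x_{t-1}) P(x_t | xt):
T = C @ cond                         # rows: x_{t-1}, columns: x_t
```

Every entry of T is strictly positive, so any state reaches any other in one step: irreducibility and aperiodicity hold, exactly as in the proof.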
Note how these conditions take care of the various troubling cases one could think of. We avoid the case where there is no corruption (which would yield a wrong estimation, with the DAE simply learning a Dirac probability at its input). Second, we avoid the case where the chain wanders to infinity by assuming a finite volume where the model and data live, a real concern in the continuous case. If it became a real issue, we could perform rejection sampling to make sure that P(X|X̃) produces X ∈ V.
2.4 Locality of the Corruption and Energy Function

If we believe that P(X|X̃) is well estimated for all (X, X̃) pairs, i.e., that it is approximately consistent with C(X̃|X), then we get as many estimators of the energy function as we want, by picking a particular value of X̃.

Let us define the notation P(·) to denote the probability of the joint, marginals or conditionals over the pairs (Xt, X̃t-1) that are produced by the model's Markov chain T as t → ∞. So P(X) = π(X) is the asymptotic distribution of the Markov chain T, and P(X̃) the marginal over the X̃'s in that chain. The above assumption means that P(X̃t-1|Xt) ≈ C(X̃t-1|Xt) (which is not guaranteed in general, but only asymptotically as P approaches the true P). Then, by Bayes' rule, P(X) = P(X|X̃)P(X̃)/P(X̃|X) ≈ P(X|X̃)P(X̃)/C(X̃|X), so that we can get an estimated energy function from any given choice of X̃ through energy(X) = -log P(X|X̃) + log C(X̃|X), where one should note that the intractable partition function depends on the chosen value of X̃.

How much can we trust that estimator and how should X̃ be chosen? First note that P(X|X̃) has only been trained for pairs (X, X̃) for which X̃ is relatively close to X (assuming that the corruption is indeed changing X generally into some neighborhood). Hence, although in theory (with infinite amount of data and capacity) the above estimator should be good, in practice it might be poor when X is far from X̃. So if we pick a particular X̃, the estimated energy might be good for X in the neighborhood of X̃ but poor elsewhere. What we could do, though, is use a different approximate energy function in different regions of the input space. Hence the above estimator gives us a way to compare the probabilities of nearby points X1 and X2 (through their difference in energy), picking for example a midpoint X̃ = (X1 + X2)/2. One could also imagine that if X1 and XN are far apart, we could chart a path between X1 and XN with intermediate points Xk and use an estimator of the relative energies between the neighbors Xk, Xk+1, add them up, and obtain an estimator of the relative energy between X1 and XN.
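In a discrete toy setting where the posterior is available exactly (the non-uniform P(X) below is an illustrative assumption), the estimator energy(X) = -log P(X|X̃) + log C(X̃|X) can be computed directly, and its differences recover the log-probability ratios between points exactly:

```python
import numpy as np

K = 5
P = np.array([0.4, 0.25, 0.15, 0.12, 0.08])   # hypothetical data distribution P(X)
C = 0.5 * np.eye(K) + 0.5 / K                 # C[x, xt] = C(X_tilde = xt | X = x)

xt = 2                                        # an arbitrary choice of X_tilde
post = P * C[:, xt]                           # P(X = x | X_tilde = xt), unnormalized
post /= post.sum()

# energy(x) = -log P(x | xt) + log C(xt | x), defined up to an additive
# constant (the partition function) that depends only on the chosen xt
energy = -np.log(post) + np.log(C[:, xt])

# Differences of estimated energies equal differences of -log P(x)
diff = energy - energy[0]
true_diff = -(np.log(P) - np.log(P[0]))
```

Since the constant log Z(X̃) cancels in differences, relative probabilities of nearby points are recovered exactly when the posterior is exact.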
Figure 1: Although P(X) may be complex and multi-modal, P(X|X̃) is often simple and approximately unimodal (e.g., multivariate Gaussian, pink oval) for most values of X̃ when C(X̃|X) is a local corruption. P(X) can be seen as an infinite mixture of these local distributions (weighted by P(X̃)).
This brings up an interesting point. If we could always obtain a good estimator P(X|X̃) for any X̃, we could just train the model with C(X̃|X) = C(X̃), i.e., with an unconditional noise process that ignores X. In that case, the estimator P(X|X̃) would directly equal P(X) since X̃ and X are actually sampled independently in its "denoising" training data. We would have gained nothing over just training any probabilistic model directly on the observed X's. The gain we expect from using the denoising framework is that if X̃ is a local perturbation of X, then the true P(X|X̃) can be well approximated by a much simpler distribution than P(X). See Figure 1 for a visual explanation: in the limit of very small perturbations, one could even assume that P(X|X̃) can be well approximated by a simple unimodal distribution such as the Gaussian (for continuous data) or factorized binomial (for discrete binary data) commonly used in DAEs as the reconstruction probability function (conditioned on X̃). This idea is already behind the non-local manifold Parzen windows (Bengio et al., 2006a) and non-local manifold tangent learning (Bengio et al., 2006b) algorithms: the local density around a point X̃ can be approximated by a multivariate Gaussian whose covariance matrix has leading eigenvectors that span the local tangent of the manifold near which the data concentrates (if it does). The idea of a locally Gaussian approximation of a density with a manifold structure is also exploited in the more recent work on the contractive auto-encoder (Rifai et al., 2011) and associated sampling procedures (Rifai et al., 2012). Finally, strong theoretical evidence in favor of that idea comes from the result of Alain and Bengio (2013): when the amount of corruption noise converges to 0 and the input variables have a smooth continuous density, then a unimodal Gaussian reconstruction density suffices to fully capture the joint distribution.
Hence, although P(X|X̃) encapsulates all information about P(X) (assuming C given), it will generally have far fewer non-negligible modes, making it easier to approximate. This can be seen analytically by considering the case where P(X) is a mixture of many Gaussians and the corruption
Figure 2: Walkback samples get attracted by spurious modes and contribute to removing them. Segment of data manifold in violet and example walkback path in red dotted line, starting on the manifold and going towards a spurious attractor. The vector field represents expected moves of the chain, for a unimodal P(X|X̃), with arrows from X̃ to X.
is a local Gaussian: P(X|X̃) remains a Gaussian mixture, but one for which most of the modes have become negligible (Alain and Bengio, 2013). We return to this in Section 3, suggesting that in order to avoid spurious modes, it is better to have non-infinitesimal corruption, allowing faster mixing and successful burn-in not pulled by spurious modes far from the data.

3 Reducing the Spurious Modes with Walkback Training
Reducing the Spurious Modes with Walkback Training
Sampling in high-dimensional spaces (like in experiments below) using a simple local corruption
process (such as Gaussian or salt-and-pepper noise) suggests that if the corruption is too local, the
DAE?s behavior far from the training examples can create spurious modes in the regions insufficiently visited during training. More training iterations or increasing the amount of corruption noise
helps to substantially alleviate that problem, but we discovered an even bigger boost by training the
DAE Markov chain to walk back towards the training examples (see Figure 2). We exploit knowl? to define the corruption, so as to pick values of X
? that
edge of the currently learned model P (X|X)
would be obtained by following the generative chain: wherever the model would go if we sampled
using the generative Markov chain starting at a training example X, we consider to be a kind of
? from which the auto-encoder should move away (and towards X). The spirit
?negative example? X
of this procedure is thus very similar to the CD-k (Contrastive Divergence with k MCMC steps)
procedure proposed to train RBMs (Hinton, 1999; Hinton et al., 2006).
More precisely, the modified corruption process C̃ we propose is the following, based on the original corruption process C. We use it in a version of the training algorithm called walkback, where we replace the corruption process C of Algorithm 1 by the walkback process C̃ of Algorithm 2. This also provides extra training examples (taking advantage of the X̃ samples generated along the walk away from X). It is called walkback because it forces the DAE to learn to walk back from the random walk it generates, towards the X's in the training set.
Algorithm 2: THE WALKBACK ALGORITHM is based on the walkback corruption process C̃(X̃|X), defined below in terms of a generic original corruption process C(X̃|X) and the current model's reconstruction conditional distribution P(X|X̃). For each training example X, it provides a sequence of additional training examples (X, X̃*) for the DAE. It has a hyper-parameter that is a geometric distribution parameter 0 < p < 1 controlling the length of these walks away from X, with p = 0.5 by default. Training by Algorithm 1 is the same, but using all X̃* in the returned list L to form the pairs (X, X̃*) as training examples instead of just (X, X̃).
1: X* ← X, L ← [ ]
2: Sample X̃* ∼ C(X̃|X*)
3: Sample u ∼ Uniform(0, 1)
4: if u > p then
5:   Append X̃* to L and return L
6: If during training, append X̃* to L, so (X, X̃*) will be an additional training example.
7: Sample X* ∼ P(X|X̃*)
8: goto 2.
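A direct transcription of Algorithm 2 for a toy discrete corruption (the uniform "current model" is only a placeholder assumption) shows the geometrically distributed walk lengths it produces, with mean 1/(1 - p) = 2 for p = 0.5:

```python
import numpy as np

rng = np.random.default_rng(3)
K, FLIP, P_WALK = 10, 0.5, 0.5   # P_WALK is the geometric parameter p

def corrupt(x):
    """Generic local corruption C(X_tilde | X): keep x or resample uniformly."""
    return int(rng.integers(K)) if rng.random() < FLIP else x

def walkback(x, cond, p=P_WALK):
    """One walkback run: returns the list L of corrupted examples X_tilde*
    that will each be paired with the clean X as training examples.
    cond[xt] is the current model's P(X | X_tilde = xt)."""
    x_star, L = x, []
    while True:
        x_tilde = corrupt(x_star)                      # step 2
        if rng.random() > p:                           # steps 3-5: stop
            L.append(x_tilde)
            return L
        L.append(x_tilde)                              # step 6 (training mode)
        x_star = int(rng.choice(K, p=cond[x_tilde]))   # step 7: model sample
        # step 8: loop back to step 2

cond = np.full((K, K), 1.0 / K)   # placeholder "current model" P(X | X_tilde)
lengths = [len(walkback(int(rng.integers(K)), cond)) for _ in range(2000)]
```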
Proposition 1. Let P(X) be the implicitly defined asymptotic distribution of the Markov chain alternating sampling from P(X|X̃) and C(X̃|X), where C is the original local corruption process. Under the assumptions of Corollary 1, minimizing the training criterion in the walkback training algorithm for generalized DAEs (combining Algorithms 1 and 2) produces a P(X) that is a consistent estimator of the data-generating distribution P(X).
? where Pk
Proof. Consider that during training, we produce a sequence of estimators Pk (X|X)
corresponds to the k-th training iteration (modifying the parameters after each iteration). With the
? from which the next model
walkback algorithm, Pk?1 is used to obtain the corrupted samples X
Pk is produced. If training converges, Pk ? Pk+1 = P and we can then consider the whole
corruption process C? fixed. By corollary 1, the Markov chain obtained by alternating samples from
? and samples from C(
? X|X)
?
P (X|X)
converges to an asymptotic distribution P (X) which estimates
? X|X)
?
the underlying data-generating distribution P(X). The walkback corruption C(
corresponds
?
to a few steps alternating sampling from C(X|X)
(the fixed local corruption) and sampling from
? Hence the overall sequence when using C? can be seen as a Markov chain obtained by
P (X|X).
?
? just as it was when using merely C. Hence,
alternatively sampling from C(X|X)
and from P (X|X)
?
once the model is trained with walkback, one can sample from it usingc orruption C(X|X).
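The generative Markov chain that this proposition refers to, alternating between the corruption process and the trained reconstruction distribution, can be sketched as follows (a minimal sketch; `corrupt` and `reconstruct` are hypothetical stand-ins for samplers of C(X~|X) and P(X|X~), and the burn-in length is an assumption):

```python
def sample_chain(x0, corrupt, reconstruct, n_steps=1000, burn_in=100):
    """Sketch of the sampling chain: X~_t ~ C(X~|X_t), X_{t+1} ~ P(X|X~_t).
    After a burn-in period, the visited X values are approximate samples
    from the model's implicitly defined asymptotic distribution P(X)."""
    samples = []
    x = x0
    for t in range(n_steps):
        x_tilde = corrupt(x)       # local corruption step
        x = reconstruct(x_tilde)   # reconstruction (denoising) step
        if t >= burn_in:
            samples.append(x)
    return samples
```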
A consequence is that the walkback training algorithm estimates the same distribution as the original denoising algorithm, but may do it more efficiently (as we observe in the experiments), by
exploring the space of corruptions in a way that spends more time where it most helps the model.
4 Experimental Validation
Non-parametric case. The mathematical results presented here apply to any denoising training
criterion where the reconstruction loss can be interpreted as a negative log-likelihood. This remains
true whether or not the denoising machine P(X|X~) is parametrized as the composition of
an encoder and decoder. This is also true of the asymptotic estimation results in Alain and Bengio
(2013). We experimentally validate the above theorems in a case where the asymptotic limit (of
enough data and enough capacity) can be reached, i.e., in a low-dimensional non-parametric setting.
Fig. 3 shows the distribution recovered by the Markov chain for discrete data with only 10 different
values. The conditional P(X|X~) was estimated by multinomial models and maximum likelihood
(counting) from 5000 training examples. 5000 samples were generated from the chain to estimate
the asymptotic distribution πn(X). For continuous data, Figure 3 also shows the result of 5000
generated samples and 500 original training examples with X ∈ R^10, with scatter plots of pairs of
dimensions. The estimator is also non-parametric (Parzen density estimator of P(X|X~)).
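The discrete multinomial estimation by counting can be sketched as follows (a minimal sketch, not the paper's code; the tiny smoothing constant `alpha` is an added assumption to keep rows well-defined):

```python
import numpy as np

def estimate_conditional_counts(X, X_tilde, n_values=10, alpha=1e-6):
    """Maximum-likelihood (counting) estimate of P(X|X~) for discrete data
    with n_values possible values. Given paired observations (x, x~),
    returns an (n_values, n_values) matrix whose row x~ is the multinomial
    distribution P(X = . | X~ = x~)."""
    counts = np.full((n_values, n_values), alpha)
    for x, xt in zip(X, X_tilde):
        counts[xt, x] += 1.0
    # normalize each row into a conditional distribution
    return counts / counts.sum(axis=1, keepdims=True)
```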
Figure 3: Top left: histogram of a data-generating distribution (true, blue), the empirical distribution
(red), and the estimated distribution using a denoising maximum likelihood estimator. Other figures:
pairs of variables (out of 10) showing the training samples and the model-generated samples.
MNIST digits. We trained a DAE on the binarized MNIST data (thresholding at 0.5). A Theano¹
(Bergstra et al., 2010) implementation is available². The 784-2000-784 auto-encoder is trained for
200 epochs with the 50000 training examples and salt-and-pepper noise (probability 0.5 of corrupting each bit, setting it to 1 or 0 with probability 0.5). It has 2000 tanh hidden units and is trained by
minimizing cross-entropy loss, i.e., maximum likelihood on a factorized Bernoulli reconstruction
distribution. With walkback training, a chain of 5 steps was used to generate 5 corrupted examples
for each training example. Figure 4 shows samples generated with and without walkback. The
quality of the samples was also estimated quantitatively by measuring the log-likelihood of the test
set under a non-parametric density estimator P̂(x) = mean_{X~} P(x|X~) constructed from 10000
consecutively generated samples (X~ from the Markov chain). The expected value of E[P̂(x)] over the
samples can be shown (Bengio and Yao, 2013) to be a lower bound (i.e. conservative estimate) of
the true (implicit) model density P(x). The test set log-likelihood bound was not used to select
among model architectures, but visual inspection of samples generated did guide the preliminary
search reported here. Optimization hyper-parameters (learning rate, momentum, and learning rate
reduction schedule) were selected based on the training objective. We compare against a state-of-the-art RBM (Cho et al., 2013) with an AIS log-likelihood estimate of -64.1 (AIS estimates tend
to be optimistic). We also drew samples from the RBM and applied the same estimator (using the
mean of the RBM's P(x|h) with h sampled from the Gibbs chain), and obtained a log-likelihood
non-parametric bound of -233, skipping 100 MCMC steps between samples (otherwise numbers are
very poor for the RBM, which does not mix at all). The DAE log-likelihood bound with and without
walkback is respectively -116 and -142, confirming visual inspection suggesting that the walkback
algorithm produces fewer spurious samples. However, the RBM samples can be improved by a spatial
blur. By tuning the amount of blur (the spread of the Gaussian convolution), we obtained a bound
of -112 for the RBM. Blurring did not help the auto-encoder.
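The salt-and-pepper corruption described above can be sketched as follows (a minimal sketch under the stated noise parameters; the helper name and the use of NumPy's `default_rng` are assumptions):

```python
import numpy as np

def salt_and_pepper(x, corrupt_prob=0.5, rng=None):
    """Salt-and-pepper corruption: each bit of x is corrupted with
    probability corrupt_prob; a corrupted bit is set to 1 or 0 with
    equal probability, independently of its original value."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) < corrupt_prob          # which bits to corrupt
    noise = (rng.random(x.shape) < 0.5).astype(x.dtype)  # replacement values
    return np.where(mask, noise, x)
```

On binarized inputs, roughly half the pixels are replaced, and half of those flips land on the original value, so about a quarter of the pixels actually change.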
Figure 4: Successive samples generated by the Markov chain associated with the trained DAEs according to the plain sampling scheme (left) and walkback sampling scheme (right). There are fewer
"spurious" samples with the walkback algorithm.
5 Conclusion and Future Work
We have proven that training a model to denoise is a way to implicitly estimate the underlying
data-generating process, and that a simple Markov chain that alternates sampling from the denoising
model and from the corruption process converges to that estimator. This provides a means for
generating data from any DAE (if the corruption is not degenerate, more precisely, if the above
chain converges). We have validated those results empirically, both in a non-parametric setting and
with real data. This study has also suggested a variant of the training procedure, walkback training,
which seems to converge faster to the same target distribution.

One of the insights arising out of the theoretical results presented here is that in order to reach the
asymptotic limit of fully capturing the data distribution P(X), it may be necessary for the model's
P(X|X~) to have the ability to represent multi-modal distributions over X (given X~).
Acknowledgments
The authors would like to acknowledge input from A. Courville, I. Goodfellow, R. Memisevic, K. Cho as
well as funding from NSERC, CIFAR (YB is a CIFAR Fellow), and Canada Research Chairs.

¹ http://deeplearning.net/software/theano/
² [email protected]:yaoli/GSN.git
References

Alain, G. and Bengio, Y. (2013). What regularized auto-encoders learn from the data generating distribution. In International Conference on Learning Representations (ICLR'2013).

Bengio, Y. and Yao, L. (2013). Bounding the test log-likelihood of generative models. Technical report, U. Montreal, arXiv.

Bengio, Y., Larochelle, H., and Vincent, P. (2006a). Non-local manifold Parzen windows. In NIPS'05, pages 115-122. MIT Press.

Bengio, Y., Monperrus, M., and Larochelle, H. (2006b). Nonlocal estimation of manifold structure. Neural Computation, 18(10).

Bengio, Y., Thibodeau-Laufer, E., and Yosinski, J. (2013a). Deep generative stochastic networks trainable by backprop. Technical Report arXiv:1306.1091, Université de Montréal.

Bengio, Y., Courville, A., and Vincent, P. (2013b). Unsupervised feature learning and deep learning: A review and new perspectives. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI).

Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy).

Cho, K., Raiko, T., and Ilin, A. (2013). Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 25(3), 805-831.

Heckerman, D., Chickering, D. M., Meek, C., Rounthwaite, R., and Kadie, C. (2000). Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1, 49-75.

Hinton, G. E. (1999). Products of experts. In ICANN'1999.

Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554.

Hyvärinen, A. (2005). Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6, 695-709.

Kingma, D. and LeCun, Y. (2010). Regularized estimation of image statistics by score matching. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1126-1134.

Ranzato, M., Boureau, Y.-L., and LeCun, Y. (2008). Sparse feature learning for deep belief networks. In NIPS'07, pages 1185-1192, Cambridge, MA. MIT Press.

Rifai, S., Vincent, P., Muller, X., Glorot, X., and Bengio, Y. (2011). Contractive auto-encoders: Explicit invariance during feature extraction. In ICML'2011.

Rifai, S., Bengio, Y., Dauphin, Y., and Vincent, P. (2012). A generative process for sampling contractive auto-encoders. In ICML'2012.

Swersky, K., Ranzato, M., Buchman, D., Marlin, B., and de Freitas, N. (2011). On autoencoders and score matching for energy based models. In ICML'2011. ACM.

Vincent, P. (2011). A connection between score matching and denoising autoencoders. Neural Computation, 23(7).
Multi-Prediction Deep Boltzmann Machines
Ian J. Goodfellow, Mehdi Mirza, Aaron Courville, Yoshua Bengio
Département d'informatique et de recherche opérationnelle
Université de Montréal
Montréal, QC H3C 3J7
{goodfeli,mirzamom,courvila}@iro.umontreal.ca,
[email protected]
Abstract
We introduce the multi-prediction deep Boltzmann machine (MP-DBM). The MP-DBM can be seen as a single probabilistic model trained to maximize a variational
approximation to the generalized pseudolikelihood, or as a family of recurrent nets
that share parameters and approximately solve different inference problems. Prior
methods of training DBMs either do not perform well on classification tasks or
require an initial learning pass that trains the DBM greedily, one layer at a time.
The MP-DBM does not require greedy layerwise pretraining, and outperforms the
standard DBM at classification, classification with missing inputs, and mean field
prediction tasks.1
1 Introduction
A deep Boltzmann machine (DBM) [18] is a structured probabilistic model consisting of many
layers of random variables, most of which are latent. DBMs are well established as generative
models and as feature learning algorithms for classifiers.
Exact inference in a DBM is intractable. DBMs are usually used as feature learners, where the mean
field expectations of the hidden units are used as input features to a separate classifier, such as an
MLP or logistic regression. To some extent, this erodes the utility of the DBM as a probabilistic
model: it can generate good samples and provides good features for deterministic models, but it
has not proven especially useful for solving inference problems such as predicting class labels given
input features or completing missing input features.
Another drawback to the DBM is the complexity of training it. Typically it is trained in a greedy,
layerwise fashion, by training a stack of RBMs. Training each RBM to model samples from the
previous RBM's posterior distribution increases a variational lower bound on the likelihood of the
DBM, and serves as a good way to initialize the joint model. Training the DBM from a random
initialization generally does not work. It can be difficult for practitioners to tell whether a given
lower layer RBM is a good starting point to build a larger model.
We propose a new way of training deep Boltzmann machines called multi-prediction training (MPT).
MPT uses the mean field equations for the DBM to induce recurrent nets that are then trained to
solve different inference tasks. The resulting trained MP-DBM model can be viewed either as a
single probabilistic model trained with a variational criterion, or as a family of recurrent nets that
solve related inference tasks.
We find empirically that the MP-DBM does not require greedy layerwise training, so its performance
on the final task can be monitored from the start. This makes it more suitable than the DBM for
1 Code and hyperparameters available at http://www-etud.iro.umontreal.ca/~goodfeli/mp_dbm.html
practitioners who do not have extensive experience with layerwise pretraining techniques or Markov
chains. Anyone with experience minimizing non-convex functions should find MP-DBM training
familiar and straightforward. Moreover, we show that inference in the MP-DBM is useful: the MP-DBM does not need an extra classifier built on top of its learned features to obtain good inference
accuracy. We show that it outperforms the DBM at solving a variety of inference tasks including
classification, classification with missing inputs, and prediction of randomly selected subsets of
variables. Specifically, we use the MP-DBM to outperform the classification results reported for the
standard DBM by Salakhutdinov and Hinton [18] on both the MNIST handwritten character dataset
[14] and the NORB object recognition dataset [13].
2 Review of deep Boltzmann machines
Typically, a DBM contains a set of D input features v that are called the visible units because they
are always observed during both training and evaluation. When a class label is present the DBM
typically represents it with a discrete-valued label unit y. The unit y is observed (on examples for
which it is available) during training, but typically is not available at test time. The DBM also
contains several latent variables that are never observed. These hidden units are usually organized
into L layers h(i) of size Ni, i ∈ {1, . . . , L}, with each unit in a layer conditionally independent of
the other units in the layer given the neighboring layers.
The DBM is trained to maximize the mean field lower bound on log P (v, y). Unfortunately, training
the entire model simultaneously does not seem to be feasible. See [8] for an example of a DBM that
has failed to learn using the naive training algorithm. Salakhutdinov and Hinton [18] found that for
their joint training procedure to work, the DBM must first be initialized by training one layer at a
time. After each layer is trained as an RBM, the RBMs can be modified slightly, assembled into a
DBM, and the DBM may be trained with PCD [22, 21] and mean field. In order to achieve good
classification results, an MLP designed specifically to predict y from v must be trained on top of the
DBM model. Simply running mean field inference to predict y given v in the DBM model does not
work nearly as well. See figure 1 for a graphical description of the training procedure used by [18].
The standard approach to training a DBM requires training L + 2 different models using L + 2
different objective functions, and does not yield a single model that excels at answering all queries.
Our proposed approach requires training only one model with only one objective function, and the
resulting model outperforms previous approaches at answering many kinds of queries (classification,
classification with missing inputs, predicting arbitrary subsets of variables given the complementary
subset).
3 Motivation
There are numerous reasons to prefer a single-model, single-training stage approach to deep Boltzmann machine learning:
1. Optimization As a greedy optimization procedure, layerwise training may be suboptimal.
Small-scale experimental work has demonstrated this to be the case for deep belief networks [1].
In general, for layerwise training to be optimal, the training procedure for each layer must
take into account the influence that the deeper layers will provide. The layerwise initialization procedure simply does not attempt to be optimal.
The procedures used by Le Roux and Bengio [12], Arnold and Ollivier [1] make an optimistic assumption that the deeper layers will be able to implement the best possible prior
on the current layer's hidden units. This approach is not immediately applicable to Boltzmann machines because it is specified in terms of learning the parameters of P(h(i-1) | h(i)),
assuming that the parameters of P(h(i)) will be set optimally later. In a DBM the symmetrical nature of the interactions between units means that these two distributions share
parameters, so it is not possible to set the parameters of the one distribution, leave them
fixed for the remainder of learning, and then set the parameters of the other distribution.
Moreover, model architectures incorporating design features such as sparse connections,
pooling, or factored multilinear interactions make it difficult to predict how best to structure one layer's hidden units in order for the next layer to make good use of them.
2. Probabilistic modeling Using multiple models and having some models specialized for
exactly one task (like predicting y from v) loses some of the benefit of probabilistic modeling. If we have one model that excels at all tasks, we can use inference in this model to
answer arbitrary queries, perform classification with missing inputs, and so on. The standard DBM training procedure gives this up by training a rich probabilistic model and then
using it as just a feature extractor for an MLP.
3. Simplicity Needing to implement multiple models and training stages makes the cost of
developing software with DBMs greater, and makes using them more cumbersome. Beyond the software engineering considerations, it can be difficult to monitor training and tell
what kind of results during layerwise RBM pretraining will correspond to good DBM classification accuracy later. Our joint training procedure allows the user to monitor the model's
ability of interest (usually ability to classify y given v) from the very start of training.
4 Methods
We now describe the new methods proposed in this paper, and some pre-existing methods that we
compare against.
4.1 Multi-prediction Training
Our proposed approach is to directly train the DBM to be good at solving all possible variational
inference problems. We call this multi-prediction training because the procedure involves training
the model to predict any subset of variables given the complement of that subset of variables.
Let O be a vector containing all variables that are observed during training. For a purely unsupervised learning task, O is just v itself. In the supervised setting, O = [v, y]ᵀ. Note that y won't
be observed at test time, only training time. Let D be the training set, i.e. a collection of values
of O. Let S be a sequence of subsets of the possible indices of O. Let Qi be the variational (e.g.,
mean-field) approximation to the joint of OSi and h given O-Si:

Qi(OSi, h) = argmin_Q DKL( Q(OSi, h) ‖ P(OSi, h | O-Si) ).
In all of the experiments presented in this paper, Q is constrained to be factorial, though one could
design model families for which it makes sense to use richer structure in Q. Note that there is not
an explicit formula for Q; Q must be computed by an iterative optimization process. In order to
accomplish this minimization, we run the mean field fixed point equations to convergence. Because
each fixed point update uses the output of a previous fixed point update as input, this optimization
procedure can be viewed as a recurrent neural network. (To simplify implementation, we don't
explicitly test for convergence, but run the recurrent net for a pre-specified number of iterations that
is chosen to be high enough that the net usually converges.)
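As an illustration of these fixed-point updates, here is a minimal sketch for a DBM with two hidden layers of sigmoid units, with no damping. The parameter names W1, W2, b1, b2 and the 0.5 initialization of the second layer are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field(v, W1, W2, b1, b2, n_iters=10):
    """Mean field fixed-point updates for a two-hidden-layer DBM:
    each iteration updates h1 given (v, h2), then h2 given h1.
    Unrolling these updates yields the recurrent net described above."""
    assert n_iters >= 1
    h2 = np.full((v.shape[0], W2.shape[1]), 0.5)  # neutral initialization
    for _ in range(n_iters):
        h1 = sigmoid(v @ W1 + h2 @ W2.T + b1)     # update layer 1 marginals
        h2 = sigmoid(h1 @ W2 + b2)                # update layer 2 marginals
    return h1, h2
```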
We train the MP-DBM by using minibatch stochastic gradient descent on the multi-prediction (MP)
objective function
J(D, θ) = - Σ_{O ∈ D} Σ_i log Qi(OSi)
In other words, the criterion for a single example O is a sum of several terms, with term i measuring
the model's ability to predict (through a variational approximation) a subset of the variables in the
training set, OSi, given the remainder of the observed variables, O-Si.
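For a factorial Q over binary variables, the per-example criterion reduces to a masked cross-entropy over the prediction targets. A minimal sketch (the array layout and helper name are assumptions, not from the paper):

```python
import numpy as np

def mp_loss(targets, q_probs, mask):
    """Per-example MP criterion with factorial Q over binary variables:
    -sum over prediction targets (mask == 0) of log Q(target value).
    q_probs are the mean field marginals; mask == 1 marks inference inputs."""
    ll = targets * np.log(q_probs) + (1 - targets) * np.log(1 - q_probs)
    return -np.sum((1 - mask) * ll)
```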
During SGD training, we sample minibatches of values of O and Si . Sampling O just means
drawing an example from the training set. Sampling an Si uniformly simply requires sampling
one bit (1 with probability 0.5) for each variable, to determine whether that variable should be an
input to the inference procedure or a prediction target. To compute the gradient, we simply backprop
the error derivatives of J through the recurrent net defining Q.
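The subset sampling described above amounts to drawing one independent Bernoulli(0.5) bit per observed variable. A sketch (the helper name and use of NumPy's `default_rng` are assumptions):

```python
import numpy as np

def sample_masks(batch_size, n_vars, rng=None):
    """Sample S_i for a minibatch: each observed variable independently
    becomes an inference input (mask = 1) or a prediction target (mask = 0)
    with probability 0.5."""
    if rng is None:
        rng = np.random.default_rng(0)
    return (rng.random((batch_size, n_vars)) < 0.5).astype(np.float64)
```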
See Fig. 2 for a graphical description of this training procedure, and Fig. 3 for an example of the
inference procedure run on MNIST digits.
Figure 1: The training procedure used by Salakhutdinov and Hinton [18] on MNIST. a) Train an RBM to
maximize log P (v) using CD. b) Train another RBM
to maximize log P (h(1) , y) where h(1) is drawn from
the first RBM's posterior. c) Stitch the two RBMs into
one DBM. Train the DBM to maximize log P (v, y).
d) Delete y from the model (don't marginalize it out,
just remove the layer from the model). Make an MLP
with inputs v and the mean field expectations of h(1)
and h(2) . Fix the DBM parameters. Initialize the MLP
parameters based on the DBM parameters. Train the
MLP parameters to predict y.
Figure 3: Mean field inference applied to MNIST digits. Within each pair of rows, the upper row shows pixels and the lower row shows class labels. The first column shows a complete, labeled example. The second
column shows information to be masked out, using red
pixels to indicate information that is removed. The
subsequent columns show steps of mean field. The images show the pixels being filled back in by the mean
field inference, and the blue bars show the probability
of the correct class under the mean field posterior.
Figure 4: Multi-inference trick: When estimating y
given v, a mean field iteration consists of first applying
a mean field update to h(1) and y, then applying one to
h(2) . To use the multi-inference trick, start the iteration by computing r as the mean field update v would
receive if it were not observed. Then use 0.5(r + v)
in place of v and run a regular mean field iteration.
Figure 2: Multi-prediction training: This diagram
shows the neural nets instantiated to do multi-prediction training on one minibatch of data. The
three rows show three different examples. Black circles represent variables the net is allowed to observe.
Blue circles represent prediction targets. Green arrows
represent computational dependencies. Each column
shows a single mean field fixed point update. Each
mean field iteration consists of two fixed point updates. Here we show only one iteration to save space,
but in a real application MP training should be run
with 5-15 iterations.
Figure 5: Samples generated by alternately sampling
Si uniformly and sampling O-Si from Qi(O-Si).
This training procedure is similar to one introduced by Brakel et al. [6] for time-series models. The
primary difference is that we use log Q as the loss function, while Brakel et al. [6] apply hard-coded
loss functions such as mean squared error to the predictions of the missing values.
4.2 The Multi-Inference Trick
Mean field inference can be expensive due to needing to run the fixed point equations several times
in order to reach convergence. In order to reduce this computational expense, it is possible to train
using fewer mean field iterations than required to reach convergence. In this case, we are no longer
necessarily minimizing J as written, but rather doing partial training of a large number of fixed-iteration recurrent nets that solve related problems.
We can approximately take the geometric mean over all predicted distributions Q (for different
subsets Si ) and renormalize in order to combine the predictions of all of these recurrent nets. This
way, imperfections in the training procedure are averaged out, and we are able to solve inference
tasks even if the corresponding recurrent net was never sampled during MP training.
In order to approximate this average efficiently, we simply take the geometric mean at each step of
inference, instead of attempting to take the correct geometric mean of the entire inference process.
See Fig. 4 for a graphical depiction of the method. This is the same type of approximation used to
take the average over several MLP predictions when using dropout [10]. Here, the averaging rule
is slightly different. In dropout, the different MLPs we average over either include or exclude each
variable. To take the geometric mean over a unit h_j that receives input from v_i, we average together the contribution v_i W_ij from the model that contains v_i and the contribution 0 from the model that does not. The final contribution from v_i is 0.5 v_i W_ij, so the dropout model averaging rule is to run an MLP with the weights divided by 2.
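The halved-weights rule can be verified directly: for a logistic unit, the renormalized geometric mean of the two Bernoulli distributions (input included vs. excluded) is exactly the Bernoulli obtained by halving that input's weight. A small numerical check (our sketch in plain NumPy, not code from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def geometric_mean_bernoulli(p1, p2):
    """Renormalized geometric mean of two Bernoulli distributions."""
    on = np.sqrt(p1 * p2)                # geometric mean of P(h = 1)
    off = np.sqrt((1 - p1) * (1 - p2))   # geometric mean of P(h = 0)
    return on / (on + off)

rng = np.random.RandomState(0)
v, w, b = rng.randn(3)

p_included = sigmoid(v * w + b)      # model where v_i contributes v_i * W_ij
p_excluded = sigmoid(b)              # model where the contribution is 0
p_halved = sigmoid(0.5 * v * w + b)  # single MLP with the weight divided by 2

assert np.isclose(geometric_mean_bernoulli(p_included, p_excluded), p_halved)
```

The identity holds because the renormalized geometric mean of logistic distributions averages their pre-activations.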
For the multi-inference trick, each recurrent net we average over solves a different inference problem. In half of the problems, v_i is observed, and contributes v_i W_ij to h_j's total input. In the other half of the problems, v_i is inferred. In contrast to dropout, v_i is never completely absent. If we represent the mean field estimate of v_i with r_i, then in this case that unit contributes r_i W_ij to h_j's total input. To run multi-inference, we thus replace references to v with 0.5(v + r), where r is
updated at each mean field iteration. The main benefit to this approach is that it gives a good way to
incorporate information from many recurrent nets trained in slightly different ways. If the recurrent
net corresponding to the desired inference task is somewhat suboptimal due to not having been sampled enough during training, its defects can often be remedied by averaging its predictions with
those of other similar recurrent nets. The multi-inference trick can also be understood as including
an input denoising step built into the inference. In practice, multi-inference mostly seems to be beneficial if the network was trained without letting mean field run to convergence. When the model was
trained with converged mean field, each recurrent net is just solving an optimization problem in a
graphical model, and it doesn't matter whether every recurrent net has been individually trained. The
multi-inference trick is mostly useful as a cheap alternative when getting the absolute best possible
test set accuracy is not as important as fast training and evaluation.
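A minimal sketch of a multi-inference mean field loop for a hypothetical single-layer (RBM-style) model; the parameter names W, b, c and the initialization are our own assumptions, not the paper's code. References to v are replaced by 0.5(v + r), and r is refreshed at each fixed point iteration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_inference(v, observed, W, b, c, n_iters=10):
    """Mean field over hidden units of an RBM-like layer with the
    multi-inference trick: inputs to h use 0.5 * (v + r), where r is the
    running mean field estimate of v, clamped to v where v is observed."""
    r = np.where(observed, v, 0.5)        # initial mean field estimate of v
    vin = 0.5 * (np.where(observed, v, r) + r)
    h = sigmoid(vin @ W + b)
    for _ in range(n_iters):
        r_new = sigmoid(W @ h + c)        # fixed point update for v
        r = np.where(observed, v, r_new)  # clamp the observed units
        vin = 0.5 * (np.where(observed, v, r) + r)  # multi-inference input
        h = sigmoid(vin @ W + b)          # fixed point update for h
    return r, h
```

For an observed unit this feeds 0.5(v_i + r_i) forward, while a missing unit contributes its estimate r_i unchanged.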
4.3 Justification and advantages
In the case where we run the recurrent net for predicting Q to convergence, the multi-prediction
training algorithm follows the gradient of the objective function J. This can be viewed as a mean
field approximation to the generalized pseudolikelihood.
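For reference, the objective J that MP training minimizes can be written out from the description above (our notation, assembled from the text, not a quotation of the paper):

```latex
J(\theta) \;=\; \mathbb{E}_{v \sim p_{\text{data}}}\,
\mathbb{E}_{S}\!\left[\, -\log Q_S\!\left(O_S \mid O_{-S}\right) \right]
```

where $S$ is a uniformly sampled subset of variables, $O_S$ are the prediction targets, $O_{-S}$ the observed variables, and $Q_S$ the mean field approximation to the model's conditional obtained by running the fixed point updates. Training samples $(v, S)$ pairs and backpropagates $-\log Q_S$ through the unrolled fixed point iterations.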
While both pseudolikelihood and likelihood are asymptotically consistent estimators, their behavior
in the limited data case is different. Maximum likelihood should be better if the overall goal is
to draw realistic samples from the model, but generalized pseudolikelihood can often be better for
training a model to answer queries conditioning on sets similar to the Si used during training.
Note that our variational approximation is not quite the same as the way variational approximations
are usually applied. We use variational inference to ensure that the distributions we shape using
backprop are as close as possible to the true conditionals. This is different from the usual approach
to variational learning, where Q is used to define a lower bound on the log likelihood and variational
inference is used to make the bound as tight as possible.
In the case where the recurrent net is not trained to convergence, there is an alternate way to justify
MP training. Rather than doing variational learning on a single probabilistic model, the MP procedure trains a family of recurrent nets to solve related prediction problems by running for some
fixed number of iterations. Each recurrent net is trained only on a subset of the data (and most recurrent nets are never trained at all, but only work because they share parameters with the others).
In this case, the multi-inference trick allows us to justify MP training as approximately training an
ensemble of recurrent nets using bagging.
Stoyanov et al. [20] have observed that a training strategy similar to MPT (but lacking the multi-inference trick) is useful because it trains the model to work well with the inference approximations
it will be evaluated with at test time. We find these properties to be useful as well. The choice of this
type of variational learning combined with the underlying generalized pseudolikelihood objective
makes an MP-DBM very well suited for solving approximate inference problems but not very well
suited for sampling.
Our primary design consideration when developing multi-prediction training was ensuring that the
learning rule was state-free. PCD training uses persistent Markov chains to estimate the gradient.
These Markov chains are used to approximately sample from the model, and only sample from
approximately the right distribution if the model parameters evolve slowly. The MP training rule
does not make any reference to earlier training steps, and can be computed with no burn in. This
means that the accuracy of the MP gradient is not dependent on properties of the training algorithm
such as the learning rate which can easily break PCD for many choices of the hyperparameters.
Another benefit of MP is that it is easy to obtain an unbiased estimate of the MP objective from
a small number of samples of v and i. This is in contrast to the log likelihood, which requires
estimating the log partition function. The best known method for doing so is AIS, which is relatively
expensive [16]. Cheap estimates of the objective function enable early stopping based on the MP objective (though we generally use early stopping based on classification accuracy) and optimization
based on line searches (though we do not explore that possibility in this paper).
4.4 Regularization
In order to obtain good generalization performance, Salakhutdinov and Hinton [18] regularized both
the weights and the activations of the network.
Salakhutdinov and Hinton [18] regularize the weights using an L2 penalty. We find that for joint
training, it is critically important to not do this (on the MNIST dataset, we were not able to find any
MP-DBM hyperparameter configuration involving weight decay that performs as well as layerwise
DBMs, but without weight decay MP-DBMs outperform DBMs). When the second layer weights
are not trained well enough for them to be useful for modeling the data, the weight decay term will
drive them to become very small, and they will never have an opportunity to recover. It is much
better to use constraints on the norms of the columns of the weight vectors as done by Srebro and
Shraibman [19].
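A minimal sketch of such a column norm constraint (our illustration, not code from either paper): after each gradient step, any column of W whose L2 norm exceeds the limit is projected back onto the norm ball.

```python
import numpy as np

def constrain_column_norms(W, max_norm):
    """Project each column of W onto the L2 ball of radius max_norm."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    return W * scale

# typical use after an update step (lr and grad are hypothetical):
# W = constrain_column_norms(W - lr * grad, max_norm=3.0)
```

Unlike weight decay, this leaves small columns untouched, so weakly trained features are not driven toward zero.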
Salakhutdinov and Hinton [18] regularize the activities of the hidden units with a somewhat complicated sparsity penalty. See http://www.mit.edu/~rsalakhu/DBM.html for details. We use $\max(|\mathbb{E}_{h \sim Q(h)}[h] - t| - \epsilon, 0)$ and backpropagate this through the entire inference graph. $t$ and $\epsilon$ are hyperparameters.
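Read literally, this penalty is zero until a unit's expected activation drifts more than the tolerance away from the target, and grows linearly after that. A sketch (ours; the batch mean of activations stands in for the expectation under Q):

```python
import numpy as np

def sparsity_penalty(mean_activations, t, eps):
    """max(|E[h] - t| - eps, 0), summed over hidden units.

    mean_activations: per-unit mean activation under Q (here, a batch mean)
    t: target activation level; eps: dead zone around the target
    """
    return np.maximum(np.abs(mean_activations - t) - eps, 0.0).sum()
```

Units already near the target contribute nothing, so the gradient only pushes on units that are clearly too active or too quiet.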
4.5 Related work: centering
Montavon and Müller [15] showed that an alternative, "centered" representation of the DBM results
in successful generative training without a greedy layerwise pretraining step. However, centered
DBMs have never been shown to have good classification performance. We therefore evaluate the
classification performance of centering in this work. We consider two methods of variational PCD
training. In one, we use Rao-Blackwellization [5, 11, 17] of the negative phase particles to reduce
the variance of the negative phase. In the other variant ("centering+"), we use a special negative phase that Salakhutdinov and Hinton [18] found useful. This negative phase uses a small amount of mean field, which reduces the variance further but introduces some bias, and has better symmetry with the positive phase. See http://www.mit.edu/~rsalakhu/DBM.html for details.
[Figure 6: three plots. (a) "Variation across hyperparameters": validation and test set misclassification rates (log-scale vertical axis) for Centering, Centering+, and Multi-Prediction. (b) "MNIST classification with missing inputs": test accuracy versus the probability of dropping each input unit, comparing the standard DBM (with and without a fine-tuned stage), the centered DBM, the MP-DBM, and an MP-DBM with 2X hidden units. (c) "Ability to answer general queries": average test set log Q(v_i) for i ∈ S versus the probability of including a unit in S, comparing the MP-DBM, the standard DBM, and the Centering+ DBM.]
Figure 6: Quantitative results on MNIST: (a) During cross-validation, MP training performs well for most
hyperparameters, while both centering and centering with the special negative phase do not perform as well
and only perform well for a few hyperparameter values. Note that the vertical axis is on a log scale. (b) Generic
inference tasks: When classifying with missing inputs, the MP-DBM outperforms the other DBMs for most
amounts of missing inputs. (c) When using approximate inference to resolve general queries, the standard
DBM, centered DBM, and MP-DBM all perform about the same when asked to predict a small number of
variables. For larger queries, the MP-DBM performs the best.
4.6 Sampling, and a connection to GSNs
The focus of this paper is solving inference problems, not generating samples, so we do not investigate the sampling properties of MP-DBMs extensively. However, it is interesting to note that
an MP-DBM can be viewed as a collection of dependency networks [9] with shared parameters.
Dependency networks are a special case of generative stochastic networks or GSNs (Bengio et al.
[3], section 3.4). This means that the MP-DBM is associated with a distribution arising out of the
Markov chain in which at each step one samples an Si uniformly and then samples O from Qi (O).
Example samples are shown in figure 5. Furthermore, it means that if MPT is a consistent estimator
of the conditional distributions, then MPT is a consistent estimator of the probability distribution defined by the stationary distribution of this Markov chain. Samples drawn by Gibbs sampling in the
DBM model do not look as good (probably because the variational approximation is too damaging).
This suggests that the perspective of the MP-DBM as a GSN merits further investigation.
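The chain behind the samples in Fig. 5 can be sketched as follows (a toy illustration: `toy_conditional` is a stand-in sampler, not the model's mean field distribution Q_i):

```python
import numpy as np

def gsn_chain(sample_conditional, n_vars, n_steps, rng):
    """Markov chain: at each step, pick a random subset of variables and
    resample them from the model's conditional given the rest."""
    state = rng.randint(0, 2, size=n_vars).astype(float)
    for _ in range(n_steps):
        S = rng.rand(n_vars) < 0.5              # uniformly random subset
        state[S] = sample_conditional(state, S, rng)
        yield state.copy()

def toy_conditional(state, S, rng):
    """Stand-in conditional: each resampled unit is Bernoulli with
    probability equal to the mean of the untouched units."""
    p = state[~S].mean() if (~S).any() else 0.5
    return (rng.rand(S.sum()) < p).astype(float)
```

In the real model, the resampling step would draw from the mean field distribution produced by the corresponding recurrent net; consistency of those conditionals then transfers to the chain's stationary distribution.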
5 Experiments
5.1 MNIST experiments
In order to compare MP training and centering to standard DBM performance, we cross-validated
each of the new methods by running 25 training experiments for each of three conditions: centered
DBMs, centered DBMs with the special negative phase ("Centering+"), and MP training.
All three conditions visited exactly the same set of 25 hyperparameter values for the momentum
schedule, sparsity regularization hyperparameters, weight and bias initialization hyperparameters,
weight norm constraint values, and number of mean field iterations. The centered DBMs also required one additional hyperparameter, the number of Gibbs steps to run for variational PCD. We
used different values of the learning rate for the different conditions, because the different conditions require different ranges of learning rate to perform well. We use the same size of model,
minibatch and negative chain collection as Salakhutdinov and Hinton [18], with 500 hidden units
in the first layer, 1,000 hidden units in the second, 100 examples per minibatch, and 100 negative
chains. The energy function for this model is
$E(v, h, y) = -v^\top W^{(1)} h^{(1)} - h^{(1)\top} W^{(2)} h^{(2)} - h^{(2)\top} W^{(3)} y - v^\top b^{(0)} - h^{(1)\top} b^{(1)} - h^{(2)\top} b^{(2)} - y^\top b^{(3)}.$
See Fig. 6a for the results of cross-validation. On the validation set, MP training consistently
performs better and is much less sensitive to hyperparameters than the other methods. This is likely
because the state-free nature of the learning rule makes it perform better with settings of the learning
rate and momentum schedule that result in the model distribution changing too fast for a method
based on Markov chains to keep up.
When we add an MLP classifier (as shown in Fig. 1d), the best "Centering+" DBM obtains a classification error of 1.22% on the test set. The best MP-DBM obtains a classification error of 0.88%.
This compares to 0.95% obtained by Salakhutdinov and Hinton [18].
If instead of adding an MLP to the model, we simply train a larger MP-DBM with twice as many
hidden units in each layer, and apply the multi-inference trick, we obtain a classification error rate
of 0.91%. In other words, we are able to classify nearly as well using a single large DBM and a
generic inference procedure, rather than using a DBM followed by an entirely separate MLP model
specialized for classification.
The original DBM was motivated primarily as a generative model with a high AIS score and as
a means of initializing a classifier. Here we explore some more uses of the DBM as a generative
model. Fig. 6b shows an evaluation of various DBMs' ability to classify with missing inputs. Fig. 6c
shows an evaluation of their ability to resolve queries about random subsets of variables. In both
cases we find that the MP-DBM performs the best for most amounts of missing inputs.
5.2 NORB experiments
NORB consists of 96×96 binocular greyscale images of objects from five different categories, under
a variety of pose and lighting conditions. Salakhutdinov and Hinton [18] preprocessed the images
by resampling them with bigger pixels near the border of the image, yielding an input vector of size
8,976. We used this preprocessing as well. Salakhutdinov and Hinton [18] then trained an RBM
with 4,000 binary hidden units and Gaussian visible units to preprocess the data into an all-binary
representation, and trained a DBM with two hidden layers of 4,000 units each on this representation.
Since the goal of this work is to provide a single unified model and training algorithm, we do not
train a separate Gaussian RBM. Instead we train a single MP-DBM with Gaussian visible units and
three hidden layers of 4,000 units each. The energy function for this model is
$E(v, h, y) = -(v - \mu)^\top \beta W^{(1)} h^{(1)} - h^{(1)\top} W^{(2)} h^{(2)} - h^{(2)\top} W^{(3)} h^{(3)} - h^{(3)\top} W^{(4)} y + \frac{1}{2}(v - \mu)^\top \beta (v - \mu) - h^{(1)\top} b^{(1)} - h^{(2)\top} b^{(2)} - h^{(3)\top} b^{(3)} - y^\top b^{(4)},$
where $\mu$ is a learned vector of visible unit means and $\beta$ is a learned diagonal precision matrix.
By adding an MLP on top of the MP-DBM, following the same architecture as Salakhutdinov and
Hinton [18], we were able to obtain a test set error of 10.6%. This is a slight improvement over the
standard DBM's 10.8%.
On MNIST we were able to outperform the DBM without using the MLP classifier because we were
able to train a larger MP-DBM. On NORB, the model size used by Salakhutdinov and Hinton [18] is
already as large as we are able to fit on most of our graphics cards, so we were not able to do the same
for this dataset. It is possible to do better on NORB using convolution or synthetic transformations
of the training data. We did not evaluate the effect of these techniques on the MP-DBM because
our present goal is not to obtain state-of-the-art object recognition performance but only to verify
that our joint training procedure works as well as the layerwise training procedure for DBMs. There
is no public demo code available for the standard DBM on this dataset, and we were not able to
reproduce the standard DBM results (layerwise DBM training requires significant experience and
intuition). We therefore can't compare the MP-DBM to the original DBM in terms of answering
general queries or classification with missing inputs on this dataset.
6 Conclusion
This paper has demonstrated that MP training and the multi-inference trick provide a means of
training a single model, with a single stage of training, that matches the performance of standard
DBMs but still works as a general probabilistic model, capable of handling missing inputs and
answering general queries. We have verified that MP training outperforms the standard training
procedure at classification on the MNIST and NORB datasets where the original DBM was first
applied. We have shown that MP training works well with binary, Gaussian, and softmax units,
as well as architectures with either two or three hidden layers. In future work, we hope to apply
the MP-DBM to more practical applications, and explore techniques, such as dropout, that could
improve its performance further.
Acknowledgments
We would like to thank the developers of Theano [4, 2] and Pylearn2 [7]. We would also like to thank NSERC, Compute Canada, and Calcul Québec for providing computational resources.
References
[1] Arnold, L. and Ollivier, Y. (2012). Layer-wise learning of deep generative models. Technical report,
arXiv:1212.1524.
[2] Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and
Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsupervised
Feature Learning NIPS 2012 Workshop.
[3] Bengio, Y., Thibodeau-Laufer, E., and Yosinski, J. (2013). Deep generative stochastic networks trainable
by backprop. Technical Report arXiv:1306.1091, Université de Montréal.
[4] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley,
D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceedings of the
Python for Scientific Computing Conference (SciPy). Oral Presentation.
[5] Blackwell, D. (1947). Conditional Expectation and Unbiased Sequential Estimation. Ann.Math.Statist.,
18, 105–110.
[6] Brakel, P., Stroobandt, D., and Schrauwen, B. (2013). Training energy-based models for time-series imputation. Journal of Machine Learning Research, 14, 2771–2797.
[7] Goodfellow, I. J., Warde-Farley, D., Lamblin, P., Dumoulin, V., Mirza, M., Pascanu, R., Bergstra, J.,
Bastien, F., and Bengio, Y. (2013a). Pylearn2: a machine learning research library. arXiv preprint
arXiv:1308.4214.
[8] Goodfellow, I. J., Courville, A., and Bengio, Y. (2013b). Scaling up spike-and-slab models for unsupervised
feature learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1902–1914.
[9] Heckerman, D., Chickering, D. M., Meek, C., Rounthwaite, R., and Kadie, C. (2000). Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research,
1, 49–75.
[10] Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinv, R. (2012). Improving neural
networks by preventing co-adaptation of feature detectors. Technical report, arXiv:1207.0580.
[11] Kolmogorov, A. (1953). Unbiased Estimates. American Mathematical Society Translations. American
Mathematical Society.
[12] Le Roux, N. and Bengio, Y. (2008). Representational power of restricted Boltzmann machines and deep
belief networks. Neural Computation, 20(6), 1631–1649.
[13] LeCun, Y., Huang, F.-J., and Bottou, L. (2004). Learning methods for generic object recognition with invariance to pose and lighting. In CVPR'2004, pages 97–104.
[14] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 86(11), 2278–2324.
[15] Montavon, G. and Müller, K.-R. (2012). Learning feature hierarchies with centered deep Boltzmann
machines. CoRR, abs/1203.4416.
[16] Neal, R. M. (2001). Annealed importance sampling. Statistics and Computing, 11(2), 125–139.
[17] Rao, C. R. (1973). Linear Statistical Inference and its Applications. J. Wiley and Sons, New York, 2nd
edition.
[18] Salakhutdinov, R. and Hinton, G. (2009). Deep Boltzmann machines. In Proceedings of the Twelfth
International Conference on Artificial Intelligence and Statistics (AISTATS 2009), volume 8.
[19] Srebro, N. and Shraibman, A. (2005). Rank, trace-norm and max-norm. In Proceedings of the 18th
Annual Conference on Learning Theory, pages 545–560. Springer-Verlag.
[20] Stoyanov, V., Ropson, A., and Eisner, J. (2011). Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. In AISTATS'2011.
[21] Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood
gradient. In ICML'2008, pages 1064–1071.
[22] Younes, L. (1999). On the convergence of Markovian stochastic algorithms with rapidly decreasing
ergodicity rates. Stochastics and Stochastic Reports, 65(3), 177–228.
Misha Denil1 Babak Shakibi2 Laurent Dinh3
Marc?Aurelio Ranzato4 Nando de Freitas1,2
1
University of Oxford, United Kingdom
2
University of British Columbia, Canada
3
Université de Montréal, Canada
4
Facebook Inc., USA
{misha.denil,nando.de.freitas}@cs.ox.ac.uk
[email protected]
[email protected]
Abstract
We demonstrate that there is significant redundancy in the parameterization of
several deep learning models. Given only a few weight values for each feature it
is possible to accurately predict the remaining values. Moreover, we show that not
only can the parameter values be predicted, but many of them need not be learned
at all. We train several different architectures by learning only a small number of
weights and predicting the rest. In the best case we are able to predict more than
95% of the weights of a network without any drop in accuracy.
1 Introduction
Recent work on scaling deep networks has led to the construction of the largest artificial neural
networks to date. It is now possible to train networks with tens of millions [13] or even over a
billion parameters [7, 16].
The largest networks (i.e. those of Dean et al. [7]) are trained using asynchronous SGD. In this
framework many copies of the model parameters are distributed over many machines and updated
independently. An additional synchronization mechanism coordinates between the machines to ensure that different copies of the same set of parameters do not drift far from each other.
A major drawback of this technique is that training is very inefficient in how it makes use of parallel
resources [1]. In the largest networks of Dean et al. [7], where the gains from distribution are largest,
distributing the model over 81 machines reduces the training time per mini-batch by a factor of 12,
and increasing to 128 machines achieves a speedup factor of roughly 14. While these speedups are
very significant, there is a clear trend of diminishing returns as the overhead of coordinating between
the machines grows. Other approaches to distributed learning of neural networks involve training in
batch mode [8], but these methods have not been scaled nearly as far as their online counterparts.
It seems clear that distributed architectures will always be required for extremely large networks;
however, as efficiency decreases with greater distribution, it also makes sense to study techniques
for learning larger networks on a single machine. If we can reduce the number of parameters which
must be learned and communicated over the network of fixed size, then we can reduce the number
of machines required to train it, and hence also reduce the overhead of coordination in a distributed
framework.
In this work we study techniques for reducing the number of free parameters in neural networks
by exploiting the fact that the weights in learned networks tend to be structured. The technique we
present is extremely general, and can be applied to a broad range of models. Our technique is also
completely orthogonal to the choice of activation function as well as other learning optimizations; it
can work alongside other recent advances in neural network training such as dropout [12], rectified
units [20] and maxout [9] without modification.
Figure 1: The first column in each block shows four learned features (parameters of a deep model). The second column shows a few parameters chosen at random from the original set in the first column. The third column shows that this random set can be used to predict the remaining parameters. From left to right the blocks are: (1) a convnet trained on STL-10, (2) an MLP trained on MNIST, (3) a convnet trained on CIFAR-10, (4) Reconstruction ICA trained on Hyvärinen's natural image dataset, (5) Reconstruction ICA trained on STL-10.
The intuition motivating the techniques in this paper is the well known observation that the first layer
features of a neural network trained on natural image patches tend to be globally smooth with local
edge features, similar to local Gabor features [6, 13]. Given this structure, representing the value
of each pixel in the feature separately is redundant, since it is highly likely that the value of a pixel
will be equal to a weighted average of its neighbours. Taking advantage of this type of structure
means we do not need to store weights for every input in each feature. This intuition is illustrated in
Figures 1 and 2.
The remainder of this paper is dedicated to elaborating on this observation. We describe a general
purpose technique for reducing the number of free parameters in neural networks. The core of the
technique is based on representing the weight matrix as a low rank product of two smaller matrices.
By factoring the weight matrix we are able to directly control the size of the parameterization by
controlling the rank of the weight matrix.
Naïve application of this technique is straightforward but tends to reduce performance of the networks. We show that by carefully constructing one of the factors, while learning only the other factor, we can train networks with vastly fewer parameters which achieve the same performance as full networks with the same structure.
The key to constructing a good first factor is exploiting smoothness in the structure of the inputs. When we have prior knowledge of the smoothness structure we expect to see (e.g. in natural images), we can impose this structure directly through the choice of factor. When no such prior knowledge is available we show that it is still possible to make a good data driven choice.
We demonstrate experimentally that our parameter prediction technique is extremely effective. In the best cases we are able to predict more than 95% of the parameters of a network without any drop in predictive accuracy.
Figure 2: RICA with different amounts of parameter prediction. In the leftmost column 100% of the parameters are learned with L-BFGS. In the rightmost column, only 10% of the parameters are learned, while the remaining values are predicted at each iteration. The intermediate columns interpolate between these extremes in increments of 10%.
Throughout this paper we make a distinction between dynamic and static parameters. Dynamic parameters are updated frequently during learning, potentially after each observation or mini-batch. This is in contrast to static parameters, whose values are
computed once and not altered. Although the values of these parameters may depend on the data and
may be expensive to compute, the computation need only be done once during the entire learning
process.
The reason for this distinction is that static parameters are much easier to handle in a distributed
system, even if their values must be shared between machines. Since the values of static parameters do not change, access to them does not need to be synchronized. Copies of these parameters
can be safely distributed across machines without any of the synchronization overhead incurred by
distributing dynamic parameters.
2 Low rank weight matrices
Deep networks are composed of several layers of transformations of the form h = g(vW), where v is an nv-dimensional input, h is an nh-dimensional output, and W is an nv × nh matrix of parameters. A column of W contains the weights connecting each unit in the visible layer to a single unit in the hidden layer. We can reduce the number of free parameters by representing W as the product of two matrices W = UV, where U has size nv × nα and V has size nα × nh. By making nα much smaller than nv and nh we achieve a substantial reduction in the number of parameters.
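As a concrete illustration of the parameter saving, the factorization above can be sketched in a few lines of numpy. The 784 and 500 sizes match the MLP experiments later in the paper, but the choice nα = 50 is arbitrary, for illustration only:

```python
import numpy as np

# Illustrative sizes: a 784 -> 500 layer, factored with n_alpha = 50.
nv, nh, na = 784, 500, 50

rng = np.random.default_rng(0)
U = rng.standard_normal((nv, na))    # nv x n_alpha factor
V = rng.standard_normal((na, nh))    # n_alpha x nh factor

W = U @ V                            # implied nv x nh weight matrix, rank <= na

full_params = nv * nh                # parameters of an unfactored layer
factored_params = nv * na + na * nh  # parameters of the factored layer

v = rng.standard_normal(nv)          # a random input vector
h = np.tanh(v @ W)                   # h = g(vW), here with g = tanh
```

With these sizes the factored layer stores 64,200 numbers instead of 392,000, roughly a 6x reduction, at the cost of constraining W to rank at most 50.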
In principle, learning the factored weight matrices is straightforward. We simply replace W with UV in the objective function and compute derivatives with respect to U and V instead of W. In practice this naïve approach does not perform as well as learning a full rank weight matrix directly. Moreover, the factored representation has redundancy. If Q is any invertible matrix of size nα × nα we have W = UV = (UQ)(Q⁻¹V) = ŨṼ. One way to remove this redundancy is to fix the value of U and learn only V. The question remains: what is a reasonable choice for U? The following section provides an answer to this question.
3 Feature prediction
We can exploit the structure in the features of a deep network to represent the features in a much lower dimensional space. To do this we consider the weights connected to a single hidden unit as a function w : W → ℝ mapping weight space to real numbers, and estimate values of this function using regression. In the case of p × p image patches, W is the coordinates of each pixel, but other structures for W are possible.
A simple regression model which is appropriate here is a linear combination of basis functions. In
this view the columns of U form a dictionary of basis functions, and the features of the network
are linear combinations of these features parameterized by V. The problem thus becomes one of
choosing a good base dictionary for representing network features.
3.1 Choice of dictionary
The base dictionary for feature prediction can be constructed in several ways. An obvious choice is to train a single layer unsupervised model and use the features from that model as a dictionary. This approach has the advantage of being extremely flexible (no assumptions about the structure of feature space are required) but has the drawback of requiring an additional training phase.
When we have prior knowledge about the structure of feature space we can exploit it to construct an
appropriate dictionary. For example when learning features for images we could choose U to be a
selection of Fourier or wavelet bases to encode our expectation of smoothness.
We can also build U using kernels that encode prior knowledge. One way to achieve this is via kernel ridge regression [25]. Let wΩ denote the observed values of the weight vector w on a restricted subset of its domain Ω ⊂ W. We introduce a kernel matrix KΩ, with entries (KΩ)ij = k(i, j), to model the covariance between locations i, j ∈ Ω. The parameters at these locations are (wΩ)i and (wΩ)j. The kernel enables us to make smooth predictions of the parameter vector over the entire domain W using the standard kernel ridge predictor:

w = kΩᵀ (KΩ + λI)⁻¹ wΩ,

where kΩ is a matrix whose elements are given by (kΩ)ij = k(i, j) for i ∈ Ω and j ∈ W, and λ is a ridge regularization coefficient. In this case we have U = kΩᵀ (KΩ + λI)⁻¹ and V = wΩ.
3.2 A concrete example
In this section we describe the feature prediction process as it applies to features derived from image
patches using kernel ridge regression, since the intuition is strongest in this case. We defer a discussion of how to select a kernel for deep layers as well as for non-image data in the visible layer to a
later section. In those settings the prediction process is formally identical, but the intuition is less
clear.
If v is a vectorized image patch corresponding to the visible layer of a standard neural network
then the hidden activity induced by this patch is given by h = g(vW), where g is the network
nonlinearity and W = [w1 , . . . , wnh ] is a weight matrix whose columns each correspond to features
which are to be matched to the visible layer.
We consider a single column of the weight matrix, w, whose elements are indexed by i ∈ W. In the case of an image patch these indices are multidimensional, i = (ix, iy, ic), indicating the spatial location and colour channel of the index i. We select locations Ω ⊂ W at which to represent the filter explicitly and use wΩ to denote the vector of weights at these locations.
There are a wide variety of options for how Ω can be selected. We have found that choosing Ω uniformly at random from W (but tied across channels) works well; however, it is possible that performance could be improved by carefully designing a process for selecting Ω.
We can use the values wΩ to predict the full feature as w = kΩᵀ (KΩ + λI)⁻¹ wΩ. Notice that we can predict the entire feature matrix in parallel using W = kΩᵀ (KΩ + λI)⁻¹ WΩ, where WΩ = [(w1)Ω, . . . , (wnh)Ω].
For image patches, where we expect smoothness in pixel space, an appropriate kernel is the squared exponential kernel

k(i, j) = exp(−((ix − jx)² + (iy − jy)²) / (2σ²)),

where σ is a length scale parameter which controls the degree of smoothness.
Here Ω has a convenient interpretation as the set of pixel locations in the image, each corresponding to a basis function in the dictionary defined by the kernel. More generically we will use Ω to index a collection of dictionary elements in the remainder of the paper, even when a dictionary element may not correspond directly to a pixel location as in this example.
3.3 Interpretation as pooling
So far we have motivated our technique as a method for predicting features in a neural network;
however, the same approach can also be interpreted as a linear pooling process.
Recall that the hidden activations in a standard neural network before applying the nonlinearity are given by g⁻¹(h) = vW. Our motivation has proceeded along the lines of replacing W with UΩWΩ and discussing the relationship between W and its predicted counterpart.
Alternatively we can write g⁻¹(h) = vΩWΩ where vΩ = vUΩ is a linear transformation of the data. Under this interpretation we can think of a predicted layer as being composed of two layers internally. The first is a linear layer which applies a fixed pooling operator given by UΩ, and the second is an ordinary fully connected layer with |Ω| visible units.
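This equivalence is just associativity of matrix multiplication, v(UΩWΩ) = (vUΩ)WΩ, which a short numpy check confirms (all sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
nv, n_alpha, nh = 64, 16, 32          # illustrative layer sizes

v = rng.standard_normal((1, nv))      # a single input row vector
U = rng.standard_normal((nv, n_alpha))       # fixed pooling operator U_Omega
W_omega = rng.standard_normal((n_alpha, nh)) # learned weights at Omega

pre_predict = v @ (U @ W_omega)  # g^{-1}(h) = v (U_Omega W_Omega): predict the features first
pre_pool = (v @ U) @ W_omega     # g^{-1}(h) = (v U_Omega) W_Omega: pool the input first
```

The two orderings give the same pre-activations, so a predicted layer can be run either as a full layer with predicted weights or as a fixed linear pooling followed by a small fully connected layer.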
3.4 Columnar architecture
The prediction process we have described so far assumes that UΩ is the same for all features; however, this can be too restrictive. Continuing with the intuition that filters should be smooth local edge detectors, we might want to choose Ω to give high resolution in a local area of pixel space while using a sparser representation in the remainder of the space. Naturally, in this case we would want to choose several different Ω's, each of which concentrates high resolution information in different regions.
It is straightforward to extend feature prediction to this setting. Suppose we have several different index sets Ω1, . . . , ΩJ corresponding to elements from a dictionary U. For each Ωj we can form the sub-dictionary UΩj and predict the feature matrix Wj = UΩj WΩj. The full predicted feature matrix is formed by concatenating each of these matrices blockwise: W = [W1, . . . , WJ]. Each block of the full predicted feature matrix can be treated completely independently. Blocks Wi and Wj share no parameters; even their corresponding dictionaries are different.
Each Ωj can be thought of as defining a column of representation inside the layer. The input to each column is shared, but the representations computed in each column are independent. The output of the layer is obtained by concatenating the output of each column. This is represented graphically in Figure 3.
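The blockwise construction can be sketched as follows (the column count, dictionary sizes, and random dictionaries are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh, J = 64, 30, 3        # hidden units divided evenly across J columns
per_col = nh // J            # each column owns 10 hidden units

blocks = []
for j in range(J):
    U_j = rng.standard_normal((nv, 8))              # sub-dictionary U_{Omega_j}
    W_omega_j = rng.standard_normal((8, per_col))   # dynamic parameters of column j
    blocks.append(U_j @ W_omega_j)                  # predicted block W_j

W = np.concatenate(blocks, axis=1)                  # W = [W_1, ..., W_J]
```

Each block is computed independently from its own dictionary and its own small set of dynamic parameters, so the columns could run on separate devices and only concatenate their outputs.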
Figure 3: Left: Columnar architecture in a fully connected network, with the path through one column highlighted. Each column corresponds to a different Ωj. Right: Columnar architecture in a convolutional network. In this setting the wΩ's take linear combinations of the feature maps obtained by convolving the input with the dictionary. We make the same abuse of notation here as in the main text; the vectorized filter banks must be reshaped before the convolution takes place.
Introducing additional columns into the network increases the number of static parameters but the
number of dynamic parameters remains fixed. The increase in static parameters comes from the fact
that each column has its own dictionary. The reason that there is not a corresponding increase in
the number of dynamic parameters is that for a fixed size hidden layer the hidden units are divided
between the columns. The number of dynamic parameters depends only on the number of hidden
units and the size of each dictionary.
In a convolutional network the interpretation is similar. In this setting we have g⁻¹(h) = v ∗ W̄, where W̄ is an appropriately sized filter bank. Using W to denote the result of vectorizing the filters of W̄ (as is done in non-convolutional models) we can again write W = UΩwΩ, and using a slight abuse of notation¹ we can write g⁻¹(h) = v ∗ UΩwΩ. As above, we re-order the operations to obtain g⁻¹(h) = vΩwΩ, resulting in a structure similar to a layer in an ordinary MLP. This structure is illustrated in Figure 3.
Note that v is first convolved with UΩ to produce vΩ. That is, preprocessing in each column comes from a convolution with a fixed set of filters, defined by the dictionary. Next, we form linear combinations of these fixed convolutions, with coefficients given by wΩ. This particular order of operations may result in computational improvements if the number of hidden channels is larger than nα, or if the elements of UΩ are separable [22].
3.5 Constructing dictionaries
We now turn our attention to selecting an appropriate dictionary for different layers of the network.
The appropriate choice of dictionary inevitably depends on the structure of the weight space.
When the weight space has a topological structure where we expect smoothness, for example when
the weights correspond to pixels in an image patch, we can choose a kernel-based dictionary to
enforce the type of smoothness we expect.
When there is no topological structure to exploit, we propose to use data driven dictionaries. An obvious choice here is to use a shallow unsupervised feature learning method, such as an autoencoder, to build a dictionary for the layer.
Another option is to construct data-driven kernels for ridge regression. Easy choices here are using
the empirical covariance or empirical squared covariance of the hidden units, averaged over the data.
Since the correlations in hidden activities depend on the weights in lower layers we cannot initialize
kernels in deep layers in this way without training the previous layers. We handle this by pre-training
each layer as an autoencoder. We construct the kernel using the empirical covariance of the hidden
units over the data using the pre-trained weights. Once each layer has been pre-trained in this way
¹The vectorized filter bank W = UΩwΩ must be reshaped before the convolution takes place.
[Figure 4 plots: left panel "Compare Completion Methods" (error vs. proportion of parameters learned; legend: nokernel, LowRank, RandCon-RandCon, RandFixU-RandFixU, SE-Emp, SE-Emp2, SE-AE, Emp-Emp); right panel "TIMIT" (phone error rate vs. proportion of parameters learned).]
Figure 4: Left: Comparing the performance of different dictionaries when predicting the weights in the first two layers of an MLP network on MNIST. The legend shows the dictionary type in layer1-layer2 (see main text for details). Right: Performance on the TIMIT core test set using an MLP with two hidden layers.
we fine-tune the entire network with backpropagation, but in this phase the kernel parameters are
fixed.
We also experiment with other choices for the dictionary, such as random projections (iid Gaussian
dictionary) and random connections (dictionary composed of random columns of the identity).
4 Experiments
4.1 Multilayer perceptron
We perform some initial experiments using MLPs [24] in order to demonstrate the effectiveness of
our technique. We train several MLP models on MNIST using different strategies for constructing
the dictionary, different numbers of columns and different degrees of reduction in the number of
dynamic parameters used in each feature. We chose to explore these permutations on MNIST since
it is small enough to allow us to have broad coverage.
The networks in this experiment all have two hidden layers with a 784-500-500-10 architecture and use a sigmoid activation function. The final layer is a softmax classifier. In all cases we perform parameter prediction in the first and second layers only; the final softmax layer is never predicted. This layer contains approximately 1% of the total network parameters, so a substantial savings is possible even if features in this layer are not predicted.
Figure 4 (left) shows performance using several different strategies for constructing the dictionary,
each using 10 columns in the first and second layers. We divide the hidden units in each layer equally
between columns (so each column connects to 50 units in the layer above).
The different dictionaries are as follows: nokernel is an ordinary model with no feature prediction (shown as a horizontal line). LowRank is when both U and V are optimized. RandCon is random connections (the dictionary is random columns of the identity). RandFixU is random projections using a matrix of iid Gaussian entries. SE is ridge regression with the squared exponential kernel with length scale 1.0. Emp is ridge regression with the covariance kernel. Emp2 is ridge regression with the squared covariance kernel. AE is a dictionary pre-trained as an autoencoder. The SE-Emp and SE-Emp2 architectures perform substantially better than the alternatives, especially with few dynamic parameters.
Figure 5: Performance of a convnet on CIFAR-10. Learning only 25% of the parameters has a negligible effect on predictive accuracy.
For consistency we pre-trained all of the models, except for LowRank, as autoencoders. We did not pretrain the LowRank model because we found the autoencoder pretraining to be extremely unstable for this model.
Figure 4 (right) shows the results of a similar experiment on TIMIT. The raw speech data was analyzed using a 25-ms Hamming window with a 10-ms fixed frame rate. In all the experiments, we represented the speech using 12th-order Mel frequency cepstral coefficients (MFCCs) and energy, along with their first and second temporal derivatives. The networks used in this experiment have two hidden layers with 1024 units. Phone error rate was measured by performing Viterbi decoding of the phones in each utterance using a bigram language model, and confusions between certain sets of phones were ignored as described in [19].
4.2 Convolutional network
Figure 5 shows the performance of a convnet [17] on CIFAR-10. The first convolutional layer filters the 32 × 32 × 3 input image using 48 filters of size 8 × 8 × 3. The second convolutional layer applies 64 filters of size 8 × 8 × 48 to the output of the first layer. The third convolutional layer further transforms the output of the second layer by applying 64 filters of size 5 × 5 × 64. The output of the third layer is input to a fully connected layer with 500 hidden units and finally into a softmax layer with 10 outputs. Again we do not reduce the parameters in the final softmax layer.
The convolutional layers each have one column and the fully connected layer has five columns.
Convolutional layers have a natural topological structure to exploit, so we use a dictionary constructed with the squared exponential kernel in each convolutional layer. The input to the fully connected layer at the top of the network comes from a convolutional layer, so we use ridge regression with the squared exponential kernel to predict parameters in this layer as well.
4.3 Reconstruction ICA
Reconstruction ICA [15] is a method for learning overcomplete ICA models which is similar to a
linear autoencoder network. We demonstrate that we can effectively predict parameters in RICA on
both CIFAR-10 and STL-10. In order to use RICA as a classifier we follow the procedure of Coates
et al. [6].
Figure 6 (left) shows the results of parameter prediction with RICA on CIFAR-10 and STL-10. RICA is a single layer architecture, and we predict parameters using a squared exponential kernel dictionary with a length scale of 1.0. The nokernel line shows the performance of RICA with no feature prediction on the same task. In both cases we are able to predict more than half of the dynamic parameters without a substantial drop in accuracy.
Figure 6 (right) compares the performance of two RICA models with the same number of dynamic
parameters. One of the models is ordinary RICA with no parameter prediction and the other has
50% of the parameters in each feature predicted using squared exponential kernel dictionary with a
length scale of 1.0; since 50% of the parameters in each feature are predicted, the second model has
twice as many features with the same number of dynamic parameters.
5 Related work and future directions
Several other methods for limiting the number of parameters in a neural network have been explored in the literature. An early approach is the technique of "Optimal Brain Damage" [18], which uses approximate second derivative information to remove parameters from an already trained network.
This technique does not apply in our setting, since we aim to limit the number of parameters before
training, rather than after.
The most common approach to limiting the number of parameters is to use locally connected features [6]. The size of the parameterization of locally connected networks can be further reduced
by using tiled convolutional networks [10] in which groups of feature weights which tile the input
Figure 6: Left: Comparison of the performance of RICA with and without parameter prediction on CIFAR-10 and STL-10. Right: Comparison of RICA, and RICA with 50% parameter prediction using the same number of dynamic parameters (i.e. RICA-50% has twice as many features). There is a substantial gain in accuracy with the same number of dynamic parameters using our technique. Error bars for STL-10 show 90% confidence intervals from the recommended testing protocol.
space are tied. Convolutional neural networks [13] are even more restrictive and force a feature to
have tied weights for all receptive fields.
Techniques similar to the one in this paper have appeared for shallow models in the computer vision literature. The double sparsity method of Rubinstein et al. [23] involves approximating linear
dictionaries with other dictionaries in a similar manner to how we approximate network features.
Rigamonti et al. [22] study approximating convolutional filter banks with linear combinations of
separable filters. Both of these works focus on shallow single layer models, in contrast to our focus
on deep networks.
The techniques described in this paper are orthogonal to the parameter reduction achieved by tying
weights in a tiled or convolutional pattern. Tying weights effectively reduces the number of feature
maps by constraining features at different locations to share parameters. Our approach reduces the
number of parameters required to represent each feature and it is straightforward to incorporate into
a tiled or convolutional network.
Cireşan et al. [3] control the number of parameters by removing connections between layers in a convolutional network at random. They achieve state-of-the-art results using these randomly connected layers as part of their network. Our technique subsumes the idea of random connections, as described in Section 3.5.
The idea of regularizing networks through prior knowledge of smoothness is not new, but it is a delicate process. Lang and Hinton [14] tried imposing explicit smoothness constraints through regularization but found it to universally reduce performance. Naïvely factoring the weight matrix and learning both factors tends to reduce performance as well. Although the idea is simple conceptually, execution is difficult. Gülçehre et al. [11] have demonstrated that prior knowledge is extremely important during learning, which highlights the importance of introducing it effectively.
Recent work has shown that state of the art results on several benchmark tasks in computer vision
can be achieved by training neural networks with several columns of representation [2, 13]. The use
of different preprocessing for different columns of representation is of particular relevance [2]. Our
approach has an interpretation similar to this as described in Section 3.4. Unlike the work of [2], we
do not consider deep columns in this paper; however, collimation is an attractive way for increasing
parallelism within a network, as the columns operate completely independently. There is no reason
we could not incorporate deeper columns into our networks, and this would make for a potentially
interesting avenue of future work.
Our approach is superficially similar to the factored RBM [21, 26], whose parameters form a 3-tensor. Since the total number of parameters in this model is prohibitively large, the tensor is represented as an outer product of three matrices. Major differences between our technique and the factored RBM include the fact that the factored RBM is a specific model, whereas our technique can be applied more broadly, even to factored RBMs. In addition, in a factored RBM all factors are learned, whereas in our approach the dictionary is fixed judiciously.
In this paper we always choose the set ? of indices uniformly at random. There are a wide variety
of other options which could be considered here. Other works have focused on learning receptive
fields directly [5], and would be interesting to incorporate with our technique.
In a similar vein, more careful attention to the selection of kernel functions is appropriate. We
have considered some simple examples and shown that they preform well, but our study is hardly
exhaustive. Using different types of kernels to encode different types of prior knowledge on the
weight space, or even learning the kernel functions directly as part of the optimization procedure as
in [27] are possibilities that deserve exploration.
When no natural topology on the weight space is available we infer a topology for the dictionary
from empirical statistics; however, it may be possible to instead construct the dictionary to induce a
desired topology on the weight space directly. This has parallels to other work on inducing topology
in representations [10] as well as work on learning pooling structures in deep networks [4].
6 Conclusion
We have shown how to achieve significant reductions in the number of dynamic parameters in deep
models. The idea is orthogonal but complementary to recent advances in deep learning, such as
dropout, rectified units and maxout. It creates many avenues for future work, such as improving
large scale industrial implementations of deep networks, but also brings into question whether we
have the right parameterizations in deep learning.
References
[1] Y. Bengio. Deep learning of representations: Looking forward. Technical Report arXiv:1305.0445, Université de Montréal, 2013.
[2] D. Cireşan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In IEEE Computer Vision and Pattern Recognition, pages 3642–3649, 2012.
[3] D. Cireşan, U. Meier, and J. Masci. High-performance neural networks for visual object classification. arXiv:1102.0183, 2011.
[4] A. Coates, A. Karpathy, and A. Ng. Emergence of object-selective features in unsupervised feature learning. In Advances in Neural Information Processing Systems, pages 2690–2698, 2012.
[5] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In Advances in Neural Information Processing Systems, pages 2528–2536, 2011.
[6] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Artificial Intelligence and Statistics, 2011.
[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1232–1240, 2012.
[8] L. Deng, D. Yu, and J. Platt. Scalable stacking and learning for building deep architectures. In International Conference on Acoustics, Speech, and Signal Processing, pages 2133–2136, 2012.
[9] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In International Conference on Machine Learning, 2013.
[10] K. Gregor and Y. LeCun. Emergence of complex-like cells in a temporal product network with local receptive fields. arXiv preprint arXiv:1006.0448, 2010.
[11] C. Gülçehre and Y. Bengio. Knowledge matters: Importance of prior information for optimization. In International Conference on Learning Representations, 2013.
[12] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[13] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1106–1114, 2012.
[14] K. Lang and G. Hinton. Dimensionality reduction and prior knowledge in e-set recognition. In Advances in Neural Information Processing Systems, 1990.
[15] Q. V. Le, A. Karpenko, J. Ngiam, and A. Y. Ng. ICA with reconstruction cost for efficient overcomplete feature learning. Advances in Neural Information Processing Systems, 24:1017–1025, 2011.
[16] Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In International Conference on Machine Learning, 2012.
[17] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[18] Y. LeCun, J. S. Denker, S. Solla, R. E. Howard, and L. D. Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pages 598–605, 1990.
[19] K.-F. Lee and H.-W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(11):1641–1648, 1989.
[20] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th International Conference on Machine Learning, pages 807–814. Omnipress, Madison, WI, 2010.
[21] M. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images. In Artificial Intelligence and Statistics, 2010.
[22] R. Rigamonti, A. Sironi, V. Lepetit, and P. Fua. Learning separable filters. In IEEE Computer Vision and Pattern Recognition, 2013.
[23] R. Rubinstein, M. Zibulevsky, and M. Elad. Double sparsity: learning sparse dictionaries for sparse signal approximation. IEEE Transactions on Signal Processing, 58:1553–1564, 2010.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[25] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
[26] K. Swersky, M. Ranzato, D. Buchman, B. Marlin, and N. Freitas. On autoencoders and score matching for energy based models. In International Conference on Machine Learning, pages 1201–1208, 2011.
[27] P. Vincent and Y. Bengio. A neural support vector network architecture with adaptive kernels. In International Joint Conference on Neural Networks, pages 187–192, 2000.
9
Learning Stochastic Feedforward Neural Networks
Yichuan Tang
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada.
[email protected]
Ruslan Salakhutdinov
Department of Computer Science and Statistics
University of Toronto
Toronto, Ontario, Canada.
[email protected]
Abstract
Multilayer perceptrons (MLPs) or neural networks are popular models used for
nonlinear regression and classification tasks. As regressors, MLPs model the
conditional distribution of the predictor variables Y given the input variables X.
However, this predictive distribution is assumed to be unimodal (e.g. Gaussian).
For tasks involving structured prediction, the conditional distribution should be
multi-modal, resulting in one-to-many mappings. By using stochastic hidden variables rather than deterministic ones, Sigmoid Belief Nets (SBNs) can induce a rich
multimodal distribution in the output space. However, previously proposed learning algorithms for SBNs are not efficient and unsuitable for modeling real-valued
data. In this paper, we propose a stochastic feedforward network with hidden layers composed of both deterministic and stochastic variables. A new Generalized
EM training procedure using importance sampling allows us to efficiently learn
complicated conditional distributions. Our model achieves superior performance
on synthetic and facial expressions datasets compared to conditional Restricted
Boltzmann Machines and Mixture Density Networks. In addition, the latent features of our model improve classification and can learn to generate colorful textures of objects.
1 Introduction
Multilayer perceptrons (MLPs) are general purpose function approximators. The outputs of a MLP
can be interpreted as the sufficient statistics of a member of the exponential family (conditioned on
the input X), thereby inducing a distribution over the output space Y . Since the nonlinear activations
are all deterministic, MLPs model the conditional distribution p(Y |X) with a unimodal assumption
(e.g. an isotropic Gaussian)¹.
For many structured prediction problems, we are interested in a conditional distribution p(Y |X)
that is multimodal and may have complicated structure². One way to model the multi-modality is to
make the hidden variables stochastic. Conditioned on a particular input X, different hidden configurations lead to different Y . Sigmoid Belief Nets (SBNs) [3, 2] are models capable of satisfying the
multi-modality requirement. With binary input, hidden, and output variables, they can be viewed
as directed graphical models where the sigmoid function is used to compute the degrees of 'belief'
of a child variable given the parent nodes. Inference in such models is generally intractable. The
original paper by Neal [2] proposed a Gibbs sampler which cycles through the hidden nodes one
at a time. This is problematic as Gibbs sampling can be very slow when learning large models
or fitting moderately-sized datasets. In addition, slow mixing of the Gibbs chain would typically
lead to a biased estimation of gradients during learning. A variational learning algorithm based on
¹ For example, in a MLP with one input, one output and one hidden layer: p(y|x) ∼ N(y | μ_y, σ_y²), μ_y = σ(W2 σ(W1 x)), where σ(a) = 1/(1 + exp(−a)) is the sigmoid function. Note that the Mixture of Density Network is an exception to the unimodal assumption [1].
² An equivalent problem is learning one-to-many functions from X ↦ Y.
Figure 1: Stochastic Feedforward Neural Networks. Left: Network diagram. Red nodes are stochastic and
binary, while the rest of the hiddens are deterministic sigmoid nodes. Right: motivation as to why multimodal
outputs are needed. Given the top half of the face x, the mouth in y can be different, leading to different
expressions.
the mean-field approximation was proposed in [4] to improve the learning of SBNs. A drawback
of the variational approach is that, similar to Gibbs, it has to cycle through the hidden nodes one
at a time. Moreover, besides the standard mean-field variational parameters, additional parameters must be introduced to lower-bound an intractable term that shows up in the expected free energy, making the lower-bound looser. Gaussian fields are used in [5] for inference by making Gaussian approximations to units' input, but there is no longer a lower bound on the likelihood.
approximations to units? input, but there is no longer a lower bound on the likelihood.
In this paper, we introduce the Stochastic Feedforward Neural Network (SFNN) for modeling conditional distributions p(y|x) over continuous real-valued Y output space. Unlike SBNs, to better
model continuous data, SFNNs have hidden layers with both stochastic and deterministic units. The
left panel of Fig. 1 shows a diagram of SFNNs with multiple hidden layers. Given an input vector x,
different states of the stochastic units can generates different modes in Y . For learning, we present
a novel Monte Carlo variant of the Generalized Expectation Maximization algorithm. Importance
sampling is used for the E-step for inference, while error backpropagation is used by the M-step
to improve a variational lower bound on the data log-likelihood. SFNNs have several attractive
properties, including:
• We can draw samples from the exact model distribution without resorting to MCMC.
• Stochastic units form a distributed code to represent an exponential number of mixture components in output space.
• As a directed model, learning does not need to deal with a global partition function.
• Combination of stochastic and deterministic hidden units can be jointly trained using the backpropagation algorithm, as in standard feed-forward neural networks.
The two main alternative models are Conditional Gaussian Restricted Boltzmann Machines (C-GRBMs) [6] and Mixture Density Networks (MDNs) [1]. Note that Gaussian Processes [7] and Gaussian Random Fields [8] are unimodal and therefore incapable of modeling a multimodal Y. Conditional Random Fields [9] are widely used in NLP and vision, but often assume Y to be discrete rather than continuous. C-GRBMs are popular models used for human motion modeling [6], structured prediction [10], and as a higher-order potential in image segmentation [11]. While C-GRBMs have the advantage of exact inference, they are energy based models that define different partition functions for different input X. Learning also requires Gibbs sampling which is prone to poor mixing. MDNs use a mixture of Gaussians to represent the output Y. The components' means,
mixing proportions, and the output variances are all predicted by a MLP conditioned on X. As with
SFNNs, the backpropagation algorithm can be used to train MDNs efficiently. However, the number
of mixture components in the output Y space must be pre-specified and the number of parameters is
linear in the number of mixture components. In contrast, with Nh stochastic hidden nodes, SFNNs
can use its distributed representation to model up to 2^Nh mixture components in the output Y.
2 Stochastic Feedforward Neural Networks
SFNNs contain binary stochastic hidden variables h ∈ {0, 1}^Nh, where Nh is the number of hidden nodes. For clarity of presentation, we construct a SFNN from a one-hidden-layer MLP by replacing the sigmoid nodes with stochastic binary ones. Note that other types of stochastic units can also be used. The conditional distribution of interest, p(y|x), is obtained by marginalizing out the latent stochastic hidden variables: p(y|x) = Σ_h p(y, h|x). SFNNs are directed graphical models where the generative process starts from x, flows through h, and then generates output y. Thus, we can factorize the joint distribution as: p(y, h|x) = p(y|h)p(h|x). To model real-valued y, we have p(y|h) = N(y | W2 h + b2, σ_y²) and p(h|x) = σ(W1 x + b1), where b is the bias. Since h ∈ {0, 1}^Nh is a vector of Bernoulli random variables, p(y|x) has potentially 2^Nh different modes³, one for every possible binary configuration of h. The fact that h can take on different states in SFNN is the reason why we can learn one-to-many mappings, which would be impossible with standard MLPs.
The modeling flexibility of SFNN comes with computational costs. Since we have a mixture model with potentially 2^Nh components conditioned on any x, p(y|x) does not have a closed-form expression. We can use a Monte Carlo approximation with M samples for its estimation:

    p(y|x) ≈ (1/M) Σ_{m=1}^{M} p(y | h^(m)),   h^(m) ∼ p(h|x).   (1)
This estimator is unbiased and has relatively low variance. This is because the accuracy of the
estimator does not depend on the dimensionality of h and that p(h|x) is factorial, meaning that we
can draw samples from the exact distribution.
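To make Eq. 1 concrete, here is a minimal NumPy sketch (not the authors' code; the dimensions and weights below are random stand-ins): ancestral samples of the stochastic hiddens are drawn from the factorial Bernoulli p(h|x), and the Gaussian likelihoods p(y|h^(m)) are averaged:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy one-hidden-layer SFNN with made-up sizes and random
# stand-in weights (illustration only, not a trained model).
Nx, Nh, Ny = 3, 8, 2
W1, b1 = rng.normal(size=(Nh, Nx)), np.zeros(Nh)
W2, b2 = 0.3 * rng.normal(size=(Ny, Nh)), np.zeros(Ny)
sigma_y = 1.0  # residual std of the output Gaussian

def diag_gauss_pdf(y, mu, sigma):
    # Density of a diagonal Gaussian N(y | mu, sigma^2 I).
    return float(np.prod(np.exp(-0.5 * ((y - mu) / sigma) ** 2)
                         / (sigma * np.sqrt(2.0 * np.pi))))

def estimate_p_y_given_x(y, x, M=1000):
    # Eq. 1: each h^(m) ~ Bernoulli(sigmoid(W1 x + b1)) is an exact
    # ancestral sample from p(h|x); average the likelihoods p(y|h^(m)).
    probs = sigmoid(W1 @ x + b1)
    total = 0.0
    for _ in range(M):
        h = (rng.random(Nh) < probs).astype(float)
        total += diag_gauss_pdf(y, W2 @ h + b2, sigma_y)
    return total / M

x, y = rng.normal(size=Nx), rng.normal(size=Ny)
print(estimate_p_y_given_x(y, x, M=2000))
```

Because p(h|x) is factorial, each h^(m) is an exact draw, so the average is an unbiased estimate of p(y|x) and no Markov chain is involved.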
If y is discrete, it is sufficient for all of the hiddens to be discrete. However, using only discrete hiddens is suboptimal when modeling real-valued output Y. This is due to the fact that while y is continuous, there are only a finite number of discrete hidden states, each one (e.g. h′) leads to a component which is a Gaussian: p(y|h′) = N(y | μ(h′), σ_y²). The mean of a Gaussian component is a function of the hidden state: μ(h′) = W2ᵀ h′ + b2. When x varies, only the probability of choosing a specific hidden state h′ changes via p(h′|x), not μ(h′). However, if we allow μ(h′) to be a deterministic function of x as well, we can learn a smoother p(y|x), even when it is desirable to learn small residual variances σ_y². This can be accomplished by allowing for both stochastic and deterministic units in a single SFNN hidden layer, allowing the mean μ(h′, x) to have contributions from two components, one from the hidden state h′, and another one from defining a deterministic mapping from x. As we demonstrate in our experimental results, this is crucial for learning good density models of the real-valued Y.
In SFNNs with only one hidden layer, p(h|x) is a factorial Bernoulli distribution. If p(h|x) has low
entropy, only a few discrete h states out of the 2^Nh total states would have any significant probability
mass. We can increase the entropy over the stochastic hidden variables by adding a second hidden
layer. The second hidden layer takes the stochastic and any deterministic hidden nodes of the first
layer as its input. This leads to our proposed SFNN model, shown in Fig. 1.
In our SFNNs, we assume a conditional diagonal Gaussian distribution for the output Y:

    log p(y|h, x) ∝ −(1/2) Σ_i log σ_i² − (1/2) Σ_i (y_i − μ_i(h, x))² / σ_i².

We note that we can also use any other parameterized distribution (e.g. Student's t) for the output variables. This is a win compared to the Boltzmann Machine family of models, which require the output distribution to be from the exponential family.
2.1 Learning
We present a Monte Carlo variant of the Generalized EM algorithm [12] for learning SFNNs. Specifically, importance sampling is used during the E-step to approximate the posterior p(h|y, x), while
the Backprop algorithm is used during the M-step to calculate the derivatives of the parameters of
both the stochastic and deterministic nodes. Gradient ascent using the derivatives will guarantee
that the variational lower bound of the model log-likelihood will be improved. The drawback of our
learning algorithm is the requirement of sampling the stochastic nodes M times for every weight
update. However, as we will show in the experimental results, 20 samples is sufficient for learning
good SFNNs.
The requirement of sampling is typical for models capable of structured learning. As a comparison,
energy based models, such as conditional Restricted Boltzmann Machines, require MCMC sampling
per weight update to estimate the gradient of the log-partition function. These MCMC samples do
not converge to the true distribution, resulting in a biased estimate of the gradient.
For clarity, we provide the following derivations for SFNNs with one hidden layer containing only stochastic nodes⁴. For any approximating distribution q(h), we can write down the following variational lower-bound on the data log-likelihood:

    log p(y|x) = log Σ_h p(y, h|x) = Σ_h p(h|y, x) log [p(y, h|x) / p(h|y, x)] ≥ Σ_h q(h) log [p(y, h|x; θ) / q(h)],   (2)

³ In practice, due to weight sharing, we will not be able to have close to that many modes for a large Nh.
⁴ It is straightforward to extend the model to multiple and hybrid hidden layered SFNNs.
where q(h) can be any arbitrary distribution. For the tightest lower-bound, q(h) needs to be the exact posterior p(h|y, x). While the posterior p(h|y, x) is hard to compute, the 'conditional prior' p(h|x) is easy (it corresponds to a simple feedforward pass). We can therefore set q(h) ≜ p(h|x). However, this would be a very bad approximation as learning proceeds, since the learning of the likelihood p(y|h, x) will increase the KL divergence between the conditional prior and the posterior. Instead, it is critical to use importance sampling with the conditional prior as the proposal distribution.
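As a sanity check on the bound in Eq. 2, the following toy sketch (random stand-in weights, a model small enough that the sum over all 2^Nh hidden states is exact) sets q(h) to the conditional prior p(h|x); the bound then reduces to E_q[log p(y|h)], which by Jensen's inequality never exceeds log p(y|x):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Tiny model (made-up sizes, random stand-in weights) so the
# sum over all 2^Nh hidden states is tractable.
Nx, Nh, Ny = 3, 6, 2
W1, b1 = 0.5 * rng.normal(size=(Nh, Nx)), np.zeros(Nh)
W2, b2 = 0.5 * rng.normal(size=(Ny, Nh)), np.zeros(Ny)
sigma_y = 1.0

def log_gauss(y, mu):
    # log density of a diagonal Gaussian N(y | mu, sigma_y^2 I)
    return float(np.sum(-0.5 * ((y - mu) / sigma_y) ** 2
                        - np.log(sigma_y * np.sqrt(2.0 * np.pi))))

x, y = rng.normal(size=Nx), rng.normal(size=Ny)
probs = sigmoid(W1 @ x + b1)

log_p_yx = -np.inf  # accumulates log sum_h p(h|x) p(y|h) = log p(y|x)
bound = 0.0         # sum_h q(h) log[p(y,h|x)/q(h)] with q(h) = p(h|x)
for bits in itertools.product([0.0, 1.0], repeat=Nh):
    h = np.array(bits)
    log_qh = float(np.sum(np.log(np.where(h == 1.0, probs, 1.0 - probs))))
    log_joint = log_qh + log_gauss(y, W2 @ h + b2)  # log p(y,h|x)
    log_p_yx = np.logaddexp(log_p_yx, log_joint)
    bound += np.exp(log_qh) * (log_joint - log_qh)

print(bound, "<=", log_p_yx)  # the bound never exceeds the true value
```

The gap between the two numbers is exactly the KL divergence from q(h) to the posterior, which is why the bound loosens as p(y|h, x) pulls the posterior away from the conditional prior.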
Let Q be the expected complete data log-likelihood, which is a lower bound on the log-likelihood that we wish to maximize:

    Q(θ, θ_old) = Σ_h [p(h|y, x; θ_old) / p(h|x; θ_old)] p(h|x; θ_old) log p(y, h|x; θ) ≈ (1/M) Σ_{m=1}^{M} w^(m) log p(y, h^(m)|x; θ),   (3)

where h^(m) ∼ p(h|x; θ_old) and w^(m) is the importance weight of the m-th sample from the proposal distribution p(h|x; θ_old). Using Bayes' Theorem, we have
    w^(m) = p(h^(m)|y, x; θ_old) / p(h^(m)|x; θ_old) = p(y|h^(m), x; θ_old) / p(y|x; θ_old) ≈ p(y|h^(m); θ_old) / [(1/M) Σ_{m'=1}^{M} p(y|h^(m'); θ_old)].   (4)
Eq. 1 is used to approximate p(y|x; θ_old). For convenience, we define the partial objective of the m-th sample as Q^(m) ≜ w^(m) {log p(y|h^(m); θ) + log p(h^(m)|x; θ)}. We can then approximate our objective function Q(θ, θ_old) with M samples from the proposal: Q(θ, θ_old) ≈ (1/M) Σ_{m=1}^{M} Q^(m)(θ, θ_old). For our generalized M-step, we seek to perform gradient ascent on Q:

    ∂Q/∂θ ≈ (1/M) Σ_{m=1}^{M} ∂Q^(m)(θ, θ_old)/∂θ = (1/M) Σ_{m=1}^{M} w^(m) ∂/∂θ {log p(y|h^(m); θ) + log p(h^(m)|x; θ)}.   (5)
The gradient term ∂/∂θ is computed using error backpropagation of two sub-terms. The first part, ∂/∂θ log p(y|h^(m); θ), treats y as the targets and h^(m) as the input data, while the second part, ∂/∂θ log p(h^(m)|x; θ), treats h^(m) as the targets and x as the input data. In SFNNs with a mixture of deterministic and stochastic units, backprop will additionally propagate error information from the first part to the second part.
The full gradient is a weighted summation of the M partial derivatives, where the weighting comes from how well a particular state h^(m) can generate the data y. This is intuitively appealing, since learning adjusts both the 'preferred' states' abilities to generate the data (first part in the braces), as well as increases their probability of being picked conditioned on x (second part in the braces). The detailed EM learning algorithm for SFNNs is listed in Alg. 1 of the Supplementary Materials.
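The importance-sampled E-step above can be sketched as follows (a minimal NumPy illustration with random stand-in weights, not the authors' implementation): samples from the conditional prior are reweighted by their likelihoods as in Eq. 4, and the self-normalized weights approximate posterior expectations such as E[h | y, x]:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Tiny model with random stand-in weights; 2^Nh states stay enumerable
# so the importance-sampled posterior can be checked against brute force.
Nx, Nh, Ny = 3, 6, 2
W1, b1 = 0.5 * rng.normal(size=(Nh, Nx)), np.zeros(Nh)
W2, b2 = 0.5 * rng.normal(size=(Ny, Nh)), np.zeros(Ny)
sigma_y = 1.0

def gauss_pdf(y, mu):
    return float(np.prod(np.exp(-0.5 * ((y - mu) / sigma_y) ** 2)
                         / (sigma_y * np.sqrt(2.0 * np.pi))))

x, y = rng.normal(size=Nx), rng.normal(size=Ny)
probs = sigmoid(W1 @ x + b1)

# E-step: M proposal samples h^(m) ~ p(h|x); weights as in Eq. 4,
# self-normalized by the Monte Carlo estimate of p(y|x).
M = 5000
H = (rng.random((M, Nh)) < probs).astype(float)
lik = np.array([gauss_pdf(y, W2 @ h + b2) for h in H])
w = lik / lik.sum()          # importance weights, sum to 1
post_mean_is = w @ H         # approximates E[h | y, x]

# Exact posterior mean by enumerating every hidden state, for comparison.
num, Z = np.zeros(Nh), 0.0
for bits in itertools.product([0.0, 1.0], repeat=Nh):
    h = np.array(bits)
    p = float(np.prod(np.where(h == 1.0, probs, 1.0 - probs))) \
        * gauss_pdf(y, W2 @ h + b2)
    num, Z = num + p * h, Z + p
print(np.max(np.abs(post_mean_is - num / Z)))  # small for large M
```

In the M-step these same weights w^(m) multiply the per-sample backprop gradients, as in Eq. 5.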
2.2 Cooperation during learning
We note that for importance sampling to work well in general, a key requirement is that the proposal
distribution is not small where the true distribution has significant mass. However, things are slightly
different when using importance sampling during learning. Our proposal distribution p(h|x) and the
posterior p(h|y, x) are not fixed but rather governed by the model parameters. Learning adapts these
distributions in a synergistic and cooperative fashion.
Let us hypothesize that at a particular learning iteration, the conditional prior p(h|x) is small in
certain regions where p(h|y, x) is large, which is undesirable for importance sampling. The E-step will draw M samples and weight them according to Eq. 4. While all samples h^(m) will have very low log-likelihood due to the bad conditional prior, there will be a certain preferred state ĥ with the largest weight. Learning using Eq. 5 will accomplish two things: (1) it will adjust the generative weights to allow preferred states to better generate the observed y; (2) it will make the conditional prior better by making it more likely to predict ĥ given x.
(a) Dataset A
(b) Dataset B
(c) Dataset C
Figure 3: Three synthetic datasets of 1-dimensional one-to-many mappings. For any given x, multiple modes
in y exist. Blue stars are the training data, red pluses are exact samples from SFNNs. Best viewed in color.
Since the generative weights are shared, the fact that ĥ generates y accurately will probably reduce the likelihood of y under another state h̃. The updated conditional prior tends to be a better proposal distribution for the updated model. The cooperative interaction between the conditional prior and posterior during learning provides some robustness to the importance sampler.
Empirically, we can see this effect as learning progresses on Dataset A of Sec. 3.1 in Fig. 2. The plot shows the model log-likelihood given the training data as learning progresses until 3000 weight updates. 30 importance samples are used during learning with 2 hidden layers of 5 stochastic nodes. We chose 5 nodes because it is small enough that the true log-likelihood can be computed using brute-force integration. As learning progresses, the Monte Carlo approximation is very close to the true log-likelihood using only 30 samples. As expected, the KL from the posterior and prior diverges as the generative weights better model the multi-modalities around x = 0.5. We also compared the KL divergence between our empirical weighted importance sampled distribution and the true posterior, which converges toward zero. This demonstrates that the prior distribution has learned to not be small in regions of large posterior. In other words, this shows that the E-step in the learning of SFNNs is close to exact for this dataset and model.

Figure 2: KL divergence and log-likelihoods. Best viewed in color.
3 Experiments
We first demonstrate the effectiveness of SFNN on synthetic one dimensional one-to-many mapping data. We then use SFNNs to model face images with varying facial expressions and emotions.
SFNNs outperform other competing density models by a large margin. We also demonstrate the usefulness of latent features learned by SFNNs for expression classification. Finally, we train SFNNs
on a dataset with in-depth head rotations, a database with colored objects, and a image segmentation database. By drawing samples from these trained SFNNs, we obtain qualitative results and
insights into the modeling capacity of SFNNs. We provide computation times for learning in the
Supplementary Materials.
3.1 Synthetic datasets
As a proof of concept, we used three one dimensional one-to-many mapping datasets, shown in
Fig. 3. Our goal is to model p(y|x). Dataset A was used by [1] to evaluate the performance of the
Mixture Density Networks (MDNs). Dataset B has a large number of tight modes conditioned on
any given x, which is useful for testing a model's ability to learn many modes and a small residual
variance. Dataset C is used for testing whether a model can learn modes that are far apart from
each other. We randomly split the data into a training, validation, and a test set. We report the
average test set log-probability averaged over 5 folds for different models in Table 1. The method
called 'Gaussian' is a 2D Gaussian estimated on (x, y) jointly, and we report log p(y|x) which
can be obtained easily in closed-form. For Conditional Gaussian Restricted Boltzmann Machine
(C-GRBM) we used 25-step Contrastive Divergence [13] (CD-25) to estimate the gradient of the
log partition function. We used Annealed Importance Sampling [14, 15] with 50,000 intermediate
temperatures to estimate the partition function. SBN is a Sigmoid Belief Net with three hidden
stochastic binary layers between the input and the output layer. It is trained in the same way as
SFNN, but there are no deterministic units. Finally, SFNN has four hidden layers with the inner
two being hybrid stochastic/deterministic layers (See Fig. 1). We used 30 importance samples to
approximate the posterior during the E-step. All other hyper-parameters for all of the models were
chosen to maximize the validation performance.
Table 1 reveals that SFNNs consistently outperform all other methods. Fig. 3 further shows samples drawn from SFNNs as red 'pluses'. Note that SFNNs can learn small residual variances to accurately model Dataset B. Comparing SBNs to SFNNs, it is clear that having deterministic hidden nodes is a big win for modeling continuous y.
Table 1: Average test log-probability density on synthetic 1D datasets.

       Gaussian      MDN           C-GRBM        SBN           SFNN
A      0.078±0.02    1.05±0.02     0.57±0.01     0.79±0.03     1.04±0.03
B     -2.40±0.07    -1.58±0.11    -2.14±0.04    -1.33±0.10    -0.98±0.06
C      0.37±0.07     2.03±0.05     1.36±0.05     1.74±0.08     2.21±0.16
3.2 Modeling Facial Expression
Conditioned on a subject's face with neutral expression, the distribution of all possible emotions or expressions of this particular individual is multimodal in pixel space. We learn SFNNs to model facial expressions in the Toronto Face Database [16]. The Toronto Face Database consists of 4000 images of 900 individuals with 7 different expressions. Of the 900 subjects, there are 124 with 10 or more images per subject, which we used as our data. We randomly selected 100 subjects with 1385 total images for training, while 24 subjects with a total of 344 images were selected as the test set. For each subject, we take the average of their face images as x (mean face), and learn to model this subject's varying expressions y. Both x and y are grayscale and downsampled to a resolution of 48 × 48. We trained a SFNN with 4 hidden layers of size 128 on these facial expression images. The second and third 'hybrid' hidden layers contained 32 stochastic binary and 96 deterministic hidden nodes, while the first and the fourth hidden layers consisted of only deterministic sigmoids. We refer to this model as SFNN2. We also tested the same model but with only one hybrid hidden layer, which we call SFNN1. We used mini-batches of size 100 and 30 importance samples for the E-step. A total of 2500 weight updates were performed. Weights were randomly initialized with standard deviation of 0.1, and the residual variance σ_y² was initialized to the variance of y.
For comparisons with other models, we trained a Mixture of Factor Analyzers (MFA) [17], Mixture Density Networks (MDN), and Conditional Gaussian Restricted Boltzmann Machines (C-GRBM) on this task. For the Mixture of Factor Analyzers model, we trained a mixture with 100 components, one for each training individual. Given a new test face x_test, we first find the training x̄ which is closest in Euclidean distance. We then take the parameters of x̄'s FA component, while replacing the FA's mean with x_test. Mixture Density Networks are trained using code provided by the NETLAB package [18]. The number of Gaussian mixture components and the number of hidden nodes were selected using a validation set. Optimization is performed using the scaled conjugate gradient algorithm until convergence. For C-GRBMs, we used CD-25 for training. The optimal number of hidden units, selected via validation, was 1024. A population sparsity objective on the hidden activations was also part of the objective [19]. The residual diagonal covariance matrix is also learned. Optimization used stochastic gradient descent with mini-batches of 100 samples each.
Table 2 displays the average log-probabilities along with standard errors of the 344 test images. We also recorded the total training time of each algorithm, although this depends on the number of weight updates and whether or not GPUs are used (see the Supplementary Materials for more details). For MFA and MDN, the log-probabilities were computed exactly. For SFNNs, we used Eq. 1 with 1000 samples. We can see that SFNNs substantially outperform all other models. Having two hybrid hidden layers (SFNN2) improves model performance over SFNN1, which has only one hybrid hidden layer.

Table 2: Average test log-probability and total training time on facial expression images. Note that for continuous data, these are probability densities and can be positive.

         MFA        MDN        C-GRBM      SFNN1       SFNN2
Nats     1406±52    1321±16    1146±113    1488±18     1534±27
Time     10 secs.   6 mins.    158 mins.   112 secs.   113 secs.
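The Monte Carlo evaluation used for the SFNN entries above can be illustrated with a toy model. The sketch below is not the paper's SFNN and its parameters (w, v, sigma) are made up: it uses a single binary stochastic hidden unit, estimates log p(y|x) by averaging p(y|h, x) over hidden states sampled from p(h|x), and checks the estimate against the exact two-term sum available in this tiny case.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gauss_logpdf(y, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)

# Hypothetical toy model: h ~ Bernoulli(sigmoid(w*x)), y | h ~ N(v*h, sigma^2).
w, v, sigma = 1.5, 2.0, 0.5

def estimate_log_prob(x, y, M):
    """Monte Carlo estimate of log p(y|x) from M hidden-state samples."""
    p_h = sigmoid(w * x)
    total = 0.0
    for _ in range(M):
        h = 1.0 if random.random() < p_h else 0.0
        total += math.exp(gauss_logpdf(y, v * h, sigma))
    return math.log(total / M)

def exact_log_prob(x, y):
    # With one binary hidden unit, p(y|x) is a two-component mixture.
    p_h = sigmoid(w * x)
    return math.log(p_h * math.exp(gauss_logpdf(y, v, sigma))
                    + (1 - p_h) * math.exp(gauss_logpdf(y, 0.0, sigma)))

print(estimate_log_prob(0.3, 1.8, 1000))
print(exact_log_prob(0.3, 1.8))
```

With enough samples the estimate tracks the exact value closely, mirroring the evaluation procedure described in the text.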
Qualitatively, Fig. 4 shows samples drawn from the trained models. The leftmost column shows the
mean faces of 3 test subjects, followed by 7 samples from the distribution p(y|x). For C-GRBM,
samples are generated from a Gibbs chain, where each successive image is taken after 1000 steps.
For the other 2 models, displayed samples are exact.

Figure 4: Samples generated from various models. (a) Conditional Gaussian RBM. (b) MFA. (c) SFNN.

Figure 5: Plots demonstrate how hyperparameters affect the evaluation and learning of SFNNs.

MFAs overfit on the training set, generating samples with significant artifacts. Samples produced by C-GRBMs suffer from poor mixing and
get stuck at a local mode. SFNN samples show that the model was able to capture a combination
of mutli-modality and preserved much of the identity of the test subjects. We also note that SFNN
generated faces are not simple memorization of the training data. This is validated by its superior
performance on the test set in Table 2.
We further explored how different hyperparameters (e.g. # of stochastic layers, # of Monte Carlo
samples) can affect the learning and evaluation of SFNNs. We used face images and SFNN2 for
these experiments. First, we wanted to know the number of M in Eq. 1 needed to give a reasonable
estimate of the log-probabilities. Fig. 5(a) shows the estimates of the log-probability as a function of
the number of samples. We can see that having about 500 samples is reasonable, but more samples
provides a slightly better estimate. The general shape of the plot is similar for all other datasets
and SFNN models. When M is small, we typically underestimate the true log-probabilities. While
500 or more samples are needed for accurate model evaluation, only 20 or 30 samples are sufficient
for learning good models (as shown in Fig. 5(b)). This is because while M = 20 gives a suboptimal
approximation to the true posterior, learning still improves the variational lower-bound. In fact, we
can see that the difference between using 30 and 200 samples during learning results in only about 20
nats of the final average test log-probability. In Fig. 5(c), we varied the number of binary stochastic
hidden variables in the 2 inner hybrid layers. We did not observe significant improvements beyond 32 nodes. With more hidden nodes, over-fitting can also be a problem.
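The underestimation for small M noted above is a general property of estimators of the form log((1/M) Σᵢ wᵢ): by Jensen's inequality, the expectation of the log of an average lies below the log of the true expectation, and the gap shrinks as M grows. A self-contained numerical illustration (unrelated to the actual SFNN weights) with log-normal weights, whose true value is log E[w] = 0.5:

```python
import math
import random

random.seed(0)

def avg_log_mean_estimate(M, trials=2000):
    """Average of log((1/M) * sum of M log-normal weights) over many trials.
    Weights w = exp(g), g ~ N(0, 1), so the true value is log E[w] = 0.5."""
    total = 0.0
    for _ in range(trials):
        s = sum(math.exp(random.gauss(0.0, 1.0)) for _ in range(M))
        total += math.log(s / M)
    return total / trials

small_M = avg_log_mean_estimate(1)    # near E[log w] = 0: a large underestimate
large_M = avg_log_mean_estimate(100)  # approaches log E[w] = 0.5 from below
print(small_M, large_M)
```

The M = 1 estimate sits far below the truth, while M = 100 comes close from below, matching the behavior in Fig. 5(a).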
3.2.1 Expression Classification
The internal hidden representations learned by SFNNs are also useful for classification of facial
expressions. For each {x, y} image pair, there are 7 possible expression types: neutral, angry,
happy, sad, surprised, fear, and disgust. As baselines, we used regularized linear softmax classifiers
and multilayer perceptron classifier taking pixels as input. The mean of every pixel across all cases
was set to 0 and standard deviation was set to 1.0. We then append the learned hidden features of
SFNNs and C-GRBMs to the image pixels and re-train the same classifiers. The results are shown in
the first row of Table 3. Adding hidden features from the SFNN trained in an unsupervised manner
(without expression labels) improves accuracy for both linear and nonlinear classifiers.
            Linear   C-GRBM+Linear   SFNN+Linear   MLP      SFNN+MLP
clean       80.0%    81.4%           82.4%         83.2%    83.8%
10% noise   78.9%    79.7%           80.8%         82.0%    81.7%
50% noise   72.4%    74.3%           71.8%         79.1%    78.5%
75% noise   52.6%    58.1%           59.8%         71.9%    73.1%
10% occl.   76.2%    79.5%           80.1%         80.3%    81.5%
50% occl.   54.1%    59.9%           62.5%         58.5%    63.4%
75% occl.   28.2%    33.9%           37.5%         33.2%    39.2%

Table 3: Recognition accuracy over 5 folds under (a) random noise and (b) block occlusion. Bold numbers indicate that the difference in accuracy is statistically significant compared to the competitor models, for both linear and nonlinear classifiers.

Figure 6: Left: Noisy test images y. Posterior inference in SFNN finds Ep(h|x,y)[h]. Right: generated y images from the expected hidden activations.
Figure 7: Samples generated from a SFNN after training on object and horse databases. (a) Generated Objects. (b) Generated Horses. Conditioned on a given foreground mask, the appearance is multimodal (different color and texture). Best viewed in color.
SFNNs are also useful when dealing with noise. As a generative model of y, it is somewhat robust
to noisy and occluded pixels. For example, the left panels of Fig. 6, show corrupted test images
y. Using the importance sampler described in Sec. 2.1, we can compute the expected values of the
binary stochastic hidden variables given the corrupted test y images (see footnote 5). In the right panels of Fig. 6,
we show the corresponding generated y from the inferred average hidden states. After this denoising
process, we can then feed the denoised y and E[h] to the classifiers. This compares favorably to
simply filling in the missing pixels with the average of that pixel from the training set. Classification
accuracies under noise are also presented in Table 3. For example 10% noise means that 10 percent
of the pixels of both x and y are corrupted, selected randomly. 50% occlusion means that a square
block with 50% of the original area is randomly positioned in both x and y. Gains in recognition
performance from using SFNN are particularly pronounced when dealing with large amounts of
random noise and occlusions.
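The baseline mentioned above, filling missing pixels with the per-pixel training average, takes only a few lines. The 4-pixel "images" below are made up for illustration:

```python
# Per-pixel training mean, used to fill in corrupted pixel positions.
train = [[0.9, 0.1, 0.8, 0.2],
         [0.7, 0.3, 0.6, 0.4]]
pixel_mean = [sum(col) / len(col) for col in zip(*train)]

def mean_impute(image, corrupted_idx):
    """Replace the pixels at corrupted_idx with the training mean."""
    out = list(image)
    for i in corrupted_idx:
        out[i] = pixel_mean[i]
    return out

print(mean_impute([0.8, 0.0, 0.0, 0.3], corrupted_idx=[1, 2]))
```

Unlike this baseline, the SFNN's denoising uses the joint posterior over hidden states, so the filled-in values depend on the uncorrupted pixels rather than on a global average.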
3.3 Additional Qualitative Experiments
Not only are SFNNs capable of modeling facial expressions of aligned face images, they can also
model complex real-valued conditional distributions. Here, we present some qualitative samples
drawn from SFNNs trained on more complicated distributions (an additional example on rotated
faces is presented in the Supplementary Materials).
We trained SFNNs to generate colorful images of common objects from the Amsterdam Library of
Objects database [20], conditioned on the foreground masks. This is a database of 1000 everyday
objects under various lighting, rotations, and viewpoints. Every object also comes with a foreground
segmentation mask. For every object, we selected the image under frontal lighting without any rotations, and trained a SFNN conditioned on the foreground mask. Our goal is to model the appearance
(color and texture) of these objects. Of the 1000 objects, there are many objects with similar foreground masks (e.g. round or rectangular). Conditioned on the test foreground masks, Fig. 7(a)
shows random samples from the learned SFNN model. We also tested on the Weizmann segmentation database [21] of horses, learning a conditional distribution of horse appearances conditioned on
the segmentation mask. The results are shown in Fig. 7(b).
4 Discussions
In this paper we introduced a novel model with hybrid stochastic and deterministic hidden nodes.
We have also proposed an efficient learning algorithm that allows us to learn rich multi-modal conditional distributions, supported by quantitative and qualitative empirical results. The major drawback
of SFNNs is that inference is not trivial and M samples are needed for the importance sampler.
While this is sufficiently fast for our experiments, we can potentially accelerate inference by learning
a separate recognition network to perform inference in one feedforward pass. These techniques have
previously been used by [22, 23] with success.
Footnote 5: For this task we assume that we have knowledge of which pixels are corrupted.
References
[1] C. M. Bishop. Mixture density networks. Technical Report NCRG/94/004, Aston University, 1994.
[2] R. M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56:71–113, July 1992.
[3] R. M. Neal. Learning stochastic feedforward networks. Technical report, University of Toronto, 1990.
[4] Lawrence K. Saul, Tommi Jaakkola, and Michael I. Jordan. Mean field theory for sigmoid belief networks.
Journal of Artificial Intelligence Research, 4:61–76, 1996.
[5] David Barber and Peter Sollich. Gaussian fields for approximate inference in layered sigmoid belief
networks. In Sara A. Solla, Todd K. Leen, and Klaus-Robert Müller, editors, NIPS, pages 393–399. The
MIT Press, 1999.
[6] G. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. In NIPS,
2006.
[7] Carl Edward Rasmussen. Gaussian processes for machine learning. MIT Press, 2006.
[8] H. Rue and L. Held. Gaussian Markov Random Fields: Theory and Applications, volume 104 of Monographs on Statistics and Applied Probability. Chapman & Hall, London, 2005.
[9] John Lafferty. Conditional random fields: Probabilistic models for segmenting and labeling sequence
data. pages 282–289. Morgan Kaufmann, 2001.
[10] Volodymyr Mnih, Hugo Larochelle, and Geoffrey Hinton. Conditional restricted boltzmann machines for
structured output prediction. In Proceedings of the International Conference on Uncertainty in Artificial
Intelligence, 2011.
[11] Yujia Li, Daniel Tarlow, and Richard Zemel. Exploring compositional high order pattern potentials for
structured output learning. In Proceedings of International Conference on Computer Vision and Pattern
Recognition, 2013.
[12] R. M. Neal and G. E. Hinton. A new view of the EM algorithm that justifies incremental, sparse and other
variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355?368. 1998.
[13] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
14:1771–1800, 2002.
[14] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[15] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of
the Intl. Conf. on Machine Learning, volume 25, 2008.
[16] J.M. Susskind. The Toronto Face Database. Technical report, 2011. http://aclab.ca/users/josh/TFD.html.
[17] Zoubin Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical
Report CRG-TR-96-1, University of Toronto, 1996.
[18] Ian Nabney. NETLAB: Algorithms for Pattern Recognition. Advances in Pattern Recognition. Springer-Verlag, 2002.
[19] V. Nair and G. E. Hinton. 3-D object recognition with deep belief nets. In NIPS 22, 2009.
[20] J. M. Geusebroek, G. J. Burghouts, and A. W. M. Smeulders. The amsterdam library of object images.
International Journal of Computer Vision, 61(1), January 2005.
[21] Eran Borenstein and Shimon Ullman. Class-specific, top-down segmentation. In ECCV, pages 109–124, 2002.
[22] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The wake-sleep algorithm for unsupervised neural
networks. Science, 268(5214):1158–1161, 1995.
[23] R. Salakhutdinov and H. Larochelle. Efficient learning of deep boltzmann machines. AISTATS, 2010.
Zero-Shot Learning Through Cross-Modal Transfer
Richard Socher, Milind Ganjoo, Christopher D. Manning, Andrew Y. Ng
Computer Science Department, Stanford University, Stanford, CA 94305, USA
[email protected], {mganjoo, manning}@stanford.edu, [email protected]
Abstract
This work introduces a model that can recognize objects in images even if no
training data is available for the object class. The only necessary knowledge about
unseen visual categories comes from unsupervised text corpora. Unlike previous
zero-shot learning models, which can only differentiate between unseen classes,
our model can operate on a mixture of seen and unseen classes, simultaneously
obtaining state of the art performance on classes with thousands of training images and reasonable performance on unseen classes. This is achieved by seeing
the distributions of words in texts as a semantic space for understanding what objects look like. Our deep learning model does not require any manually defined
semantic or visual features for either words or images. Images are mapped to be
close to semantic word vectors corresponding to their classes, and the resulting
image embeddings can be used to distinguish whether an image is of a seen or unseen class. We then use novelty detection methods to differentiate unseen classes
from seen classes. We demonstrate two novelty detection strategies; the first gives
high accuracy on unseen classes, while the second is conservative in its prediction
of novelty and keeps the seen classes? accuracy high.
1 Introduction
The ability to classify instances of an unseen visual class, called zero-shot learning, is useful in several situations. There are many species and products without labeled data and new visual categories,
such as the latest gadgets or car models, that are introduced frequently. In this work, we show how
to make use of the vast amount of knowledge about the visual world available in natural language
to classify unseen objects. We attempt to model people's ability to identify unseen objects even if
the only knowledge about that object came from reading about it. For instance, after reading the
description of a two-wheeled self-balancing electric vehicle, controlled by a stick, with which you
can move around while standing on top of it, many would be able to identify a Segway, possibly after
being briefly perplexed because the new object looks different from previously observed classes.
We introduce a zero-shot model that can predict both seen and unseen classes. For instance, without
ever seeing a cat image, it can determine whether an image shows a cat or a known category from
the training set such as a dog or a horse. The model is based on two main ideas.
Fig. 1 illustrates the model. First, images are mapped into a semantic space of words that is learned
by a neural network model [15]. Word vectors capture distributional similarities from a large, unsupervised text corpus. By learning an image mapping into this space, the word vectors get implicitly
grounded by the visual modality, allowing us to give prototypical instances for various words. Second, because classifiers prefer to assign test images into classes for which they have seen training
examples, the model incorporates novelty detection which determines whether a new image is on the
manifold of known categories. If the image is of a known category, a standard classifier can be used.
Otherwise, images are assigned to a class based on the likelihood of being an unseen category. We
explore two strategies for novelty detection, both of which are based on ideas from outlier detection
methods. The first strategy prefers high accuracy for unseen classes, the second for seen classes.
Unlike previous work on zero-shot learning which can only predict intermediate features or differentiate between various zero-shot classes [21, 27], our joint model can achieve both state of the art
accuracy on known classes as well as reasonable performance on unseen classes. Furthermore, compared to related work on knowledge transfer [21, 28] we do not require manually defined semantic
[Figure 1 diagram: the manifold of known classes (dog, horse, auto, truck) with training images, and a new test image from an unknown class (cat) falling outside it.]
Figure 1: Overview of our cross-modal zero-shot model. We first map each new testing image into
a lower dimensional semantic word vector space. Then, we determine whether it is on the manifold
of seen images. If the image is "novel", meaning not on the manifold, we classify it with the help of
unsupervised semantic word vectors. In this example, the unseen classes are truck and cat.
or visual attributes for the zero-shot classes, allowing us to use state-of-the-art unsupervised and
unaligned image features instead along with unsupervised and unaligned language corpora.
2 Related Work
We briefly outline connections and differences to five related lines of research. Due to space constraints, we cannot do justice to the complete literature.
Zero-Shot Learning. The work most similar to ours is that by Palatucci et al. [27]. They map fMRI
scans of people thinking about certain words into a space of manually designed features and then
classify using these features. They are able to predict semantic features even for words for which
they have not seen scans and experiment with differentiating between several zero-shot classes.
However, they do not classify new test instances into both seen and unseen classes. We extend their
approach to allow for this setup using novelty detection. Lampert et al. [21] construct a set of binary
attributes for the image classes that convey various visual characteristics, such as "furry" and "paws" for bears and "wings" and "flies" for birds. Later, in section 6.4, we compare our method to their
method of performing Direct Attribute Prediction (DAP).
One-Shot Learning One-shot learning [19, 20] seeks to learn a visual object class by using very few
training examples. This is usually achieved by either sharing of feature representations [2], model
parameters [12] or via similar context [14]. A recent related work on one-shot learning is that of
Salakhutdinov et al. [29]. Similar to their work, our model is based on using deep learning techniques to learn low-level image features followed by a probabilistic model to transfer knowledge,
with the added advantage of needing no training data due to the cross-modal knowledge transfer
from natural language.
Knowledge and Visual Attribute Transfer. Lampert et al. and Farhadi et al. [21, 10] were two
of the first to use well-designed visual attributes of unseen classes to classify them. This is different from our setting, since we only have distributional features of words learned from unsupervised, non-parallel corpora and can classify between categories that have thousands or zero training images. Qi
et al. [28] learn when to transfer knowledge from one category to another for each instance.
Domain Adaptation. Domain adaptation is useful in situations in which there is a lot of training
data in one domain but little to none in another. For instance, in sentiment analysis one could train a
classifier for movie reviews and then adapt from that domain to book reviews [4, 13]. While related,
this line of work is different since there is data for each class but the features may differ between
domains.
Multimodal Embeddings. Multimodal embeddings relate information from multiple sources such
as sound and video [25] or images and text. Socher et al. [31] project words and image regions into a
common space using kernelized canonical correlation analysis to obtain state of the art performance
in annotation and segmentation. Similar to our work, they use unsupervised large text corpora to
learn semantic word representations. Their model does require a small amount of training data
however for each class. Some work has been done on multimodal distributional methods [11, 23].
Most recently, Bruni et al. [5] worked on perceptually grounding word meaning and showed that
joint models are better able to predict the color of concrete objects.
3 Word and Image Representations
We begin the description of the full framework with the feature representations of words and images.
Distributional approaches are very common for capturing semantic similarity between words. In
these approaches, words are represented as vectors of distributional characteristics ? most often their
co-occurrences with words in context [26, 9, 1, 32]. These representations have proven very effective
in natural language processing tasks such as sense disambiguation [30], thesaurus extraction [24, 8]
and cognitive modeling [22].
Unless otherwise mentioned, all word vectors are initialized with pre-trained d = 50-dimensional
word vectors from the unsupervised model of Huang et al. [15]. Using free Wikipedia text, their
model learns word vectors by predicting how likely it is for each word to occur in its context. Their
model uses both local context in the window around each word and global document context, thus
capturing distributional syntactic and semantic information. For further details and evaluations of
these embeddings, see [3, 7].
We use the unsupervised method of Coates et al. [6] to extract I image features from raw pixels in
an unsupervised fashion. Each image is henceforth represented by a vector x ∈ R^I.
4 Projecting Images into Semantic Word Spaces
In order to learn semantic relationships and class membership of images we project the image feature
vectors into the d-dimensional, semantic word space F . During training and testing, we consider
a set of classes Y . Some of the classes y in this set will have available training data, others will
be zero-shot classes without any training data. We define the former as the seen classes Ys and the
latter as the unseen classes Yu . Let W = Ws ? Wu be the set of word vectors in Rd for both seen
and unseen visual classes, respectively.
All training images x(i) ∈ Xy of a seen class y ∈ Ys are mapped to the word vector wy corresponding to the class name. To train this mapping, we train a neural network to minimize the following objective function:

    J(Θ) = Σ_{y ∈ Ys} Σ_{x(i) ∈ Xy} ‖ wy − θ(2) f(θ(1) x(i)) ‖²        (1)

where θ(1) ∈ R^(h×I), θ(2) ∈ R^(d×h) and the standard nonlinearity f = tanh. We define Θ = (θ(1), θ(2)). A two-layer neural network is shown to outperform a single linear mapping in the
experiments section below. The cost function is trained with standard backpropagation and L-BFGS.
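A minimal sketch of the objective in Eq. (1), with tiny made-up dimensions and data (the actual model uses I-dimensional image features, h hidden units, and d = 50-dimensional word vectors): it sums squared distances between the two-layer projection θ(2) tanh(θ(1) x) and each image's class word vector. Only objective evaluation is shown; the paper minimizes it with backpropagation and L-BFGS.

```python
import math
import random

random.seed(0)

I, H, D = 4, 3, 2  # toy image, hidden, and word-vector dimensions

def rand_mat(rows, cols, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

theta1, theta2 = rand_mat(H, I), rand_mat(D, H)

def project(x):
    """Two-layer mapping theta2 * tanh(theta1 * x) into word-vector space."""
    hidden = [math.tanh(z) for z in matvec(theta1, x)]
    return matvec(theta2, hidden)

def objective(pairs):
    """J(Theta): sum over (image, class word vector) pairs of squared distance."""
    J = 0.0
    for x, w_y in pairs:
        f = project(x)
        J += sum((a - b) ** 2 for a, b in zip(w_y, f))
    return J

pairs = [([1, 0, 0, 1], [0.5, -0.5]), ([0, 1, 1, 0], [-0.5, 0.5])]
print(objective(pairs))
```

In practice the gradient of J with respect to Θ would be computed by backpropagation and handed to an optimizer such as L-BFGS, as the text describes.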
By projecting images into the word vector space, we implicitly extend the semantics with a visual
grounding, allowing us to query the space, for instance for prototypical visual instances of a word.
Fig. 2 shows a visualization of the 50-dimensional semantic space with word vectors and images
of both seen and unseen classes. The unseen classes are cat and truck. The mapping from 50 to 2
dimensions was done with t-SNE [33]. We can observe that most classes are tightly clustered around
their corresponding word vector while the zero-shot classes (cat and truck for this mapping) do not
have close-by vectors. However, the images of the two zero-shot classes are close to semantically
similar classes (such as in the case of cat, which is close to dog and horse but is far away from car
or ship). This observation motivated the idea for first detecting images of unseen classes and then
classifying them to the zero-shot word vectors.
5 Zero-Shot Learning Model
In this section we first give an overview of our model and then describe each of its components.
In general, we want to predict p(y|x), the conditional probability for both seen and unseen classes
y ∈ Ys ∪ Yu given an image from the test set x ∈ Xt. To achieve this we will employ the semantic vectors f ∈ Ft to which these images have been mapped.
Because standard classifiers will never predict a class that has no training examples, we introduce
a binary novelty random variable which indicates whether an image is in a seen or unseen class
V ∈ {s, u}.

Figure 2: T-SNE visualization of the semantic word space. Word vector locations are highlighted and mapped image locations are shown both for images for which this mapping has been trained and unseen images. The unseen classes are cat and truck. [The plot shows word vectors for airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck, with mapped images clustered around them.]

Let Xs be the set of all feature vectors for training images of seen classes and Fs their
corresponding semantic vectors. We similarly define Fy to be the semantic vectors of class y. We
predict a class y for a new input image x and its mapped semantic vector f via:
    p(y | x, Xs, Fs, W, θ) = Σ_{V ∈ {s,u}} P(y | V, x, Xs, Fs, W, θ) P(V | x, Xs, Fs, W, θ).
Marginalizing out the novelty variable V allows us to first distinguish between seen and unseen
classes. Each type of image can then be classified differently. The seen image classifier can be a
state of the art softmax classifier while the unseen classifier can be a simple Gaussian discriminator.
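The marginalization over V can be sketched directly. The conditional distributions below are made-up placeholders standing in for the softmax classifier (seen classes) and the Gaussian discriminator (unseen classes):

```python
# P(y | V=s, x): seen-class classifier; P(y | V=u, x): unseen-class classifier.
p_seen = {"dog": 0.7, "horse": 0.3}
p_unseen = {"cat": 0.9, "truck": 0.1}
p_novel = 0.25  # P(V=u | x), from a novelty detector

def class_posterior(p_seen, p_unseen, p_novel):
    """p(y|x) = P(y|V=s,x) * P(V=s|x) + P(y|V=u,x) * P(V=u|x)."""
    post = {y: p * (1.0 - p_novel) for y, p in p_seen.items()}
    for y, p in p_unseen.items():
        post[y] = post.get(y, 0.0) + p * p_novel
    return post

post = class_posterior(p_seen, p_unseen, p_novel)
print(post)
```

Since each conditional distribution sums to one, the mixed posterior does too; the novelty probability simply reallocates mass between the seen and unseen label sets.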
5.1 Strategies for Novelty Detection
We now consider two strategies for predicting whether an image is of a seen or unseen class. The
term P(V = u | x, Xs, Fs, W, θ) is the probability of an image being in an unseen class. An image
from an unseen class will not be very close to the existing training images but will still be roughly
in the same semantic region. For instance, cat images are closest to dogs even though they are not
as close to the dog word vector as most dog images are. Hence, at test time, we can use outlier
detection methods to determine whether an image is in a seen or unseen class.
We compare two strategies for outlier detection. Both are computed on the manifold of training
images that were mapped to the semantic word space. The first method is relatively liberal in its
assessment of novelty. It uses simple thresholds on the marginals assigned to each image under isometric, class-specific Gaussians. The mapped points of seen classes are used to obtain this marginal.
For each seen class y ∈ Ys, we compute P(x|Xy, wy, Fy, θ) = P(f|Fy, wy) = N(f|wy, Σy). The
Gaussian of each class is parameterized by the corresponding semantic word vector wy for its mean
and a covariance matrix Σy that is estimated from all the mapped training points with that label. We
restrict the Gaussians to be isometric to prevent overfitting. For a new image x, the outlier detector
then becomes the indicator function that is 1 if the marginal probability is below a certain threshold
Ty for all the classes:
P(V = u|f, Xs, W, θ) := 1{∀y ∈ Ys : P(f|Fy, wy) < Ty}
We provide an experimental analysis for various thresholds T below. The thresholds are selected
to make at least some fraction of the vectors from training images above threshold, that is, to be
classified as a seen class. Intuitively, smaller thresholds result in fewer images being labeled as
unseen. The main drawback of this method is that it does not give a real probability for an outlier.
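A minimal sketch of this thresholding rule follows. The function names, the toy per-class variances, and the per-class thresholds are ours for illustration; the paper only specifies the indicator rule above with isometric (spherical) class Gaussians centered at the word vectors.

```python
import numpy as np

def gaussian_novelty(f, class_means, class_vars, thresholds):
    """Liberal novelty detector: flag f as unseen iff its density under
    every class-specific isometric Gaussian N(w_y, var_y * I) falls
    below the class threshold T_y.

    class_means : {y: word vector w_y}
    class_vars  : {y: scalar variance var_y}
    thresholds  : {y: density cutoff T_y}  (hypothetical values)
    Returns 1 (unseen) or 0 (seen)."""
    for y, w in class_means.items():
        d = len(w)
        var = class_vars[y]
        # log N(f | w_y, var * I) for an isometric Gaussian
        log_p = -0.5 * (d * np.log(2 * np.pi * var) + np.sum((f - w) ** 2) / var)
        if log_p >= np.log(thresholds[y]):
            return 0  # close enough to some seen class
    return 1
```

Raising the thresholds makes the detector label more images as unseen, which is exactly the knob varied along the x-axis of the Gaussian-model curves in the experiments.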
An alternative would be to use the method of [17] to obtain an actual outlier probability in an unsupervised way. Then, we can obtain the conditional class probability using a weighted combination
of classifiers for both seen and unseen classes (described below). Fig. 2 shows that many unseen
images are not technically outliers of the complete data manifold. Hence this method is very conservative in its assignment of novelty and therefore preserves high accuracy for seen classes.
We need to slightly modify the original approach since we distinguish between training and test
sets. We do not want to use the set of all test images since they would then not be considered
outliers anymore. The modified version has the same two parameters: k = 20, the number of
nearest neighbors that are considered to determine whether a point is an outlier and λ = 3, which
can be roughly seen as a multiplier on the standard deviation. The larger it is, the more a point has
to deviate from the mean in order to be considered an outlier.
For each point f ∈ Ft, we define a context set C(f) ⊆ Fs of k nearest neighbors in the training set
of seen categories. We can compute the probabilistic set distance pdist of each point f to the points
in C(f):
pdist_λ(f, C(f)) = λ √( Σ_{q ∈ C(f)} d(f, q)² / |C(f)| ),
where d(f, q) defines some distance function in the word space. We use Euclidean distances. Next
we define the local outlier factor:
lof_λ(f) = pdist_λ(f, C(f)) / E_{q ∈ C(f)}[pdist_λ(q, C(q))] − 1.
Large lof values indicate increasing outlierness. In order to obtain a probability, we next define a
normalization factor Z that can be seen as a kind of standard deviation of lof values in the training
set of seen classes:
Z_λ(Fs) = λ √( E_{q ∈ Fs}[(lof_λ(q))²] ).
Now, we can define the Local Outlier Probability:
LoOP(f) = max( 0, erf( lof_λ(f) / Z_λ(Fs) ) ),    (2)
where erf is the Gauss Error function. This probability can now be used to weigh the seen and unseen
classifiers by the appropriate amount given our belief about the outlierness of a new test image.
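The pdist, lof, and LoOP definitions above can be assembled into a small reference sketch. This is our own illustrative reimplementation under the stated parameters (k and λ as in the text), not the original code of [17]; it uses brute-force nearest neighbors and Euclidean distances.

```python
import numpy as np
from math import erf, sqrt

def loop_scores(F_test, F_train, k=20, lam=3.0):
    """Local Outlier Probability (LoOP) of each test point relative to
    the mapped training points F_train, following the equations above."""
    def knn(f, pool):
        d = np.linalg.norm(pool - f, axis=1)
        return pool[np.argsort(d)[:k]]

    def pdist(f, ctx):
        # lambda * sqrt of the mean squared distance to the context set
        return lam * sqrt(np.mean(np.sum((ctx - f) ** 2, axis=1)))

    def lof(f, pool):
        ctx = knn(f, pool)
        denom = np.mean([pdist(q, knn(q, F_train)) for q in ctx])
        return pdist(f, ctx) / denom - 1.0

    # normalization factor: a "standard deviation" of lof over seen classes
    Z = lam * sqrt(np.mean([lof(q, F_train) ** 2 for q in F_train]))
    return [max(0.0, erf(lof(f, F_train) / Z)) for f in F_test]
```

A point deep inside the training manifold gets a score near 0, while a point far from all seen classes gets a score near 1, which is then used directly as P(V = u | f).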
5.2 Classification
In the case where V = s, i.e. the point is considered to be of a known class, we can use any
probabilistic classifier for obtaining P (y|V = s, x, Xs ). We use a softmax classifier on the original
I-dimensional features. For the zero-shot case where V = u we assume an isometric Gaussian
distribution around each of the novel class word vectors and assign classes based on their likelihood.
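With isometric Gaussians of equal variance around each unseen word vector, the most likely class is simply the one whose word vector is nearest to the mapped image. A minimal sketch (helper name and dictionary layout are ours):

```python
import numpy as np

def zero_shot_classify(f, unseen_word_vectors):
    """For V = u, assign the unseen class whose word vector is closest
    to the mapped semantic vector f of the image.

    unseen_word_vectors : {class name: word vector}"""
    return min(unseen_word_vectors,
               key=lambda y: np.linalg.norm(f - unseen_word_vectors[y]))
```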
6 Experiments
For most of our experiments we utilize the CIFAR-10 dataset [18]. The dataset has 10 classes, each
with 5,000 32 × 32 × 3 RGB images. We use the unsupervised feature extraction method of Coates
and Ng [6] to obtain a 12,800-dimensional feature vector for each image. For word vectors, we use
a set of 50-dimensional word vectors from the Huang dataset [15] that correspond to each CIFAR
category. During training, we omit two of the 10 classes and reserve them for zero-shot analysis.
The remaining categories are used for training.
In this section we first analyze the classification performance for seen classes and unseen classes
separately. Then, we combine images from the two types of classes, and discuss the trade-offs
involved in our two unseen class detection strategies. Next, the overall performance of the entire
classification pipeline is summarized and compared to another popular approach by Lampert et al.
[21]. Finally, we run a few additional experiments to assess quality and robustness of our model.
6.1 Seen and Unseen Classes Separately
First, we evaluate the classification accuracy when presented only with images from classes that
have been used in training. We train a softmax classifier to label one of 8 classes from CIFAR-10
(2 are reserved for zero-shot learning). In this case, we achieve an accuracy of 82.5% on the set of
[Figure 4 panels: accuracy for seen and unseen classes plotted against (a) the fraction of points classified as unseen (Gaussian model), (b) the outlier probability threshold (LoOP model), and (c) the fraction unseen/outlier threshold (comparison of both models).]
Figure 4: Comparison of accuracies for images from previously seen and unseen categories when
unseen images are detected under the (a) Gaussian threshold model, (b) LoOP model. The average
accuracy on all images is shown in (c) for both models. We also show a line corresponding to the
single accuracy achieved in the Bayesian pipeline. In these examples, the zero-shot categories are
“cat” and “truck”.
classes excluding cat and truck, which closely matches the SVM-based classification results in the
original Coates and Ng paper [6] that used all 10 classes.
We now focus on classification between only two zero-shot classes. In this case, the classification is
based on isometric Gaussians which amounts to simply comparing distances between word vectors
of unseen classes and an image mapped into semantic space. In this case, the performance is good
if there is at least one seen class similar to the zero-shot class. For instance, when cat and dog are
taken out from training, the resulting zero-shot classification does not work well because none of the
other 8 categories is similar enough to both images to learn a good semantic distinction. On the other
hand, if cat and truck are taken out, then the cat vectors can be mapped to the word space thanks to
similarities to dogs and trucks can be distinguished thanks to car, yielding better performance.
Fig. 3 shows the accuracy achieved in distinguishing images belonging to various combinations of zero-shot classes. We observe, as expected, that the maximum accuracy is achieved when choosing semantically distinct categories. For instance, frog-truck and cat-truck do very well. The worst accuracy is obtained when cat and dog are chosen instead. From the figure we see that for certain combinations of zero-shot classes, we can achieve accuracies up to 90%.
[Figure 3: Visualization of classification accuracy achieved for unseen images, for different choices of zero-shot classes selected before training. Pairs of zero-shot classes used: cat-dog, plane-auto, auto-deer, deer-ship, cat-truck.]
6.2 Influence of Novelty Detectors on Average Accuracy
Our next area of investigation is to determine the average performance of the classifier for the overall dataset that includes both seen and unseen images. We compare the performance
when each image is passed through either of the two novelty detectors which decide with a certain
probability (in the second scenario) whether an image belongs to a class that was used in training.
Depending on this choice, the image is either passed through the softmax classifier for seen category
images, or assigned to the class of the nearest semantic word vector for unseen category images.
Fig. 4 shows the accuracies for test images for different choices made by the two scenarios for
novelty detection. The test set includes an equal number of images from each category, with 8
categories having been seen before, and 2 being new. We plot the accuracies of the two types
of images separately for comparison. Firstly, at the left extreme of the curve, the Gaussian unseen
image detector treats all of the images as unseen, and the LoOP model takes the probability threshold
for an image being unseen to be 0. At this point, with all unseen images in the test set being treated
as such, we achieve the highest accuracies, at 90% for this zero-shot pair. Similarly, at the other
extreme of the curve, all images are classified as belonging to a seen category, and hence the softmax
classifier for seen images gives the best possible accuracy for these images.
Between the extremes, the curves for unseen image accuracies and seen image accuracies fall and
rise at different rates. Since the Gaussian model is liberal in designating an image as belonging to an
unseen category, it treats more of the images as unseen, and hence we continue to get high unseen
class accuracies along the curve. The LoOP model, which tries to detect whether an image could
be regarded as an outlier for each class, does not assign very high outlier probabilities to zero-shot
images due to a large number of them being spread out inside the manifold of seen images (see Fig. 2
for a 2-dimensional visualization of the originally 50-dimensional space). Thus, it continues to treat
the majority of images as seen, leading to high seen class accuracies. Hence, the LoOP model can
be used in scenarios where one does not want to degrade the high performance on classes from the
training set but allow for the possibility of unseen classes.
We also see from Fig. 4 (c) that since most images in the test set belong to previously seen categories,
the LoOP model, which is conservative in assigning the unseen label, gives better overall accuracies
than the Gaussian model. In general, we can choose an acceptable threshold for seen class accuracy
and achieve a corresponding unseen class accuracy. For example, at 70% seen class accuracy in the
Gaussian model, unseen classes can be classified with accuracies of between 30% to 15%, depending
on the class. Random chance is 10%.
6.3 Combining predictions for seen and unseen classes
The final step in our experiments is to perform the full Bayesian pipeline as defined by Equation 2.
We obtain a prior probability of an image being an outlier. The LoOP model outputs a probability
for the image instance being an outlier, which we use directly. For the Gaussian threshold model, we
tune a cutoff fraction for log probabilities beyond which images are classified as outliers. We assign
probabilities 0 and 1 to either side of this threshold. We show the horizontal lines corresponding to
the overall accuracy for the Bayesian pipeline on Figure 4.
6.4 Comparison to attribute-based classification
To establish a context for comparing our model performance, we also run the attribute-based classification approach outlined by Lampert et al. [21]. We construct an attribute set of 25 attributes highlighting different aspects of the CIFAR-10 dataset, with certain aspects dealing with animal-based
attributes, and others dealing with vehicle-based attributes. We train each binary attribute classifier
separately, and use the trained classifiers to construct attribute labels for unseen classes. Finally,
we use MAP prediction to determine the final output class. The table below shows a summary of
results. Our overall accuracies for both models outperform the attribute-based model.
Bayesian pipeline (Gaussian)       74.25%
Bayesian pipeline (LoOP)           65.31%
Attribute-based (Lampert et al.)   45.25%
In general, an advantage of our approach is the ability to adapt to a domain quickly, which is difficult
in the case of the attribute-based model, since appropriate attribute types need to be carefully picked.
6.5 Novelty detection in original feature space
The analysis of novelty detectors in 6.2 involves calculation in the word space. As a comparison, we perform the same experiments with the Gaussian model in the original feature space. In the mapped space, we observe that of the 100 images assigned the highest probability of being an outlier, 12% of those images are false positives. On the other hand, in the original feature space, the false positive rate increases to 78%. This is intuitively explained by the fact that the mapping function gathers extra semantic information from the word vectors it is trained on, and images are able to cluster better around these assumed Gaussian centroids. In the original space, there is no semantic information, and the Gaussian centroids need to be inferred from among the images themselves, which are not truly representative of the center of the image space for their classes.
[Figure 5: Comparison of accuracies for images from previously seen and unseen categories for the modified CIFAR-100 dataset, after training the semantic mapping with a one-layer network and a two-layer network. The deeper mapping function performs better.]
6.6 Extension to CIFAR-100 and Analysis of Deep Semantic Mapping
So far, our tests were on the CIFAR-10 dataset. We now describe results on the more challenging CIFAR-100
dataset [18], which consists of 100 classes, with 500 32 × 32 × 3 RGB images in each class. We
remove 4 categories for which no vector representations were available in our vocabulary. We then
combined the CIFAR-10 dataset to get a set of 106 classes. Six zero-shot classes were chosen: “forest”, “lobster”, “orange”, “boy”, “truck”, and “cat”. As before, we train a neural network to map the
vectors into semantic space. With this setup, we get a peak non-zero-shot accuracy of 52.7%, which
is almost near the baseline on 100 classes [16]. When all images are labeled as zero shot, the peak
accuracy for the 6 unseen classes is 52.7%, where chance would be at 16.6%.
Because of the large semantic space corresponding to 100 classes, the proximity of an image to
its appropriate class vector is dependent on the quality of the mapping into semantic space. We
hypothesize that in this scenario a two layer neural network as described in Sec. 4 will perform
better than a single layer or linear mapping. Fig. 5 confirms this hypothesis: the zero-shot accuracy
is 10% higher with a 2-layer neural net than with a single layer, which reaches 42.2%.
6.7 Zero-Shot Classes with Distractor Words
We would like zero-shot images to be classified correctly when there are a large number of unseen categories to choose from. To evaluate such a setting with many possible but incorrect unseen classes we create a set of distractor words. We compare two scenarios. In the first, we add random nouns to the semantic space. In the second, much harder, setting we add the k nearest neighbors of a word vector. We then evaluate classification accuracy with each new set. For the zero-shot classes cat and truck, the nearest neighbor distractors include rabbit, kitten and mouse, among others.
[Figure 6: Visualization of the zero-shot classification accuracy when distractor words from the nearest neighbor set of a given category are also present.]
The accuracy does not change much if random distractor nouns are added. This shows that the semantic space is spanned well and our zero-shot learning model is quite robust. Fig. 6
shows the classification accuracies for the second scenario. Here, accuracy drops as an increasing number of semantically related nearest neighbors are added to the distractor set. This is to be
expected because there are not enough related categories to accurately distinguish very similar categories. After a certain number, the effect of a new distractor word is small. This is consistent with
our expectation that a certain number of closely-related semantic neighbors would distract the classifier; however, beyond that limited set, other categories would be further away in semantic space
and would not affect classification accuracy.
7 Conclusion
We introduced a novel model for jointly doing standard and zero-shot classification based on deep
learned word and image representations. The two key ideas are that (i) using semantic word vector
representations can help to transfer knowledge between modalities even when these representations
are learned in an unsupervised way and (ii) that our Bayesian framework that first differentiates novel
unseen classes from points on the semantic manifold of trained classes can help to combine both
zero-shot and seen classification into one framework. If the task was only to differentiate between
various zero-shot classes we could obtain accuracies of up to 90% with a fully unsupervised model.
Acknowledgments
Richard is partly supported by a Microsoft Research PhD fellowship. The authors gratefully acknowledge
the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of
Text (DEFT) Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-13-2-0040,
the DARPA Deep Learning program under contract number FA8650-10-C-7020 and NSF IIS-1159679. Any
opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and
do not necessarily reflect the view of DARPA, AFRL, or the US government.
References
[1] M. Baroni and A. Lenci. Distributional memory: A general framework for corpus-based semantics.
Computational Linguistics, 36(4):673–721, 2010.
[2] E. Bart and S. Ullman. Cross-generalization: learning novel classes from a single example by feature
replacement. In CVPR, 2005.
[3] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. J. Mach.
Learn. Res., 3, March 2003.
[4] J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain
Adaptation for Sentiment Classification. In ACL, 2007.
[5] E. Bruni, G. Boleda, M. Baroni, and N. Tran. Distributional semantics in technicolor. In ACL, 2012.
[6] A. Coates and A. Ng. The Importance of Encoding Versus Training with Sparse Coding and Vector
Quantization . In ICML, 2011.
[7] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks
with multitask learning. In ICML, 2008.
[8] J. Curran. From Distributional to Semantic Similarity. PhD thesis, University of Edinburgh, 2004.
[9] K. Erk and S. Padó. A structured vector space model for word meaning in context. In EMNLP, 2008.
[10] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
[11] Y. Feng and M. Lapata. Visual information in semantic representation. In HLT-NAACL, 2010.
[12] M. Fink. Object classification from a single example utilizing class relevance pseudo-metrics. In NIPS,
2004.
[13] X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for Large-Scale sentiment classification: A deep
learning approach. In ICML, 2011.
[14] D. Hoiem, A.A. Efros, and M. Herbert. Geometric context from a single image. In ICCV, 2005.
[15] E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng. Improving Word Representations via Global
Context and Multiple Word Prototypes. In ACL, 2012.
[16] Yangqing Jia, Chang Huang, and T. Darrell. Beyond spatial pyramids: Receptive field learning for pooled
image features. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages
3370–3377, June 2012.
[17] H. Kriegel, P. Kröger, E. Schubert, and A. Zimek. LoOP: local Outlier Probabilities. In Proceedings of
the 18th ACM conference on Information and knowledge management, CIKM ?09, 2009.
[18] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master?s thesis, Computer
Science Department, University of Toronto, 2009.
[19] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. TPAMI, 28, 2006.
[20] B. M. Lake, R. Salakhutdinov, J. Gross, and J. B. Tenenbaum. One shot learning of simple visual concepts.
In Proceedings of the 33rd Annual Conference of the Cognitive Science Society, 2011.
[21] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to Detect Unseen Object Classes by BetweenClass Attribute Transfer. In CVPR, 2009.
[22] T. K. Landauer and S. T. Dumais. A solution to Plato's problem: the Latent Semantic Analysis theory of
acquisition, induction and representation of knowledge. Psychological Review, 104(2):211–240, 1997.
[23] C.W. Leong and R. Mihalcea. Going beyond text: A hybrid image-text approach for measuring word
relatedness. In IJCNLP, 2011.
[24] D. Lin. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL, pages
768–774, 1998.
[25] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A.Y. Ng. Multimodal deep learning. In ICML, 2011.
[26] S. Pado and M. Lapata. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199, 2007.
[27] M. Palatucci, D. Pomerleau, G. Hinton, and T. Mitchell. Zero-shot learning with semantic output codes.
In NIPS, 2009.
[28] Guo-Jun Qi, C. Aggarwal, Y. Rui, Q. Tian, S. Chang, and T. Huang. Towards cross-category knowledge
propagation for learning visual concepts. In CVPR, 2011.
[29] R. Salakhutdinov, J. B. Tenenbaum, and A. Torralba. Learning to learn with compound hierarchical-deep models.
In NIPS, 2012.
[30] H. Schütze. Automatic word sense discrimination. Computational Linguistics, 24:97–124, 1998.
[31] R. Socher and L. Fei-Fei. Connecting modalities: Semi-supervised segmentation and annotation of images
using unaligned text corpora. In CVPR, 2010.
[32] P. D. Turney and P. Pantel. From frequency to meaning: Vector space models of semantics. Journal of
Artificial Intelligence Research, 37:141–188, 2010.
[33] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research,
2008.
Reasoning With Neural Tensor Networks
for Knowledge Base Completion
Richard Socher*, Danqi Chen*, Christopher D. Manning, Andrew Y. Ng
Computer Science Department, Stanford University, Stanford, CA 94305, USA
[email protected], {danqi,manning}@stanford.edu, [email protected]
Abstract
Knowledge bases are an important resource for question answering and other tasks
but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities.
Previous work represented entities as either discrete atomic units or with a single
entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows
sharing of statistical strength between, for instance, facts involving the “Sumatran
tiger” and “Bengal tiger.” Lastly, we demonstrate that all models improve when
these word vectors are initialized with vectors learned from unsupervised large
corpora. We assess the model by considering the problem of predicting additional
true relations between entities given a subset of the knowledge base. Our model
outperforms previous models and can classify unseen relationships in WordNet
and FreeBase with an accuracy of 86.2% and 90.0%, respectively.
1 Introduction
Ontologies and knowledge bases such as WordNet [1], Yago [2] or the Google Knowledge Graph are
extremely useful resources for query expansion [3], coreference resolution [4], question answering
(Siri), information retrieval or providing structured knowledge to users. However, they suffer from
incompleteness and a lack of reasoning capability.
Much work has focused on extending existing knowledge bases using patterns or classifiers applied
to large text corpora. However, not all common knowledge that is obvious to people is expressed in
text [5, 6, 2, 7]. We adopt here the complementary goal of predicting the likely truth of additional
facts based on existing facts in the knowledge base. Such factual, common sense reasoning is
available and useful to people. For instance, when told that a new species of monkeys has been
discovered, a person does not need to find textual evidence to know that this new monkey, too, will
have legs (a meronymic relationship inferred due to a hyponymic relation to monkeys in general).
We introduce a model that can accurately predict additional true facts using only an existing
database. This is achieved by representing each entity (i.e., each object or individual) in the database
as a vector. These vectors can capture facts about that entity and how probable it is part of a certain
relation. Each relation is defined through the parameters of a novel neural tensor network which
can explicitly relate two entity vectors. The first contribution of this paper is the new neural tensor
network (NTN), which generalizes several previous neural network models and provides a more
powerful way to model relational information than a standard neural network layer.
The second contribution is to introduce a new way to represent entities in knowledge bases. Previous
work [8, 9, 10] represents each entity with one vector. However, does not allow the sharing of
?
Both authors contributed equally.
[Figure 1: schematic showing a knowledge base with relations such as has part, type of, and instance of over entities such as cat, dog, tiger, and Bengal tiger; a word vector space; the Neural Tensor Network scoring a triplet (e1, R, e2); and reasoning about the relation (Bengal tiger, has part, tail), i.e., does a Bengal tiger have a tail?]
Figure 1: Overview of our model which learns vector representations for entries in a knowledge base
in order to predict new relationship triples. If combined with word representations, the relationships
can be predicted with higher accuracy and for entities that were not in the original knowledge base.
statistical strength if entity names share similar substrings. Instead, we represent each entity as the
average of its word vectors, allowing the sharing of statistical strength between the words describing
each entity e.g., Bank of China and China.
The third contribution is the incorporation of word vectors which are trained on large unlabeled text.
This readily available resource enables all models to more accurately predict relationships.
We train on relationships in WordNet and Freebase and evaluate on a heldout set of unseen relational
triplets. Our model outperforms previously introduced related models such as those of [8, 9, 10]. Our
new model, illustrated in Fig. 1, outperforms previous knowledge base models by a large margin.
We will make the code and dataset available at www.socher.org.
2 Related Work
The work most similar to ours is that by Bordes et al. [8] and Jenatton et al. [9] who also learn
vector representations for entries in a knowledge base. We implement their approach and compare
to it directly. Our new model outperforms this and other previous work. We also show that both our
and their model can benefit from initialization with unsupervised word vectors.
Another related approach is by Sutskever et al. [11] who use tensor factorization and Bayesian
clustering for learning relational structures. Instead of clustering the entities in a nonparametric
Bayesian framework we rely purely on learned entity vectors. Their computation of the truth of a
relation can be seen as a special case of our proposed model. Instead of using MCMC for inference
and learning, we use standard forward propagation and backpropagation techniques modified for
the NTN. Lastly, we do not require multiple embeddings for each entity. Instead, we consider the
subunits (space separated words) of entity names.
Our Neural Tensor Network is related to other models in the deep learning literature. Ranzato and
Hinton [12] introduced a factored 3-way Restricted Boltzmann Machine which is also parameterized
by a tensor. Recently, Yu et al. [13] introduce a model with tensor layers for speech recognition.
Their model is a special case of our model and is only applicable inside deeper neural networks. Simultaneously with this paper, we developed a recursive version of this model for sentiment analysis
[14].
There is a vast amount of work on extending knowledge bases by parsing external, text corpora
[5, 6, 2], among many others. The field of open information extraction [15], for instance, extracts
relationships from millions of web pages. This work is complementary to ours; we mainly note that
little work has been done on knowledge base extension based purely on the knowledge base itself or
with readily available resources but without re-parsing a large corpus.
Lastly, our model can be seen as learning a tensor factorization, similar to Nickel et al. [16]. In the
comparison of Bordes et al. [17], these factorization methods have been outperformed by energy-based models.
Many methods that use knowledge bases as features such as [3, 4] could benefit from a method that
maps the provided information into vector representations. We learn to modify word representations
via grounding in world knowledge. This essentially allows us to analyze word embeddings and
query them for specific relations. Furthermore, the resulting vectors could be used in other tasks
such as named entity recognition [18] or relation classification in natural language [19].
3 Neural Models for Reasoning over Relations
This section introduces the neural tensor network that reasons over database entries by learning
vector representations for them. As shown in Fig. 1, each relation triple is described by a neural
network and pairs of database entities which are given as input to that relation?s model. The model
returns a high score if they are in that relationship and a low one otherwise. This allows any fact,
whether implicitly or explicitly mentioned in the database, to be answered with a certainty score. We
first describe our neural tensor model and then show that many previous models are special cases of
it.
3.1 Neural Tensor Networks for Relation Classification
The goal is to learn models for common sense reasoning, the ability to realize that some facts hold
purely due to other existing relations. Another way to describe the goal is link prediction in an
existing network of relationships between entity nodes. The goal of our approach is to be able
to state whether two entities (e1 , e2 ) are in a certain relationship R. For instance, whether the
relationship (e1 , R, e2 ) = (Bengal tiger, has part, tail) is true and with what certainty. To this end,
we define a set of parameters indexed by $R$ for each relation's scoring function. Let $e_1, e_2 \in \mathbb{R}^d$ be
the vector representations (or features) of the two entities. For now we can assume that each value
of this vector is randomly initialized to a small uniformly random number.
The Neural Tensor Network (NTN) replaces a standard linear neural network layer with a bilinear tensor layer that directly relates the two entity vectors across multiple dimensions. The model
computes a score of how likely it is that two entities are in a certain relationship by the following
NTN-based function:
$$g(e_1, R, e_2) = u_R^T f\left( e_1^T W_R^{[1:k]} e_2 + V_R \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} + b_R \right), \qquad (1)$$
where $f = \tanh$ is a standard nonlinearity applied element-wise, $W_R^{[1:k]} \in \mathbb{R}^{d \times d \times k}$ is a tensor and
the bilinear tensor product $e_1^T W_R^{[1:k]} e_2$ results in a vector $h \in \mathbb{R}^k$, where each entry is computed by
one slice $i = 1, \ldots, k$ of the tensor: $h_i = e_1^T W_R^{[i]} e_2$. The other parameters for relation $R$ are the
standard form of a neural network: $V_R \in \mathbb{R}^{k \times 2d}$ and $u_R \in \mathbb{R}^k$, $b_R \in \mathbb{R}^k$.
Fig. 2 shows a visualization of this model. The main
advantage is that it can relate the two inputs multiplicatively instead of only implicitly through the
nonlinearity as with standard neural networks where
the entity vectors are simply concatenated. Intuitively, we can see each slice of the tensor as being
responsible for one type of entity pair or instantiation
of a relation. For instance, the model could learn that
both animals and mechanical entities such as cars
can have parts (i.e., (car, has part, x)) from different parts of the semantic word vector space. In our
experiments, we show that this results in improved
performance. Another way to interpret each tensor
slice is that it mediates the relationship between the
two entity vectors differently.
Figure 2: Visualization of the Neural Tensor Network. Each dashed box represents one slice of the tensor, in this case there are k = 2 slices.
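Eq. (1) is compact enough to sketch directly in numpy. The following is an illustrative re-implementation of the scoring function for a single relation; the helper name `ntn_score` and the random toy dimensions are ours, not the authors' code.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Score g(e1, R, e2) from Eq. (1) for one relation R.

    e1, e2 : (d,) entity vectors
    W      : (k, d, d) tensor, one (d, d) slice per output unit
    V      : (k, 2d) standard-layer weights
    b      : (k,) bias
    u      : (k,) output weights
    """
    # Bilinear tensor product: h[i] = e1^T W[i] e2 for each slice i.
    h = np.einsum('i,kij,j->k', e1, W, e2)
    z = h + V @ np.concatenate([e1, e2]) + b
    return u @ np.tanh(z)

# Toy instantiation with random parameters.
rng = np.random.default_rng(0)
d, k = 4, 2
e1, e2 = rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(k, d, d))
V = rng.normal(size=(k, 2 * d))
b, u = rng.normal(size=k), rng.normal(size=k)
print(ntn_score(e1, e2, W, V, b, u))
```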
3.2 Related Models and Special Cases
We now introduce several related models in increasing order of expressiveness and complexity. Each
model assigns a score to a triplet using a function g measuring how likely the triplet is correct. The
ideas and strengths of these models are combined in our new Neural Tensor Network defined above.
Distance Model. The model of Bordes et al. [8] scores relationships by mapping the left and right
entities to a common space using a relationship specific mapping matrix and measuring the L1
distance between the two. The scoring function for each triplet has the following form:
$$g(e_1, R, e_2) = \|W_{R,1} e_1 - W_{R,2} e_2\|_1,$$
where $W_{R,1}, W_{R,2} \in \mathbb{R}^{d \times d}$ are the parameters of relation $R$'s classifier. This similarity-based
model scores correct triplets lower (entities most certainly in a relation have 0 distance). All other
functions are trained to score correct triplets higher. The main problem with this model is that the
parameters of the two entity vectors do not interact with each other, they are independently mapped
to a common space.
Single Layer Model. The second model tries to alleviate the problems of the distance model by
connecting the entity vectors implicitly through the nonlinearity of a standard, single layer neural
network. The scoring function has the following form:
$$g(e_1, R, e_2) = u_R^T f(W_{R,1} e_1 + W_{R,2} e_2) = u_R^T f\left( [W_{R,1}\; W_{R,2}] \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} \right),$$
where $f = \tanh$, $W_{R,1}, W_{R,2} \in \mathbb{R}^{k \times d}$ and $u_R \in \mathbb{R}^{k \times 1}$ are the parameters of relation $R$'s scoring
function. While this is an improvement over the distance model, the non-linearity only provides a
weak interaction between the two entity vectors at the expense of a harder optimization problem.
Collobert and Weston [20] trained a similar model to learn word vector representations using words
in their context. This model is a special case of the tensor neural network if the tensor is set to 0.
Hadamard Model. This model was introduced by Bordes et al. [10] and tackles the issue of weak
entity vector interaction through multiple matrix products followed by Hadamard products. It is
different to the other models in our comparison in that it represents each relation simply as a single
vector that interacts with the entity vectors through several linear products all of which are parameterized by the same parameters. The scoring function is as follows:
$$g(e_1, R, e_2) = (W_1 e_1 \otimes W_{rel,1} e_R + b_1)^T (W_2 e_2 \otimes W_{rel,2} e_R + b_2),$$
where $W_1, W_{rel,1}, W_2, W_{rel,2} \in \mathbb{R}^{d \times d}$ and $b_1, b_2 \in \mathbb{R}^{d \times 1}$ are parameters that are shared by all
relations. The only relation specific parameter is eR . While this allows the model to treat relational
words and entity words the same way, we show in our experiments that giving each relationship its
own matrix operators results in improved performance. However, the bilinear form between entity
vectors is by itself desirable.
Bilinear Model. The fourth model [11, 9] fixes the issue of weak entity vector interaction through a
relation-specific bilinear form. The scoring function is as follows: $g(e_1, R, e_2) = e_1^T W_R e_2$, where
$W_R \in \mathbb{R}^{d \times d}$ are the only parameters of relation $R$'s scoring function. This is a big improvement
over the two previous models as it incorporates the interaction of two entity vectors in a simple
and efficient way. However, the model is now restricted in terms of expressive power and number
of parameters by the word vectors. The bilinear form can only model linear interactions and is
not able to fit more complex scoring functions. This model is a special case of NTNs with $V_R = 0$,
$b_R = 0$, $k = 1$, $f = \mathrm{identity}$. In comparison to bilinear models, the neural tensor has much
more expressive power which will be useful especially for larger databases. For smaller datasets the
number of slices could be reduced or even vary between relations.
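For concreteness, the four baseline scoring functions above can be sketched as follows. These are illustrative helpers (the function names are ours); parameter shapes follow the definitions in the text.

```python
import numpy as np

def distance_score(e1, e2, W1, W2):
    # Bordes et al. [8]: L1 distance between independently mapped entities
    # (lower = more plausible, unlike the other models).
    return np.abs(W1 @ e1 - W2 @ e2).sum()

def single_layer_score(e1, e2, W1, W2, u):
    # One hidden tanh layer connecting the two entity vectors implicitly.
    return u @ np.tanh(W1 @ e1 + W2 @ e2)

def hadamard_score(e1, e2, eR, W1, Wr1, W2, Wr2, b1, b2):
    # Bordes et al. [10]: elementwise (Hadamard) products; the matrices and
    # biases are shared across relations, eR is the relation's own vector.
    return (W1 @ e1 * (Wr1 @ eR) + b1) @ (W2 @ e2 * (Wr2 @ eR) + b2)

def bilinear_score(e1, e2, W):
    # Relation-specific bilinear form e1^T W_R e2.
    return e1 @ W @ e2
```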
3.3 Training Objective and Derivatives
All models are trained with contrastive max-margin objective functions. The main idea is that each
triplet in the training set $T^{(i)} = (e_1^{(i)}, R^{(i)}, e_2^{(i)})$ should receive a higher score than a triplet in which
one of the entities is replaced with a random entity. There are $N_R$ many relations, indexed by $R^{(i)}$
for each triplet. Each relation has its associated neural tensor net parameters. We call the triplet
with a random entity corrupted and denote the corrupted triplet as $T_c^{(i)} = (e_1^{(i)}, R^{(i)}, e_c)$, where we
sampled entity $e_c$ randomly from the set of all entities that can appear at that position in that relation.
Let the set of all relationships' NTN parameters be $\Omega = u, W, V, b, E$. We minimize the following
objective:
$$J(\Omega) = \sum_{i=1}^{N} \sum_{c=1}^{C} \max\left(0,\, 1 - g\left(T^{(i)}\right) + g\left(T_c^{(i)}\right)\right) + \lambda \|\Omega\|_2^2,$$
where $N$ is the number of training triplets and we score the correct relation triplet higher than its
corrupted one up to a margin of 1. For each correct triplet we sample $C$ random corrupted triplets.
We use standard $L_2$ regularization of all the parameters, weighted by the hyperparameter $\lambda$.
The model is trained by taking derivatives with respect to the five groups of parameters. The derivatives for the standard neural network weights $V$ are the same as in general backpropagation. Dropping the relation specific index $R$, we have the following derivative for the $j$-th slice of the full
tensor:
$$\frac{\partial g(e_1, R, e_2)}{\partial W^{[j]}} = u_j f'(z_j)\, e_1 e_2^T, \quad \text{where } z_j = e_1^T W^{[j]} e_2 + V_{j\cdot} \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} + b_j,$$
where $V_{j\cdot}$ is the $j$-th row of the matrix $V$ and we defined $z_j$ as the $j$-th element of the $k$-dimensional
hidden tensor layer. We use minibatched L-BFGS for optimization which converges to a local
optimum of our non-convex objective function. We also experimented with AdaGrad but found that
it performed slightly worse.
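Given the scores of correct and corrupted triplets, the contrastive max-margin objective is a few lines of numpy. This is a sketch only: the helper name `margin_loss` and the `omega_sq` argument (standing in for the squared parameter norm) are our assumptions, not the paper's code.

```python
import numpy as np

def margin_loss(g_correct, g_corrupt, lam=0.0, omega_sq=0.0):
    """Contrastive max-margin objective J.

    g_correct : (N,) scores g(T^(i)) of the correct training triplets
    g_corrupt : (N, C) scores g(T_c^(i)) of the C corrupted triplets each
    lam       : L2 regularization weight (lambda)
    omega_sq  : squared norm of all parameters
    """
    # Hinge: each correct triplet should beat its corruptions by a margin of 1.
    hinge = np.maximum(0.0, 1.0 - g_correct[:, None] + g_corrupt)
    return hinge.sum() + lam * omega_sq

# Toy check: only the corruption scoring within the margin contributes.
print(margin_loss(np.array([2.0]), np.array([[0.5, 2.5]])))  # -> 1.5
```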
3.4 Entity Representations Revisited
All the above models work well with randomly initialized entity vectors. In this section we introduce
two further improvements: representing entities by their word vectors and initializing word vectors
with pre-trained vectors.
Previous work [8, 9, 10] assigned a single vector representation to each entity of the knowledge base,
which does not allow the sharing of statistical strength between the words describing each entity.
Instead, we model each word as a $d$-dimensional vector in $\mathbb{R}^d$ and compute an entity vector as the
composition of its word vectors. For instance, if the training data includes a fact that homo sapiens
is a type of hominid and this entity is represented by two vectors vhomo and vsapiens , we may extend
the fact to the previously unseen homo erectus, even though its second word vector for erectus might
still be close to its random initialization.
Hence, for a total number of $N_E$ entities consisting of $N_W$ many unique words, if we train on
the word level (the training error derivatives are also back-propagated to these word vectors), and
represent entities by word vectors, the full embedding matrix has dimensionality $E \in \mathbb{R}^{d \times N_W}$.
Otherwise we represent each entity as an atomic single vector and train the entity embedding matrix
$E \in \mathbb{R}^{d \times N_E}$.
We represent the entity vector by averaging its word vectors. For example, $v_{\text{homo sapiens}} =
0.5(v_{\text{homo}} + v_{\text{sapiens}})$. We have also experimented with Recursive Neural Networks (RNNs) [21, 19]
for the composition. In the WordNet subset over 60% of the entities have only a single word and
over 90% have two or fewer words. Furthermore, most of the entities do not exhibit a clear
compositional structure, e.g., people names in Freebase. Hence, RNNs did not show any distinct
improvement over simple averaging and we will not include them in the experimental results.
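The averaging scheme is a one-liner. The hypothetical helper below illustrates it; the toy two-dimensional word vectors are ours, not the 100-dimensional vectors of [18].

```python
import numpy as np

def entity_vector(name, word_vecs):
    """Entity representation as the average of its word vectors.

    name      : entity string, e.g. "homo sapiens"
    word_vecs : dict mapping word -> (d,) vector
    """
    words = name.split()
    return np.mean([word_vecs[w] for w in words], axis=0)

word_vecs = {"homo": np.array([1.0, 0.0]), "sapiens": np.array([0.0, 1.0])}
print(entity_vector("homo sapiens", word_vecs))  # -> [0.5 0.5]
```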
Training word vectors has the additional advantage that we can benefit from pre-trained unsupervised word vectors, which in general capture some distributional syntactic and semantic information.
We will analyze how much it helps to use these vectors for initialization in Sec. 4.2. Unless otherwise specified, we use the d = 100-dimensional vectors provided by [18]. Note that our approach
does not explicitly deal with polysemous words. One possible future extension is to incorporate the
idea of multiple word vectors per word as in Huang et al. [22].
4 Experiments
Experiments are conducted on both WordNet [1] and FreeBase [23] to predict whether some relations hold using other facts in the database. This can be seen as common sense reasoning [24]
over known facts or link prediction in relationship networks. For instance, if somebody was born
in London, then their nationality would be British. If a German Shepherd is a dog, it is also a vertebrate. Our models can obtain such knowledge (with varying degrees of accuracy) by jointly learning
relationship classifiers and entity representations.
We first describe the datasets, then compare the above models and conclude with several analyses of
important modeling decisions, such as whether to use entity vectors or word vectors.
4.1 Datasets
Dataset    # R.   # Ent.   # Train   # Dev   # Test
WordNet    11     38,696   112,581   2,609   10,544
FreeBase   13     75,043   316,232   5,908   23,733
Table 1: The statistics for WordNet and Freebase including number of different relations #R.
Table 1 gives the statistics of the databases. For WordNet we use 112,581 relational triplets for
training. In total, there are 38,696 unique entities in 11 different relations. One important difference
to previous work is our dataset generation which filters trivial test triplets. We filter out tuples from
the testing set if either or both of their two entities also appear in the training set in a different relation
or order. For instance, if (e1 , similar to, e2 ) appears in training set, we delete (e2 , similar to, e1 ) and
(e1 , type of, e2 ), etc from the testing set. In the case of synsets containing multiple words, we pick
the first, most frequent one. For FreeBase, we use the relational triplets from People domain, and
extract 13 relations. We remove 6 of them (place of death, place of birth, location, parents, children,
spouse) from the testing set since they are very difficult to predict, e.g., the name of somebody's
spouse is hard to infer from other knowledge in the database.
It is worth noting that the setting of FreeBase is profoundly different from WordNet's. In WordNet,
e1 and e2 can be arbitrary entities; but in FreeBase, e1 is restricted to be a person?s name, and e2
can only be chosen from a finite answer set. For example, if R = gender, e2 can only be male or
female; if R = nationality, e2 can only be one of 188 country names. All the relations for testing
and their answer set sizes are shown in Fig. 3.
We use a different evaluation set from [8] because it has become apparent to us and them that
there were issues of overlap between their training and testing sets which impacted the quality and
interpretability of their evaluation.
4.2 Relation Triplets Classification
Our goal is to predict correct facts in the form of relations (e1 , R, e2 ) in the testing data. This could
be seen as answering questions such as Does a dog have a tail?, using the scores g(dog, has part,
tail) computed by the various models.
We use the development set to find a threshold $T_R$ for each relation such that if $g(e_1, R, e_2) \ge T_R$,
the relation $(e_1, R, e_2)$ holds, otherwise it does not hold.
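A minimal sketch of this per-relation threshold selection, assuming model scores and binary dev labels are given; the exhaustive scan over candidate thresholds is our choice of search procedure, not necessarily the authors'.

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick T_R maximizing dev accuracy of the rule: score >= T_R -> relation holds.

    scores : (n,) model scores g(e1, R, e2) on the dev set for one relation
    labels : (n,) 1 for true triplets, 0 for corrupted ones
    """
    best_t, best_acc = None, -1.0
    # Every observed score is a candidate threshold.
    for t in np.sort(scores):
        acc = np.mean((scores >= t).astype(int) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```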
In order to create a testing set for classification, we randomly switch entities from correct testing
triplets, resulting in a total of 2 × #Test triplets with an equal number of positive and negative examples.
In particular, we constrain the entities from the possible answer set for Freebase by only allowing
entities in a position if they appeared in that position in the dataset. For example, given a correct
triplet (Pablo Picasso, nationality, Spain), a potential negative example is (Pablo Picasso, nationality,
United States). We use the same way to generate the development set. This forces the model to focus
on harder cases and makes the evaluation harder since it does not include obvious non-relations such
as (Pablo Picasso, nationality, Van Gogh). The final accuracy is based on how many triplets are
classified correctly.
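The corruption scheme with the answer-set constraint can be sketched as follows. The helper name and deterministic seeding are ours; this simplified version only switches the e2 position, whereas the full pipeline constrains whichever position is corrupted.

```python
import random

def make_negatives(test_triplets, rng=None):
    """Switch e2 in each test triplet, restricted to entities that appear
    as e2 for the same relation, to create hard negative examples."""
    rng = rng or random.Random(0)
    # Answer set: entities that ever occur in the e2 position of each relation.
    answer_set = {}
    for e1, R, e2 in test_triplets:
        answer_set.setdefault(R, set()).add(e2)
    positives = set(test_triplets)
    negatives = []
    for e1, R, e2 in test_triplets:
        # Exclude switches that would produce a known-true triplet.
        choices = sorted(e for e in answer_set[R] if (e1, R, e) not in positives)
        if choices:
            negatives.append((e1, R, rng.choice(choices)))
    return negatives
```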
Model Comparisons
We first compare the accuracy among different models. In order to get the highest accuracy for all
the models, we cross-validate using the development set to find the best hyperparameters: (i) vector
initialization (see next section); (ii) regularization parameter $\lambda = 0.0001$; (iii) the dimensionality
of the hidden vector (for the single layer and NTN models d = 100) and (iv) number of training
iterations T = 500. Finally, the number of slices was set to 4 in our NTN model.
Table 2 shows the resulting accuracy of each model. Our Neural Tensor Network achieves an accuracy of 86.2% on the Wordnet dataset and 90.0% on Freebase, which is at least 2% higher than the
bilinear model and 4% higher than the Single Layer Model.
Model                   WordNet   Freebase   Avg.
Distance Model          68.3      61.0       64.7
Hadamard Model          80.0      68.8       74.4
Single Layer Model      76.0      85.3       80.7
Bilinear Model          84.1      87.7       85.9
Neural Tensor Network   86.2      90.0       88.1
Table 2: Comparison of accuracy of the different models described in Sec. 3.2 on both datasets.
[Figure 3: per-relation accuracy bar charts (70-100% scale) for the WordNet relations (domain topic, similar to, synset domain topic, domain region, subordinate instance of, has part, part of, member holonym, member meronym, type of, has instance) and the FreeBase relations (ethnicity (211), religion (107), cause of death (170), institution (727), profession (455), nationality (188), gender (2)).]
Figure 3: Comparison of accuracy of different relations on both datasets. For FreeBase, the number
in the bracket denotes the size of possible answer set.
First, we compare the accuracy among different relation types. Fig. 3 reports the accuracy of each
relation on both datasets. Here we use our NTN model for evaluation, other models generally have
a lower accuracy and a similar distribution among different relations. The accuracy reflects the
difficulty of inferring a relationship from the knowledge base.
On WordNet, the accuracy varies from 75.5% (domain region) to 97.5% (subordinate instance of ).
Reasoning about some relations is more difficult than others, for instance, the relation (dramatic art,
domain region, closed circuit television) is much more vague than the relation (missouri, subordinate
instance of, river). Similarly, the accuracy varies from 77.2% (institution) to 96.6% (gender) in
FreeBase. We can see that the two easiest relations for reasoning are gender and nationality, and the
two most difficult ones are institution and cause of death. Intuitively, we can infer the gender and
nationality from the name, location, or profession of a person, but we can hardly infer a person's cause
of death from all other information.
We now analyze the choice of entity representations and also the influence of word initializations. As
explained in Sec. 3.4, we compare training entity vectors ($E \in \mathbb{R}^{d \times N_E}$) and training word vectors
($E \in \mathbb{R}^{d \times N_W}$), where an entity vector is computed as the average of word vectors. Furthermore, we
compare random initialization and unsupervised initialization for training word vectors. In summary,
we explore three options: (i) entity vectors (EV); (ii) randomly initialized word vectors (WV); (iii)
word vectors initialized with unsupervised word vectors (WV-init).
Fig. 4 shows the various models and their performance with these three settings. We observe
that word vectors consistently and significantly outperform entity vectors on WordNet and this also
holds in most cases on FreeBase. It might be because the entities in WordNet share more common
words. Furthermore, we can see that most of the models have improved accuracy with initialization
from unsupervised word vectors. Even with random initialization, our NTN model with training
word vectors can reach high classification accuracy: 84.7% and 88.9% on WordNet and Freebase
respectively. In other words, our model is still able to perform good reasoning without external
textual resources.
4.3 Examples of Reasoning
We have shown that our model can achieve high accuracy when predicting whether a relational triplet
is true or not. In this section, we give some example predictions. In particular, we are interested in
how the model does transitive reasoning across multiple relationships in the knowledge base.
First, we demonstrate examples of relationship predictions by our Neural Tensor Network on WordNet. We select the first entity and a relation and then sort all the entities (represented by their word
[Figure 4: bar charts of accuracy (%) on WordNet and FreeBase for the Distance, Hadamard, Single Layer, Bilinear, and NTN models under the EV, WV, and WV-init settings.]
Figure 4: Influence of entity representations. EV: entity vectors. WV: randomly initialized word
vectors. WV-init: word vectors initialized with unsupervised semantic word vectors.
Entity e1      Relationship R            Sorted list of entities likely to be in this relationship
tube           type of                   structure; anatomical structure; device; body; body part; organ
creator        type of                   individual; adult; worker; man; communicator; instrumentalist
dubrovnik      subordinate instance of   city; town; city district; port; river; region; island
armed forces   domain region             military operation; naval forces; military officier; military court
boldness       has instance              audaciousness; aggro; abductor; interloper; confession
peole          type of                   group; agency; social group; organisation; alphabet; race
Table 3: Examples of a ranking by the model for right hand side entities in WordNet. The ranking
is based on the scores that the neural tensor network assigns to each triplet.
vector averages) by descending scores that the model assigns to the complete triplet. Table 3 shows
some examples for several relations, and most of the inferred relations among them are plausible.
Fig. 5 illustrates a real example from FreeBase in which a person's information is inferred from the
other relations provided in training. Given place of birth is Florence and profession is historian, our
model can accurately predict that Francesco Guicciardini's gender is male and his nationality is Italy.
These might be inferred from two pieces of common knowledge: (i) Florence is a city of Italy; (ii)
Francesco is a common name among males in Italy. The key is how our model can derive these facts
from the knowledge base itself, without the help of external information. For the first fact, some
relations such as Matteo Rosselli has location Florence and nationality Italy exist in the knowledge
base, which might imply the connection between Florence and Italy. For the second fact, we can see
that many other people, e.g., Francesco Patrizi, are shown Italian or male in the FreeBase, which
might imply that Francesco is a male or Italian name. It is worth noting that we do not have an
explicit relation between Francesco Guicciardini and Francesco Patrizi;
the dashed line in Fig. 5 shows the benefits from the sharing via word representations.
Figure 5: A reasoning example in FreeBase. Black lines denote relationships given in training, red lines denote relationships the model inferred. The dashed line denotes word vector sharing.
5 Conclusion
We introduced Neural Tensor Networks for knowledge base completion. Unlike previous models
for predicting relationships using entities in knowledge bases, our model allows mediated interaction of entity vectors via a tensor. The model obtains the highest accuracy in terms of predicting
unseen relationships between entities through reasoning inside a given knowledge base. It enables
the extension of databases even without external textual resources. We further show that by representing entities through their constituent words and initializing these word representations using
readily available word vectors, performance of all models improves substantially. Potential path for
future work include scaling the number of slices based on available training data for each relation
and extending these ideas to reasoning over free text.
Acknowledgments
Richard is partly supported by a Microsoft Research PhD fellowship. The authors gratefully acknowledge the
support of a Natural Language Understanding-focused gift from Google Inc., the Defense Advanced Research
Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research
Laboratory (AFRL) prime contract no. FA8750-13-2-0040, the DARPA Deep Learning program under contract
number FA8650-10-C-7020 and NSF IIS-1159679. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA,
AFRL, or the US government.
References
[1] G.A. Miller. WordNet: A Lexical Database for English. Communications of the ACM, 1995.
[2] F. M. Suchanek, G. Kasneci, and G. Weikum. Yago: a core of semantic knowledge. In
Proceedings of the 16th international conference on World Wide Web, 2007.
[3] J. Graupmann, R. Schenkel, and G. Weikum. The SphereSearch engine for unified ranked
retrieval of heterogeneous XML and web documents. In Proceedings of the 31st international
conference on Very large data bases, VLDB, 2005.
[4] V. Ng and C. Cardie. Improving machine learning approaches to coreference resolution. In
ACL, 2002.
[5] R. Snow, D. Jurafsky, and A. Y. Ng. Learning syntactic patterns for automatic hypernym
discovery. In NIPS, 2005.
[6] A. Fader, S. Soderland, and O. Etzioni. Identifying relations for open information extraction.
In EMNLP, 2011.
[7] G. Angeli and C. D. Manning. Philosophers are mortal: Inferring the truth of unseen facts. In
CoNLL, 2013.
[8] A. Bordes, J. Weston, R. Collobert, and Y. Bengio. Learning structured embeddings of knowledge bases. In AAAI, 2011.
[9] R. Jenatton, N. Le Roux, A. Bordes, and G. Obozinski. A latent factor model for highly
multi-relational data. In NIPS, 2012.
[10] A. Bordes, X. Glorot, J. Weston, and Y. Bengio. Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing. AISTATS, 2012.
[11] I. Sutskever, R. Salakhutdinov, and J. B. Tenenbaum. Modelling relational data using Bayesian
clustered tensor factorization. In NIPS, 2009.
[12] M. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-Way Restricted Boltzmann Machines
For Modeling Natural Images. AISTATS, 2010.
[13] D. Yu, L. Deng, and F. Seide. Large vocabulary speech recognition using deep tensor neural
networks. In INTERSPEECH, 2012.
[14] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts. Recursive
deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
[15] A. Yates, M. Banko, M. Broadhead, M. J. Cafarella, O. Etzioni, and S. Soderland. Textrunner:
Open information extraction on the web. In HLT-NAACL (Demonstrations), 2007.
[16] M. Nickel, V. Tresp, and H. Kriegel. A three-way model for collective learning on multi-relational data. In ICML, 2011.
[17] A. Bordes, N. Usunier, A. García-Durán, J. Weston, and O. Yakhnenko. Irreflexive and hierarchical relations as translations. CoRR, abs/1304.7158, 2013.
[18] J. Turian, L. Ratinov, and Y. Bengio. Word representations: a simple and general method for
semi-supervised learning. In Proceedings of ACL, pages 384–394, 2010.
[19] R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. Semantic Compositionality Through
Recursive Matrix-Vector Spaces. In EMNLP, 2012.
[20] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural
networks with multitask learning. In ICML, 2008.
[21] R. Socher, E. H. Huang, J. Pennington, A. Y. Ng, and C. D. Manning. Dynamic Pooling and
Unfolding Recursive Autoencoders for Paraphrase Detection. In NIPS. MIT Press, 2011.
[22] E. H. Huang, R. Socher, C. D. Manning, and A. Y. Ng. Improving Word Representations via
Global Context and Multiple Word Prototypes. In ACL, 2012.
[23] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. Freebase: a collaboratively created
graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD
international conference on Management of data, SIGMOD, 2008.
[24] N. Tandon, G. de Melo, and G. Weikum. Deriving a web-scale commonsense fact database. In
AAAI Conference on Artificial Intelligence (AAAI 2011), 2011.
Discriminative Transfer Learning with
Tree-based Priors
Nitish Srivastava
Department of Computer Science
University of Toronto
[email protected]
Ruslan Salakhutdinov
Department of Computer Science and Statistics
University of Toronto
[email protected]
Abstract
High capacity classifiers, such as deep neural networks, often struggle on classes
that have very few training examples. We propose a method for improving classification performance for such classes by discovering similar classes and transferring knowledge among them. Our method learns to organize the classes into
a tree hierarchy. This tree structure imposes a prior over the classifier's parameters. We show that the performance of deep neural networks can be improved
by applying these priors to the weights in the last layer. Our method combines
the strength of discriminatively trained deep neural networks, which typically require large amounts of training data, with tree-based priors, making deep neural
networks work well on infrequent classes as well. We also propose an algorithm
for learning the underlying tree structure. Starting from an initial pre-specified
tree, this algorithm modifies the tree to make it more pertinent to the task being
solved, for example, removing semantic relationships in favour of visual ones for
an image classification task. Our method achieves state-of-the-art classification
results on the CIFAR-100 image data set and the MIR Flickr image-text data set.
1 Introduction
Learning classifiers that generalize well is a hard problem when only few training examples are
available. For example, if we had only 5 images of a cheetah, it would be hard to train a classifier
to be good at distinguishing cheetahs against hundreds of other classes, working off pixels alone.
Any powerful enough machine learning model would severely overfit the few examples, unless it is
held back by strong regularizers. This paper is based on the idea that performance can be improved
using the natural structure inherent in the set of classes. For example, we know that cheetahs are
related to tigers, lions, jaguars and leopards. Having labeled examples from these related classes
should make the task of learning from 5 cheetah examples much easier. Knowing class structure
should allow us to borrow ?knowledge? from relevant classes so that only the distinctive features
specific to cheetahs need to be learned. At the very least, the model should confuse cheetahs with
these animals rather than with completely unrelated classes, such as cars or lamps. Our aim is to
develop methods for transferring knowledge from related tasks towards learning a new task. In the
endeavour to scale machine learning algorithms towards AI, it is imperative that we have good ways
of transferring knowledge across related problems.
Finding relatedness is also a hard problem. This is because in the absence of any prior knowledge, in
order to find which classes are related, we should first know what the classes are - i.e., have a good
model for each one of them. But to learn a good model, we need to know which classes are related.
This creates a cyclic dependency. One way to circumvent it is to use an external knowledge source,
such as a human, to specify the class structure by hand. Another way to resolve this dependency is
to iteratively learn a model of the what the classes are and what relationships exist between them,
using one to improve the other. In this paper, we follow this bootstrapping approach.
1
This paper proposes a way of learning class structure and classifier parameters in the context of
deep neural networks. The aim is to improve classification accuracy for classes with few examples.
Deep neural networks trained discriminatively with back propagation achieved state-of-the-art performance on difficult classification problems with large amounts of labeled data [2, 14, 15]. The
case of smaller amounts of data or datasets which contain rare classes has been relatively less studied. To address this shortcoming, our model augments neural networks with a tree-based prior over
the last layer of weights. We structure the prior so that related classes share the same prior. This
shared prior captures the features that are common across all members of any particular superclass.
Therefore, a class with few examples, for which the model would otherwise be unable to learn good
features for, can now have access to good features just by virtue of belonging to the superclass.
Learning a hierarchical structure over classes has been extensively studied in the machine learning,
statistics, and vision communities. A large class of models based on hierarchical Bayesian models
have been used for transfer learning [20, 4, 1, 3, 5]. The hierarchical topic model for image features
of Bart et al. [1] can discover visual taxonomies in an unsupervised fashion from large datasets but
was not designed for rapid learning of new categories. Fei-Fei et al. [5] also developed a hierarchical Bayesian model for visual categories, with a prior on the parameters of new categories that was
induced from other categories. However, their approach is not well-suited as a generic approach
to transfer learning because they learned a single prior shared across all categories. A number of
models based on hierarchical Dirichlet processes have also been used for transfer learning [23, 17].
However, almost all of the above-mentioned models are generative by nature. These models
typically resort to MCMC approaches for inference, which are hard to scale to large datasets. Furthermore, they tend to perform worse than discriminative approaches, particularly as the number of
labeled examples increases.
A large class of discriminative models [12, 25, 11] have also been used for transfer learning that
enable discovering and sharing information among related classes. Most similar to our work is [18]
which defined a generative prior over the classifier parameters and a prior over the tree structures to
identify relevant categories. However, this work focused on a very specific object detection task and
used an SVM model with pre-defined HOG features as its input. In this paper, we demonstrate our
method on two different deep architectures: (1) convolutional nets with pixels as input and single-label softmax outputs, and (2) fully connected nets pretrained using deep Boltzmann machines with
image features and text tokens as input and multi-label logistic outputs. Our model improves performance over strong baselines in both cases, lending some measure of universality to the approach. In
essence, our model learns low-level features, high-level features, as well as a hierarchy over classes
in an end-to-end way.
2 Model Description
Let X = {x1 , x2 , . . . , xN } be a set of N data points and Y = {y1 , y2 , . . . , yN } be the set of
corresponding labels, where each label yi is a K dimensional vector of targets. These targets could
be binary, one-of-K, or real-valued. In our setting, it is useful to think of each xi as an image and
yi as a one-of-K encoding of the label. The model is a multi-layer neural network (see Fig. 1a). Let
w denote the set of all parameters of this network (weights and biases for all the layers), excluding
the top-level weights, which we denote separately as β ∈ R^{D×K}. Here D represents the number of
hidden units in the last hidden layer. The conditional distribution over Y can be expressed as
    P(Y|X) = ∫_{w,β} P(Y|X, w, β) P(w) P(β) dw dβ.    (1)
In general, this integral is intractable, and we typically resort to MAP estimation to determine the
values of the model parameters w and β that maximize

    log P(Y|X, w, β) + log P(w) + log P(β).

Here, log P(Y|X, w, β) is the log-likelihood function and the other terms are priors over the model's
parameters. A typical choice of prior is a Gaussian distribution with diagonal covariance:

    β_k ~ N(0, (1/λ) I_D),    ∀k ∈ {1, . . . , K}.

Here β_k ∈ R^D denotes the classifier parameters for class k. Note that this prior assumes that each β_k
is independent of all other β_i's. In other words, a priori, the weights for label k are not related to any
[Figure: (a) the network architecture — input x, low-level features, high-level features f_w(x) with parameters w, and top-level weights β (β_car, β_tiger, ...) producing predictions y; (b) the class tree — superclass vectors θ_vehicle and θ_animal as parents of class vectors β_car, β_truck, β_tiger, β_cheetah.]
(b)
Figure 1: Our model: A deep neural network with priors over the classification parameters. The priors are
derived from a hierarchy over classes.
other label's weights. This is a reasonable assumption when nothing is known about the labels. It
works quite well for most applications with large number of labeled examples per class. However, if
we know that the classes are related to one another, priors which respect these relationships may be
more suitable. Such priors would be crucial for classes that only have a handful of training examples,
since the effect of the prior would be more pronounced. In this work, we focus on developing such
a prior.
2.1 Learning With a Fixed Tree Hierarchy
Let us first assume that the classes have been organized into a fixed tree hierarchy which is available
to us. We will relax this assumption later by placing a hierarchical non-parametric prior over the tree
structures. For ease of exposition, consider a two-level hierarchy¹, as shown in Fig. 1b. There are
K leaf nodes corresponding to the K classes. They are connected to S super-classes which group
together similar basic-level classes. Each leaf node k is associated with a weight vector β_k ∈ R^D.
Each super-class node s is associated with a vector θ_s ∈ R^D, s = 1, ..., S. We define the following
generative model for β:

    θ_s ~ N(0, (1/λ₁) I_D),        β_k ~ N(θ_{parent(k)}, (1/λ₂) I_D).    (2)
This prior expresses relationships between classes. For example, it asserts that β_car and β_truck are
both deviations from θ_vehicle. Similarly, β_cat and β_dog are deviations from θ_animal. Eq. 1 can now be
re-written to include θ as follows:

    P(Y|X) = ∫_{w,β,θ} P(Y|X, w, β) P(w) P(β|θ) P(θ) dw dβ dθ.    (3)

We can perform MAP inference to determine the values of {w, β, θ} that maximize

    log P(Y|X, w, β) + log P(w) + log P(β|θ) + log P(θ).
In terms of a loss function, we wish to minimize

    L(w, β, θ) = − log P(Y|X, w, β) − log P(w) − log P(β|θ) − log P(θ)
               = − log P(Y|X, w, β) + (λ_w/2) ||w||² + (λ₂/2) Σ_{k=1}^{K} ||β_k − θ_{parent(k)}||² + (λ₁/2) ||θ||².    (4)
Note that by fixing the value of θ = 0, this loss function recovers our standard loss function. The
choice of normal distributions in Eq. 2 leads to a nice property: maximization over θ, given β, can
be done in closed form. It just amounts to taking a (scaled) average of all β_k's which are children of
θ_s. Let C_s = {k | parent(k) = s}; then

    θ_s* = (1 / (|C_s| + λ₁/λ₂)) Σ_{k ∈ C_s} β_k.    (5)
¹ The model can be easily generalized to deeper hierarchies.
Algorithm 1: Procedure for learning the tree.
 1: Given: X, Y, classes K, superclasses S, initial z, L, M.
 2: Initialize w, β.
 3: repeat
 4:   // Optimize w, β with fixed z.
 5:   w, β ← SGD(X, Y, w, β, z) for L steps.
 6:   // Optimize z, θ with fixed w.
 7:   RandomPermute(K)
 8:   for k in K do
 9:     for s in S ∪ {s_new} do
10:       z_k ← s
11:       θ^s ← SGD(f_w(X), Y, θ, z) for M steps.
12:     end for
13:     s′ ← ChooseBestSuperclass(θ^1, θ^2, . . .)
14:     θ ← θ^{s′}, z_k ← s′, S ← S ∪ {s′}
15:   end for
16: until convergence

[Figure: candidate repositionings for class "van" — under superclass vehicle (car, truck, van), under animal (cat, dog, van), or under a new superclass s_new.]
Therefore, the loss function in Eq. 4 can be optimized by iteratively performing the following two
steps. In the first step, we maximize over w and β keeping θ fixed. This can be done using standard
stochastic gradient descent (SGD). Then, we maximize over θ keeping β fixed. This can be done in
closed form using Eq. 5. In practical terms, the second step is almost instantaneous and only needs
to be performed after every T gradient descent steps, where T is around 10-100. Therefore, learning
is almost identical to standard gradient descent. It allows us to exploit the structure over labels at a
very nominal cost in terms of computational time.
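The alternating scheme is easy to sketch. Below is a minimal NumPy illustration of the closed-form superclass update of Eq. 5 and the tree-prior terms of Eq. 4; the function names and toy dimensions are our own, not code from the paper.

```python
import numpy as np

def update_superclasses(beta, parent, num_super, lam1, lam2):
    """Closed-form maximization over theta given beta (Eq. 5):
    theta_s = (sum of beta_k over children of s) / (|C_s| + lam1/lam2)."""
    D, K = beta.shape
    theta = np.zeros((D, num_super))
    for s in range(num_super):
        children = [k for k in range(K) if parent[k] == s]
        if children:
            theta[:, s] = beta[:, children].sum(axis=1) / (len(children) + lam1 / lam2)
    return theta

def tree_prior_penalty(beta, theta, parent, lam1, lam2):
    """The -log P(beta|theta) - log P(theta) terms of Eq. 4 (up to constants)."""
    deviations = beta - theta[:, parent]
    return 0.5 * lam2 * np.sum(deviations ** 2) + 0.5 * lam1 * np.sum(theta ** 2)
```

During training one would interleave T SGD steps on (w, β) with a call to `update_superclasses`, since the θ update is exact and essentially free.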
2.2 Learning the Tree Hierarchy
So far we have assumed that our model is given a fixed tree hierarchy. Now, we show how the tree
structure can be learned during training. Let z be a K-length vector that specifies the tree structure,
that is, z_k = s indicates that class k is a child of super-class s. We place a non-parametric Chinese
Restaurant Process (CRP) prior over z. This prior P (z) gives the model the flexibility to have any
number of superclasses. The CRP prior extends a partition of k classes to a new class by adding
the new class either to one of the existing superclasses or to a new superclass. The probability of
adding it to superclass s is c_s/(k + γ), where c_s is the number of children of superclass s. The probability
of creating a new superclass is γ/(k + γ). In essence, it prefers to add a new node to an existing large
superclass instead of spawning a new one. The strength of this preference is controlled by γ.
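As a concrete illustration, here is a small NumPy sketch of these CRP probabilities (the function names are ours, not from the paper):

```python
import numpy as np

def crp_attachment_probs(counts, gamma):
    """Probabilities of attaching the next class to each existing superclass
    (c_s / (k + gamma)) or to a brand-new one (gamma / (k + gamma)).
    counts[s] is the number of classes currently under superclass s."""
    k = sum(counts)
    return np.array([c / (k + gamma) for c in counts] + [gamma / (k + gamma)])

def sample_partition(num_classes, gamma, rng):
    """Draw a superclass assignment z for num_classes classes from the CRP prior."""
    counts, z = [], []
    for _ in range(num_classes):
        p = crp_attachment_probs(counts, gamma)
        s = int(rng.choice(len(p), p=p))
        if s == len(counts):      # a new superclass was created
            counts.append(0)
        counts[s] += 1
        z.append(s)
    return z
```

Larger γ yields more superclasses; drawing a sample from this prior is also one way to generate the random initial tree used in the experiments.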
Equipped with the CRP prior over z, the conditional over Y takes the following form
    P(Y|X) = Σ_z [ ∫_{w,β,θ} P(Y|X, w, β) P(w) P(β|θ, z) P(θ) dw dβ dθ ] P(z).    (6)
MAP inference in this model leads to the following optimization problem
    max_{w,β,θ,z}  log P(Y|X, w, β) + log P(w) + log P(β|θ, z) + log P(θ) + log P(z).
Maximization over z is problematic because the domain of z is a huge discrete set. Fortunately, this
can be approximated using a simple and parallelizable search procedure.
We first initialize the tree sensibly. This can be done by hand or by extracting a semantic tree from
WordNet [16]. Let the number of superclasses in the tree be S. We optimize over {w, β, θ} for
L steps using this tree. Then, a leaf node is picked uniformly at random from the tree and S + 1
tree proposals are generated as follows. S proposals are generated by attaching this leaf node to
each of the S superclasses. One additional proposal is generated by creating a new super-class and
attaching the label to it. This process is shown in Algorithm 1. We then re-estimate {β, θ} for
each of these S + 1 trees for a few steps. Note that each of the S + 1 optimization problems can
be performed independently, in parallel. The best tree is then picked using a validation set. This
process is repeated by picking another node and again trying all possible locations for it. After each
node has been picked once and potentially repositioned, we take the resulting tree and go back to
[Figure: rows of example images for the classes whale, dolphin, willow tree, oak tree, lamp, clock, leopard, tiger, ray, and flatfish.]
Figure 2: Examples from CIFAR-100. Five randomly chosen examples from 8 of the 100 classes are shown.
Classes in each row belong to the same superclass.
optimizing w, β using this newly learned tree in place of the given tree. If the position of any class
in the tree did not change during a full pass through all the classes, the hierarchy discovery was
said to have converged. When training this model on CIFAR-100, this amounts to interrupting the
stochastic gradient descent after every 10,000 steps to find a better tree. The amount of time spent
in learning this tree is a small fraction of the total time (about 5%).
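The proposal-and-selection loop of Algorithm 1 can be sketched as follows. Here `score_fn` is a stand-in we introduce for the inner M-step SGD optimization plus validation-set evaluation; it is not part of the paper's code.

```python
def reposition_class(z, k, num_super, score_fn):
    """Try attaching class k to each of the num_super existing superclasses
    and to one new superclass; keep whichever assignment scores best.
    The num_super + 1 evaluations are independent and could run in parallel."""
    best_s, best_score = None, float("-inf")
    for s in range(num_super + 1):          # index num_super denotes a new superclass
        z_prop = list(z)
        z_prop[k] = s
        score = score_fn(z_prop)            # e.g. validation accuracy after M SGD steps
        if score > best_score:
            best_s, best_score = s, score
    z_new = list(z)
    z_new[k] = best_s
    return z_new, max(num_super, best_s + 1), best_score
```

A full pass applies this to every class in a random order; if no class moves during a pass, the tree search has converged.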
3 Experiments on CIFAR-100
The CIFAR-100 dataset [13] consists of 32 × 32 color images belonging to 100 classes. These
classes are divided into 20 groups of 5 each. For example, the superclass fish contains aquarium
fish, flatfish, ray, shark and trout; and superclass flowers contains orchids, poppies, roses, sunflowers
and tulips. Some examples from this dataset are shown in Fig. 2. We chose this dataset because it has
a large number of classes with a few examples in each, making it ideal for demonstrating the utility
of transfer learning. There are only 600 examples of each class of which 500 are in the training set
and 100 in the test set. We preprocessed the images by doing global contrast normalization followed
by ZCA whitening.
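This preprocessing can be sketched in a few lines of NumPy (a minimal version of our own; the regularization constants are illustrative, not the paper's exact settings):

```python
import numpy as np

def global_contrast_normalize(X, eps=1e-8):
    """Per image: subtract the mean pixel value and divide by the pixel
    standard deviation. X holds one flattened image per row."""
    X = X - X.mean(axis=1, keepdims=True)
    return X / (X.std(axis=1, keepdims=True) + eps)

def fit_zca(X, eps=1e-2):
    """Compute a ZCA whitening transform from training data: rotate into the
    covariance eigenbasis, rescale to unit variance, rotate back."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / Xc.shape[0]
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return mean, W

def apply_zca(X, mean, W):
    return (X - mean) @ W
```

The transform is fit on the training set and then applied unchanged to the test set.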
3.1 Model Architecture and Training Details
We used a convolutional neural network with 3 convolutional hidden layers followed by 2 fully connected hidden layers. All hidden units used a rectified linear activation function. Each convolutional
layer was followed by a max-pooling layer. Dropout [8] was applied to all the layers of the network with the probability of retaining a hidden unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the
different layers of the network (going from input to convolutional layers to fully connected layers).
Max-norm regularization [8] was used for weights in both convolutional and fully connected layers.
The initial tree was chosen based on the superclass structure given in the data set. We learned a
tree using Algorithm 1 with L = 10, 000 and M = 100. The final learned tree is provided in the
supplementary material.
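For concreteness, here is a minimal NumPy sketch of the two regularizers mentioned above (inverted-dropout scaling and a per-column norm constraint are common formulations, not necessarily the exact variants used in the paper):

```python
import numpy as np

def dropout_mask(shape, keep_prob, rng):
    """Zero each unit with probability 1 - keep_prob; scale survivors by
    1/keep_prob so activations keep the same expected value (inverted dropout)."""
    return (rng.random(shape) < keep_prob) / keep_prob

def max_norm_project(W, c):
    """Rescale any column of W whose L2 norm exceeds c back onto the ball
    of radius c (max-norm regularization)."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W * np.minimum(1.0, c / np.maximum(norms, 1e-12))
```

Applied after each gradient update, `max_norm_project` keeps the incoming weight vector of every hidden unit inside a fixed-radius ball.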
3.2 Experiments with Few Examples per Class
In our first set of experiments, we worked in a scenario where each class has very few examples.
The aim was to assess whether the proposed model allows related classes to borrow information
from each other. For a baseline, we used a standard convolutional neural network with the same
architecture as our model. This is an extremely strong baseline and already achieved excellent
results, outperforming all previously reported results on this dataset as shown in Table 1. We created
5 subsets of the data by randomly choosing 10, 25, 50, 100 and 250 examples per class, and trained
four models on each subset. The first was the baseline. The second was our model using the given
tree structure (100 classes grouped into 20 superclasses) which was kept fixed during training. The
third and fourth were our models with a learned tree structure. The third one was initialized with
a random tree and the fourth with the given tree. The random tree was constructed by drawing a
sample from the CRP prior and randomly assigning classes to leaf nodes.
The test performance of these models is compared in Fig. 3a. We observe that if the number of
examples per class is small, the fixed tree model already provides significant improvement over
the baseline. The improvement diminishes as the number of examples increases and eventually
the performance falls below the baseline (61.7% vs 62.8%). However, the learned tree model does
[Figure: (a) test classification accuracy vs. number of training examples per label (10–500) for Baseline, Fixed Tree, and Learned Tree; (b) per-class improvement over the baseline, with the 100 classes sorted by improvement.]
Figure 3: Classification results on CIFAR-100. Left: Test set classification accuracy for different number of
training examples per class. Right: Improvement over the baseline when trained on 10 examples per class. The
learned tree models were initialized at the given tree.
Method                                                  | Test Accuracy %
Conv Net + max pooling                                  | 56.62 ± 0.03
Conv Net + stochastic pooling [24]                      | 57.49
Conv Net + maxout [6]                                   | 61.43
Conv Net + max pooling + dropout (Baseline)             | 62.80 ± 0.08
Baseline + fixed tree                                   | 61.70 ± 0.06
Baseline + learned tree (Initialized randomly)          | 61.20 ± 0.35
Baseline + learned tree (Initialized from given tree)   | 63.15 ± 0.15
Table 1: Classification results on CIFAR-100. All models were trained on the full training set.
better. Even with 10 examples per class, it gets an accuracy of 18.52% compared to the baseline
model's 12.81% or the fixed tree model's 16.29%. Thus the model can get almost a 50% relative
improvement when few examples are available. As the number of examples increases, the relative
improvement decreases. However, even for 500 examples per class, the learned tree model improves
upon the baseline, achieving a classification accuracy of 63.15%. Note that initializing the model
with a random tree decreases model performance, as shown in Table 1.
Next, we analyzed the learned tree model to find the source of the improvements. We took the model
trained on 10 examples per class and looked at the classification accuracy separately for each class.
The aim was to find which classes gain or suffer the most. Fig. 3b shows the improvement obtained
by different classes over the baseline, where the classes are sorted by the value of the improvement
over the baseline. Observe that about 70 classes benefit in different degrees from learning a hierarchy
for parameter sharing, whereas about 30 classes perform worse as a result of transfer. For the learned
tree model, the classes which improve most are willow tree (+26%) and orchid (+25%). The classes
which lose most from the transfer are ray (-10%) and lamp (-10%).
We hypothesize that the reason why certain classes gain a lot is that they are very similar to other
classes within their superclass and thus stand to gain a lot by transferring knowledge. For example,
the superclass for willow tree contains other trees, such as maple tree and oak tree. However, ray
belongs to superclass fish which contains more typical examples of fish that are very dissimilar in
appearance. With the fixed tree, such transfer hurts performance (ray did worse by -29%). However,
when the tree was learned, this class split away from the fish superclass to join a new superclass and
did not suffer as much. Similarly, lamp was under household electrical devices along with keyboard
and clock. Putting different kinds of electrical devices under one superclass makes semantic sense
but does not help for visual recognition tasks. This highlights a key limitation of hierarchies based
on semantic knowledge and advocates the need to learn the hierarchy so that it becomes relevant to
the task at hand. The full learned tree is provided in the supplementary material.
3.3 Experiments with Few Examples for One Class
In this set of experiments, we worked in a scenario where there are lots of examples for different
classes, but only a few examples of one particular class. The aim was to see whether the model
transfers information from other classes that it has learned to this "rare" class. We constructed
training sets by randomly drawing either 5, 10, 25, 50, 100, 250 or 500 examples from the dolphin
[Figure 4: two panels (a) and (b) plotting test classification accuracy against the number of training cases for dolphin, with curves for the Baseline, Fixed Tree, and Learned Tree models.]
Figure 4: Results on CIFAR-100 with few examples for the dolphin class. Left: Test set classification accuracy for different numbers of examples. Right: Accuracy when classifying a dolphin as whale or shark is also considered correct.
[Figure 5: example images from the MIR-Flickr dataset shown with their class labels (e.g. "baby, female, people, portrait"; "clouds, sea, sky, transport, water") and user tags (e.g. "claudia"; "barco, pesca, boattosail, navegação"; some images have no text).]
Figure 5: Some examples from the MIR-Flickr dataset. Each instance in the dataset is an image along with
textual tags. Each image has multiple classes.
class and all 500 training examples for the other 99 classes. We trained the baseline, fixed tree and
learned tree models with each of these datasets. The objective was kept the same as before and
no special attention was paid to the dolphin class. Fig. 4a shows the test accuracy for correctly
predicting the dolphin class. We see that transfer learning helped tremendously. For example, with
10 cases, the baseline gets 0% accuracy whereas the transfer learning model can get around 3%.
Even for 250 cases, the learned tree model gives significant improvements (31% to 34%). We
repeated this experiment for classes other than dolphin as well and found similar improvements. See
the supplementary material for a more detailed description.
In addition to performing well on the class with few examples, we would also want any errors
to be sensible. To check if this was indeed the case, we evaluated the performance of the above
models treating the classification of dolphin as shark or whale to also be correct, since we believe
these to be reasonable mistakes. Fig. 4b shows the classification accuracy under this assumption for
different models. Observe that the transfer learning methods provide significant improvements over
the baseline. Even when we have just 1 example for dolphin, the accuracy jumps from 45% to 52%.
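The relaxed evaluation described above (a dolphin classified as shark or whale still counts as correct) can be sketched as follows; the function and the confusion sets are illustrative, not the paper's evaluation code:

```python
def relaxed_accuracy(preds, labels, acceptable):
    # A prediction is correct if it matches the true label or falls in the
    # label's set of acceptable confusions (e.g. dolphin -> shark/whale).
    correct = sum(1 for p, y in zip(preds, labels)
                  if p == y or p in acceptable.get(y, set()))
    return correct / len(labels)

acceptable = {"dolphin": {"shark", "whale"}}
preds = ["dolphin", "shark", "whale", "cat"]
labels = ["dolphin"] * 4
print(relaxed_accuracy(preds, labels, acceptable))  # -> 0.75
```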
4 Experiments on MIR Flickr
The Multimedia Information Retrieval Flickr Data set [9] consists of 1 million images collected
from the social photography website Flickr along with their user assigned tags. Among the 1 million
images, 25,000 have been annotated using 38 labels. These labels include object categories such as,
bird, tree, people, as well as scene categories, such as indoor, sky and night. Each image has multiple
labels. Some examples are shown in Fig. 5.
This dataset is different from CIFAR-100 in many ways. In the CIFAR-100 dataset, our model was
trained using image pixels as input and each image belonged to only one class. MIR-FLickr is a
multimodal dataset for which we used standard computer vision image features and word counts
as inputs. The CIFAR-100 models used a multi-layer convolutional network, whereas for this
dataset we use a fully connected neural network initialized by unrolling a Deep Boltzmann Machine
(DBM) [19]. Moreover, this dataset offers a more natural class distribution where some classes occur more often than others. For example, sky occurs in over 30% of the instances, whereas baby
occurs in fewer than 0.4%. We also used 975,000 unlabeled images for unsupervised training of the
DBM. We use the publicly available features and train-test splits from [21].
[Figure 6: two panels. (a) Class-wise improvement: improvement in Average Precision over the baseline for the Fixed Tree and Learned Tree models, with classes sorted by improvement. (b) Improvement vs. number of examples: improvement in Average Precision against the fraction of instances containing the class.]
Figure 6: Results on MIR Flickr. Left: Improvement in Average Precision over the baseline for different
methods. Right: Improvement of the learned tree model over the baseline for different classes along with the
fraction of test cases which contain that class. Each dot corresponds to a class. Classes with few examples
(towards the left of plot) usually get significant improvements.
Method                                                  MAP
Logistic regression on Multimodal DBM [21]              0.609
Multiple Kernel Learning SVMs [7]                       0.623
TagProp [22]                                            0.640
Multimodal DBM + finetuning + dropout (Baseline)        0.641 ± 0.004
Baseline + fixed tree                                   0.648 ± 0.004
Baseline + learned tree (initialized from given tree)   0.651 ± 0.005

Table 2: Mean Average Precision obtained by different models on the MIR-Flickr data set.
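The MAP numbers in Table 2 average per-class average precision over the 38 labels. A minimal sketch of the standard ranking-based AP computation (the exact evaluation protocol of [21] may differ in details such as tie handling):

```python
def average_precision(scores, relevant):
    # Rank test instances by predicted score (descending); AP averages the
    # precision measured at each rank where a relevant instance appears.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if relevant[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(1, sum(relevant))

scores = [0.9, 0.8, 0.7, 0.6]   # predicted confidence for one label
relevant = [1, 0, 1, 0]         # ground truth for that label
print(average_precision(scores, relevant))  # -> 0.8333...
```

Mean Average Precision is then the mean of this quantity over all labels.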
4.1 Model Architecture and Training Details
In order to make our results directly comparable to [21], we used the same network architecture as
described therein. The authors of the dataset [10] provided a high-level categorization of the classes
which we use to create an initial tree. This tree structure and the one learned by our model are shown
in the supplementary material. We used Algorithm 1 with L = 500 and M = 100.
4.2 Classification Results
For a baseline we used a Multimodal DBM model after finetuning it discriminatively with dropout.
This model already achieves state-of-the-art results, making it a very strong baseline. The results
of the experiment are summarized in Table 2. The baseline achieved a MAP of 0.641, whereas our
model with a fixed tree improved this to 0.648. Learning the tree structure further pushed this up to
0.651. For this dataset, the learned tree was not significantly different from the given tree. Therefore,
we expected the improvement from learning the tree to be marginal. However, the improvement over
the baseline was significant, showing that transferring information between related classes helped.
Looking closely at the source of gains, we found that similar to CIFAR-100, some classes gain
and others lose as shown in Fig. 6a. It is encouraging to note that classes which occur rarely in
the dataset improve the most. This can be seen in Fig. 6b which plots the improvements of the
learned tree model over the baseline against the fraction of test instances that contain that class. For
example, the average precision for baby which occurs in only 0.4% of the test cases improves from
0.173 (baseline) to 0.205 (learned tree). This class borrows from people and portrait both of which
occur very frequently. The performance on sky which occurs in 31% of the test cases stays the same.
5 Conclusion
We proposed a model that augments standard neural networks with tree-based priors over the classification parameters. These priors follow the hierarchical structure over classes and enable the model
to transfer knowledge from related classes. We also proposed a way of learning the hierarchical
structure. Experiments show that the model achieves excellent results on two challenging datasets.
References
[1] E. Bart, I. Porteous, P. Perona, and M. Welling. Unsupervised learning of visual taxonomies. In CVPR, pages 1–8, 2008.
[2] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. Large-Scale Kernel Machines, 2007.
[3] Hal Daumé III. Bayesian multitask learning with latent hierarchies. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, pages 135–142, Arlington, Virginia, United States, 2009. AUAI Press.
[4] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In ACM SIGKDD, 2004.
[5] Li Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Trans. Pattern Analysis and Machine Intelligence, 28(4):594–611, April 2006.
[6] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1319–1327, 2013.
[7] M. Guillaumin, J. Verbeek, and C. Schmid. Multimodal semi-supervised learning for image classification. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 902–909, June 2010.
[8] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[9] Mark J. Huiskes and Michael S. Lew. The MIR Flickr retrieval evaluation. In MIR '08: Proceedings of the 2008 ACM International Conference on Multimedia Information Retrieval, New York, NY, USA, 2008. ACM.
[10] Mark J. Huiskes, Bart Thomee, and Michael S. Lew. New trends and ideas in visual concept detection: the MIR Flickr retrieval evaluation initiative. In Multimedia Information Retrieval, pages 527–536, 2010.
[11] Zhuoliang Kang, Kristen Grauman, and Fei Sha. Learning with whom to share in multi-task feature learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), ICML '11, pages 521–528, New York, NY, USA, June 2011. ACM.
[12] Seyoung Kim and Eric P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In ICML, pages 543–550, 2010.
[13] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25. MIT Press, 2012.
[15] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th International Conference on Machine Learning, pages 609–616, 2009.
[16] George A. Miller. WordNet: a lexical database for English. Commun. ACM, 38(11):39–41, November 1995.
[17] R. Salakhutdinov, J. Tenenbaum, and A. Torralba. Learning to learn with compound hierarchical-deep models. In NIPS. MIT Press, 2011.
[18] R. Salakhutdinov, A. Torralba, and J. Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR, 2011.
[19] R. R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 12, 2009.
[20] Babak Shahbaba and Radford M. Neal. Improving classification when a class hierarchy is available using a hierarchy-based prior. Bayesian Analysis, 2(1):221–238, 2007.
[21] Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep Boltzmann machines. In Advances in Neural Information Processing Systems 25, pages 2231–2239. MIT Press, 2012.
[22] Jakob Verbeek, Matthieu Guillaumin, Thomas Mensink, and Cordelia Schmid. Image annotation with TagProp on the MIRFLICKR set. In 11th ACM International Conference on Multimedia Information Retrieval (MIR '10), pages 537–546. ACM Press, 2010.
[23] Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. Multi-task learning for classification with Dirichlet process priors. J. Mach. Learn. Res., 8:35–63, May 2007.
[24] Matthew D. Zeiler and Rob Fergus. Stochastic pooling for regularization of deep convolutional neural networks. CoRR, abs/1301.3557, 2013.
[25] Alon Zweig and Daphna Weinshall. Hierarchical regularization cascade for joint learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 37–45, May 2013.
Refining PID Controllers using Neural Networks
Gary M. Scott
Department of Chemical Engineering
1415 Johnson Drive
University of Wisconsin
Madison, WI 53706
Jude W. Shavlik
Department of Computer Sciences
1210 W. Dayton Street
University of Wisconsin
Madison, WI 53706
W. Harmon Ray
Department of Chemical Engineering
1415 Johnson Drive
University of Wisconsin
Madison, WI 53706
Abstract
The KBANN approach uses neural networks to refine knowledge that can
be written in the form of simple propositional rules. We extend this idea
further by presenting the MANNCON algorithm by which the mathematical
equations governing a PID controller determine the topology and initial
weights of a network, which is further trained using backpropagation. We
apply this method to the task of controlling the outflow and temperature
of a water tank, producing statistically-significant gains in accuracy over
both a standard neural network approach and a non-learning PID controller. Furthermore, using the PID knowledge to initialize the weights of
the network produces statistically less variation in testset accuracy when
compared to networks initialized with small random numbers.
1 INTRODUCTION
Research into the design of neural networks for process control has largely ignored
existing knowledge about the task at hand. One form this knowledge (often called
the "domain theory") can take is embodied in traditional controller paradigms. The
recently-developed KBANN (Knowledge-Based Artificial Neural Networks) approach
(Towell et al., 1990) addresses this issue for tasks for which a domain theory (written
using simple propositional rules) is available. The basis of this approach is to use
the existing knowledge to determine an appropriate network topology and initial
weights, such that the network begins its learning process at a "good" starting
point.
This paper describes the MANNCON (Multivariable Artificial Neural Network Control) algorithm, a method of using a traditional controller paradigm to determine
the topology and initial weights of a network. The use of a PID controller in this
way eliminates network-design problems such as the choice of network topology
(i.e., the number of hidden units) and reduces the sensitivity of the network to the
initial values of the weights. Furthermore, the initial configuration of the network
is closer to its final state than it would normally be in a randomly-configured network. Thus, the MANNCON networks perform better and more consistently than
the standard, randomly-initialized three-layer approach.
The task we examine here is learning to control a Multiple-Input, Multiple-Output
(MIMO) system. There are a number of reasons to investigate this task using neural networks. One, it usually involves nonlinear input-output relationships, which
matches the nonlinear nature of neural networks. Two, there have been a number
of successful applications of neural networks to this task (Bhat & McAvoy, 1990;
Jordan & Jacobs, 1990; Miller et al., 1990). Finally, there are a number of existing
controller paradigms which can be used to determine the topology and the initial
weights of the network.
2 CONTROLLER NETWORKS
The MANNCON algorithm uses a Proportional-Integral-Derivative (PID) controller
(Stephanopoulos, 1984), one of the simplest of the traditional feedback controller
schemes, as the basis for the construction and initialization of a neural network controller. The basic idea of PID control is that the control action u (a vector) should
be proportional to the error, the integral of the error over time, and the temporal
derivative of the error. Several tuning parameters determine the contribution of
these various components. Figure 1 depicts the resulting network topology based
on the PID controller paradigm. The first layer of the network, that from y_sp (desired process output or setpoint) and y(n-1) (actual process output at the past time
step), calculates the simple error (e). A simple vector difference,

    e = y_sp - y

accomplishes this. The second layer, that between e, e(n-1), and ε, calculates the
actual error to be passed to the PID mechanism. In effect, this layer acts as a
steady-state pre-compensator (Ray, 1981), where

    ε = G_I e

and produces the current error and the error signals at the past two time steps.
This compensator is a constant matrix, G_I, with values such that interactions at a
steady state between the various control loops are eliminated. The final layer, that
between ε and u(n) (controller output/plant input), calculates the controller action
[Figure 1 diagram: the setpoint y_sp and past output y(n-1) feed the error units e and ε(n-1); weights w_C0, w_H0, w_C1, w_H1, w_C2, w_H2 produce the control action u(n) driving the water tank (disturbances F_d, T_d; outputs F, T, giving y(n)).]
Figure 1:
MANNCON network showing weights that are initialized using
Ziegler-Nichols tuning parameters.
based on the velocity form of the discrete PID controller:
    u_C(n) = u_C(n-1) + w_C0 ε(n) + w_C1 ε(n-1) + w_C2 ε(n-2)

where w_C0, w_C1, and w_C2 are constants determined by the tuning parameters of the
controller for that loop. A similar set of equations and constants (w_H0, w_H1, w_H2)
exists for the other controller loop.
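The velocity-form update above can be sketched per control loop; the tuning constants below are hypothetical placeholders for the Ziegler-Nichols-derived values:

```python
def velocity_pid_step(u_prev, errors, weights):
    # Velocity form of the discrete PID law:
    #   u(n) = u(n-1) + w0*e(n) + w1*e(n-1) + w2*e(n-2)
    # so only the *change* in control action is computed at each step.
    e_n, e_n1, e_n2 = errors
    w0, w1, w2 = weights
    return u_prev + w0 * e_n + w1 * e_n1 + w2 * e_n2

w = (0.6, -0.4, 0.1)  # hypothetical constants derived from PID tuning
print(velocity_pid_step(1.0, (0.0, 0.0, 0.0), w))  # zero error -> 1.0
print(velocity_pid_step(1.0, (0.5, 0.2, 0.1), w))  # approximately 1.23
```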
Figure 2 shows a schematic of the water tank (Ray, 1981) that the network controls. This figure also shows the controller variables (Fc and FH), the tank output
variables (F(h) and T), and the disturbance variables (Fd and Td). The controller
cannot measure the disturbances, which represent noise in the system.
MANNCON initializes the weights of Figure 1's network with values that mimic
the behavior of a PID controller tuned with Ziegler-Nichols (Z-N) parameters
(Stephanopoulos, 1984) at a particular operating condition. Using the KBANN
approach (Towell et al., 1990), it adds weights to the network such that all units
in a layer are connected to all units in all subsequent layers, and initializes these
weights to small random numbers several orders of magnitude smaller than the
weights determined by the PID parameters. We scaled the inputs and outputs of
the network to be in the range [0,1].
Initializing the weights of the network in the manner given above assumes that the
activation functions of the units in the network are linear, that is,
[Figure 2 diagram: a cold stream F_C and a hot stream F_H (at T_H) enter the stirred tank of level h; a disturbance stream F_d, T_d also enters; the outputs are F(h) and T (T = temperature, F = flow rate).]
Figure 2: Stirred mixing tank requiring outflow and temperature control.
Table 1: Topology and initialization of networks.

Network                      Topology                    Weight Initialization
1. Standard neural network   3-layer (14 hidden units)   random
2. MANNCON network I         PID topology                random
3. MANNCON network II        PID topology                Z-N tuning
The strength of neural networks, however, lies in their having nonlinear (typically
sigmoidal) activation functions. For this reason, the MANNCON system initially sets
the weights (and the biases of the units) so that the linear response dictated by the
PID initialization is approximated by a sigmoid over the output range of the unit.
For units that have outputs in the range [-1, 1], the activation function becomes

    o_j = 2 / (1 + exp(-2.31 Σ_i w_ji o_i)) - 1

where w_ji are the linear weights described above.
Once MANNCON configures and initializes the weights of the network, it uses a set
of training examples and backpropagation to improve the accuracy of the network.
The weights initialized with PID information, as well as those initialized with small
random numbers, change during backpropagation training.
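The scaled sigmoid above can be checked numerically. In this sketch the weighted sum Σ_i w_ji o_i is collapsed into a single net value; the slope factor 2.31 is the one given in the text:

```python
import math

def scaled_sigmoid(net):
    # Sigmoid rescaled to the output range [-1, 1]; the factor 2.31 boosts
    # the slope so the unit approximates the linear response dictated by
    # the PID-derived weights near the operating point.
    return 2.0 / (1.0 + math.exp(-2.31 * net)) - 1.0

for net in (-0.5, 0.0, 0.5):
    print(net, round(scaled_sigmoid(net), 3))  # roughly linear near zero
```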
3 EXPERIMENTAL DETAILS
We compared the performance of three networks that differed in their topology
and/or their method of initialization. Table 1 summarizes the network topology
and weight initialization method for each network. In this table, "PID topology"
is the network structure shown in Figure 1. "Random" weight initialization sets
Table 2: Range and average duration of setpoints for experiments.

Experiment   Training Set               Testing Set
1            [0.1, 0.9], 22 instances   [0.1, 0.9], 22 instances
2            [0.1, 0.9], 22 instances   [0.1, 0.9], 80 instances
3            [0.4, 0.6], 22 instances   [0.1, 0.9], 80 instances
all weights to small random numbers centered around zero. We also compare these
networks to a (non-learning) PID controller.
We trained the networks using backpropagation over a randomly-determined schedule of setpoint y_sp and disturbance d changes that did not repeat. The setpoints,
which represent the desired output values that the controller is to maintain, are the
temperature and outflow of the tank. The disturbances, which represent noise, are
the inflow rate and temperature of a disturbance stream. The magnitudes of the
setpoints and the disturbances formed a Gaussian distribution centered at 0.5. The
number of training examples between changes in the setpoints and disturbances
was exponentially distributed.
We performed three experiments in which the characteristics of the training and/or
testing set differed. Table 2 summarizes the range of the setpoints as well as their
average duration for each data set in the experiments. As can be seen, in Experiment
1, the training set and testing sets were qualitatively similar; in Experiment 2, the
test set was of longer duration setpoints; and in Experiment 3, the training set was
restricted to a subrange of the testing set. We periodically interrupted training and
tested the network . Results are averaged over 10 runs (Scott, 1991).
We used the error at the output of the tank (y in Figure 1) to determine the network
error (at u) by propagating the error backward through the plant (Psaltis et al.,
1988). In this method, the error signal at the input to the tank is given by
    δu_i = f'(net_u,i) Σ_j δy_j (∂y_j / ∂u_i)

where δy_j represents the simple error at the output of the water tank and δu_i is the
error signal at the input of the tank. Since we used a model of the process and not a
real tank, we can calculate the partial derivatives from the process model equations.
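Propagating the output error back through the plant, as in the equation above, amounts to multiplying by the model-supplied Jacobian. A minimal sketch (the 2x2 Jacobian values are hypothetical, not the tank model's actual derivatives):

```python
def plant_error_signal(delta_y, jacobian, f_prime_net):
    # delta_u[i] = f'(net_u[i]) * sum_j delta_y[j] * d y[j] / d u[i],
    # where jacobian[j][i] holds the partial derivative d y[j] / d u[i]
    # obtained from the process model equations.
    n_out, n_in = len(jacobian), len(jacobian[0])
    return [f_prime_net[i] * sum(delta_y[j] * jacobian[j][i]
                                 for j in range(n_out))
            for i in range(n_in)]

jac = [[1.0, 0.5],   # hypothetical d(outflow)/d(Fc), d(outflow)/d(Fh)
       [0.2, 1.0]]   # hypothetical d(temp)/d(Fc),    d(temp)/d(Fh)
delta_y = [0.1, -0.2]
print(plant_error_signal(delta_y, jac, f_prime_net=[1.0, 1.0]))
```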
4 RESULTS
Figure 3 compares the performance of the three networks for Experiment 1. As can
be seen, the MANNCON networks show an increase in correctness over the standard
neural network approach. Statistical analysis of the errors using a t-test show
that they differ significantly at the 99.5% confidence level. Furthermore, while the
difference in performance between MANNCON network I and MANNCON network II is
[Figure 3 plot: mean square error versus number of training instances (up to 30,000) for (1) the standard neural network, (2) MANNCON network I, (3) MANNCON network II, and (4) the non-learning PID controller.]
Figure 3: Mean square error of networks on the testset as a function of
the number of training instances presented for Experiment 1.
not significant, the difference in the variance of the testing error over different runs
is significant (99.5% confidence level). Finally, the MANNCON networks perform
significantly better (99.95% confidence level) than the non-learning PID controller.
The performance of the standard neural network represents the best of several trials
with a varying number of hidden units ranging from 2 to 20.
A second observation from Figure 3 is that the MANNCON networks learned much
more quickly than the standard neural-network approach. The MANNCON networks
required significantly fewer training instances to reach a performance level within
5% of its final error rate. For each of the experiments, Table 3 summarizes the
final mean error, as well as the number of training instances required to achieve a
performance within 5% of this value.
In Experiments 2 and 3 we again see a significant gain in correctness of the
MANNCON networks over both the standard neural network approach (99.95% confidence
level) as well as the non-learning PID controller (99.95% confidence level). In these
experiments, the MANNCON network initialized with Z-N tuning also learned significantly quicker (99.95% confidence level) than the standard neural network.
5 FUTURE WORK
One question is whether the introduction of extra hidden units into the network
would improve the performance by giving the network "room" to learn concepts
that are outside the given domain theory. The addition of extra hidden units as
well as the removal of unneeded units is an area with much ongoing research.
Table 3: Comparison of network performance.

Experiment 1
Method                        Mean Square Error    Training Instances
1. Standard neural network    0.0103 ± 0.0004      25,200 ± 2,260
2. MANNCON network I          0.0090 ± 0.0006      5,000 ± 3,340
3. MANNCON network II         0.0086 ± 0.0001      640 ± 200
4. PID control (Z-N tuning)   0.0109
5. Fixed control action       0.0190

Experiment 2
1. Standard neural network    0.0118 ± 0.00158     14,400 ± 3,150
2. MANNCON network I          0.0040 ± 0.00014     12,000 ± 3,690
3. MANNCON network II         0.0038 ± 0.00006     2,080 ± 300
4. PID control (Z-N tuning)   0.0045
5. Fixed control action       0.0181

Experiment 3
1. Standard neural network    0.0112 ± 0.00013     25,200 ± 2,360
2. MANNCON network I          0.0039 ± 0.00008     25,000 ± 1,550
3. MANNCON network II         0.0036 ± 0.00006     9,400 ± 1,180
4. PID control (Z-N tuning)   0.0045
5. Fixed control action       0.0181

The "±" indicates that the true value lies within these bounds at a 95%
confidence level. The values given for fixed control action (5) represent
the errors resulting from fixing the control actions at a level that produces
outputs of [0.5, 0.5] at steady state.
"Ringing" (rapid changes in controller actions) occurred in some of the trained
networks. A future enhancement of this approach would be to create a network
architecture that prevented this ringing, perhaps by limiting the changes in the
controller actions to some relatively small values.
Another important goal of this approach is its application to other real-world
processes. The water tank in this project, while illustrative of the approach, was
quite simple. Much more difficult problems (such as those containing significant
time delays) exist and should be explored.
Several other controller paradigms could be used as a basis for network construction and initialization. Other digital controllers,
such as Deadbeat or Dahlin's (Stephanopoulos, 1984), could be used in place
of the digital PID controller used in this project. Dynamic Matrix Control (DMC)
(Pratt et al., 1980) and Internal Model Control (IMC) (Garcia & Morari, 1982) are
also candidates for consideration for this approach.
Finally, neural networks are generally considered to be "black boxes," in that their
inner workings are completely uninterpretable. Since the neural networks in this
approach are initialized with information, it may be possible to interpret the weights
of the network and extract useful information from the trained network.
Scott, Shavlik, and Ray
6
CONCLUSIONS
We have described the MANNCON algorithm, which uses the information from a
PID controller to determine a relevant network topology without resorting to trial-and-error methods. In addition, the algorithm, through initialization of the weights
with prior knowledge, gives the backpropagation algorithm an appropriate direction
in which to continue learning. Finally, we have shown that using the MANNCON
algorithm significantly improves the performance of the trained network in the following ways:
- Improved mean testset accuracy
- Less variability between runs
- Faster rate of learning
- Better generalization and extrapolation ability
Acknowledgements
This material is based upon work partially supported under a National Science Foundation Graduate Fellowship (to Scott), Office of Naval Research Grant N00014-90J-1941, and National Science Foundation Grants IRI-9002413 and CPT-8715051.
References
Bhat, N. & McAvoy, T. J. (1990). Use of neural nets for dynamic modeling and
control of chemical process systems. Computers and Chemical Engineering, 14,
573-583.
Garcia, C. E. & Morari, M. (1982). Internal model control: 1. A unifying review
and some new results. I&EC Process Design & Development, 21, 308-323.
Jordan, M. I. & Jacobs, R. A. (1990). Learning to control an unstable system
with forward modeling. In Advances in Neural Information Processing Systems
(Vol. 2, pp. 325-331). San Mateo, CA: Morgan Kaufmann.
Miller, W. T., Sutton, R. S., & Werbos, P. J. (Eds.) (1990). Neural networks for
control. Cambridge, MA: MIT Press.
Pratt, D. M., Ramaker, B. L., & Cutler, C. R. (1980). Dynamic matrix control
method. Patent 4,349,869, Shell Oil Company.
Psaltis, D., Sideris, A., & Yamamura, A. A. (1988). A multilayered neural network
controller. IEEE Control Systems Magazine, 8, 17-21.
Ray, W. H. (1981). Advanced process control. New York: McGraw-Hill, Inc.
Scott, G. M. (1991). Refining PID controllers using neural networks. Master's
project, University of Wisconsin, Department of Computer Sciences.
Stephanopoulos, G. (1984). Chemical process control: An introduction to theory
and practice. Englewood Cliffs, NJ: Prentice Hall, Inc.
Towell, G., Shavlik, J., & Noordewier, M. (1990). Refinement of approximate domain theories by knowledge-based neural networks. In Eighth National Conference on Artificial Intelligence (pp. 861-866). Menlo Park, CA: AAAI Press.
4,454 | 5,030 | Adaptive Multi-Column Deep Neural Networks
with Application to Robust Image Denoising
Forest Agostinelli
Michael R. Anderson
Honglak Lee
Division of Computer Science and Engineering
University of Michigan
Ann Arbor, MI 48109, USA
{agostifo,mrander,honglak}@umich.edu
Abstract
Stacked sparse denoising autoencoders (SSDAs) have recently been shown to be
successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what
it has seen during training. To address this limitation, we present the adaptive
multi-column stacked sparse denoising autoencoder (AMC-SSDA), a novel technique of combining multiple SSDAs by (1) computing optimal column weights
via solving a nonlinear optimization program and (2) training a separate network
to predict the optimal weights. We eliminate the need to determine the type of
noise, let alone its statistics, at test time and even show that the system can be
robust to noise not seen in the training set. We show that state-of-the-art denoising performance can be achieved with a single system on a variety of different
noise types. Additionally, we demonstrate the efficacy of AMC-SSDA as a preprocessing (denoising) algorithm by achieving strong classification performance
on corrupted MNIST digits.
1
Introduction
Digital images are often corrupted with noise during acquisition and transmission, degrading performance in later tasks such as image recognition and medical diagnosis. Many denoising algorithms
have been proposed to improve the accuracy of these tasks when corrupted images must be used.
However, most of these methods are carefully designed only for a certain type of noise or require
assumptions about the statistical properties of the corrupting noise.
For instance, the Wiener filter [30] is an optimal linear filter in the sense of minimum mean-square
error and performs very well at removing speckle and Gaussian noise, but the input signal and noise
are assumed to be wide-sense stationary processes, and known autocorrelation functions of the input
are required [7]. Median filtering outperforms linear filtering for suppressing noise in images with
edges and gives good output for salt & pepper noise [2], but it is not as effective for the removal
of additive Gaussian noise [1]. Periodic noise such as scan-line noise is difficult to eliminate using
spatial filtering but is relatively easy to remove using Fourier domain band-stop filters once the
period of the noise is known [6].
Much of this research has taken place in the field of medical imaging, most recently because of a
drive to reduce patient radiation exposure. As radiation dose is decreased, noise levels in medical
images increase [12, 16], so noise reduction techniques have been key to maintaining image quality
while improving patient safety [27]. In this application, assumptions must also be made or statistical
properties determined for these techniques to perform well [26].
Recently, various types of neural networks have been evaluated for their denoising efficacy. Xie
et al. [31] had success at removing noise from corrupted images with the stacked sparse denoising
autoencoder (SSDA). The SSDA is trained on images corrupted with a particular noise type, so it
too has a dependence on a priori knowledge about the general nature of the noise.
In this paper, we present the adaptive multi-column stacked sparse denoising autoencoder (AMC-SSDA), a method to improve the SSDA's robustness to various noise types. In the AMC-SSDA,
columns of single-noise SSDAs are run in parallel and their outputs are linearly combined to produce the final denoised image. Taking advantage of the sparse autoencoder's capability for learning
features, the features encoded by the hidden layers of each SSDA are supplied to an additional
network to determine the optimal weighting for each column in the final linear combination.
We demonstrate that a single AMC-SSDA network provides better denoising results for both noise
types present in the training set and for noise types not seen by the denoiser during training. A given
instance of noise corruption might have features in common with one or more of the training set noise
types, allowing the best combination of denoisers to be chosen based on that image's specific noise
characteristics. With our method, we eliminate the need to determine the type of noise, let alone its
statistics, at test time. Additionally, we demonstrate the efficacy of AMC-SSDA as a preprocessing
(denoising) algorithm by achieving strong classification performance on corrupted MNIST digits.
2
Related work
Numerous approaches have been proposed for image denoising using signal processing techniques
(e.g., see [23, 8] for a survey). Some methods transfer the image signal to an alternative domain
where noise can be easily separated from the signal [25, 21]. Portilla et al. [25] proposed a wavelet-based Bayes Least Squares with a Gaussian Scale-Mixture (BLS-GSM) method. More recent approaches exploit the "non-local" statistics of images: different patches in the same image are often
similar in appearance, and thus they can be used together in denoising [11, 22, 8]. This class of
algorithms (BM3D [11] in particular) represents the current state-of-the-art in natural image denoising; however, it is targeted primarily toward Gaussian noise. In our preliminary evaluation,
BM3D did not perform well on many of the variety of noise types.
While BM3D is a well-engineered algorithm, Burger et al. [9] showed that it is possible to achieve
state-of-the-art denoising performance with a plain multi-layer perceptron (MLP) that maps noisy
patches onto noise-free ones, once the capacity of the MLP, the patch size, and the training set are
large enough. Therefore, neural networks indeed have a great potential for image denoising.
Vincent et al. [29] introduced the stacked denoising autoencoders as a way of providing a good initial
representation of the data in deep networks for classification tasks. Our proposed AMC-SSDA builds
upon this work by using the denoising autoencoder's internal representation to determine the optimal
column weighting for robust denoising.
Cireşan et al. [10] presented a multi-column approach for image classification, averaging the output
of several deep neural networks (or columns) trained on inputs preprocessed in different ways. However, based on our experiments, this approach (i.e., simply averaging the output of each column) is
not robust in denoising since each column has been trained on a different type of noise. To address
this problem, we propose an adaptive weighting scheme that can handle a variety of noise types.
Jain et al. [18] used deep convolutional neural networks for image denoising. Rather than using
a convolutional approach, our proposed method applies multiple sparse autoencoder networks in
combination to the denoising task. Tang et al. [28] applied deep learning techniques (e.g., extensions
of the deep belief network with local receptive fields) to denoising and classifying MNIST digits. In
comparison, we achieve favorable classification performance on corrupted MNIST digits.
3
Algorithm
In this section, we first describe the SSDA [31]. Then we will present the AMC-SSDA and describe
the process of finding optimal column weights and predicting column weights for test images.
3.1
Stacked sparse denoising autoencoders
A denoising autoencoder (DA) [29] is typically used as a way to pre-train layers in a deep neural
network, avoiding the difficulty in training such a network as a whole from scratch by performing
greedy layer-wise training (e.g., [4, 5, 14]). As Xie et al. [31] showed, a denoising autoencoder is
also a natural fit for performing denoising tasks, due to its behavior of taking a noisy signal as input
and reconstructing the original, clean signal as output.
Commonly, a series of DAs are connected to form a stacked denoising autoencoder (SDA), a deep
network formed by feeding the hidden layer's activations of one DA into the input of the next DA.
Typically, SDAs are pre-trained in an unsupervised fashion where each DA layer is trained by generating new noise [29]. We follow Xie et al.'s method of SDA training by calculating the first-layer
activations for both the clean input and noisy input to use as training data for the second layer. As
they showed, this modification to the training process allows the SDA to better learn the features for
denoising the original corrupting noise.
More formally, let y ∈ R^D be an instance of uncorrupted data and x ∈ R^D be the corrupted version
of y. We can define the feedforward functions of the DA with K hidden units as follows:

    h(x) = f(Wx + b)                                          (1)
    ŷ(x) = g(W'h(x) + b')                                     (2)

where f(·) and g(·) are respectively encoding and decoding functions (for which the sigmoid function
σ(s) = 1/(1 + exp(−s)) is often used),1 W ∈ R^(K×D) and b ∈ R^K are encoding weights and biases,
and W' ∈ R^(D×K) and b' ∈ R^D are the decoding weights and biases. h(x) ∈ R^K is the hidden
layer's activation, and ŷ(x) ∈ R^D is the reconstruction of the input (i.e., the DA's output). Given
training data D = {(x_1, y_1), ..., (x_N, y_N)} with N training examples, the DA is trained by backpropagation to minimize the sparsity-regularized reconstruction loss given by

    L_DA(D; Θ) = (1/N) Σ_{i=1}^N ||y_i − ŷ(x_i)||₂² + β Σ_{j=1}^K KL(ρ ‖ ρ̂_j) + (λ/2)(||W||²_F + ||W'||²_F)    (3)

where Θ = {W, b, W', b'} are the parameters of the model, and the sparsity-inducing term
KL(ρ ‖ ρ̂_j) is the Kullback-Leibler divergence between ρ (target activation) and ρ̂_j (empirical average activation of the j-th hidden unit) [20, 13]:

    KL(ρ ‖ ρ̂_j) = ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j)),  where  ρ̂_j = (1/N) Σ_{i=1}^N h_j(x_i)    (4)

and λ, β, and ρ are scalar-valued hyperparameters determined by cross validation.
In this work, two DAs are stacked as shown in Figure 1a, where the activation of the first DA's
hidden layer provides the input to the second DA, which in turn provides the input to the output
layer of the first DA. This entire network, the SSDA, is trained again by back-propagation in a
fine-tuning stage, minimizing the loss given by

    L_SSDA(D; Θ) = (1/N) Σ_{i=1}^N ||y_i − ŷ(x_i)||₂² + (λ/2) Σ_{l=1}^{2L} ||W^(l)||²_F    (5)

where L is the number of stacked DAs (we used L = 2 in our experiments), and W^(l) denotes
the weights for the l-th layer in the stacked deep network.2 The sparsity-inducing term is not needed
for this step because the sparsity was already incorporated in the pre-trained DAs. Our experiments
show that there is not a significant change in performance when sparsity is included.
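To make the notation concrete, the following is a minimal NumPy sketch of the single-layer DA forward pass and the loss of Eq. (3). It is illustrative only (not the authors' implementation), and the hyperparameter values are arbitrary placeholders.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def da_loss(W, b, W2, b2, X, Y, rho=0.05, beta=0.1, lam=1e-4):
    """Denoising-autoencoder loss of Eq. (3): reconstruction error of noisy
    inputs X against clean targets Y, plus the KL sparsity penalty and
    weight decay. Hyperparameter values here are illustrative only."""
    H = sigmoid(X @ W.T + b)            # hidden activations, N x K
    Y_hat = sigmoid(H @ W2.T + b2)      # reconstructions,    N x D
    recon = np.mean(np.sum((Y - Y_hat) ** 2, axis=1))
    rho_hat = H.mean(axis=0)            # empirical mean activation per unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    decay = 0.5 * lam * (np.sum(W ** 2) + np.sum(W2 ** 2))
    return recon + beta * kl + decay

# toy example: D = 4 inputs, K = 3 hidden units, N = 5 examples
rng = np.random.default_rng(0)
D, K, N = 4, 3, 5
W, b = rng.normal(scale=0.1, size=(K, D)), np.zeros(K)
W2, b2 = rng.normal(scale=0.1, size=(D, K)), np.zeros(D)
Y = rng.uniform(size=(N, D))                                 # clean patches
X = np.clip(Y + rng.normal(scale=0.1, size=(N, D)), 0, 1)    # noisy patches
print(da_loss(W, b, W2, b2, X, Y))
```

Training would minimize this loss by backpropagation; here only the objective itself is shown.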
3.2
Adaptive Multi-Column SSDA
The adaptive multi-column SSDA is the linear combination of several SSDAs, or columns, each
trained on a single type of noise using optimized weights determined by the features of each given
input image. Taking advantage of the SSDA's capability of feature learning, we use the features generated by the activation of the SSDA's hidden layers as inputs to a neural network-based regression
component, referred to here as the weight prediction module. As shown in Figure 1b, this module
then uses these features to compute the optimal weights used to linearly combine the column outputs
into a weighted average.
1
In particular, the sigmoid function is often used for decoding the input data when their values are bounded
between 0 and 1. For general cases, other types of functions (such as tanh, rectified linear, or linear functions)
can be used.
2
After pre-training, we initialized W(1) and W(4) from the encoding and decoding weights of the first-layer
DA, and W(2) and W(3) from the encoding and decoding weights of the second-layer DA, respectively.
[Figure 1 diagrams omitted: (a) SSDA; (b) AMC-SSDA.]
Figure 1: Illustration of the AMC-SSDA. We concatenate the activations of the first-layer hidden
units of the SSDA in each column (i.e., f_c denotes the concatenated hidden-unit vectors h^(1)(x)
and h^(2)(x) of the SSDA corresponding to the c-th column) as input features to the weight prediction
module for determining the optimal weight for each column of the AMC-SSDA. See text for details.
3.2.1
Training the AMC-SSDA
The AMC-SSDA has three training phases: training the SSDAs, determining optimal weights for a
set of training images, and then training the weight prediction module. The SSDAs are trained as
discussed in Section 3.1, with each SSDA provided a noisy training set, corrupted by a single noise
type, along with the original versions of those images as a target set. Each SSDA column c then
produces an output ŷ_c ∈ R^D for an input x ∈ R^D, which is the noisy version of the original image y.
(We omit index i to remove clutter.)
3.2.2
Finding optimal column weights via quadratic program
Once the SSDAs are trained, we construct a new training set that pairs features extracted from the
hidden layers of the SSDAs with optimal column weights. Specifically, for each image, a vector
φ = [f_1; ...; f_C] is built from the features extracted from the hidden layers of each SSDA, where C is
the number of columns. That is, for SSDA column c, the activations of hidden layers h^(1) and h^(2)
(as shown in Figure 1a) are concatenated into a vector f_c, and then f_1, f_2, ..., f_C are concatenated to
form the whole feature vector φ.

Additionally, the output of each column for each image is collected into a matrix Ŷ = [ŷ_1, ..., ŷ_C] ∈
R^(D×C), with each column being the output of one of the SSDA columns, ŷ_c. To determine the ideal
linear weighting of the SSDA columns for that given image, we solve the following quadratic program:3

    minimize_s   (1/2) ||Ŷs − y||²                    (6)
    subject to   0 ≤ s_c ≤ 1, ∀c                      (7)
                 1 − δ ≤ Σ_{c=1}^C s_c ≤ 1 + δ        (8)

Here s ∈ R^C is the vector of weights s_c corresponding to each SSDA column c. Constraining the
weights between 0 and 1 was shown to allow for better weight predictions by reducing overfitting.
The constraint in Eq. (8) helps to avoid degenerate cases where weights for very bright or dark spots

3 In addition to the L2 error shown in Equation (6), we also tested minimizing the L1 distance as the error
function, which is a standard method in the related field of image registration [3]. The version using the L1
error performed slightly better in our noisy digit classification task, suggesting that the loss function might need
to be tuned to the task and images at hand.
Noise Type      Parameter   Parameter values
Gaussian        σ²          0.02, 0.06, 0.10, 0.14, 0.18, 0.22, 0.26
Speckle         ρ           0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35
Salt & Pepper   ρ           0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35

Table 1: SSDA training noises in the 21-column AMC-SSDA. ρ is the noise density.
would otherwise be very high or low. Although making the weights sum exactly to one is more
intuitive, we found that the performance slightly improved when given some flexibility, as shown in
Eq. (8). For our experiments, δ = 0.05 is used.
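As a concrete illustration of Eqs. (6)-(8), this small quadratic program can be solved approximately with projected gradient descent. The sketch below is not the authors' solver; in practice any off-the-shelf QP package would do, and the projection onto the sum constraint here is a simple heuristic.

```python
import numpy as np

def optimal_column_weights(Y_hat, y, delta=0.05, iters=2000, lr=None):
    """Approximately solve Eqs. (6)-(8): find column weights s minimizing
    0.5 * ||Y_hat @ s - y||^2 subject to 0 <= s_c <= 1 and
    sum(s) in [1 - delta, 1 + delta], via projected gradient descent."""
    D, C = Y_hat.shape
    s = np.full(C, 1.0 / C)
    if lr is None:
        # step size from the Lipschitz constant of the gradient
        lr = 1.0 / np.linalg.norm(Y_hat.T @ Y_hat, 2)
    for _ in range(iters):
        grad = Y_hat.T @ (Y_hat @ s - y)
        s = np.clip(s - lr * grad, 0.0, 1.0)   # box constraint (7)
        total = s.sum()
        if total > 1 + delta:                  # sum constraint (8), upper
            s *= (1 + delta) / total
        elif total < 1 - delta:                # sum constraint (8), lower
            s = np.clip(s + (1 - delta - total) / C, 0.0, 1.0)
    return s

# toy example: 3 columns, the second one reconstructs y best
rng = np.random.default_rng(1)
y = rng.uniform(size=16)
Y_hat = np.stack([y + rng.normal(scale=0.3, size=16),
                  y + rng.normal(scale=0.01, size=16),
                  y + rng.normal(scale=0.3, size=16)], axis=1)
s = optimal_column_weights(Y_hat, y)
print(s)  # the weight on the second column dominates
```

Because the objective is convex and the feasible set is a box intersected with a slab, a dedicated QP solver would reach the exact optimum; the projected-gradient version above is enough to show the behavior.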
3.2.3
Learning to predict optimal column weights via RBF networks
The final training phase is to train the weight prediction module. A radial basis function (RBF)
network is trained to take the feature vector φ as input and produce a weight vector s, using the
optimal weight training set described in Section 3.2.2. An RBF network was chosen for our experiments because of its known performance in function approximation [24]. However, other function
approximation techniques could be used in this step.
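A minimal sketch of such a weight prediction module, assuming a Gaussian-kernel RBF network whose output weights are fit by ridge-regularized least squares (a simplification; the authors' exact training procedure is not specified here):

```python
import numpy as np

class RBFRegressor:
    """Minimal RBF network: Gaussian kernels centered on the training
    points, output weights fit by ridge-regularized least squares."""
    def __init__(self, gamma=1.0, ridge=1e-6):
        self.gamma, self.ridge = gamma, ridge

    def _kernel(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, Phi, S):
        # Phi: N x F feature vectors; S: N x C optimal column weights
        self.centers = Phi
        K = self._kernel(Phi, Phi)
        self.coef = np.linalg.solve(K + self.ridge * np.eye(len(Phi)), S)
        return self

    def predict(self, Phi):
        return self._kernel(Phi, self.centers) @ self.coef

# toy example: learn a smooth map from 2-D "features" to 3 column weights
rng = np.random.default_rng(3)
Phi = rng.uniform(size=(50, 2))
S = np.stack([Phi[:, 0], 1 - Phi[:, 0], 0.5 * np.ones(50)], axis=1)
model = RBFRegressor(gamma=5.0).fit(Phi, S)
pred = model.predict(Phi[:5])
print(np.round(pred, 2))
```

Any other function approximator (as the text notes) could replace this regressor without changing the rest of the pipeline.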
3.2.4
Denoising with the AMC-SSDA
Once training has been completed, the AMC-SSDA is ready for use. A noisy image x is supplied
as input to each of the columns, which together produce the output matrix Ŷ, each column of
which is the output of a particular column of the AMC-SSDA. The feature vector φ is created
from the activation of the hidden layers of each SSDA (as described in Section 3.2.2) and fed into
the weight prediction module (as described in Section 3.2.3), which then computes the predicted
column weights, ŝ. The final denoised image ŷ is produced by linearly combining the columns
using these weights: ŷ = Ŷŝ.4
4
Experiments
We performed a number of denoising tasks by corrupting and denoising images of computed tomography (CT) scans of the head from the Cancer Imaging Archive [17] (Section 4.1). Quantitative evaluation of denoising results was performed using peak signal-to-noise ratio (PSNR),
a standard method used for evaluating denoising performance. PSNR is defined as
PSNR = 10 log₁₀(p²_max / σ²_e), where p_max is the maximum possible pixel value and σ²_e is the mean-square
error between the noisy and original images. We also tested the AMC-SSDA as a pre-processing step
in an image classification task by corrupting the MNIST database of handwritten digits [19] with various
types of noise and then denoising and classifying the digits with a classifier trained on the original
images (Section 4.2).
Our code is available at: http://sites.google.com/site/nips2013amcssda/.
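For reference, PSNR as defined above takes only a few lines to compute. This is an illustrative NumPy version (not the authors' evaluation code), with p_max = 1.0 assumed for images scaled to [0, 1].

```python
import numpy as np

def psnr(reference, image, p_max=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(p_max^2 / MSE).
    Both images are assumed to share the range [0, p_max]
    (use p_max = 255 for 8-bit images)."""
    ref = np.asarray(reference, dtype=float)
    img = np.asarray(image, dtype=float)
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(p_max ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1          # constant error of 0.1 -> MSE = 0.01
print(psnr(clean, noisy))    # 20.0 dB
```

Higher PSNR means the image is closer to the reference; note the value is infinite for identical images, so it is only meaningful when some error remains.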
4.1
Image denoising
To evaluate general denoising performance, images of CT scans of the head were corrupted with
seven variations of Gaussian, salt-and-pepper, and speckle noise, resulting in the 21 noise types
shown in Table 1. Twenty-one individual SSDAs were trained on randomly selected 8-by-8 pixel
patches from the corrupted images; each SSDA was trained on a single type of noise. These twenty-one SSDAs were used as columns to create an AMC-SSDA.5 The testing noise is given in Table 2.
The noise was produced using Matlab's imnoise function, with the exception of uniform noise,
which was produced with our own implementation. For Poisson noise, the image is divided by λ
prior to applying the noise; the result is then multiplied by λ.
To train the weight predictor for the AMC-SSDA, a set of images disjoint from the training set of
the individual SSDAs was used. The training images for the AMC-SSDA were corrupted with the
same noise types used to train its columns. The AMC-SSDA was tested on another set of images
4
We have tried alternatives to this approach. Some of these involved using a single unified network to
combine the columns, such as joint training. In our preliminary experiments, these approaches did not yield
significant improvements.
5 We also evaluated AMC-SSDAs with a smaller number of columns. In general, we achieved better performance with more columns. We discuss statistical significance later in this section.
Noise Type            1               2               3               4
Gaussian              σ² = 0.01       σ² = 0.07       σ² = 0.1        σ² = 0.25
Speckle               ρ = 0.1         ρ = 0.15        ρ = 0.3         ρ = 0.4
Salt & Pepper         ρ = 0.1         ρ = 0.15        ρ = 0.3         ρ = 0.4
Poisson               log(λ) = 24.4   log(λ) = 25.3   log(λ) = 26.0   log(λ) = 26.4
Uniform [-0.5, 0.5]   30%             50%             70%             100%

Table 2: Parameters of noise types used for testing. The Poisson and uniform noise types are not
seen in the training set. The percentage for uniform noise denotes how many pixels are affected. ρ
is the noise density.
[Figure 2 images omitted: (a) Original; (b) Noisy; (c) Mixed-SSDA; (d) AMC-SSDA.]
Figure 2: Visualization of the denoising performance of the Mixed-SSDA and AMC-SSDA. Top:
Gaussian noise. Bottom: speckle noise.
disjoint from both the individual SSDA and AMC-SSDA training sets. The AMC-SSDA was trained
on 128-by-128 pixel patches. When testing, 64-by-64 pixel patches are denoised with a stride of 48.
During testing, we found that smaller strides yielded a very small increase in PSNR; however, having
a small stride was not feasible due to memory constraints. Since our SSDAs denoise 8-by-8 patches,
features for, say, a 64-by-64 patch are the average of the features extracted for each 8-by-8 patch in
the 64-by-64 patch. We find that this allows for more consistent and predictable weights. The AMCSSDA is first tested on noise types that have been seen (i.e., noise types that were in the training set)
but have different statistics. It is then tested on noise not seen in the training examples, referred to
as "unseen" noise.
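The patch-wise test procedure described above (denoising 64-by-64 windows at a stride of 48 and combining overlapping outputs) can be sketched as follows; `denoise_patch` is a hypothetical placeholder for a trained denoiser, and averaging overlaps is one reasonable stitching choice, not necessarily the authors' exact one.

```python
import numpy as np

def denoise_image(image, denoise_patch, patch=64, stride=48):
    """Slide a patch x patch window over the image with the given stride,
    denoise each window with denoise_patch, and average overlapping
    outputs. Assumes the image is at least patch x patch."""
    H, W = image.shape
    out = np.zeros_like(image, dtype=float)
    counts = np.zeros_like(image, dtype=float)
    rows = list(range(0, H - patch + 1, stride))
    cols = list(range(0, W - patch + 1, stride))
    # make sure the last row/column of windows reaches the image border
    if rows[-1] != H - patch:
        rows.append(H - patch)
    if cols[-1] != W - patch:
        cols.append(W - patch)
    for r in rows:
        for c in cols:
            window = image[r:r + patch, c:c + patch]
            out[r:r + patch, c:c + patch] += denoise_patch(window)
            counts[r:r + patch, c:c + patch] += 1.0
    return out / counts

# trivial check with an identity "denoiser": the stitched result
# reproduces the input exactly
img = np.random.default_rng(2).uniform(size=(128, 128))
restored = denoise_image(img, lambda p: p)
print(np.allclose(restored, img))  # True
```

Smaller strides increase overlap (and cost); as the text notes, the gain in PSNR from denser overlaps was small in the authors' experiments.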
To compare with the experiments of Xie et al. [31], one SSDA was trained on only the Gaussian noise
types, one on only salt & pepper, one on only speckle, and one on all the noise types from Table 1.
We refer to these as gaussian SSDA, s&p SSDA, speckle SSDA, and mixed SSDA, respectively. These
SSDAs were then tested on the same types of noise that the AMC-SSDA was tested on. The results
for both seen and unseen noise can be found in Tables 3 and 4. On average, for all cases, the AMCSSDA produced superior PSNR values when compared to these SSDAs. Some example results are
shown in Figure 2. In addition, we test the case where all the weights are equal and sum to one. We
call this the MC-SSDA; note that there is no adaptive element to it. We found that AMC-SSDA also
outperformed MC-SSDA.
Statistical significance We statistically evaluated the difference between our AMC-SSDA and the
mixed SSDA (the best-performing SSDA baseline) for the results shown in Table 3, using the one-tailed paired t-test. The AMC-SSDA was significantly better than the mixed SSDA, with a p-value
of 3.3 × 10⁻⁵ for the null hypothesis. We also found that even for a smaller number of columns (such
as 9 columns), the AMC-SSDA still was superior to the mixed-SSDA with statistical significance.
In this paper, we report results from the 21-column AMC-SSDA.
We also performed additional control experiments in which we gave the SSDA an unfair advantage.
Specifically, each test image corrupted with seen noise was denoised with an SSDA that had been
trained on the exact type of noise and statistics that the test image has been corrupted with; we call
this the "informed-SSDA." We saw that the AMC-SSDA performed slightly better on the Gaussian
(a) PSNRs for previously seen noise (best values shown in bold in the original):

Noise   Noisy   Gaussian   S&P     Speckle   Mixed   MC-SSDA   AMC-SSDA
Type    Image   SSDA       SSDA    SSDA      SSDA
G1      22.10   26.64      26.69   26.84     27.15   27.37     29.60
G2      13.92   25.83      23.07   19.76     25.52   23.34     26.85
G3      12.52   25.50      22.17   18.35     25.09   22.00     26.10
G4       9.30   23.11      20.17   14.88     22.72   17.97     23.66
SP1     13.50   25.86      26.26   22.27     26.32   25.84     27.72
SP2     11.76   25.40      25.77   20.07     25.77   24.54     26.77
SP3      8.75   23.95      23.96   15.88     24.32   20.42     24.65
SP4      7.50   22.46      22.20   13.86     22.95   17.76     23.01
S1      19.93   26.41      26.37   28.22     26.97   27.43     28.59
S2      18.22   25.92      25.80   27.75     26.44   26.71     27.68
S3      15.35   23.54      23.36   25.79     24.42   23.91     25.72
S4      14.24   21.80      21.69   24.41     22.93   22.20     24.35
Avg     13.92   24.70      23.96   21.51     25.05   23.29     26.23

(b) [Bar chart omitted: average PSNRs for the Gaussian, salt & pepper, and speckle noise categories, with bars for Noisy, Gaussian SSDA, S&P SSDA, Speckle SSDA, Mixed SSDA, MC-SSDA, and AMC-SSDA.]

Figure 3: Average PSNR values for denoised images of various previously seen noise types (G:
Gaussian, S: Speckle, SP: Salt & Pepper).
(a) PSNRs for unseen noise (best values shown in bold in the original):

Noise   Noisy   Gaussian   S&P     Speckle   Mixed   MC-SSDA   AMC-SSDA
Type    Image   SSDA       SSDA    SSDA      SSDA
P1      19.90   26.27      26.48   27.99     26.80   27.35     28.83
P2      16.90   25.77      25.92   26.94     26.01   26.78     27.64
P3      13.89   24.61      24.54   24.65     24.43   25.11     25.50
P4      12.11   23.36      23.07   22.64     23.01   23.28     23.43
U1      17.20   23.40      23.68   25.05     23.74   24.71     24.50
U2      16.04   26.21      25.86   23.21     26.28   26.13     28.06
U3      12.98   23.24      21.36   17.83     22.89   21.07     23.70
U4       8.78   16.54      15.45   12.01     16.04   14.11     16.78
Avg     14.72   23.67      23.29   22.54     23.65   23.57     24.80

(b) [Bar chart omitted: average PSNRs for the Poisson and uniform noise categories.]

Figure 4: Average PSNR values for denoised images of various previously unseen noise types (P:
Poisson noise; U: Uniform noise).
and salt & pepper noise and slightly worse on speckle noise. Overall, the informed-SSDA had,
on average, a PSNR that was only 0.076 dB better than the AMC-SSDA. The p-value obtained was
0.4708, indicating little difference between the two methods. This suggests that the AMC-SSDA can
perform as well as an "ideally" trained network for each specific noise type (i.e., training and testing
an SSDA for the same specific noise type). This is achieved through its adaptive functionality.
4.2
Digit recognition from denoised images
Since the results of denoising images from a visual standpoint can be more qualitative than
quantitative, we also tested denoising as a preprocessing step performed before a classification task.
Specifically, we used the MNIST database of handwritten digits [19] as a benchmark to evaluate the
efficacy of our denoising procedures.
First, we trained a deep neural network digit classifier from the MNIST training digits, following
[15]. The digit classifier achieved a baseline error rate of 1.09% when tested on the uncorrupted
MNIST test set.
The MNIST digits are corrupted with Gaussian, salt & pepper, speckle, block, and border noise.
Examples are shown in Figure 5.
Figure 5: Example MNIST digits. Noisy images are shown on top and the corresponding denoised
images by the AMC-SSDA are shown below. Noise types from left: Gaussian, speckle, salt &
pepper, block, border.
The block and border noises are similar to those of Tang
et al. [28]. An SSDA was trained on each type of noise. An AMC-SSDA was also trained using
these types of noise. The goal of this experiment is to show that the potentially cumbersome and
time-consuming process of determining the type of noise that an image is corrupted with at test time
is not needed to achieve good classification results.
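The noise families named above can be generated straightforwardly. A hedged sketch using NumPy with placeholder parameters (the paper's exact noise settings are not given here); images are assumed normalized to [0, 1]:

```python
# Illustrative corruption of images with Gaussian, salt & pepper, and
# speckle noise. Parameters (sigma, p) are made-up examples.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(img, sigma=0.1):
    # Additive zero-mean Gaussian noise, clipped back to [0, 1].
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def salt_and_pepper(img, p=0.1):
    # A fraction p of pixels is forced to 0 (pepper) or 1 (salt).
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < p / 2] = 0.0
    out[mask > 1 - p / 2] = 1.0
    return out

def speckle(img, sigma=0.2):
    # Multiplicative noise: x + x * n, with n Gaussian.
    return np.clip(img + img * rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

img = rng.random((28, 28))  # stand-in for a 28x28 MNIST digit
for corrupt in (gaussian_noise, salt_and_pepper, speckle):
    corrupted = corrupt(img)
    assert corrupted.shape == img.shape
```

Block and border noise (occluding a patch or the image margin) are deterministic masks and are omitted for brevity.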
As the results in Table 3 show, the denoising performance was strongly correlated with the type of noise
on which the denoiser was trained. The bold-faced values show the best-performing denoiser for a
given noise type. Since a classification difference of 0.1% or larger is considered statistically significant [5], we bold all values within 0.1% of the best error rate. The AMC-SSDA either outperforms,
or comes close to (within 0.06% of), the SSDA that was trained with the same type of noise as in the
test data. In terms of average error across all types of noise, the AMC-SSDA is significantly better
than any single denoising algorithm we compared. The results suggest that the AMC-SSDA consistently achieves strong classification performance without having to determine the type of noise
during test time.
These results are also comparable to the results of Tang et al. [28]. We show that we get better
classification accuracy for the block and border noise types. In addition, we note that Tang et al.
use a 7-by-7 local receptive field, while ours uses 28-by-28 patches. As suggested by Tang et al.,
we expect that using a local field in our architecture could further improve our results.
Method / Noise Type   Clean   Gaussian  S&P     Speckle  Block   Border  Average
No denoising          1.09%   29.17%    18.63%  8.11%    25.72%  90.05%  28.80%
Gaussian SSDA         2.13%   1.52%     2.44%   5.10%    20.03%  8.69%   6.65%
Salt & Pepper SSDA    1.94%   1.71%     2.38%   4.78%    19.71%  2.16%   5.45%
Speckle SSDA          1.58%   5.86%     6.80%   2.03%    19.95%  7.36%   7.26%
Block SSDA            1.67%   5.92%     15.29%  7.64%    5.15%   1.81%   6.25%
Border SSDA           8.42%   19.87%    19.45%  13.89%   31.38%  1.12%   15.69%
AMC-SSDA              1.50%   1.47%     2.22%   2.09%    5.18%   1.15%   2.27%
Tang et al. [28]*     1.24%   -         -       -        19.09%  1.29%   -
Table 3: MNIST test classification error of denoised images. Rows denote the performance of
different denoising methods, including: "no denoising," SSDA trained on a specific noise type, and
AMC-SSDA. Columns represent images corrupted with the given noise type. Percentage values are
classification error rates for a set of test images corrupted with the given noise type and denoised
prior to classification. Bold-faced values represent the best performance for images corrupted by a
given noise type. *Note: we compare the numbers reported from Tang et al. [28] ("7x7+denoised").
5
Conclusion
In this paper, we proposed the adaptive multi-column SSDA, a novel technique for combining multiple SSDAs by predicting optimal column weights adaptively. We have demonstrated that the AMC-SSDA can robustly denoise images corrupted by multiple different types of noise without knowledge
of the noise type at testing time. It has also been shown to perform well on types of noise that were
not in the training set. Overall, the AMC-SSDA has significantly outperformed the SSDA in denoising. The good classification results on denoised MNIST digits also support the hypothesis that the
AMC-SSDA eliminates the need to know the type of noise during test time.
Acknowledgments
This work was funded in part by a Google Faculty Research Award, ONR N00014-13-1-0762, and
NSF IIS 1247414. F. Agostinelli was supported by a GEM Fellowship, and M. Anderson was supported in part by an NSF IGERT Open Data Fellowship (#0903629). We also thank Roni Mittelman,
Yong Peng, Scott Reed, and Yuting Zhang for their helpful comments.
References
[1] G. R. Arce. Nonlinear signal processing: A statistical approach. Wiley-Interscience, 2005.
[2] E. Arias-Castro and D. L. Donoho. Does median filtering truly preserve edges better than linear filtering? The Annals of Statistics, 37(3):1172–1206, 2009.
[3] D. I. Barnea and H. F. Silverman. A class of algorithms for fast digital image registration. IEEE Transactions on Computers, 100(2):179–186, 1972.
[4] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[5] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In NIPS, 2007.
[6] R. Bourne. Image filters. In Fundamentals of Digital Imaging in Medicine, pages 137–172. Springer London, 2010.
[7] R. G. Brown and P. Y. Hwang. Introduction to random signals and applied Kalman filtering, volume 1. John Wiley & Sons, New York, 1992.
[8] A. Buades, B. Coll, and J.-M. Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling & Simulation, 4(2):490–530, 2005.
[9] H. C. Burger, C. J. Schuler, and S. Harmeling. Image denoising: Can plain neural networks compete with BM3D? In CVPR, 2012.
[10] D. Cireşan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In CVPR, 2012.
[11] K. Dabov, R. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007.
[12] L. W. Goldman. Principles of CT: Radiation dose and image quality. Journal of Nuclear Medicine Technology, 35(4):213–225, 2007.
[13] G. Hinton. A practical guide to training restricted Boltzmann machines. Technical report, University of Toronto, 2010.
[14] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[15] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[16] W. Huda. Dose and image quality in CT. Pediatric Radiology, 32(10):709–713, 2002.
[17] N. C. Institute. The Cancer Imaging Archive. http://www.cancerimagingarchive.net, 2013.
[18] V. Jain and H. S. Seung. Natural image denoising with convolutional networks. In NIPS, 2008.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[20] H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. In NIPS, 2008.
[21] F. Luisier, T. Blu, and M. Unser. A new SURE approach to image denoising: Interscale orthonormal wavelet thresholding. IEEE Transactions on Image Processing, 16(3):593–606, 2007.
[22] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Non-local sparse models for image restoration. In ICCV, 2009.
[23] M. C. Motwani, M. C. Gadiya, R. C. Motwani, and F. C. Harris. Survey of image denoising techniques. In GSPX, 2004.
[24] J. Park and I. W. Sandberg. Universal approximation using radial-basis-function networks. Neural Computation, 3(2):246–257, 1991.
[25] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12(11):1338–1351, 2003.
[26] M. G. Rathor, M. A. Kaushik, and M. V. Gupta. Medical images denoising techniques review. International Journal of Electronics Communication and Microelectronics Designing, 1(1):33–36, 2012.
[27] R. Siemund, A. Löve, D. van Westen, L. Stenberg, C. Petersen, and I. Björkman-Burtscher. Radiation dose reduction in CT of the brain: Can advanced noise filtering compensate for loss of image quality? Acta Radiologica, 53(4):468–472, 2012.
[28] Y. Tang and C. Eliasmith. Deep networks for robust visual recognition. In ICML, 2010.
[29] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.
[30] N. Wiener. Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. Technology Press of the Massachusetts Institute of Technology, 1950.
[31] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In NIPS, 2012.
Top-Down Regularization of Deep Belief Networks
Hanlin Goh*, Nicolas Thome, Matthieu Cord
Laboratoire d'Informatique de Paris 6
UPMC – Sorbonne Universités, Paris, France
{Firstname.Lastname}@lip6.fr
Joo-Hwee Lim†
Institute for Infocomm Research
A*STAR, Singapore
[email protected]
Abstract
Designing a principled and effective algorithm for learning deep architectures is a
challenging problem. The current approach involves two training phases: a fully
unsupervised learning followed by a strongly discriminative optimization. We
suggest a deep learning strategy that bridges the gap between the two phases, resulting in a three-phase learning procedure. We propose to implement the scheme
using a method to regularize deep belief networks with top-down information. The
network is constructed from building blocks of restricted Boltzmann machines
learned by combining bottom-up and top-down sampled signals. A global optimization procedure that merges samples from a forward bottom-up pass and a
top-down pass is used. Experiments on the MNIST dataset show improvements
over the existing algorithms for deep belief networks. Object recognition results
on the Caltech-101 dataset also yield competitive results.
1
Introduction
Deep architectures have strong representational power due to their hierarchical structures. They
are capable of encoding highly varying functions and capturing complex relationships and high-level
abstractions among high-dimensional data [1]. Traditionally, the multilayer perceptron is used to
optimize such hierarchical models based on a discriminative criterion that models P(y|x) using an
error-backpropagating gradient descent [2, 3]. However, when the architecture is deep, it is challenging to train the entire network through supervised learning due to the large number of parameters,
the non-convex optimization problem and the dilution of the error signal through the layers. This
optimization may even lead to worse performance than shallower networks [4].
Recent developments in unsupervised feature learning and deep learning algorithms have made it
possible to learn deep feature hierarchies. Deep learning, in its current form, typically involves two
consecutive learning phases. The first phase greedily learns unsupervised modules layer-by-layer
from the bottom-up [1, 5]. Some common criteria for unsupervised learning include the maximum likelihood that models P(x) [1] and the input reconstruction error of the vector x [5–7]. This is
subsequently followed by a supervised phase that fine-tunes the network using a supervised, usually discriminative algorithm, such as supervised error backpropagation. The unsupervised learning
phase initializes the parameters without taking into account the ultimate task of interest, such as
classification. The second phase assumes the entire burden of modifying the model to fit the task.
In this work, we propose a gradual transition from the fully-unsupervised learning to the highlydiscriminative optimization. This is done by adding an intermediate training phase between the two
existing deep learning phases, which enhances the unsupervised representation by incorporating
top-down information. To realize this notion, we introduce a new global (non-greedy) optimization
* Hanlin Goh is also with the Institute for Infocomm Research, A*STAR, Singapore and the Image and
Pervasive Access Lab, CNRS UMI 2955, Singapore – France.
† Joo-Hwee Lim is also with the Image and Pervasive Access Lab, CNRS UMI 2955, Singapore – France.
that regularizes the deep belief network (DBN) from the top-down. We retain the same gradient
descent procedure of updating the parameters of the DBN as the unsupervised learning phase. The
new regularization method and deep learning strategy are applied to handwritten digit recognition
and dictionary learning for object recognition, with competitive empirical results.
2
Related Work
Restricted Boltzmann Machines. A restricted Boltzmann machine (RBM) [8] is a bipartite
Markov random field with an input layer x in R^I and a latent layer z in R^J (see Figure 1). The
layers are connected by undirected weights W in R^{I x J}. Each unit also receives input from a bias
parameter b_j or c_i. The joint configuration of binary states {x, z} has an energy given by:

E(x, z) = -z^T W x - b^T z - c^T x.   (1)

Figure 1: Structure of the RBM (input layer x with I units, latent layer z with J units, weights W,
biases b and c).

The probability assigned to x is given by:

P(x) = (1/Z) sum_z exp(-E(x, z)),   Z = sum_x sum_z exp(-E(x, z)),   (2)

where Z is known as the partition function, which normalizes P(x) to a valid distribution. The units
in a layer are conditionally independent, with distributions given by logistic functions:

P(z|x) = prod_j P(z_j|x),   P(z_j|x) = 1/(1 + exp(-w_j^T x - b_j)),   (3)
P(x|z) = prod_i P(x_i|z),   P(x_i|z) = 1/(1 + exp(-w_i z - c_i)).    (4)

This enables the model to be sampled via alternating Gibbs sampling between the two layers. To
estimate the maximum likelihood of the data distribution P(x), the RBM is trained by taking the
gradient of the log probability of the input data with respect to the parameters:

d log P(x) / d w_ij  propto  <x_i z_j>_0 - <x_i z_j>_N,   (5)

where <.>_t denotes the expectation under the distribution at the t-th sampling of the Markov chain.
The first term samples the data distribution at t = 0, while the second term approximates the equilibrium distribution at t = infinity using the contrastive divergence method [9], with a small and finite
number of sampling steps N to obtain a distribution of reconstructed states at t = N. RBMs have
also been regularized to produce sparse representations [10, 11].
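Equations (3)-(5) can be turned into a single contrastive-divergence (CD-1) update in a few lines. A minimal NumPy sketch with made-up sizes and learning rate, not the paper's code; hidden probabilities are used in the gradient, a common variance-reduction practice:

```python
# One CD-1 update for a binary RBM, following Equations (3)-(5).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

I, J, lr = 6, 4, 0.1                       # illustrative sizes
W = 0.01 * rng.standard_normal((I, J))
b = np.zeros(J)                            # latent biases
c = np.zeros(I)                            # input biases

x0 = (rng.random(I) < 0.5).astype(float)   # a binary input sample

# Positive phase: P(z|x) at t = 0 (Eq. 3), then sample z.
pz0 = sigmoid(x0 @ W + b)
z0 = (rng.random(J) < pz0).astype(float)

# One Gibbs step: reconstruct x (Eq. 4), then recompute P(z|x) at t = N = 1.
px1 = sigmoid(W @ z0 + c)
x1 = (rng.random(I) < px1).astype(float)
pz1 = sigmoid(x1 @ W + b)

# Gradient of Eq. (5): <x z>_0 - <x z>_N.
W += lr * (np.outer(x0, pz0) - np.outer(x1, pz1))
b += lr * (pz0 - pz1)
c += lr * (x0 - x1)
print(W.shape)  # prints: (6, 4)
```

In practice this update is applied over mini-batches and many epochs.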
Supervised Restricted Boltzmann Machines. To introduce class labels to the RBM, a one-hot
coded output vector y in R^C is defined, where y_c = 1 iff c is the class index. Another set of
weights V in R^{C x J} connects y with z. The two vectors are concatenated to form a new input
vector [x, y] for the RBM, which is linked to z through [W^T, V^T], as shown in Figure 2. This
supervised RBM models the joint distribution P(x, y). The energy function of this model can be
extended to

E(x, y, z) = -z^T W x - z^T V y - b^T z - c^T x - d^T y.   (6)

The conditional distribution of the concatenated vector is now:

P(x, y|z) = P(x|z) P(y|z) = prod_i P(x_i|z) prod_c P(y_c|z),   (7)

where P(x_i|z) is given in Equation 4 and the outputs y_c may either be logistic units or softmax
units. The RBM may again be trained using the contrastive divergence algorithm [9] to approximate
the maximum likelihood of the joint distribution.

Figure 2: A supervised RBM jointly models inputs and outputs. Biases are omitted for simplicity.

During inference, only x is given and y is set at a neutral value, which makes this part of the RBM
"noisy". The objective is to use x to "denoise" y and obtain the prediction. This can be done by
several iterations of alternating Gibbs sampling. If the number of classes is huge, the number of
input units needs to be huge to maintain a high signal-to-noise ratio. Larochelle and Bengio [12]
suggested coupling this generative model P(x, y) with a discriminative model P(y|x), which can
help alleviate this issue. However, if the objective is to train a deep network, then with every new
layer, the previous V has to be discarded and retrained. It may also not be desirable to use a
discriminative criterion directly from the outputs, especially in the initial layers of the network.
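The concatenated input of Equations (6)-(7) can be illustrated concretely. This is a toy sketch with made-up sizes, not the paper's code: the one-hot label y is appended to x, and stacked weights act on [x, y]:

```python
# Concatenated input [x, y] for a supervised RBM (Eqs. 6-7).
import numpy as np

I, C, J = 6, 3, 4                       # input units, classes, latent units
rng = np.random.default_rng(0)
x = rng.random(I)
y = np.zeros(C)
y[1] = 1.0                              # one-hot code: y_c = 1 iff c is the class

xy = np.concatenate([x, y])             # new input vector [x, y]
WV = 0.01 * rng.standard_normal((I + C, J))   # stacked weights [W^T, V^T]^T
z_act = 1.0 / (1.0 + np.exp(-(xy @ WV)))      # P(z_j | x, y)

# At inference, y is unknown and set to a neutral value (e.g. 1/C per
# class) before alternating Gibbs sampling "denoises" it.
y_neutral = np.full(C, 1.0 / C)
print(xy.shape, z_act.shape)  # prints: (9,) (4,)
```

Training then proceeds with the same CD update as an ordinary RBM, applied to the concatenated vector.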
Deep Belief Networks. Deep belief networks (DBN) [1] are probabilistic graphical models made
up of a hierarchy of stochastic latent variables. Being universal approximators [13], they have been
applied to a variety of problems such as image and video recognition [1, 14] and dimension reduction [15]. They follow a two-phase training strategy of unsupervised greedy pre-training followed by
supervised fine-tuning.
For unsupervised pre-training, a stack of RBMs is trained greedily from the bottom-up, with the
latent activations of each layer used as the inputs for the next RBM. Each new layer RBM models the
data distribution P (x), such that when higher-level layers are sufficiently large, the variational bound
on the likelihood always improves [1]. A popular method for supervised fine-tuning backpropagates
the error given by P (y|x) to update the network?s parameters. It has been shown to perform well
when initialized by first learning a model of input data using unsupervised pre-training [15].
An alternative supervised method is a generative model that implements a supervised RBM (Figure 2) to model P(x, y) at the top layer. For training, the network employs the up-down back-fitting
algorithm [1]. The algorithm is initialized by untying the network's recognition and generative
weights. First, a stochastic bottom-up pass is performed and the generative weights are adjusted to
be good at reconstructing the layer below. Next, a few iterations of alternating sampling using the
respective conditional probabilities are done at the top-level supervised RBM between the concatenated vector and the latent layer. Using contrastive divergence, the RBM is updated by fitting to its
posterior distribution. Finally, a stochastic top-down pass adjusts the bottom-up recognition weights to
reconstruct the activations of the layer above.
In this work, we extend the existing DBN training strategy by adding a supervised training phase before the discriminative error backpropagation. A top-down regularization of the network's parameters is proposed. The network is optimized globally so that the inputs gradually map
to the output through the layers. We also retain the simple method of using gradient descent to
update the weights of the RBMs and retain the same convention for generative RBM learning.
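The greedy layer-wise pre-training described above can be sketched as a simple loop: each RBM is fit on the latent activations of the layer below. The `train_rbm` stand-in below only initializes weights (a full CD trainer would go there); sizes are illustrative assumptions:

```python
# Greedy layer-wise pre-training of a stack of RBMs.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_rbm(data, n_hidden, seed=0):
    # Stand-in trainer: a CD-1 loop over `data` would go here; we only
    # initialize weights so the stacking logic stays visible.
    rng = np.random.default_rng(seed)
    return 0.01 * rng.standard_normal((data.shape[1], n_hidden))

def pretrain_dbn(X, layer_sizes):
    weights, inputs = [], X
    for n_hidden in layer_sizes:
        W = train_rbm(inputs, n_hidden)
        weights.append(W)
        inputs = sigmoid(inputs @ W)  # latent activations feed the next RBM
    return weights

X = np.random.default_rng(0).random((10, 8))   # 10 samples, 8 features
Ws = pretrain_dbn(X, [6, 4])
print([W.shape for W in Ws])  # prints: [(8, 6), (6, 4)]
```

Fine-tuning (discriminative or generative) then starts from these pre-trained weights.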
3
Top-Down RBM Regularization: The Building Block
We regularize RBM learning with targets obtained by sampling from higher-level representations.

Generic Cross-Entropy Regularization. The aim is to construct a top-down regularized building
block for deep networks, instead of combining the optimization criteria directly [12], which is done
for the supervised RBM model (Figure 2). To give control over individual elements in the latent
vector, one way to manipulate the representations is to point-wise bias the activations for each latent
variable j [11]. Given a training dataset D_train, a regularizer based on the cross-entropy loss can be
defined to penalize the difference between the latent vector z and a target vector z~:

L_{RBM+reg}(D_train) = - sum_{k=1}^{|D_train|} log P(x_k) - lambda sum_{k=1}^{|D_train|} sum_{j=1}^{J} log P(z~_jk | z_jk).   (8)

The update rule of the cross-entropy-regularized RBM can be modified to:

delta w_ij  propto  <x_i s_j>_0 - <x_i z_j>_N,   (9)

where

s_j = (1 - lambda) z_j + lambda z~_j   (10)

is the merger of the latent and target activations used to update the parameters. Here, the influences
of z~_j and z_j are regulated by the parameter lambda. If lambda = 0, or when the activations match (i.e. z_j = z~_j),
then the parameter update is exactly that of the original contrastive divergence learning algorithm.
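The merged target of Equation (10) is a simple convex combination, as this tiny sketch shows (values are illustrative); it is this `s` that replaces `z` in the positive phase of Equation (9):

```python
# Merged activations of Eq. (10): s = (1 - lambda) * z + lambda * z_tilde.
import numpy as np

lam = 0.3                                 # regularization weight lambda
z = np.array([0.9, 0.1, 0.5])             # bottom-up latent activations
z_tilde = np.array([1.0, 0.0, 0.0])       # top-down target activations
s = (1 - lam) * z + lam * z_tilde
print(s)  # prints: [0.93 0.07 0.35]
```

With `lam = 0` the update reduces to plain contrastive divergence, matching the text.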
Building Block. The same principle of regularizing the latent activations can be used to combine
signals from the bottom-up and top-down. This forms the building block for optimizing a DBN
with top-down regularization. The basic building block is a three-layer structure consisting of three
consecutive layers: the previous z_{l-1} in R^I, current z_l in R^J and next z_{l+1} in R^H layers. The
layers are connected by two sets of weight parameters W_{l-1} and W_l to the previous and next
layers respectively. For the current layer z_l, the bottom-up representations z_{l,l-1} are sampled from
the previous layer z_{l-1} through weighted connections W_{l-1} with:

P(z_{l,l-1,j} | z_{l-1}; W_{l-1}) = 1/(1 + exp(-w_{l-1,j}^T z_{l-1} - b_{l,j})),   (11)

where the two terms in the subscripts of a sampled representation z_{dest,src} refer to the destination
(dest) and source (src) layers respectively. Meanwhile, sampling from the next layer z_{l+1} via
weights W_l drives the top-down representations z_{l,l+1}:

P(z_{l,l+1,j} | z_{l+1}; W_l) = 1/(1 + exp(-w_{l,j} z_{l+1} - c_{l,j})).   (12)

The objective is to learn the RBM parameters W_{l-1} that map from the previous layer z_{l-1} to
the current latent layer z_{l,l-1}, by maximizing the likelihood of the previous layer P(z_{l-1}) while
considering the top-down samples z_{l,l+1} from the next layer z_{l+1} as target representations. The loss
function for a network with L layers can be broken down as:

L_{DBN+topdown} = sum_{l=2}^{L} L_{l,RBM+topdown},   (13)

where the cross-entropy regularized loss function for layer l is

L_{l,RBM+topdown} = - sum_{k=1}^{|D_train|} log P(z_{l-1,k}) - lambda sum_{k=1}^{|D_train|} sum_{j=1}^{J} log P(z_{l,l+1,jk} | z_{l,l-1,jk}).   (14)

This results in the following gradient descent:

delta w_{l-1,ij}  propto  <z_{l-1,l-2,i} s_{l,j}>_0 - <z_{l-1,l,i} z_{l,l-1,j}>_N,   (15)

where

s_{l,jk} = (1 - lambda_l) z_{l,l-1,jk} + lambda_l z_{l,l+1,jk}   (16)

(with the two terms coming from the bottom-up and top-down signals respectively)
is the merged representation from the bottom-up and top-down signals (see Figure 3), weighted by
hyperparameter lambda_l. The bias towards one source of signal can be adjusted by selecting an appropriate
lambda_l. Additionally, the alternating Gibbs sampling, necessary for the contrastive divergence updates,
is performed from the unbiased bottom-up samples using Equation 11 and a symmetric decoder:

P(z_{l-1,l,j} = 1 | z_{l,l-1}; W_{l-1}) = 1/(1 + exp(-w_{l-1,i} z_{l,l-1} - c_{l-1,j})).   (17)

Figure 3: The basic building block learns a bottom-up latent representation regularized by top-down
signals. Bottom-up z_{l,l-1} and top-down z_{l,l+1} latent activations are sampled from z_{l-1} and
z_{l+1} respectively. They are merged to get the modified activations s_l used for parameter updates.
Reconstructions independently driven from the input signals form the Gibbs sampling Markov chain.
4
Globally-Optimized Deep Belief Networks
Forward-Backward Learning Strategy. In the DBN, RBMs are stacked from the bottom-up in
a greedy layer-wise manner, with each new layer modeling the posterior distribution of the previous
layer. Similarly, regularized building blocks can also be used to construct the regularized DBN
(Figure 4). The network, as illustrated in Figure 4(a), comprises a total of L - 1 RBMs. The
network can be trained with a forward and backward strategy (Figure 4(b)). It integrates top-down
regularization with contrastive divergence learning, which is given by alternating Gibbs sampling
between the layers (Figure 4(c)).
[Figure 4 panels: (a) Top-down regularized deep belief network. (b) Forward and backward passes for top-down regularization. (c) Alternating Gibbs sampling chains for contrastive divergence learning.]
Figure 4: Constructing a top-down regularized deep belief network (DBN). All the restricted Boltzmann machines (RBM) that make up the network are concurrently optimized. (a) The building
blocks are connected layer-wise. Both bottom-up and top-down activations are used for training the
network. (b) Activations for the top-down regularization are obtained by sampling and merging the
forward pass and the backward pass. (c) From the activations of the forward pass, the reconstructions
can be obtained by performing alternating Gibbs sampling with the previous layer.
In the forward pass, given the input features, each layer z_l is sampled from the bottom-up, based on the representation of the previous layer z_{l−1} (Equation 11). The top-level vector z_L is activated with the softmax function. Upon reaching the output layer, the backward pass begins. The activations z_L are combined with the output labels y to produce s_L, given by

s_{L,ck} = (1 − γ_L) z_{L,L−1,ck} + γ_L y_{ck}.    (18)
The merged activations s_l (Equation 16), besides being used for parameter updates, have a second role of activating the lower layer z_{l−1} from the top-down:

P(z_{l−1,l,j} = 1 | s_l; W_{l−1}) = 1/(1 + exp(−w_{l−1,j} · s_l − c_{l−1,j})).    (19)

This is repeated until the second layer is reached (l = 2) and s_2 is computed.
Top-down sampling encourages the class-based invariance of the bottom-up representations. However, sampling from the top-down, with the output vector y as the only source, will result in only
one activation pattern per class. This is undesirable, especially for the bottom layers, which should
have representations more heavily influenced by bottom-up data. By merging the top-down representations with the bottom-up ones, the representations will encode both instance-based variations
and class-based variations. In the last layer, we typically set γ_L to 1, so that the final RBM, given by W_{L−1}, learns to map to the class labels y. The backward activation z_{L−1,L} is a class-based invariant representation obtained from y and used to regularize W_{L−2}. All other backward activations from
this point onwards are based on the merged representation from instance- and class-based representations.
Three-Phase Learning Procedure. After greedy learning models P(x), the top-down regularized forward-backward learning is executed. The eventual goal of the network is to be able to give a prediction of P(y|x). This suggests that the network can adopt a three-phase strategy for training,
whereby the parameters learned in one phase initializes the next, as follows:
• Phase 1 – Unsupervised Greedy. The network is constructed by greedily learning a new unsupervised RBM on top of the existing network. To enhance the representations, various regularizations, such as sparsity [10], can be applied. The stacking process is repeated for L − 2 RBMs, until layer L − 1 is added to the network.
• Phase 2 – Supervised Regularized. This phase begins by connecting layer L − 1 to a final layer, which is activated by the softmax activation function for a classification problem. Using the one-hot coded output vector y ∈ R^C as its target activations and setting γ_L to 1, the RBM is learned as an associative memory with the following update:

Δw_{L−1,ic} ∝ ⟨z_{L−1,L−2,i} y_c⟩_0 − ⟨z_{L−1,L,i} z_{L,L−1,c}⟩_N.    (20)
This final RBM, together with the other RBMs learned from Phase 1, form the initialization
for the top-down regularized forward-backward learning algorithm. This phase is used to
fine-tune the network using generative learning, and binds the layers together by aligning
all the parameters of the network with the outputs.
• Phase 3 – Supervised Discriminative. Finally, the supervised error backpropagation algorithm is used to improve class discrimination in the representations. Backpropagation
can also be described in two passes. In the forward pass, each layer is activated from the
bottom-up to obtain the class predictions. The classification error is then computed based
on the groundtruth and the backward pass performs gradient descent on the parameters by
backpropagating the errors through the layers from the top-down.
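As a rough illustration of the Phase 2 associative-memory update (Equation 20), the following NumPy sketch performs one CD-1 step for the final RBM with γ_L = 1. The layer sizes, the learning rate, and the omission of bias terms are choices made for this example, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_hidden, n_classes = 6, 3                      # penultimate units, output classes
W = rng.normal(scale=0.1, size=(n_hidden, n_classes))

z = (rng.random(n_hidden) < 0.5).astype(float)  # bottom-up sample z_{L-1,L-2}
y = np.eye(n_classes)[1]                        # one-hot target (gamma_L = 1)

# One step of alternating Gibbs sampling (CD-1).
y_prob = sigmoid(z @ W)                         # top activation given z
z_prob = sigmoid(W @ y_prob)                    # reconstruction of z given the top
# Equation 20: positive phase pairs z with the label,
# negative phase uses the one-step chain statistics.
delta_W = np.outer(z, y) - np.outer(z_prob, y_prob)
W = W + 0.1 * delta_W
```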
From Phase 1 to Phase 2, the form of the parameter update rule based on gradient descent does not
change; only the top-down signals are additionally taken into account. Essentially, the two phases are
performing a variant of the contrastive divergence algorithm. Meanwhile, from Phase 2 to Phase 3,
the inputs to the phases (x and y) do not change, while the optimization function is modified from
performing regularization to being completely discriminative.
5 Empirical Evaluation
In this work, the proposed deep learning strategy and top-down regularization method were evaluated and analyzed using the MNIST handwritten digit dataset [16] and the Caltech-101 object
recognition dataset [17].
5.1 MNIST Handwritten Digit Recognition
The MNIST dataset contains images of handwritten digits. The task is to recognize a digit from
0 to 9 given a 28 × 28 pixel image. The dataset is split into 60,000 images used for training and 10,000 test images. Many different methods have used this dataset to perform evaluation on classification performance, notably the DBN [1]. The basic version of this dataset, with neither preprocessing nor enhancements, was used for the evaluation. A five-layer DBN was set up to have the same topology as evaluated in [1]. The number of units in each layer, from the first to the last layer, were 784, 500, 500, 2000 and 10, in that order. Five architectural setups were tested:
1. Stacked RBMs with up-down learning (original DBN reported in [1]),
2. Stacked RBMs with forward-backward learning and backpropagation,
3. Stacked sparse RBMs [11] with forward-backward learning and backpropagation,
4. Stacked sparse RBMs [11] with backpropagation, and
5. Forward-backward learning from random weights.
In phases 1 and 2, we followed the evaluation procedure of Hinton et al. [1] by initially using 44,000 training and 10,000 validation images to train the network before retraining it with the full training set. In phase 3, sets of 50,000 and 10,000 images were used as the initial training and validation sets. After model selection, the network was retrained on the training set of 60,000 images.
To simplify the parameterization for the forward-backward learning in phase 2, the top-down modulation parameters γ_l across the layers were controlled by a single parameter β using the function:

γ_l = |l − 1|^β / (|l − 1|^β + |L − l|^β),    (21)

where β > 0. The top-down influence for a layer l is also dependent on its relative position in the network. The function assigns γ_l such that the layers nearer to the input will have stronger influences from the input, while the layers near the output will be biased towards the output. This distance-based modulation of their influences enables a gradual mapping between the input and output layers.
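The schedule in Equation 21 can be sketched directly. This is a minimal illustration in which the denominator is assumed to sum the two distance terms, which yields γ_1 = 0 and γ_L = 1, matching the described behavior of input-dominated lower layers and output-dominated upper layers:

```python
def gamma(l, L, beta):
    # Equation 21: distance-based top-down modulation for layer l in {1, ..., L}.
    num = abs(l - 1) ** beta
    return num / (num + abs(L - l) ** beta)

# Layers near the input get gamma close to 0 (bottom-up dominated);
# layers near the output get gamma close to 1 (top-down dominated).
schedule = [gamma(l, L=5, beta=2.0) for l in range(1, 6)]
# schedule == [0.0, 0.1, 0.5, 0.9, 1.0]
```

Larger β sharpens the transition between the input-dominated and output-dominated halves of the network.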
Our best performance was obtained using setting 3, which got an error rate of 0.91% on the test
set. Figure 5 shows the 91 wrongly classified test examples for this setting. When initialized with
the conventional RBMs but fine-tuned with forward-backward learning and error backpropagation,
the score was 0.98%. As a comparison, the conventional DBN obtained an error rate of 1.25%.
Directly optimizing the network from random weights produced an error of 1.61%, which is still
fairly decent, considering that the network was optimized globally from scratch. For each setup, the
intermediate results for each training phase are reported in Table 1.
Overall, the results achieved are very competitive for methods with the same complexity that rely on
neither convolution nor image distortions and normalization. A variant of the DBN, which focused
on learning nonlinear transformations of the feature space for nearest neighbor classification [18],
had an error rate of 1.0%. The deep convex net [19], which utilized more complex convex-optimized
modules as building blocks but did not perform fine-tuning on a global network level, got a score
of 0.83%. At the time of writing, the best performing model on the dataset gave an error rate of
0.23% and used a heavy architecture of a committee of 35 deep convolutional neural nets with
elastic distortions and image normalization [20].
From Table 1, we can observe that each of the three learning phases helped to improve the overall
performance of the networks. The forward-backward algorithm outperforms the up-down learning of the original DBN. Using sparse RBMs [11] and backpropagation, it was possible to further
improve the recognition performances. The forward-backward learning was effective as a bridge
between the other two phases, with an improvement of 0.17% over the setup without phase 2. The
method was even effective as a standalone algorithm, demonstrating its potential by learning from randomly
initialized weights.
Table 1: Results on MNIST after various phases of the training process.

Setup (Phase 1)      Learning algorithm (Phase 2)*   Error: Phase 1   Phase 2   Phase 3

Deep belief network (reported in [1])
1. RBMs              Up-down                          2.49%            1.25%     –

Proposed top-down regularized deep belief network
2. RBMs              Forward-backward                 2.49%            1.14%     0.98%
3. Sparse RBMs       Forward-backward                 2.14%            1.06%     0.91%
4. Sparse RBMs       –                                2.14%            –         1.08%
5. Random weights    Forward-backward                 –                1.61%     –

*Phase 3 runs the error backpropagation algorithm whenever employed.
Figure 5: The 91 wrongly classified test examples from the MNIST dataset.
5.2 Caltech-101 Object Recognition
The Caltech-101 dataset [17] is one of the most popular datasets for object recognition evaluation. It contains 9,144 images belonging to 101 object categories and one background class. The images were first resized while retaining their original aspect ratios, such that the longer spatial dimension was at most 300 pixels. SIFT descriptors [21] were extracted from densely sampled patches of 16 × 16 pixels at 4-pixel intervals. The SIFT descriptors were ℓ1-normalized by constraining each descriptor vector to sum to a maximum of one, resulting in a quasi-binary feature. Additionally, SIFT descriptors from a spatial neighborhood of 2 × 2 were concatenated to form a macrofeature [22].
A DBN setup was used to learn a dictionary to map local macrofeatures to a mid-level representation. Two layers of RBMs were stacked to model the macrofeatures. Both RBMs were regularized with population and lifetime sparseness during training [23]. First, a single RBM, which had 1024 latent variables, was trained from macrofeatures. A set of 200,000 randomly selected macrofeatures was used for training this first layer. The resulting representations of the first RBM were then concatenated within each spatial neighborhood of 2 × 2. The second RBM modeled this spatially aggregated representation into a higher-level representation. Another set of 200,000 randomly selected spatially aggregated representations was used for training this RBM.
The higher-level RBM representation was associated with the image label. For each experimental trial, a set of 30 training examples per class (totaling 3,060) was randomly selected for supervised learning. The forward-backward learning algorithm was used to regularize the learning while fine-tuning the network. Finally, error backpropagation was performed to further optimize the dictionary.
From these representations, max-pooling within spatial regions defined by a spatial pyramid was
employed [22, 24] to obtain a single vector representing the whole image. It is also possible to
employ more advanced pooling schemes [25]. A linear SVM classifier was then trained, using the
same train-test split from the previous supervised learning phase.
Table 2 shows the average class-wise classification accuracy, averaged across 102 classes and 10 experimental trials. The results demonstrate a consistent improvement moving from Phase 1 to Phase 3. The final accuracy obtained was 79.7%. This outperforms all existing dictionary learning methods based on single image descriptors, with a 0.8% improvement over the previous state-of-the-art results [23, 28]. As a comparison, other existing reported dictionary learning methods that encode SIFT-based local descriptors are also included in Table 2.

Table 2: Classification accuracy on Caltech-101.

Method / Training phase                  Accuracy

Proposed top-down regularized DBN
Phase 1: Unsupervised stacking           72.8%
Phase 2: Top-down regularization         78.2%
Phase 3: Error backpropagation           79.7%

Sparse coding & max-pooling [22]         73.4%
Extended HMAX [26]                       76.3%
Convolutional RBM [27]                   77.8%
Unsupervised & supervised RBM [23]       78.9%
Gated Convolutional RBM [28]             78.9%

6 Conclusion
We proposed the notion of deep learning by gradually transitioning from being fully unsupervised to strongly discriminative. This is achieved through the introduction of an intermediate phase between the unsupervised and supervised learning phases. This notion is implemented by incorporating top-down information into DBNs through regularization. The method is easily integrated into the intermediate learning phase based on simple building blocks. It can be performed to complement greedy layer-wise unsupervised learning and discriminative optimization using error backpropagation. Empirical evaluations show that the method leads to competitive results for handwritten digit recognition and object recognition datasets.
References
[1] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief networks," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[2] Y. LeCun, "Une procédure d'apprentissage pour réseau à seuil asymétrique (a learning scheme for asymmetric threshold networks)," in Cognitiva 85, 1985.
[3] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533–536, October 1986.
[4] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
[5] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, "Greedy layer-wise training of deep networks," in NIPS, 2006.
[6] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun, "Efficient learning of sparse representations with an energy-based model," in NIPS, 2006.
[7] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in ICML, 2008.
[8] P. Smolensky, "Information processing in dynamical systems: Foundations of harmony theory," in Parallel Distributed Processing: Volume 1: Foundations, ch. 6, pp. 194–281, MIT Press, 1986.
[9] G. E. Hinton, "Training products of experts by minimizing contrastive divergence," Neural Computation, vol. 14, no. 8, pp. 1771–1800, 2002.
[10] H. Lee, C. Ekanadham, and A. Ng, "Sparse deep belief net model for visual area V2," in NIPS, 2008.
[11] H. Goh, N. Thome, and M. Cord, "Biasing restricted Boltzmann machines to manipulate latent selectivity and sparsity," in NIPS Workshop, 2010.
[12] H. Larochelle and Y. Bengio, "Classification using discriminative restricted Boltzmann machines," in ICML, 2008.
[13] N. Le Roux and Y. Bengio, "Representational power of restricted Boltzmann machines and deep belief networks," Neural Computation, vol. 20, pp. 1631–1649, June 2008.
[14] I. Sutskever and G. E. Hinton, "Learning multilevel distributed representations for high-dimensional sequences," in AISTATS, 2007.
[15] G. E. Hinton and R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, pp. 504–507, 2006.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, pp. 2278–2324, November 1998.
[17] L. Fei-Fei, R. Fergus, and P. Perona, "Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories," CVPR Workshop, 2004.
[18] R. Salakhutdinov and G. E. Hinton, "Learning a nonlinear embedding by preserving class neighbourhood structure," in AISTATS, 2007.
[19] L. Deng and D. Yu, "Deep convex net: A scalable architecture for speech pattern classification," in Interspeech, 2011.
[20] D. C. Cireşan, U. Meier, and J. Schmidhuber, "Multi-column deep neural networks for image classification," in CVPR, 2012.
[21] D. Lowe, "Object recognition from local scale-invariant features," in CVPR, 1999.
[22] Y. Boureau, F. Bach, Y. LeCun, and J. Ponce, "Learning mid-level features for recognition," in CVPR, 2010.
[23] H. Goh, N. Thome, M. Cord, and J.-H. Lim, "Unsupervised and supervised visual codes with restricted Boltzmann machines," in ECCV, 2012.
[24] S. Lazebnik, C. Schmid, and J. Ponce, "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories," in CVPR, 2006.
[25] S. Avila, N. Thome, M. Cord, E. Valle, and A. Araújo, "Pooling in image representation: the visual codeword point of view," Computer Vision and Image Understanding, pp. 453–465, May 2013.
[26] C. Theriault, N. Thome, and M. Cord, "Extended coding and pooling in the HMAX model," IEEE Transactions on Image Processing, 2013.
[27] K. Sohn, D. Y. Jung, H. Lee, and A. Hero III, "Efficient learning of sparse, distributed, convolutional feature representations for object recognition," in ICCV, 2011.
[28] K. Sohn, G. Zhou, C. Lee, and H. Lee, "Learning and selecting features jointly with point-wise gated Boltzmann machines," in ICML, 2013.
Adaptive dropout for training deep neural networks
Lei Jimmy Ba Brendan Frey
Department of Electrical and Computer Engineering
University of Toronto
jimmy, [email protected]
Abstract
Recently, it was shown that deep neural networks can perform very well if the
activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called "standout" in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This "adaptive dropout network" can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using
back-propagation, and using stochastic gradient descent. Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural
network parameters, suggesting that a good dropout network regularizes activities
according to magnitude. When evaluated on the MNIST and NORB datasets, we
found that our method achieves lower classification error rates than other feature
learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80% and 5.8%
errors on the MNIST and NORB test sets, which is better than state-of-the-art
results obtained using feature learning methods, including those that use convolutional architectures.
1 Introduction
For decades, deep networks with broad hidden layers and full connectivity could not be trained to
produce useful results, because of overfitting, slow convergence and other issues. One approach
that has proven to be successful for unsupervised learning of both probabilistic generative models
and auto-encoders is to train a deep network layer by layer in a greedy fashion [7]. Each layer of
connections is learnt using contrastive divergence in a restricted Boltzmann machine (RBM) [6] or
backpropagation through a one-layer auto-encoder [1], and then the hidden activities are used to
train the next layer. When the parameters of a deep network are initialized in this way, further fine
tuning can be used to improve the model, e.g., for classification [2]. The unsupervised, pre-training
stage is a crucial component for achieving competitive overall performance on classification tasks,
e.g., Coates et al. [4] have achieved improved classification rates by using different unsupervised
learning algorithms.
Recently, a technique called dropout was shown to significantly improve the performance of deep
neural networks on various tasks [8], including vision problems [10]. Dropout randomly sets hidden
unit activities to zero with a probability of 0.5 during training. Each training example can thus
be viewed as providing gradients for a different, randomly sampled architecture, so that the final
neural network efficiently represents a huge ensemble of neural networks, with good generalization
capability. Experimental results on several tasks show that dropout frequently and significantly
improves the classification performance of deep architectures. Injecting noise for the purpose of
regularization has been studied previously, but in the context of adding noise to the inputs [3],[21]
and to network components [16].
Unfortunately, when dropout is used to discriminatively train a deep fully connected neural network
on input with high variation, e.g., in viewpoint and angle, little benefit is achieved (section 5.5),
unless spatial structure is built in.
In this paper, we describe a generalization of dropout, where the dropout probability for each
hidden variable is computed using a binary belief network that shares parameters with the deep
network. Our method works well both for unsupervised and supervised learning of deep networks.
We present results on the MNIST and NORB datasets showing that our ?standout? technique can
learn better feature detectors for handwritten digit and object recognition tasks. Interestingly, we
also find that our method enables the successful training of deep auto-encoders from scratch, i.e.,
without layer-by-layer pre-training.
2 The model
The original dropout technique [8] uses a constant probability for omitting a unit, so a natural question we considered is whether it may help to let this probability be different for different hidden
units. In particular, there may be hidden units that can individually make confident predictions for
the presence or absence of an important feature or combination of features. Dropout will ignore this
confidence and drop the unit out 50% of the time. Viewed another way, suppose after dropout is
applied, it is found that several hidden units are highly correlated in the pre-dropout activities. They
could be combined into a single hidden unit with a lower dropout probability, freeing up hidden
units for other purposes.
We denote the activity of unit j in a deep neural network by aj and assume that its inputs are
{ai : i < j}. In dropout, aj is randomly set to zero with probability 0.5. Let mj be a binary variable
that is used to mask the activity a_j, so that its value is

a_j = m_j g( Σ_{i:i<j} w_{j,i} a_i ),    (1)

where w_{j,i} is the weight from unit i to unit j, g(·) is the activation function, and a_0 = 1 accounts
for biases. Whereas in standard dropout, mj is Bernoulli with probability 0.5, here we use an
adaptive dropout probability that depends on input activities:
P(m_j = 1 | {a_i : i < j}) = f( Σ_{i:i<j} π_{j,i} a_i ),    (2)

where π_{j,i} is the weight from unit i to unit j in the standout network, or adaptive dropout network; f(·) is a sigmoidal function, f : R → [0, 1]. We use the logistic function, f(z) = 1/(1 + exp(−z)).
The standout network is an adaptive dropout network that can be viewed as a binary belief network that overlays the neural network and stochastically adapts its architecture, depending on the
input. Unlike a traditional belief network, the distribution over the output variable is not obtained
by marginalizing over the hidden mask variables. Instead, the distribution over the hidden mask
variables should be viewed as specifying a Bayesian posterior distribution over models. Traditional
Bayesian inference generates a posterior distribution that does not depend on the input at test time,
whereas the posterior distribution described here does depend on the test input. At first, this may
seem inappropriate. However, if we could exactly compute the Bayesian posterior distribution over
neural networks (parameters and architectures), we would find strong correlations between components, such as the connectivity and weight magnitudes in one layer and the connectivity and weight
magnitudes in the next layer. The standout network described above can be viewed as approximately
taking into account these dependencies through the use of a parametric family of distributions.
The standout method described here can be simplified to obtain other dropout techniques. The
original dropout method is obtained by clamping πj,i = 0 for 0 ≤ i < j. Another interesting
setting is obtained by clamping πj,i = 0 for 1 ≤ i < j, but learning the input-independent dropout
parameter πj,0 for each unit aj .
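As a concrete illustration, the stochastic forward pass of equations 1 and 2 can be sketched in a few lines of NumPy. The layer sizes and the ReLU/logistic choices below are illustrative, not the paper's exact configuration:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def standout_forward(a_in, w, pi, rng):
    """One standout layer: each unit's keep-probability is computed from
    the same inputs that drive its activity (equations 1 and 2)."""
    keep_prob = logistic(a_in @ pi.T)                 # P(m_j = 1 | inputs)
    m = (rng.random(keep_prob.shape) < keep_prob).astype(float)
    return m * relu(a_in @ w.T), keep_prob            # a_j = m_j g(sum_i w_{j,i} a_i)

rng = np.random.default_rng(0)
a_in = rng.standard_normal((4, 8))       # minibatch of 4, 8 input units (illustrative sizes)
w = rng.standard_normal((16, 8)) * 0.1   # neural-network weights
pi = rng.standard_normal((16, 8)) * 0.1  # standout-network weights
a_out, keep_prob = standout_forward(a_in, w, pi, rng)
```

Note that the mask is resampled per example, so each input effectively passes through a different thinned network.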
As in standard dropout, to process an input at test time, the stochastic feedforward process is replaced
by taking the expectation of equation 1:

$$ E[a_j] = f\!\left(\sum_{i:i<j} \pi_{j,i}\, a_i\right) g\!\left(\sum_{i:i<j} w_{j,i}\, a_i\right). \qquad (3) $$
We found that this method provides very similar results to randomly simulating the stochastic
process and computing the expected output of the neural network.
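A quick way to see that equation 3 is the exact expectation of the stochastic pass for a single layer is to compare it against a Monte Carlo average over sampled masks. The sketch below uses small, arbitrary layer sizes:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(1)
a_in = rng.standard_normal((1, 8))       # one input vector (illustrative sizes)
w = rng.standard_normal((5, 8)) * 0.1
pi = rng.standard_normal((5, 8)) * 0.1

# equation (3): keep-probability times the deterministic activation
expected = logistic(a_in @ pi.T) * relu(a_in @ w.T)

# Monte Carlo check: average the stochastic pass over many sampled masks
keep = logistic(a_in @ pi.T)
masks = (rng.random((50000, 5)) < keep).astype(float)   # broadcast over (1, 5)
mc = (masks * relu(a_in @ w.T)).mean(axis=0)
```

For a single layer the agreement is exact in expectation; for deeper networks equation 3 applied layer by layer is only an approximation, which matches the empirical observation above.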
3 Learning
For a specific configuration m of the mask variables, let L(m, w) denote the likelihood of a training
set or a minibatch, where w is the set of neural network parameters. It may include a prior as well.
The dependence of L on the input and output has been suppressed for notational simplicity. Given
the current dropout parameters π, the standout network acts like a binary belief network that generates a distribution over the mask variables for the training set or minibatch, denoted P(m | π, w).
Again, we have suppressed the dependence on the input to the neural network. As described above,
this distribution should not be viewed as the distribution over hidden variables in a latent variable
model, but as an approximation to a Bayesian posterior distribution over model architectures.
The goal is to adjust π and w to make P(m | π, w) close to the true posterior over architectures
as given by L(m, w), while also adjusting L(m, w) so as to maximize the data likelihood w.r.t. w.
Since both the approximate posterior P(m | π, w) and the likelihood L(m, w) depend on the neural
network parameters, we use a crude approximation that we found works well in practice. If the
approximate posterior were as close as possible to the true posterior, then the derivative of the free
energy F(P, L) w.r.t. P would be zero and we could ignore terms of the form ∂P/∂w. So, we adjust
the neural network parameters using the approximate derivative,
$$ \sum_{m} P(m \mid \pi, w)\, \frac{\partial}{\partial w} \log L(m, w), \qquad (4) $$
which can be computed by sampling from P(m | π, w).
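The sampled-mask gradient estimate of equation 4 can be made concrete on a toy model. The linear "autoencoder" below is a stand-in chosen only because its expected gradient has a closed form to check against; it is not the architecture used in the paper:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
d, h = 6, 4                              # illustrative sizes
x = rng.standard_normal(d)
W = rng.standard_normal((h, d)) * 0.3    # encoder weights
V = rng.standard_normal((d, h)) * 0.3    # decoder weights
pi = rng.standard_normal((h, d)) * 0.3   # standout-network weights

# Toy linear model with log L(m, w) = -0.5 ||x - V (m * Wx)||^2,
# chosen so the mask expectation has a closed form to check against.
u = W @ x
keep = logistic(pi @ x)                  # P(m_j = 1 | x)

# Monte Carlo estimate of  sum_m P(m | pi, w) d(log L)/dV   (equation 4)
S = 20000
M = (rng.random((S, h)) < keep).astype(float)
hid = M * u                              # masked hidden activities, one row per sample
resid = x - hid @ V.T                    # reconstruction residual per sample
g_mc = np.einsum('sd,sh->dh', resid, hid) / S

# Closed form: E[d logL/dV] = x E[h]^T - V E[h h^T], with
# E[h h^T] = (p*u)(p*u)^T + diag(p(1-p) u^2) for independent Bernoulli masks
Ehh = np.outer(keep * u, keep * u) + np.diag(keep * (1.0 - keep) * u * u)
g_exact = np.outer(x, keep * u) - V @ Ehh
```

In a real network the closed form is unavailable, which is why the method resorts to sampling masks from the standout network.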
For a given setting of the neural network parameters, the standout network can in principle be adjusted to be closer to the Bayesian posterior by following the derivative of the free energy F(P, L)
w.r.t. π. This is difficult in practice, so we use an approximation where we assume the approximate
mj = 0 and mj = 1 and determine the partial contribution to the free energy. The standout network
parameters are adjusted for that hidden unit so as to decrease the partial contribution to the free
energy. Namely, the standout network updates are obtained by sampling the mask variables using
the current standout network, performing forward propagation in the neural network, and computing the data likelihood. The mask variables are sequentially perturbed by combining the standout
network probability for the mask variable with the data likelihood under the neural network, using
a partial forward propagation. The resulting mask variables are used as complete data for updating
the standout network.
The above learning technique is approximate, but it works well in practice and yields models that
outperform standard dropout and other feature learning techniques, as described below.
Algorithm 1: Standout learning algorithm: alg1 and alg2
Notation: H(·) is the Heaviside step function
Input: w, π, α, β
alg1: initialize w, π randomly;  alg2: initialize w randomly, set π = w
while not stopping criteria do
    for hidden unit j = 1, 2, ... do
        P(mj = 1 | {ai : i < j}) = f( α Σ_{i:i<j} πj,i ai + β )
        mj ∼ P(mj = 1 | {ai : i < j})
        aj = mj g( Σ_{i:i<j} wj,i ai )
    end
    Update neural network parameters w using ∂/∂w log L(m, w)
    /* alg1 */
    for hidden unit j = 1, 2, ... do
        tj = H( L(m, w | mj = 1) − L(m, w | mj = 0) )
    end
    Update standout network π using target t
    /* alg2 */
    Update standout network π using π ← αw + β
end
3.1 Stochastic adaptive mixtures of local experts
A neural network of N hidden units can be viewed as 2^N possible models given the standout mask
M. Each of the 2^N models acts like a separate "expert" network that performs well for a subset
of the input space. Training all 2^N models separately can easily over-fit to the data, but weight
sharing among the models can prevent over-fitting. Therefore, the standout network, much like a
gating network, also produces a distributed representation to stochastically choose which expert to
Figure 1: Weights from hidden units that are least likely to be dropped out, for examples from each
of the 10 classes, for (top) auto-encoder and (bottom) discriminative neural networks trained using
standout.
Figure 2: First layer standout network filters and neural network filters learnt from MNIST data
using our method.
turn on for a given input. This means 2^N models are chosen by N binary numbers in this distributed
representation.
The standout network partitions the input space into different regions that are suitable for each
expert. We can visualize the effect of the standout network by showing the units that output high
standout probability for one class but not others. The standout network learns that some hidden units
are important for one class and tends to keep those. These hidden units are then more likely to be
dropped out when the input comes from a different class.
4 Exploratory experiments
Here, we study different aspects of our method using MNIST digits (see below for more details).
We trained a shallow one hidden layer auto-encoder on MNIST using the approximate learning
algorithm. We can visualize the effect of the standout network by showing the units that output low
dropout probability for one class but not others. The standout network learns that some hidden units
are important for one class and tends to keep those. These hidden units are more likely to be dropped
when the input comes from a different class (see figure 1).
The first layer filters of both the standout network and the neural network are shown in figure 2.
We noticed that the weights in the two networks are very similar. Since the learning algorithm for
adjusting the dropout parameters is computationally burdensome (see above), we considered tying
the parameters w and π. To account for different scales and shifts, we set π = αw + β, where α and
β are learnt.
Concretely, we found empirically that the standout network parameters trained in this way are quite
similar (although not identical) to the neural network parameters, up to an affine transformation.
This motivated our second algorithm, alg2 in Algorithm 1, where the neural network parameters
are trained as described in section 3, but the standout parameters are set to an affine transformation of the neural network parameters with hyper-parameters α and β. These hyper-parameters are determined as explained below. We found that this technique works very well in
practice, for the MNIST and NORB datasets (see below). For example, for unsupervised learning
on MNIST using the architecture described below, we obtained 153 errors for tied parameters and
158 errors for separately learnt parameters. This tied parameter learning algorithm is used for the
experiments in the rest of the paper. In the above description of our method, we mentioned two
hyper-parameters that need to be considered: the scale parameter α and the bias parameter β. Here
we explore the choice of these parameters by presenting some experimental results obtained by
training a dropout model as described below using MNIST handwritten digit images.
α controls the sensitivity of the dropout function to the weighted sum of inputs that is used to
determine the hidden activity. In particular, α scales the weighted sum of the activities from the
layer before. In contrast, the bias β shifts the dropout probability to be high or low and ultimately
controls the sparsity of the hidden unit activities. A model with a more negative β will have most of
its hidden activities concentrated near zero.
Figure 3(a) illustrates how choices of α and β change the dependence of the dropout probability on
the input. It shows a histogram of hidden unit activities after training networks with different α's
and β's on MNIST images.
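The roles of α and β can be sketched directly: with tied parameters π = αw + β, setting α = β = 0 recovers standard dropout (keep probability 0.5), while a large β effectively disables dropout, giving a plain auto-encoder. The sizes below are arbitrary:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def keep_prob(a_in, w, alpha, beta):
    """Tied-parameter standout (alg2): the standout weights are
    alpha * w, plus a shared shift beta."""
    return logistic(alpha * (a_in @ w.T) + beta)

rng = np.random.default_rng(3)
a_in = rng.standard_normal((2, 8))       # illustrative sizes
w = rng.standard_normal((5, 8)) * 0.1

p_dropout = keep_prob(a_in, w, alpha=0.0, beta=0.0)   # standard dropout: p = 0.5 everywhere
p_ae = keep_prob(a_in, w, alpha=0.0, beta=8.0)        # large beta: units almost never dropped
p_standout = keep_prob(a_in, w, alpha=1.0, beta=0.0)  # an input-dependent keep-probability
```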
Figure 3: (a) Histograms of hidden unit activities for various choices of hyper-parameters using the logistic dropout function, including configurations equivalent to dropout and to no dropout-based regularization (AE). (b) Histograms of hidden unit activities for various standout functions f(·).
We also consider forms of the dropout function other than the logistic function, as shown
in figure 3(b). The effect of different functional forms can be observed in the histogram of the
activities after training on the MNIST images. The logistic dropout function creates a sparse
distribution of activation values, whereas functions such as f(z) = 1 − 4(1 − σ(z))σ(z) produce
a multi-modal distribution over the activation values.
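The two dropout functions compared in figure 3(b) are easy to write down; the sketch below only checks their qualitative shapes (monotone versus symmetric with a minimum at zero):

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def f_logistic(z):
    return sigma(z)                      # monotone: keeps one tail of pre-activations

def f_alt(z):
    # the alternative from the text: minimum keep-probability at z = 0,
    # so strongly positive and strongly negative pre-activations are kept
    return 1.0 - 4.0 * (1.0 - sigma(z)) * sigma(z)

z = np.linspace(-5.0, 5.0, 101)
```

Because f_alt keeps units whose pre-activation is far from zero in either direction, the surviving activations cluster in two groups, which is consistent with the multi-modal histograms described above.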
5 Experimental results
We consider both unsupervised learning and discriminative learning tasks, and compare results obtained using standout to those obtained using restricted Boltzmann machines (RBMs) and autoencoders trained using dropout, for unsupervised feature learning tasks. We also investigate classification performance by applying standout during discriminative training using the MNIST and
NORB [11] datasets.
In our experiments, we have made a few engineering choices that are consistent with previous publications in the area, so that our results are comparable to the literature. We used ReLU units, a linear
momentum schedule, and an exponentially decaying learning rate (cf. Nair and Hinton [13]; Hinton et al. [8]). In addition, we used cross-validation to search over the learning rate (0.0001,
0.0003, 0.001, 0.003, 0.01, 0.03) and the values of α and β (-2, -1.5, -1, -.5, 0, .5, 1, 1.5, 2)
and, for the NORB dataset, the number of hidden units (1000, 2000, 4000, 6000).
5.1 Datasets
The MNIST handwritten digit dataset is generally considered as a well-studied problem, which
offers the ability to ensure that new algorithms produce sensible results when compared to the many
other techniques that have been benchmarked. It consists of ten classes of handwritten digits, ranging
from 0 to 9. There are, in total, 60,000 training images and 10,000 test images. Each image is 28×28
pixels in size. Following the common convention, we randomly separate the original training set
into 50,000 training cases and 10,000 cases used for validating the choice of hyper-parameters. We
concatenate all the pixels in an image in a raster scan fashion to create a 784-dimensional vector.
The task is to predict the 10 class labels from the 784-dimensional input vector.
The small NORB normalized-uniform dataset contains 24,300 training examples and 24,300 test
examples. It consists of 50 different objects from five different classes: cars, trucks, planes, animals,
and humans. Each data point is represented by a stereo image pair of size 96×96 pixels. The training
and test set used different object instances and images are created under different lighting conditions,
elevations and azimuths. Performing well on NORB requires learning algorithms that can learn
features which generalize to the test set and that can handle a large input dimension. This makes
NORB significantly more challenging than the MNIST dataset. The objects in the NORB dataset
are 3D and differ in out-of-plane rotation, lighting, and so on. Therefore, the models trained on NORB
have to learn and store implicit representations of 3D structure and lighting. We formulate
the data vector following Snoek et al. [17] by down-sampling from 96×96 to 32×32, so that the
final training data vector has 2048 dimensions. Each input dimension is normalized by subtracting
its mean and dividing by its standard deviation across the whole training set to normalize the
contrast. The goal is to predict the five class labels for the previously unseen 24,300 test examples.
The training set is separated into 20,000 for training and 4,300 for validation.
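The NORB preprocessing described above can be sketched as follows. Block-average down-sampling is one plausible choice, since the text does not specify the exact down-sampling method, and the random array below stands in for real stereo images:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for NORB stereo pairs: (n, 2 channels, 96, 96), random pixel values
images = rng.integers(0, 256, size=(50, 2, 96, 96)).astype(float)

# Down-sample 96 -> 32 by 3x3 block averaging (an assumed method;
# the text only says the images are down-sampled from 96x96 to 32x32)
down = images.reshape(50, 2, 32, 3, 32, 3).mean(axis=(3, 5))
X = down.reshape(50, -1)                 # 2 * 32 * 32 = 2048 dimensions

# Contrast normalization: per-dimension mean/std over the training set
X = (X - X.mean(axis=0)) / X.std(axis=0)
```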
5.2 Nonlinearity for feedforward network
We used the ReLU [13] activation function for all of the results reported here, both on unsupervised
and discriminative tasks. The ReLU function can be written as g(x) = max(0, x). We found that
its use significantly speeds up training by up to 10-fold, compared to the commonly used logistic
activation function. The speed-up we observed can be explained in two ways. First, computations
are saved when using max instead of the exponential function. Second, ReLUs do not suffer from
the vanishing gradient problem that logistic functions have for very large inputs.
5.3 Momentum
We optimized the model parameters using stochastic gradient descent with the Nesterov momentum
technique [19], which can effectively speed up learning when applied to large models compared to
standard momentum. When using Nesterov momentum, the cost function J and its derivatives
∂J/∂θ are evaluated at θ + v_k, where

$$ v_k = \mu v_{k-1} + \epsilon \frac{\partial J}{\partial \theta} $$

is the velocity and θ is the model parameter; μ < 1 is the momentum coefficient and ε is the
learning rate. Nesterov momentum takes into account the velocity in parameter space when
computing updates. Therefore, it further reduces oscillations compared to standard momentum.
We schedule the momentum coefficient μ to further speed up the learning process. μ starts at 0.5 in
the first epoch and linearly increases to 0.99. The momentum stays at 0.99 during the major portion
of learning and then is linearly ramped down to 0.5 toward the end of learning.
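The Nesterov update and the momentum ramp can be sketched as below. The 25%-of-epochs ramp length is an assumption, since the text only gives the endpoint values (0.5 and 0.99), and the velocity update subtracts the gradient because we minimize the cost:

```python
import numpy as np

def momentum_schedule(epoch, n_epochs, mu_min=0.5, mu_max=0.99):
    """Linear ramp 0.5 -> 0.99, hold at 0.99, then ramp back down to 0.5.
    The ramp length (a quarter of training) is assumed for illustration."""
    ramp = n_epochs // 4
    if epoch < ramp:
        return mu_min + (mu_max - mu_min) * epoch / ramp
    if epoch > n_epochs - ramp:
        return mu_max - (mu_max - mu_min) * (epoch - (n_epochs - ramp)) / ramp
    return mu_max

def nesterov_step(theta, v, grad, mu, lr):
    """Evaluate the gradient at the look-ahead point theta + mu * v,
    then update the velocity and the parameters."""
    g = grad(theta + mu * v)
    v = mu * v - lr * g
    return theta + v, v

# Minimize J(theta) = 0.5 ||theta||^2, whose gradient is theta itself
theta = np.array([5.0, -3.0])
v = np.zeros_like(theta)
for epoch in range(200):
    mu = momentum_schedule(epoch, 200)
    theta, v = nesterov_step(theta, v, lambda t: t, mu, lr=0.05)
```

The look-ahead evaluation at θ + μv is what distinguishes Nesterov momentum from the standard heavy-ball update, which evaluates the gradient at θ.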
5.4 Computation time
We used the publicly available gnumpy library [20] to implement our models. The models mentioned
in this work are trained on a single Nvidia GTX 580 GPU. As shown in Algorithm 1, the first algorithm
is relatively slow, since the number of computations is O(n²), where n is the number of hidden units.
The second algorithm is much faster and takes O(kn) time, where k is the number of configurations
of the hyper-parameters α and β that are searched over. In particular, for a 784-1000-784 autoencoder model with mini-batches of size 100 and 50,000 training cases on a GTX 580 GPU, learning
takes 1.66 seconds per epoch for standard dropout and 1.73 seconds for our second algorithm.
The computational cost of the improved representations produced by our algorithm is that a hyperparameter search is needed. We note that some other recently developed dropout-related methods,
such as maxout, also involve an additional computational factor.
5.5 Unsupervised feature learning
Having good features is crucial for obtaining competitive performance in classification and other
high level tasks. Learning algorithms that can take advantage of unlabeled data are appealing due
to the increasing amount of unlabeled data. Furthermore, on more challenging datasets, such as NORB,
a fully connected discriminative neural network trained from scratch tends to perform poorly, even
with the help of dropout. (We trained a two hidden layer neural network on NORB to obtain 13%
error rate and saw no improvement by using dropout.) Such disappointing performance motivated us
to investigate unsupervised feature learning and pre-training strategies with our new method. Below,
we show that our method can extract useful features in a self-taught fashion. The features extracted
using our method not only outperform other common feature learning methods, but our method is
also quite computationally efficient compared to techniques like sparse coding.
We use the following procedures for feature learning. We first extract the features using one of the
unsupervised learning algorithms in figure 4. The usefulness of the extracted features is then
evaluated by training a linear classifier to predict the object class from the extracted features. This
process is similar to that employed in other feature learning research [14].
We trained a number of architectures on MNIST, including standard auto-encoders, dropout autoencoders and standout auto-encoders. As described previously, we compute the expected value of
Figure 4: Performance of unsupervised feature learning methods. The dropout probability in the
DAE* was optimized using [18].

(a) MNIST
    method                              arch.           act. func.   err.
    raw pixel                           784             --           7.2%
    RBM (weight decay)                  784-1000        σ(·)         1.81%
    DAE                                 784-1000-784    ReLU(·)      1.95%
    dropout AE (50% hidden dropout)     784-1000-784    ReLU(·)      1.70%
    standout AE (standout)              784-1000-784    ReLU(·)      1.53%

(b) NORB
    method                              arch.           act. func.   err.
    raw pixel                           8976            --           23.6%
    RBM (weight decay)                  2048-4000       σ(·)         10.6%
    DAE                                 2048-4000-2048  ReLU(·)      9.5%
    dropout AE (50% hidden dropout)     2048-4000-2048  ReLU(·)      10.1%
    dropout AE* (22% hidden dropout)    2048-4000-2048  ReLU(·)      8.9%
    standout AE (standout)              2048-4000-2048  ReLU(·)      7.3%
each hidden activity and use that as the feature when training a classifier. We also examined RBMs,
where we use the soft probability for each hidden unit as a feature. Different classifiers can be used
and give similar performance; we used a linear SVM because it is fast and straightforward to apply.
However, on a subset of problems we tried logistic classifiers and they achieved indistinguishable
classification rates.
Results for the different architectures and learning methods are compared in table 4(a). The autoencoder trained using our proposed technique with α = 1 and β = 0 performed the best on MNIST.
We performed extensive experiments on the NORB dataset with larger models. The hyper-parameters used for the best result are α = 1 and β = 1. Overall, we observed similar trends
to the ones we observed for MNIST. Our standout method consistently performs better than other
methods, as shown in table 4(b).
5.6 Discussion
The proposed standout method was able to outperform other feature learning methods in both
datasets with a noticeable margin. The stochasticity introduced by the standout network successfully removes hidden units that are unnecessary for good performance and that hinder performance.
By inspecting the weights from auto-encoders regularized by dropout and standout, we find that the
standout auto-encoder weights are sharper than those learnt using dropout, which may be consistent
with the improved performance on classification tasks.
The effect of the number of hidden units was studied using networks with sizes 500, 1000, 1500,
and so on up to 4500. Figure 5 shows that all algorithms generally perform better as the number
of hidden units increases. One notable trend for dropout regularization is that it achieves significantly
better performance with large numbers of hidden units, since all units have an equal chance of being
omitted. In comparison, standout can achieve similar performance with only half as many hidden
units, because highly useful hidden units are kept more often while only the less effective units are
dropped.

Figure 5: Classification error rate as a function of the number of hidden units on NORB, for DAE,
dropout AE, deterministic standout AE, and standout AE. [Plot omitted; only the legend and axis
labels were recoverable.]
One question is whether it is the stochasticity of the standout network that helps, or just a different nonlinearity obtained by the expected activity in equation 3. To address this, we trained a
deterministic auto-encoder with hidden activation functions given by equation 3. The result of this
"deterministic standout method" is shown in figure 5 and it performs quite poorly.
It is believed that sparse features can help improve the performance of linear classifiers. We found
that auto-encoders trained using ReLU units and standout produce sparse features. We wondered
whether training a sparse auto-encoder with a sparsity level matching the one obtained by our method
would yield similar performance. We applied an L1 penalty on the hidden units and trained an
auto-encoder to match the sparsity obtained by our method (figure4). The final features extracted
using the sparse auto-encoder achieved 10.2% error on NORB, which is significantly worse than
our method. Further gains can be achieved by tuning hyper-parameters, but the hyper-parameters
for our method are easier to tune and, as shown above, have little effect on the final performance.
Moreover, the sparse features learnt using standout are also computationally efficient compared
Figure 6: Performance of fine-tuned classifiers, where FT is fine-tuning.

(a) MNIST fine-tuned                error rate
    RBM + FT                        1.24%
    DAE + FT                        1.3%
    shallow dropout AE + FT         1.10%
    deep dropout AE + FT            0.89%
    standout shallow AE + FT        1.06%
    standout deep AE + FT           0.80%

(b) NORB fine-tuned                 error rate
    DBN [15]                        8.3%
    DBM [15]                        7.2%
    third order RBM [12]            6.5%
    dropout shallow AE + FT         7.5%
    dropout deep AE + FT            7.0%
    standout shallow AE + FT        6.2%
    standout deep AE + FT           5.8%
to more sophisticated encoding algorithms, e.g., [5]. To find the code for data points with more
than 4000 dimensions and 4000 dictionary elements, the sparse coding algorithm quickly becomes
impractical.
Surprisingly, a shallow network with standout regularization (table 4(b)) outperforms some of the
much larger and deeper networks shown. Some of those deeper models have three or four times
more parameters than the shallow network we trained here. This particular result shows that a simpler
model trained using our regularization technique can achieve higher performance compared to other,
more complicated methods.
5.7 Discriminative learning
In deep learning, a common practice is to use the encoder weights learnt by an unsupervised learning
method to initialize the early layers of a multilayer discriminative model. The backpropagation
algorithm is then used to learn the weights for the last hidden layer and also fine tune the weights
in the layers before. This procedure is often referred to as discriminative fine tuning. We initialized
neural networks using the models described above. The regularization method that we used for
unsupervised learning (RBM, dropout, standout) is also used for corresponding discriminative fine
tuning. For example, if a neural network is initialized using an auto-encoder trained with standout,
the neural network will also be fine tuned using standout for all its hidden units, with the same
standout function and hyper-parameters as the auto-encoder.
During discriminative fine tuning, we hold the weights fixed for all layers except the last one for the
first 10 epochs, and then the weights are updated jointly after that. As found by previous authors,
we find that classification performance is usually improved by the use of discriminative fine tuning.
Impressively, we found that a two-hidden-layer neural network with 1000 ReLU units in its first
and second hidden layers trained with standout is able to achieve 80 errors on MNIST data after
fine tuning (error rate of 0.80%). This performance is better than the current best non-convolutional
result [8] and the training procedure is simpler. On the NORB dataset, we similarly achieved a 6.2%
error rate by fine-tuning the simple shallow auto-encoder from table 4(b). Furthermore, a two-hidden-layer neural network with 4000 ReLU units in both hidden layers that is pre-trained using
standout achieved 5.8% error rate after fine tuning. It is worth mentioning that a small weight decay
of 0.0005 is applied to this network during fine-tuning to further prevent overfitting. It outperforms
other models that do not exploit spatial structure. As far as we know, this result is better than
any previously published results without distortion or jitter. It even outperforms carefully designed
convolutional neural networks found in [9].
Figure 6 reports the classification accuracy obtained by different models, including state-of-the-art
deep networks.
6 Conclusions
Our results demonstrate that the proposed use of standout networks can significantly improve the performance of feature-learning methods. Further, our results provide additional support for the "regularization by noise" hypothesis that has been used to regularize other deep architectures, including
RBMs and denoising auto-encoders, and in dropout.
An obvious missing piece in this research is a good theoretical understanding of why the standout
network provides better regularization compared to the fixed dropout probability of 0.5. While we
have motivated our approach as one of approximating the Bayesian posterior, further theoretical
justifications are needed.
References
[1] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 19:153, 2007.
[2] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[3] C. M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.
[4] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning, volume 8, page 10, 2011.
[5] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In International Conference on Machine Learning, 2010.
[6] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[7] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[8] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[9] Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.
[10] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[11] Yann LeCun, Fu Jie Huang, and Léon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages II-97. IEEE, 2004.
[12] V. Nair and G. Hinton. 3D object recognition with deep belief nets. Advances in Neural Information Processing Systems, 22:1339–1347, 2009.
[13] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, pages 807–814. Omnipress, Madison, WI, 2010.
[14] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the Twenty-eighth International Conference on Machine Learning (ICML 2011), 2011.
[15] Ruslan Salakhutdinov and Hugo Larochelle. Efficient learning of deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, 2010.
[16] J. Sietsma and R. J. F. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1):67–79, 1991.
[17] Jasper Snoek, Ryan P. Adams, and Hugo Larochelle. Nonparametric guidance of autoencoder representations using label information. Journal of Machine Learning Research, 13:2567–2588, 2012.
[18] Jasper Snoek, Hugo Larochelle, and Ryan Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25, pages 2960–2968, 2012.
[19] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[20] Tijmen Tieleman. Gnumpy: an easy way to use GPU boards in Python. Department of Computer Science, University of Toronto, 2010.
[21] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408, 2010.
4,457 | 5,033 | Stochastic Optimization of PCA with Capped MSG
Raman Arora
TTI-Chicago
Chicago, IL USA
[email protected]
Andrew Cotter
TTI-Chicago
Chicago, IL USA
[email protected]
Nathan Srebro
Technion, Haifa, Israel
and TTI-Chicago
[email protected]
Abstract
We study PCA as a stochastic optimization problem and propose a novel stochastic approximation algorithm which we refer to as "Matrix Stochastic Gradient"
(MSG), as well as a practical variant, Capped MSG. We study the method both
theoretically and empirically.
1 Introduction
Principal Component Analysis (PCA) is a ubiquitous tool used in many data analysis, machine learning and information retrieval applications. It is used to obtain a lower dimensional representation of
a high dimensional signal that still captures as much of the original signal as possible. Such a low dimensional representation can be useful for reducing storage and computational costs, as complexity
control in learning systems, or to aid in visualization.
PCA is typically phrased as a question about a fixed data set: given n vectors in Rd , what is the
k-dimensional subspace that captures most of the variance in the data (or equivalently, that is best in
reconstructing the vectors, minimizing the sum squared distances, or residuals, from the subspace)?
It is well known that this subspace is the span of the leading k components of the singular value
decomposition of the data matrix (or equivalently of the empirical second moment matrix). Hence,
the study of computational approaches for PCA has mostly focused on methods for finding the SVD
(or leading components of the SVD) for a given n?d matrix (Oja & Karhunen, 1985; Sanger, 1989).
In this paper we approach PCA as a stochastic optimization problem, where the goal is to optimize
a "population objective" based on i.i.d. draws from the population. In this setting, we have some
unknown source ("population") distribution D over R^d, and the goal is to find the k-dimensional
subspace maximizing the (uncentered) variance of D inside the subspace (or equivalently, minimizing the average squared residual in the population), based on i.i.d. samples from D. The main point
here is that the true objective is not how well the subspace captures the sample (i.e. the "training error"), but rather how well the subspace captures the underlying source distribution (i.e. the "generalization error"). Furthermore, we are not concerned with capturing some "true" subspace, and so do not, for example, try to minimize the angle to such a subspace, but rather attempt to find a "good"
subspace, i.e. one that is almost as good as the optimal one in terms of reconstruction error.
Of course, finding the subspace that best captures the sample is a very reasonable approach to PCA
on the population. This is essentially an Empirical Risk Minimization (ERM) approach. However,
when comparing it to alternative, perhaps computationally cheaper, approaches, we argue that one
should not compare the error on the sample, but rather the population objective. Such a view can justify and favor computational approaches that are far from optimal on the sample, but are essentially
as good as ERM on the population.
Such a population-based view of optimization has recently been advocated in machine learning,
and has been used to argue for crude stochastic approximation approaches (online-type methods)
over sophisticated deterministic optimization of the empirical (training) objective (i.e. "batch" methods) (Bottou & Bousquet, 2007; Shalev-Shwartz & Srebro, 2008). A similar argument was also
made in the context of stochastic optimization, where Nemirovski et al. (2009) argues for stochastic
approximation (SA) approaches over ERM. Accordingly, SA approaches,
mostly variants of Stochastic Gradient Descent, are often the methods of choice for many learning
problems, especially when very large data sets are available (Shalev-Shwartz et al., 2007; Collins
et al., 2008; Shalev-Shwartz & Tewari, 2009). We take the same view in order to advocate for, study,
and develop stochastic approximation approaches for PCA.
In an empirical study of stochastic approximation methods for PCA, a heuristic "incremental"
method showed very good empirical performance (Arora et al., 2012). However, no theoretical
guarantees or justification were given for incremental PCA. In fact, it was shown that for some distributions it can converge to a suboptimal solution with high probability (see Section 5.2 for more
about this "incremental" algorithm). Also relevant is careful theoretical work on online PCA by
Warmuth & Kuzmin (2008), in which an online regret guarantee was established. Using an onlineto-batch conversion, this online algorithm can be converted to a stochastic approximation algorithm
with good iteration complexity, however the runtime for each iteration is essentially the same as that
of ERM (i.e. of PCA on the sample), and thus senseless as a stochastic approximation method (see
Section 3.3 for more on this algorithm).
In this paper we borrow from these two approaches and present a novel algorithm for stochastic
PCA: the Matrix Stochastic Gradient (MSG) algorithm. MSG enjoys similar iteration complexity to Warmuth and Kuzmin's algorithm, and in fact we present a unified view of both algorithms as different instantiations of Mirror Descent for the same convex relaxation of PCA. We then present the capped MSG algorithm, which is a more practical variant of MSG, has very similar updates to those of the "incremental" method, works well in practice, and does not get stuck like the "incremental" method. The Capped MSG algorithm is thus a clean, theoretically well-founded method, with interesting connections to other stochastic/online PCA methods, and excellent practical performance: a "best of both worlds" algorithm.
2 Problem Setup
We consider PCA as the problem of finding the maximal (uncentered) variance k-dimensional subspace with respect to an (unknown) distribution D over x ∈ R^d. We assume without loss of generality that the data are scaled in such a way that E_{x~D}[||x||^2] ≤ 1. For our analysis, we also require that the fourth moment be bounded: E_{x~D}[||x||^4] ≤ 1. We represent a k-dimensional subspace by an orthonormal basis, collected in the columns of a matrix U. With this parametrization, PCA is defined as the following stochastic optimization problem:

    maximize:   E_{x~D}[x^T U U^T x]
    subject to: U ∈ R^{d×k}, U^T U = I.          (2.1)
In a stochastic optimization setting we do not have direct knowledge of the distribution D, and
instead may access it only through i.i.d. samples; these can be thought of as "training examples".
As in other studies of stochastic approximation methods, we are less concerned with the number
of required samples, and instead care mostly about the overall runtime required to obtain an ε-suboptimal solution.
The standard approach to Problem 2.1 is empirical risk minimization (ERM): given samples {x_t}_{t=1}^T drawn from D, we compute the empirical covariance matrix C̄ = (1/T) Σ_{t=1}^T x_t x_t^T, and take the columns of U to be the eigenvectors of C̄ corresponding to the top-k eigenvalues. This approach requires O(d²) memory and O(d²) operations just in order to compute the covariance matrix, plus
some additional time for the SVD. We are interested in methods with much lower sample time and
space complexity, preferably linear rather than quadratic in d.
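As a concrete point of reference, the ERM approach just described can be sketched in a few lines of numpy; this is an illustrative sketch (the synthetic data-generating distribution here is a made-up stand-in, not from the paper):

```python
import numpy as np

def erm_pca(X, k):
    """ERM baseline: top-k eigenvectors of the empirical second moment matrix.

    X is a (T, d) sample matrix. Forming C costs O(T d^2) time and O(d^2)
    memory, which is exactly the cost the stochastic methods try to avoid."""
    C = X.T @ X / X.shape[0]              # empirical second moment matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    return eigvecs[:, -k:]                # basis for the top-k subspace

# Toy usage: the dominant direction of this synthetic source is the last axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)) * np.array([0.1, 0.1, 0.1, 0.1, 1.0])
U = erm_pca(X, 1)
```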
3 MSG and MEG
A natural stochastic approximation (SA) approach to PCA is projected stochastic gradient descent
(SGD) on Problem 2.1, with respect to U . This leads to the stochastic power method, for which, at
each iteration, the following update is performed:
    U^{(t+1)} = P_orth(U^{(t)} + η x_t x_t^T U^{(t)})          (3.1)

Here, x_t x_t^T U^{(t)} is the gradient of the PCA objective w.r.t. U, η is a step size, and P_orth(·) projects its
argument onto the set of matrices with orthonormal columns. Unfortunately, although SGD is well
understood for convex problems, Problem 2.1 is non-convex. Consequently, obtaining a theoretical
understanding of the stochastic power method, or of how the step size should be set, has proved
elusive. Under some conditions, convergence to the optimal solution can be ensured, but no rate is
known (Oja & Karhunen, 1985; Sanger, 1989; Arora et al., 2012).
Instead, we consider a re-parameterization of the PCA problem where the objective is convex. Instead of representing a linear subspace in terms of its basis matrix U , we parametrize it using the
corresponding projection matrix M = U U T . We can now reformulate the PCA problem as:
    maximize:   E_{x~D}[x^T M x]
    subject to: M ∈ R^{d×d}, λ_i(M) ∈ {0, 1}, rank M = k          (3.2)

where λ_i(M) is the i-th eigenvalue of M.
We now have a convex (linear, in fact) objective, but the constraints are not convex. This prompts us to relax the objective by taking the convex hull of the feasible region:

    maximize:   E_{x~D}[x^T M x]
    subject to: M ∈ R^{d×d}, 0 ⪯ M ⪯ I, tr M = k          (3.3)
Since the objective is linear, and the feasible region is the convex hull of that of Problem 3.2, an optimal solution is always attained at a "vertex", i.e. a point on the boundary of the original constraints. The optima of the two objectives are thus the same (strictly speaking, every optimum
of Problem 3.2 is also an optimum of Problem 3.3), and solving Problem 3.3 is equivalent to solving
Problem 3.2.
Furthermore, if a suboptimal solution for Problem 3.3 is not rank-k, i.e. is not a feasible point
of Problem 3.2, we can easily sample from it to obtain a rank-k solution with the same objective
function value (in expectation). This is shown by the following result of Warmuth & Kuzmin (2008):
Lemma 3.1 (Rounding (Warmuth & Kuzmin, 2008)). Any feasible solution of Problem 3.3 can be
expressed as a convex combination of at most d feasible solutions of Problem 3.2.
Algorithm 2 of Warmuth & Kuzmin (2008) shows how to efficiently find such a convex combination.
Since the objective is linear, treating the coefficients of the convex combination as defining a discrete
distribution, and sampling according to this distribution, yields a rank-k matrix with the desired
expected objective function value.
3.1
Matrix Stochastic Gradient
Performing SGD on Problem 3.3 (w.r.t. the variable M ) yields the following update rule:
    M^{(t+1)} = P(M^{(t)} + η x_t x_t^T),          (3.4)
The projection is now performed onto the (convex) constraints of Problem 3.3. This gives the Matrix
Stochastic Gradient (MSG) algorithm, which, in detail, consists of the following steps:
1. Choose a step-size η, iteration count T, and starting point M^{(0)}.
2. Iterate the update rule (Equation 3.4) T times, each time using an independent sample x_t ~ D.
3. Average the iterates as M̄ = (1/T) Σ_{t=1}^T M^{(t)}.
4. Sample a rank-k solution M̂ from M̄ using the rounding procedure discussed in the previous section.
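Steps 1-3 can be sketched directly in numpy. This is a naive O(d²)-per-iteration illustration (the efficient low-rank implementation is deferred to Section 3.2), and the scalar shift of the projection onto the constraints of Problem 3.3 is located here by simple bisection rather than by the paper's Algorithm 2; the rounding of step 4 is omitted:

```python
import numpy as np

def project_eigs(lam, k):
    """Shift the eigenvalues by a scalar S and clip to [0, 1] so that they
    sum to k; S is found by bisection (Algorithm 2 does this faster)."""
    lo, hi = -lam.max(), 1.0 - lam.min()
    for _ in range(100):
        S = 0.5 * (lo + hi)
        if np.clip(lam + S, 0.0, 1.0).sum() > k:
            hi = S
        else:
            lo = S
    return np.clip(lam + 0.5 * (lo + hi), 0.0, 1.0)

def msg(samples, k, eta):
    """Steps 1-3 of MSG: projected SGD on the convex relaxation of the PCA
    problem, followed by averaging of the iterates."""
    d = samples.shape[1]
    M = np.zeros((d, d))
    M_bar = np.zeros((d, d))
    for x in samples:
        lam, V = np.linalg.eigh(M + eta * np.outer(x, x))
        M = (V * project_eigs(lam, k)) @ V.T   # project onto the constraints
        M_bar += M / len(samples)
    return M_bar
```

The averaged iterate keeps trace k and eigenvalues in [0, 1], i.e. it stays feasible for Problem 3.3.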
Analyzing MSG is straightforward using a standard SGD analysis:
Theorem 1. After T iterations of MSG (on Problem 3.3), with step size η = √(k/T), and starting at M^{(0)} = 0,

    E[E_{x~D}[x^T M̂ x]] ≥ E_{x~D}[x^T M* x] − (1/2)√(k/T),

where the expectation is w.r.t. the i.i.d. samples x_1, ..., x_T ~ D and the rounding, and M* is the optimum of Problem 3.2.
Algorithm 1 Matrix stochastic gradient (MSG) update: compute an eigendecomposition of M′ + η x x^T from a rank-m eigendecomposition M′ = U′ diag(σ′)(U′)^T and project the resulting solution onto the constraint set. The computational cost is dominated by the matrix multiplication on lines 4 or 7, costing O(m²d) operations.

    msg-step(d, k, m : N, U′ : R^{d×m}, σ′ : R^m, x : R^d, η : R)
     1    x̂ ← √η (U′)^T x;  x⊥ ← √η x − U′ x̂;  r ← ||x⊥||;
     2    if r > 0
     3        V, σ ← eig([diag(σ′) + x̂ x̂^T, r x̂; r x̂^T, r²]);
     4        U ← [U′, x⊥/r] V;
     5    else
     6        V, σ ← eig(diag(σ′) + x̂ x̂^T);
     7        U ← U′ V;
     8    κ ← distinct eigenvalues in σ;  μ ← corresponding multiplicities;
     9    σ ← project(d, k, m, κ, μ);
    10    return U, σ;
Proof. The SGD analysis of Nemirovski & Yudin (1983) yields that:
    E[x^T M* x − x^T M̄ x] ≤ (η/2) E_{x~D}[||g||²_F] + ||M* − M^{(0)}||²_F / (2ηT)          (3.5)

where g = x x^T is the gradient of the PCA objective. Now, E_{x~D}[||g||²_F] = E_{x~D}[||x||⁴] ≤ 1 and ||M* − M^{(0)}||²_F = ||M*||²_F = k. In the last inequality, we used the fact that M* has k eigenvalues of value 1 each, and hence ||M*||_F = √k.
3.2 Efficient Implementation and Projection
A naïve implementation of the MSG update requires O(d²) memory and O(d²) operations per iteration. In this section, we show how to perform this update efficiently by maintaining an up-to-date eigendecomposition of M^{(t)}. Pseudo-code for the update may be found in Algorithm 1. Consider the eigendecomposition M^{(t)} = U′ diag(σ)(U′)^T at the t-th iteration, where rank(M^{(t)}) = k_t and U′ ∈ R^{d×k_t}. Given a new observation x_t, the eigendecomposition of M^{(t)} + η x_t x_t^T can be updated efficiently using a (k_t+1)×(k_t+1) SVD (Brand, 2002; Arora et al., 2012) (steps 1-7 of Algorithm 1).
This rank-one eigen-update is followed by projection onto the constraints of Problem 3.3, invoked as
project in step 8 of Algorithm 1, discussed in the following paragraphs and given as Algorithm 2.
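The rank-one eigen-update of steps 1-7 can be sketched in numpy as follows; this is an illustrative reconstruction of the idea (reduce the problem to a small (m+1)×(m+1) eigenproblem), not the authors' code:

```python
import numpy as np

def rank_one_eig_update(U, sigma, x, eta):
    """Eigendecomposition of U diag(sigma) U^T + eta * x x^T, given the
    (d x m) orthonormal U and eigenvalues sigma of the first term."""
    xh = np.sqrt(eta) * (U.T @ x)    # component of sqrt(eta) x inside span(U)
    xp = np.sqrt(eta) * x - U @ xh   # residual orthogonal to span(U)
    r = np.linalg.norm(xp)
    m = len(sigma)
    if r > 1e-12:
        # represent the updated matrix in the orthonormal basis [U, xp/r]
        T = np.zeros((m + 1, m + 1))
        T[:m, :m] = np.diag(sigma) + np.outer(xh, xh)
        T[:m, m] = r * xh
        T[m, :m] = r * xh
        T[m, m] = r * r
        lam, V = np.linalg.eigh(T)
        U_new = np.hstack([U, (xp / r)[:, None]]) @ V
    else:
        # new sample already lies in span(U): no extra dimension needed
        lam, V = np.linalg.eigh(np.diag(sigma) + np.outer(xh, xh))
        U_new = U @ V
    return U_new, lam
```

The dominant cost is the final matrix multiplication, matching the O(m²d) figure quoted for Algorithm 1.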
The projection procedure is based on the following lemma¹; see the supplementary material for a proof.

Lemma 3.2. Let M′ ∈ R^{d×d} be a symmetric matrix, with eigenvalues σ′_1, ..., σ′_d and associated eigenvectors v′_1, ..., v′_d. Its projection M = P(M′) onto the feasible region of Problem 3.3 with respect to the Frobenius norm is the unique feasible matrix which has the same eigenvectors as M′, with the associated eigenvalues σ_1, ..., σ_d satisfying:

    σ_i = max(0, min(1, σ′_i + S))

with S ∈ R being chosen in such a way that Σ_{i=1}^d σ_i = k.
This result shows that projecting onto the feasible region amounts to finding the value of S such that,
after shifting the eigenvalues by S and clipping the results to [0, 1], the result is feasible. Importantly,
the projection operates only on the eigenvalues. Algorithm 2 contains pseudocode which finds S
from a list of eigenvalues. It is optimized to efficiently handle repeated eigenvalues: rather than receiving the eigenvalues in a length-d list, it instead receives a length-n list containing only the distinct eigenvalues, with μ containing the corresponding multiplicities. In Sections 4 and 5, we will
see why this is an important optimization. The central idea motivating the algorithm is that, in a
sorted array of eigenvalues, all elements with indices below some threshold i will be clipped to 0,
and all of those with indices above another threshold j will be clipped to 1. The pseudocode simply
searches over all possible pairs of such thresholds until it finds the one that works.
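The two-threshold search can be sketched in numpy. This simplified version works on a sorted length-d array and scans all (i, j) pairs, skipping the multiplicity bookkeeping of Algorithm 2, so it costs O(d²) rather than O(n log n); it is an illustration of the idea, not the paper's routine:

```python
import numpy as np

def find_shift(lam, k):
    """Find the S of Lemma 3.2: after adding S and clipping to [0, 1],
    the eigenvalues must sum to k."""
    lam = np.sort(lam)
    d = len(lam)
    for i in range(d + 1):             # lam[:i] will be clipped to 0
        for j in range(i + 1, d + 1):  # lam[j:] will be clipped to 1
            mid = lam[i:j]
            # solve mid.sum() + S*(j-i) + (d-j) = k for S
            S = (k - mid.sum() - (d - j)) / (j - i)
            ok = (mid[0] + S >= 0 and mid[-1] + S <= 1
                  and (i == 0 or lam[i - 1] + S <= 0)
                  and (j == d or lam[j] + S >= 1))
            if ok:
                return S
    raise ValueError("no valid shift found")
```

Any (i, j) pair passing the consistency check yields eigenvalues summing exactly to k, which is why the scan can stop at the first hit.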
The rank-one eigen-update combined with the fast projection step yields an efficient MSG update that requires O(d k_t) memory and O(d k_t²) operations per iteration (recall that k_t is the rank of the
¹Our projection problem onto the capped simplex, even when seen in the vector setting, is substantially different from Duchi et al. (2008). We project onto the set {0 ≤ σ ≤ 1, ||σ||_1 = k} in Problem 3.3 and {0 ≤ σ ≤ 1, ||σ||_1 = k, ||σ||_0 ≤ K} in Problem 5.1, whereas Duchi et al. (2008) project onto {0 ≤ σ, ||σ||_1 = k}.
Algorithm 2 Routine which finds the S of Lemma 3.2. It takes as parameters the dimension d, "target" subspace dimension k, and the number of distinct eigenvalues n of the current iterate. The length-n arrays κ′ and μ′ contain the distinct eigenvalues and their multiplicities, respectively, of M′ (with Σ_{i=1}^n μ′_i = d). Line 1 sorts κ′ and re-orders μ′ so as to match this sorting. The loop will be run at most 2n times (once for each possible increment to i or j on lines 12-15), so the computational cost is dominated by that of the sort: O(n log n).

    project(d, k, n : N, κ′ : R^n, μ′ : N^n)
     1    κ′, μ′ ← sort(κ′, μ′);
     2    i ← 1; j ← 1; si ← 0; sj ← 0; ci ← 0; cj ← 0;
     3    while i ≤ n
     4        if (i < j)
     5            S ← (k − (sj − si) − (d − cj)) / (cj − ci);
     6            b ← ( (κ′_i + S ≥ 0) and (κ′_{j−1} + S ≤ 1)
     7                  and ((i = 1) or (κ′_{i−1} + S ≤ 0))
     8                  and ((j > n) or (κ′_j + S ≥ 1))
     9                );
    10            (omitted)
    11            return S if b;
    12        if (j ≤ n) and (κ′_j − κ′_i ≤ 1)
    13            sj ← sj + μ′_j κ′_j; cj ← cj + μ′_j; j ← j + 1;
    14        else
    15            si ← si + μ′_i κ′_i; ci ← ci + μ′_i; i ← i + 1;
    16    return error;
iterate M^{(t)}). This is a significant improvement over the O(d²) memory and O(d²) computation required by a standard implementation of MSG, if the iterates have relatively low rank.
3.3 Matrix Exponentiated Gradient
Since M is constrained by its trace, and not by its Frobenius norm, it is tempting to consider mirror descent (MD) (Beck & Teboulle, 2003) instead of SGD updates for solving Problem 3.3. Recall that Mirror Descent depends on a choice of "potential function" Ψ(·) which should be chosen according to the geometry of the feasible set and the subgradients (Srebro et al., 2011). Using the squared Frobenius norm as a potential function, i.e. Ψ(M) = ||M||²_F, yields SGD, i.e. the MSG updates of Equation 3.4. The trace-norm constraint suggests using the von Neumann entropy as the potential function, i.e. Ψ_h(M) = Σ_i λ_i(M) log λ_i(M). This leads to multiplicative updates, yielding what we refer to as the Matrix Exponentiated Gradient (MEG) algorithm, which is similar to that of Warmuth & Kuzmin (2008). In fact, Warmuth and Kuzmin's algorithm exactly corresponds to online Mirror Descent on Problem 3.3 with this potential function, but takes the optimization variable to be M̃ = I − M (with the constraints tr M̃ = d − k and 0 ⪯ M̃ ⪯ I). In either case, using the entropy potential, despite being well suited for the trace-geometry, does not actually lead to a better dependence² on d or k, and a Mirror Descent-based analysis again yields an excess loss of √(k/T). Warmuth and Kuzmin present an "optimistic" analysis, with a dependence on the "reconstruction error" L* = E[x^T(I − M*)x], which yields an excess error of O(√(L* k log(d/k) / T) + k log(d/k) / T) (their logarithmic term can be avoided by a more careful analysis).
4 MSG runtime and the rank of the iterates
As we saw in Sections 3.1 and 3.2, MSG requires O(k/ε²) iterations to obtain an ε-suboptimal solution, and each iteration costs O(k_t² d) operations, where k_t is the rank of iterate M^{(t)}. This yields a total runtime of O(k̄² d k/ε²), where k̄² = (1/T) Σ_{t=1}^T k_t². Clearly, the runtime for MSG depends
critically on the rank of the iterates. If kt is as large as d, then MSG achieves a runtime that is cubic
in the dimensionality. On the other hand, if the rank of the iterates is O(k), the runtime is linear in
the dimensionality. Fortunately, in practice, each kt is typically much lower than d. The reason for
this is that the MSG update performs a rank-1 update followed by a projection onto the constraints.
Since M 0 = M (t) + ?xt xTt will have a larger trace than M (t) (i.e. tr M 0 ? k), the projection, as is
?
This is because in our case, due to the other constraints, kM ? kF = trM ?. Furthermore, the SGD analysis
depends on the Frobenius norm of the stochastic gradients, but since all stochastic gradients are rank one, this
is the same as their spectral norm, which comes up in the entropy-case analysis, and again there is no benefit.
2
5
shown by Lemma 3.2, will subtract a quantity S from every eigenvalue of M 0 , clipping each to 0 if
it becomes negative. Therefore, each MSG update will increase the rank of the iterate by at most 1,
and has the potential to decrease it, perhaps significantly. It?s very difficult to theoretically quantify
how the rank of the iterates will evolve over time, but we have observed empirically that the iterates
do tend to have relatively low rank.
We explore this issue in greater detail experimentally, on a distribution which we expect to be difficult for MSG. To this end, we generated data from known 32-dimensional distributions with diagonal
covariance matrices Σ = diag(σ/||σ||), where σ_i = τ^{−i} / Σ_{j=1}^{32} τ^{−j}, for i = 1, ..., 32 and for some τ > 1. Observe that Σ has a smoothly-decaying set of eigenvalues and the rate of decay is controlled by τ. As τ → 1, the spectrum becomes flatter, resulting in distributions that present challenging test cases for MSG. We experimented with τ = 1.1 and k ∈ {1, 2, 4}, where k is the desired subspace dimension used by each algorithm. The data is generated by sampling the i-th standard unit basis vector e_i with probability σ_ii. We refer to this as the "orthogonal distribution", since it is a discrete distribution over 32 orthogonal vectors.
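A sampler for this distribution can be sketched as follows; the exact form of the sampling probabilities is an assumption of this reconstruction (each basis vector e_i drawn with probability σ_i ∝ τ^{−i}):

```python
import numpy as np

def orthogonal_distribution(T, tau=1.1, d=32, seed=0):
    """Draw T samples: the i-th standard basis vector e_i with probability
    sigma_i proportional to tau**(-i), giving E[x x^T] = diag(sigma)."""
    rng = np.random.default_rng(seed)
    sigma = tau ** -np.arange(1.0, d + 1)
    sigma /= sigma.sum()                 # normalize to a distribution
    idx = rng.choice(d, size=T, p=sigma)
    return np.eye(d)[idx]                # one-hot rows, i.e. basis vectors

X = orthogonal_distribution(1000)
```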
In Figure 1, we show the results with k = 4. We can see from the left-hand plot that MSG maintains a subspace of dimension around 15. The plot on the right shows how the set of nonzero eigenvalues of the MSG iterates evolves over time, from which we can see that many of the extra dimensions are "wasted" on very small eigenvalues, corresponding to directions which leave the state matrix only a handful of iterations after they enter. This suggests that constraining k_t can lead to significant speedups, and motivates the capped MSG updates discussed in the next section.

Figure 1: The ranks k_t (left) and the eigenvalue spectrum (right) of the MSG iterates M^{(t)}, plotted against iterations.
5 Capped MSG
While, as was observed in the previous section, MSG's iterates will tend to have ranks k_t smaller than d, they will nevertheless also be larger than k. For this reason, we recommend imposing a hard constraint K on the rank of the iterates:

    maximize:   E_{x~D}[x^T M x]
    subject to: M ∈ R^{d×d}, 0 ⪯ M ⪯ I, tr M = k, rank M ≤ K          (5.1)
We will refer to MSG where the projection is replaced with a projection onto the constraints of
Problem 5.1 (i.e. where the iterates are SGD iterates on Problem 5.1) as "capped MSG". As before,
as long as K ≥ k, Problem 5.1 and Problem 3.3 have the same optimum, it is achieved at a rank-k
matrix, and the extra rank constraint in Problem 5.1 is inactive at the optimum. However, the rank
constraint does affect the iterates, especially since Problem 5.1 is no longer convex. Nonetheless
if K > k (i.e. the hard rank-constraint K is strictly larger than the target rank k), then we can
easily check if we are at a global optimum of Problem 5.1, and hence of Problem 3.3: if the capped
MSG algorithm converges to a solution of rank K, then the upper bound K should be increased.
Conversely, if it has converged to a rank-deficient solution, then it must be the global optimum.
There is thus an advantage in using K > k, and we recommend setting K = k + 1, as we do in our
experiments, and increasing K only if a rank deficient solution is not found in a timely manner.
If we take K = k, then the only way to satisfy the trace constraint is to have all non-zero eigenvalues
equal to one, and Problem 5.1 becomes identical to Problem 3.2. The detour through the convex
objective of Problem 3.3 allows us to increase the search rank K, allowing for more flexibility in
the iterates, while still forcing each iterate to be low-rank, and each update to therefore be efficient,
through the rank constraint.
5.1 Implementing the projection
The only difference between the implementation of MSG and capped MSG is in the projection step. Similar reasoning to that which was used in the proof of Lemma 3.2 shows that if M^{(t+1)} = P(M′) with M′ = M^{(t)} + η x_t x_t^T, then M^{(t)} and M′ are simultaneously diagonalizable, and therefore we can consider only how the projection acts on the eigenvalues. Hence, if we let σ′ be the vector of the eigenvalues of M′, and suppose that more than K of them are nonzero, then there will be a size-K subset of σ′ such that applying Algorithm 2 to this set gives the projected eigenvalues. Since we perform only a rank-1 update at every iteration, we must check at most K possibilities, at a total cost of O(K² log K) operations, which has no effect on the asymptotic runtime because Algorithm 1 requires O(K²d) operations.

Figure 2: Comparison on simulated data for different values of parameter k. (Each panel plots suboptimality against iterations, for k = 1, 2, and 4; the compared algorithms are Incremental, Warmuth & Kuzmin, MSG, and Capped MSG.)
5.2 Relationship to the incremental PCA method
The capped MSG algorithm with K = k is similar to the incremental algorithm of Arora et al.
(2012), which maintains a rank-k approximation of the covariance matrix and updates according to:
    M^{(t+1)} = P_rank-k(M^{(t)} + x_t x_t^T)
where the projection is onto the set of rank-k matrices. Unlike MSG, the incremental algorithm does
not have a step-size. Updates can be performed efficiently by maintaining an eigendecomposition
of each iterate, just as was done for MSG (see Section 3.2).
In a recent survey of stochastic algorithms for PCA (Arora et al., 2012), the incremental algorithm
was found to perform extremely well in practice: it was the best, in fact, among the compared algorithms. However, there exist cases in which it can get stuck at a suboptimal solution. For example, if the data are drawn from a discrete distribution D which samples [√3, 0]^T with probability 1/3 and [0, √2]^T with probability 2/3, and one runs the incremental algorithm with k = 1, then it will converge to [1, 0]^T with probability 5/9, despite the fact that the maximal eigenvector is [0, 1]^T.
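This failure is easy to reproduce with a deterministic sample sequence; the following is a minimal sketch of the incremental update (an illustration, not the authors' code). Once the first coordinate's eigenvalue exceeds the mass added along the second coordinate, the rank-1 projection discards every subsequent update:

```python
import numpy as np

def incremental_step(M, x, k):
    """One incremental update: rank-k projection of M + x x^T."""
    lam, V = np.linalg.eigh(M + np.outer(x, x))  # ascending eigenvalues
    top = np.argsort(lam)[-k:]
    return (V[:, top] * lam[top]) @ V[:, top].T

M = np.zeros((2, 2))
M = incremental_step(M, np.array([np.sqrt(3.0), 0.0]), k=1)  # M = diag(3, 0)
for _ in range(20):
    # each [0, sqrt(2)] sample raises the second eigenvalue only to 2 < 3,
    # so the rank-1 projection throws it away again
    M = incremental_step(M, np.array([0.0, np.sqrt(2.0)]), k=1)
top_dir = np.linalg.eigh(M)[1][:, -1]  # still +/- [1, 0]
```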
The reason for this failure is essentially that the orthogonality of the data interacts poorly with the
low-rank projection: any update which does not entirely displace the maximal eigenvector in one
iteration will be removed entirely by the projection, causing the algorithm to fail to make progress.
The capped MSG algorithm with K > k will not get stuck in such situations, since it will use
the additional dimensions to search in the new direction. Only as it becomes more confident in its
current candidate will the trace of M become increasingly concentrated on the top k directions. To
illustrate this empirically, we generalized this example by generating data using the 32-dimensional
"orthogonal" distribution described in Section 4. This distribution presents a challenging test-case
for MSG, capped MSG and the incremental algorithm. Figure 2 shows plots of individual runs of
MSG, capped MSG with K = k + 1, the incremental algorithm, and Warmuth and Kuzmin's algorithm, all based on the same sequence of samples drawn from the orthogonal distribution. We
compare algorithms in terms of the suboptimality on the population objective based on the largest
k eigenvalues of the state matrix M (t) . The plots show the incremental algorithm getting stuck for
k ? {1, 4}, and the others intermittently plateauing at intermediate solutions before beginning to
again converge rapidly towards the optimum. This behavior is to be expected on the capped MSG
algorithm, due to the fact that the dimension of the subspace stored at each iterate is constrained.
However, it is somewhat surprising that MSG and Warmuth and Kuzmin's algorithm behaved similarly, and converged barely faster than capped MSG.
6 Experiments
We also compared the algorithms on the real-world MNIST dataset, which consists of 70,000 binary images of handwritten digits of size 28×28, resulting in a dimensionality of 784. We pre-normalized
the data by mean centering the feature vectors and scaling each feature by the product of its standard
Figure 3: Comparison on the MNIST dataset. The top row of plots shows suboptimality as a function of iteration count, while the bottom row shows suboptimality as a function of estimated runtime Σ_{s=1}^t (k′_s)². (Panels correspond to k = 1, 4, and 8; the compared algorithms are Incremental, Warmuth & Kuzmin, MSG, Capped MSG, and Grassmannian.)
deviation and the data dimension, so that each feature vector is zero mean and unit norm in
expectation. In addition to MSG, capped MSG, the incremental algorithm and Warmuth and Kuzmin's
algorithm, we also compare to a Grassmannian SGD algorithm (Balzano et al., 2010). All algorithms
except the incremental algorithm have a step-size parameter. In these experiments, we ran each
algorithm with decreasing step sizes η_t = c/√t for c ∈ {2^{-12}, 2^{-11}, . . . , 2^5} and picked the
best c, in terms of the average suboptimality over the run, on a validation set. Since we cannot
evaluate the true population objective, we estimate it by evaluating on a held-out test set. We use 40%
of the samples in the dataset for training, 20% for validation (tuning the step size), and 40% for testing.
We are interested in learning a maximum variance subspace of dimension k ∈ {1, 4, 8} in a single
"pass" over the training sample. In order to compare MSG, capped MSG, the incremental algorithm
and Warmuth and Kuzmin's algorithm in terms of runtime, we calculate the dominant term
in the computational complexity: ∑_{s=1}^t (k'_s)². The results are averaged over 100 random splits into
train-validation-test sets.
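The runtime proxy above is just a running sum of squared ranks: holding a rank-k'_s iterate costs on the order of (k'_s)² per step. The helper below is a hypothetical sketch of this bookkeeping (the rank sequence is made up, not from the experiments):

```python
def runtime_proxy(ranks):
    """Cumulative estimated runtime sum_{s=1}^t (k'_s)^2 after each iteration.

    `ranks` lists the rank k'_s of the representation kept at iteration s.
    Returns a list whose t-th entry is the proxy after t iterations.
    """
    proxy, total = [], 0
    for k in ranks:
        total += k * k
        proxy.append(total)
    return proxy

# A rank-4 iterate costs 16x as much per step as a rank-1 iterate, which is
# why capped MSG (small k'_s) runs almost as fast as the incremental algorithm.
print(runtime_proxy([1, 1, 2, 4]))  # [1, 2, 6, 22]
```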
We can see from Figure 3 that the incremental algorithm makes the most progress per iteration and
is also the fastest of all algorithms. MSG is comparable to the incremental algorithm in terms of the
progress made per iteration. However, its runtime is slightly worse because it will often keep
a slightly larger representation (of dimension k_t). The capped MSG variant (with K = k + 1) is
significantly faster, almost as fast as the incremental algorithm, while, as we saw in the previous
section, being less prone to getting stuck. Warmuth and Kuzmin's algorithm fares well with k = 1,
but its performance drops for higher k. Inspection of the underlying data shows that, in the k ∈
{4, 8} experiments, it also tends to have a larger k_t than MSG, and therefore has a higher cost per
iteration. Grassmannian SGD performs better than Warmuth and Kuzmin's algorithm, but much
worse than MSG and capped MSG.
7 Conclusions
In this paper, we presented a careful development and analysis of MSG, a stochastic approximation
algorithm for PCA, which enjoys good theoretical guarantees and offers a computationally efficient
variant, capped MSG. We showed that capped MSG is well-motivated theoretically and that it does
not get stuck at a suboptimal solution. Capped MSG is also shown to have excellent empirical
performance, and it is therefore a much better alternative to the recently proposed incremental PCA
algorithm of Arora et al. (2012). Furthermore, we provided a cleaner interpretation of the PCA
updates of Warmuth & Kuzmin (2008) in terms of Matrix Exponentiated Gradient (MEG) updates,
and showed that both MSG and MEG can be interpreted as mirror descent algorithms on the same
relaxation of the PCA optimization problem but with different distance generating functions.
8 References
Arora, Raman, Cotter, Andrew, Livescu, Karen, and Srebro, Nathan. Stochastic optimization for
PCA and PLS. In 50th Annual Allerton Conference on Communication, Control, and Computing, 2012.
Balzano, Laura, Nowak, Robert, and Recht, Benjamin. Online identification and tracking of
subspaces from highly incomplete information. In 48th Annual Allerton Conference on Communication,
Control, and Computing, 2010.
Beck, Amir and Teboulle, Marc. Mirror descent and nonlinear projected subgradient methods for convex
optimization. Operations Research Letters, 31(3):167–175, 2003.
Bottou, Léon and Bousquet, Olivier. The tradeoffs of large scale learning. In NIPS'07, pp. 161–168, 2007.
Boyd, Stephen and Vandenberghe, Lieven. Convex Optimization. Cambridge University Press, 2004.
Brand, Matthew. Incremental singular value decomposition of uncertain data with missing values.
In ECCV, 2002.
Collins, Michael, Globerson, Amir, Koo, Terry, Carreras, Xavier, and Bartlett, Peter L. Exponentiated
gradient algorithms for conditional random fields and max-margin Markov networks. J.
Mach. Learn. Res., 9:1775–1822, June 2008.
Duchi, John, Shalev-Shwartz, Shai, Singer, Yoram, and Chandra, Tushar. Efficient projections onto
the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference
on Machine Learning, ICML '08, pp. 272–279, New York, NY, USA, 2008. ACM.
Nemirovski, Arkadi and Yudin, David. Problem Complexity and Method Efficiency in Optimization.
John Wiley & Sons Ltd, 1983.
Nemirovski, Arkadi, Juditsky, Anatoli, Lan, Guanghui, and Shapiro, Alexander. Robust stochastic
approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609,
January 2009.
Oja, Erkki and Karhunen, Juha. On stochastic approximation of the eigenvectors and eigenvalues
of the expectation of a random matrix. Journal of Mathematical Analysis and Applications, 106:
69–84, 1985.
Sanger, Terence D. Optimal unsupervised learning in a single-layer linear feedforward neural
network. Neural Networks, 12:459–473, 1989.
Shalev-Shwartz, Shai and Srebro, Nathan. SVM optimization: Inverse dependence on training set
size. In ICML'08, pp. 928–935, 2008.
Shalev-Shwartz, Shai and Tewari, Ambuj. Stochastic methods for l1 regularized loss minimization.
In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pp.
929–936, New York, NY, USA, 2009. ACM.
Shalev-Shwartz, Shai, Singer, Yoram, and Srebro, Nathan. Pegasos: Primal Estimated sub-GrAdient
SOlver for SVM. In ICML'07, pp. 807–814, 2007.
Srebro, Nathan, Sridharan, Karthik, and Tewari, Ambuj. On the universality of online mirror descent.
In Advances in Neural Information Processing Systems, 24, 2011.
Warmuth, Manfred K. and Kuzmin, Dima. Randomized online PCA algorithms with regret bounds
that are logarithmic in the dimension. Journal of Machine Learning Research (JMLR), 9:2287–2320, 2008.
Variance Reduction for
Stochastic Gradient Optimization
Chong Wang, Xi Chen*, Alex Smola, Eric P. Xing
Carnegie Mellon University; *University of California, Berkeley
{chongw,xichen,epxing}@cs.cmu.edu, alex@smola.org
Abstract
Stochastic gradient optimization is a class of widely used algorithms for training
machine learning models. To optimize an objective, it uses the noisy gradient
computed from random data samples instead of the true gradient computed
from the entire dataset. However, when the variance of the noisy gradient is
large, the algorithm might spend much time bouncing around, leading to slower
convergence and worse performance. In this paper, we develop a general approach
of using control variates for variance reduction in stochastic gradient optimization.
Data statistics such as low-order moments (pre-computed or estimated online) are
used to form the control variate. We demonstrate how to construct the control
variate for two practical problems using stochastic gradient optimization. One is
convex (the MAP estimation for logistic regression) and the other is non-convex
(stochastic variational inference for latent Dirichlet allocation). On both problems,
our approach shows faster convergence and better performance than the classical approach.
1 Introduction
Stochastic gradient (SG) optimization [1, 2] is widely used for training machine learning models with
very large-scale datasets. It uses the noisy gradient (a.k.a. stochastic gradient) estimated from random
data samples rather than that from the entire data. Thus, stochastic gradient algorithms can run many
more iterations in a limited time budget. However, if the noisy gradient has a large variance, the
stochastic gradient algorithm might spend much time bouncing around, leading to slower convergence
and worse performance. Taking a mini-batch with a larger size for computing the noisy gradient could
help to reduce its variance; but if the mini-batch size is too large, it can undermine the advantage in
efficiency of stochastic gradient optimization.
In this paper, we propose a general remedy to the "noisy gradient" problem ubiquitous to all stochastic
gradient optimization algorithms for different models. Our approach builds on a variance reduction
technique, which makes use of control variates [3] to augment the noisy gradient and thereby reduce
its variance. The augmented "stochastic gradient" can be shown to remain an unbiased estimate of
the true gradient, a necessary condition that ensures the convergence. For such control variates to be
effective and sound, they must satisfy the following key requirements: 1) they have a high correlation
with the noisy gradient, and 2) their expectation (with respect to random data samples) is inexpensive
to compute. We show that such control variates can be constructed via low-order approximations
to the noisy gradient so that their expectation only depends on low-order moments of the data. The
intuition is that these low-order moments roughly characterize the empirical data distribution, and
can be used to form the control variate to correct the noisy gradient to a better direction. In other
words, the variance of the augmented ?stochastic gradient? becomes smaller as it is derived with
more information about the data.
The rest of the paper is organized as follows. In §2, we describe the general formulation and the
theoretical property of variance reduction via control variates in stochastic gradient optimization.
In §3, we present two examples to show how one can construct control variates for practical algorithms.
(More examples are provided in the supplementary material.) These include a convex problem, the
MAP estimation for logistic regression, and a non-convex problem, stochastic variational inference
for latent Dirichlet allocation [22]. Finally, we demonstrate the empirical performance of our
algorithms under these two examples in §4. We conclude with a discussion on some future work.
2 Variance reduction for general stochastic gradient optimization
We begin with a description of the general formulation of variance reduction via control variates for
stochastic gradient optimization. Consider a general optimization problem over a finite set of training
data D = {x_d}_{d=1}^D, with each x_d ∈ R^p. Here D is the number of training data points. We want to
maximize the following function with respect to a p-dimensional vector w,

maximize_w  L(w) := R(w) + (1/D) ∑_{d=1}^D f(w; x_d),

where R(w) is a regularization function.¹ Gradient-based algorithms can be used to maximize L(w)
at the expense of computing the gradient over the entire training set. Instead, stochastic gradient
(SG) methods use the noisy gradient estimated from random data samples. Suppose data index d is
selected uniformly from {1, . . . , D} at step t,

g(w; x_d) = ∇_w R(w) + ∇_w f(w; x_d),    (1)
w_{t+1} = w_t + ρ_t g(w; x_d),    (2)

where g(w; x_d) is the noisy gradient that only depends on x_d and ρ_t is a proper step size. To make
notation simple, we write g_d(w) ≜ g(w; x_d).

Following the standard stochastic optimization literature [1, 4], we require the expectation of the
noisy gradient g_d to equal the true gradient,

E_d[g_d(w)] = ∇_w L(w),    (3)

to ensure the convergence of the stochastic gradient algorithm. When the variance of g_d(w) is large,
the algorithm could suffer from slow convergence.
The basic idea of using control variates for variance reduction is to construct a new random vector
that has the same expectation as the target expectation but with smaller variance. In previous work [5],
control variates were used to improve the estimate of the intractable integral in variational Bayesian
inference, which was then used to compute the gradient of the variational lower bound. In our context,
we employ a random vector h_d(w) of length p to reduce the variance of the sampled gradient,

g̃_d(w) = g_d(w) − Aᵀ (h_d(w) − h(w)),    (4)

where A is a p × p matrix and h(w) ≜ E_d[h_d(w)]. (We will show how to choose h_d(w) later, but it
usually depends on the form of g_d(w).) The random vector g̃_d(w) has the same expectation as the
noisy gradient g_d(w) in Eq. 1, and thus can be used to replace g_d(w) in the SG update in Eq. 2. To
reduce the variance of the noisy gradient, the trace of the covariance matrix of g̃_d(w),

Var_d[g̃_d(w)] ≜ Cov_d[g̃_d(w), g̃_d(w)]
  = Var_d[g_d(w)] − (Cov_d[h_d(w), g_d(w)] + Cov_d[g_d(w), h_d(w)]) A + Aᵀ Var_d[h_d(w)] A,    (5)

must necessarily be small; therefore we set A to be the minimizer of Tr(Var_d[g̃_d(w)]). That is,

A* = argmin_A Tr(Var_d[g̃_d(w)])
   = (Var_d[h_d(w)])^{-1} (Cov_d[g_d(w), h_d(w)] + Cov_d[h_d(w), g_d(w)]) / 2.    (6)

The optimal A* is a function of w.
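Because h(w) = E_d[h_d(w)] is subtracted inside the correction term of Eq. 4, that term has expectation zero for any fixed A, so g̃_d(w) stays unbiased. A minimal scalar sketch with made-up values (not data from the paper) illustrates this:

```python
# Finite "dataset" of scalar noisy gradients g_d and control variates h_d.
g = [2.0, -1.0, 4.0, 3.0]
h = [1.5, -0.5, 3.0, 2.0]

h_bar = sum(h) / len(h)          # h(w) = E_d[h_d(w)], known in closed form
a = 0.7                          # any fixed coefficient A preserves unbiasedness

# Eq. 4 in one dimension: g_tilde_d = g_d - a * (h_d - h_bar).
g_tilde = [gd - a * (hd - h_bar) for gd, hd in zip(g, h)]

mean_g = sum(g) / len(g)
mean_gt = sum(g_tilde) / len(g_tilde)
print(mean_g, mean_gt)           # the two means coincide
```

Unbiasedness holds for every A; only the variance of g̃_d(w) depends on the choice of A, which is what Eqs. 5 and 6 optimize.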
Why is g̃_d(w) a better choice? Now we show that g̃_d(w) is a better "stochastic gradient" under the
ℓ2-norm. In the first-order stochastic oracle model, we normally assume that there exists a constant σ
such that for any estimate w in its domain [6, 7]:

E_d ‖g_d(w) − E_d[g_d(w)]‖_2^2 = Tr(Var_d[g_d(w)]) ≤ σ².

¹ We follow the convention of maximizing a function f: when we mention a convex problem, we actually
mean that the objective function −f is convex.

Under this assumption, the dominating term in the optimal convergence rate is O(σ/√t) for convex
problems and O(σ²/(μt)) for strongly convex problems, where μ is the strong convexity parameter
(see the definition of strong convexity on page 459 in [8]).
Now suppose that we can find a random vector h_d(w) and compute A* according to Eq. 6. By
plugging A* back into Eq. 5,

E_d ‖g̃_d(w) − E_d[g̃_d(w)]‖_2^2 = Tr(Var_d[g̃_d(w)]),

where Var_d[g̃_d(w)] = Var_d[g_d(w)] − Cov_d[g_d(w), h_d(w)] (Var_d[h_d(w)])^{-1} Cov_d[h_d(w), g_d(w)].

For any estimate w, Cov_d(g_d, h_d) (Cov_d(h_d, h_d))^{-1} Cov_d(h_d, g_d) is a positive semi-definite matrix.
Therefore its trace, which equals the sum of the eigenvalues, is positive (or zero when h_d and g_d
are uncorrelated), and hence

E_d ‖g̃_d(w) − E_d[g̃_d(w)]‖_2^2 ≤ E_d ‖g_d(w) − E_d[g_d(w)]‖_2^2.

In other words, it is possible to find a constant τ ≤ σ such that E_d ‖g̃_d(w) − E_d[g̃_d(w)]‖_2^2 ≤ τ²
for all w. Therefore, when applying stochastic gradient methods, we could improve the optimal
convergence rate from O(σ/√t) to O(τ/√t) for convex problems, and from O(σ²/(μt)) to O(τ²/(μt))
for strongly convex problems.
Estimating the optimal A*. When estimating A* according to Eq. 6, one needs to compute the inverse
of Var_d[h_d(w)], which could be computationally expensive. In practice, we could constrain A to be
a diagonal matrix. According to Eq. 5, when A = Diag(a_11, . . . , a_pp), its optimal value is:

a*_ii = [Cov_d(g_d(w), h_d(w))]_ii / [Var_d(h_d(w))]_ii.    (7)

This formulation avoids the computation of the matrix inverse, and leads to a significant reduction
of computational cost, since only the diagonal elements of Cov_d(g_d(w), h_d(w)) and Var_d(h_d(w)),
instead of the full matrices, need to be evaluated. It can be shown that this simpler surrogate for the
A* due to Eq. 6 still leads to a better convergence rate. Specifically:

E_d ‖g̃_d(w) − E_d[g̃_d(w)]‖_2^2 = Tr(Var_d(g̃_d(w)))
  = Tr(Var_d(g_d(w))) − ∑_{i=1}^p ([Cov_d(g_d(w), h_d(w))]_ii)² / [Var_d(h_d(w))]_ii
  = ∑_{i=1}^p (1 − ρ_ii²) [Var_d(g_d(w))]_ii ≤ Tr(Var_d(g_d(w))) = E_d ‖g_d(w) − E_d[g_d(w)]‖_2^2,    (8)

where ρ_ii is the Pearson correlation coefficient between [g_d(w)]_i and [h_d(w)]_i.
Indeed, an even simpler surrogate for A*, obtained by reducing A to a single real number a, can also
improve the convergence rate of SG. In this case, according to Eq. 5, the optimal a* is simply:

a* = Tr(Cov_d(g_d(w), h_d(w))) / Tr(Var_d(h_d(w))).    (9)

To estimate the optimal A* or its surrogates, we need to evaluate Cov_d(g_d(w), h_d(w)) and
Var_d(h_d(w)) (or their diagonal elements), which can be approximated by the sample covariance and
variance from mini-batch samples while running the stochastic gradient algorithm. If we cannot
always obtain mini-batch samples, we may use strategies like moving averages across iterations, as
those used in [9, 10].
From Eq. 8, we observe that when the Pearson correlation coefficient between g_d(w) and h_d(w)
is higher, the control variate h_d(w) will lead to a more significant level of variance reduction and
hence faster convergence. In the maximal correlation case, one could set h_d(w) = g_d(w) to obtain
zero variance. But obviously, we cannot compute E_d[h_d(w)] efficiently in this case. In practice, one
should construct h_d(w) such that it is highly correlated with g_d(w). In the next section, we will show
how to construct control variates for both convex and non-convex problems.
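In the scalar case A = a, Eq. 9 reduces to a* = Cov(g, h)/Var(h), and the diagonal identity in Eq. 8 says the resulting variance is (1 − ρ²) Var(g). The following sketch checks both facts on synthetic scalar data (illustrative values, not from the paper):

```python
import random

random.seed(0)

# Synthetic scalar noisy gradients g_d and a correlated control variate h_d.
n = 10000
h = [random.gauss(0.0, 1.0) for _ in range(n)]
g = [2.0 * hi + random.gauss(0.0, 0.5) for hi in h]    # strongly correlated

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

a_star = cov(g, h) / cov(h, h)                         # Eq. 9 in the scalar case
h_bar = mean(h)
g_tilde = [gd - a_star * (hd - h_bar) for gd, hd in zip(g, h)]

var_g, var_gt = cov(g, g), cov(g_tilde, g_tilde)
rho2 = cov(g, h) ** 2 / (cov(g, g) * cov(h, h))
print(var_gt <= var_g)                                 # variance never grows
print(abs(var_gt - (1 - rho2) * var_g) < 1e-6)         # matches Eq. 8
```

The same estimator applied per coordinate gives the diagonal rule in Eq. 7.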
3 Practicing variance reduction on convex and non-convex problems
In this section, we apply the variance reduction technique presented above to two exemplary but
practical problems: MAP estimation for logistic regression, a convex problem; and stochastic
variational inference for latent Dirichlet allocation [11, 22], a non-convex problem. In the supplement,
[Figure 1: The illustration of how data statistics help reduce variance for the noisy gradient in stochastic
optimization. The solid (red) line is the final gradient direction the algorithm will follow. (a) The exact gradient
direction computed using the entire dataset. (b) The noisy gradient direction computed from the sampled subset,
which can have high variance. (c) The improved noisy gradient direction with data statistics, such as low-order
moments of the entire data. These low-order moments roughly characterize the data distribution, and are used to
form the control variate to aid the noisy gradient.]
we show that the same principle can be applied to more problems, such as hierarchical Dirichlet
process [12, 13] and nonnegative matrix factorization [14].
As we discussed in §2, the higher the correlation between g_d(w) and h_d(w), the lower the variance.
Therefore, to apply the variance reduction technique in practice, the key is to construct a random
vector h_d(w) such that it has high correlation with g_d(w), but its expectation h(w) = E_d[h_d(w)] is
inexpensive to compute. The principle behind our choice of h(w) is that we construct h(w) based on
some data statistics, such as low-order moments. These low-order moments roughly characterize
the data distribution, which does not depend on the parameter w. Thus they can be pre-computed when
processing the data or estimated online while running the stochastic gradient algorithm. Figure 1
illustrates this idea. We will use this principle throughout the paper to construct control variates for
variance reduction under different scenarios.
3.1 SG with variance reduction for logistic regression
Logistic regression is widely used for classification [15]. Given a set of training examples (x_d, y_d),
d = 1, . . . , D, where y_d = 1 or y_d = −1 indicates the class label, the probability of y_d is

p(y_d | x_d, w) = σ(y_d wᵀx_d),

where σ(z) = 1/(1 + exp(−z)) is the logistic function. The averaged log likelihood of the training
data is

ℓ(w) = (1/D) ∑_{d=1}^D [ y_d wᵀx_d − log(1 + exp(y_d wᵀx_d)) ].    (10)

An SG algorithm employs the following noisy gradient:

g_d(w) = y_d x_d σ(−y_d wᵀx_d).    (11)
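Eqs. 2 and 11 combine into a one-line SG step. A minimal sketch, with an illustrative data point and step size and no regularizer R(w):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def noisy_gradient(w, x, y):
    """g_d(w) = y * x * sigmoid(-y * w^T x)  (Eq. 11, regularizer omitted)."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    scale = y * sigmoid(-margin)
    return [scale * xi for xi in x]

def sgd_step(w, x, y, rho):
    """w_{t+1} = w_t + rho_t * g_d(w_t)  (Eq. 2)."""
    g = noisy_gradient(w, x, y)
    return [wi + rho * gi for wi, gi in zip(w, g)]

w = [0.0, 0.0]
w = sgd_step(w, x=[1.0, 2.0], y=1, rho=0.1)
print(w)  # at w = 0, sigmoid(0) = 0.5, so the step is 0.1 * 0.5 * x
```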
Now we show how to construct our control variate for logistic regression. We begin with the first-order
Taylor expansion of the sigmoid function around ẑ,

σ(z) ≈ σ(ẑ) (1 + σ(−ẑ)(z − ẑ)).

We then apply this approximation to σ(−y_d wᵀx_d) in Eq. 11 to obtain our control variate.² For
logistic regression, we consider the two classes separately, since data samples within each class are more
likely to be similar. We consider positive data samples first. Let z = −wᵀx_d, and define our
control variate h_d^(1)(w) for y_d = 1 as

h_d^(1)(w) ≜ x_d σ(ẑ) (1 + σ(−ẑ)(z − ẑ)) = x_d σ(ẑ) (1 + σ(−ẑ)(−wᵀx_d − ẑ)).

Its expectation given y_d = 1 can be computed in closed form as

E_d[h_d^(1)(w) | y_d = 1] = σ(ẑ) ( x̄^(1) (1 − σ(−ẑ)ẑ) − σ(−ẑ) ( Var^(1)[x_d] + x̄^(1) (x̄^(1))ᵀ ) w ),

² Taylor expansion is not the only way to obtain control variates. Lower bounds or upper bounds of the
objective function [16] can also provide alternatives. But we will not explore those solutions in this paper.

where x̄^(1) and Var^(1)[x_d] are the mean and variance of the input features for the positive examples.
In our experiments, we choose ẑ = −wᵀx̄^(1), which is the center of the positive examples. We can
similarly derive the control variate h_d^(−1)(w) for negative examples; we omit the details. Given
a random sample regardless of its label, the expectation of the control variate is computed as

E_d[h_d(w)] = (D^(1)/D) E_d[h_d^(1)(w) | y_d = 1] + (D^(−1)/D) E_d[h_d^(−1)(w) | y_d = −1],

where D^(1) and D^(−1) are the numbers of positive and negative examples and D^(1)/D is the probability
of choosing a positive example from the training set. With the Taylor approximation, we would expect
our control variate to be highly correlated with the noisy gradient. See our experiments in §4 for details.
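The closed form above can be sanity-checked: for one-dimensional features, the expression built from the class mean and variance must equal the empirical average of h_d^(1)(w) over the positive examples. A hypothetical check with illustrative data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Scalar positive-class features (illustrative) and a current iterate w.
xs = [0.5, 1.0, 1.5, 2.5]
w = 0.8

x_bar = sum(xs) / len(xs)
x_var = sum((x - x_bar) ** 2 for x in xs) / len(xs)
z_hat = -w * x_bar                       # expansion point used in the paper

def h(x):
    # h_d^(1)(w) = x * sigma(z_hat) * (1 + sigma(-z_hat) * (-w*x - z_hat))
    return x * sigmoid(z_hat) * (1.0 + sigmoid(-z_hat) * (-w * x - z_hat))

empirical = sum(h(x) for x in xs) / len(xs)
closed_form = sigmoid(z_hat) * (
    x_bar * (1.0 - sigmoid(-z_hat) * z_hat)
    - sigmoid(-z_hat) * (x_var + x_bar * x_bar) * w
)
print(abs(empirical - closed_form) < 1e-12)  # the two agree
```

The agreement is exact (up to floating point) because E[x²] = Var[x] + x̄² holds for the empirical moments as well.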
3.2 SVI with variance reduction for latent Dirichlet allocation
The stochastic variational inference (SVI) algorithm used for latent Dirichlet allocation (LDA) [22] is
also a form of stochastic gradient optimization, and therefore it can also benefit from variance reduction.
The basic idea is to stochastically optimize the variational objective for LDA, using stochastic mean
field updates augmented by control variates derived from low-order moments on the data.

Latent Dirichlet allocation (LDA). LDA is the simplest topic model for discrete data such as text
collections [17, 18]. Assume there are K topics. The generative process of LDA is as follows.

1. Draw topics β_k ∼ Dir_V(η) for k ∈ {1, . . . , K}.
2. For each document d ∈ {1, . . . , D}:
   (a) Draw topic proportions θ_d ∼ Dir_K(α).
   (b) For each word w_dn, n ∈ {1, . . . , N}:
       i. Draw topic assignment z_dn ∼ Mult(θ_d).
       ii. Draw word w_dn ∼ Mult(β_{z_dn}).
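The generative process can be sketched with standard-library sampling, drawing Dirichlet variates as normalized Gammas; all sizes and hyperparameters below are illustrative:

```python
import random

random.seed(1)

def dirichlet(alpha, dim):
    """Draw from a symmetric Dirichlet via normalized Gamma variates."""
    g = [random.gammavariate(alpha, 1.0) for _ in range(dim)]
    s = sum(g)
    return [x / s for x in g]

def categorical(p):
    return random.choices(range(len(p)), weights=p)[0]

K, V, D, N = 3, 8, 2, 5        # topics, vocabulary, documents, words/doc
eta, alpha = 0.1, 0.5

beta = [dirichlet(eta, V) for _ in range(K)]      # beta_k ~ Dir_V(eta)
docs = []
for _ in range(D):
    theta = dirichlet(alpha, K)                   # theta_d ~ Dir_K(alpha)
    words = []
    for _ in range(N):
        z = categorical(theta)                    # z_dn ~ Mult(theta_d)
        words.append(categorical(beta[z]))        # w_dn ~ Mult(beta_{z_dn})
    docs.append(words)
print(docs)  # D documents, each a list of N word ids in {0, ..., V-1}
```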
Given the observed words w ≜ w_{1:D}, we want to estimate the posterior distribution of the latent
variables, including topics β ≜ β_{1:K}, topic proportions θ ≜ θ_{1:D} and topic assignments z ≜ z_{1:D},

p(β, θ, z | w) ∝ ∏_{k=1}^K p(β_k | η) ∏_{d=1}^D p(θ_d | α) ∏_{n=1}^N p(z_dn | θ_d) p(w_dn | β_{z_dn}).    (12)
However, this posterior is intractable. We must resort to approximation methods. Mean-field
variational inference is a popular approach for the approximation [19].

Mean-field variational inference for LDA. Mean-field variational inference posits a family of
distributions (called variational distributions) indexed by free variational parameters and then optimizes
these parameters to minimize the KL divergence between the variational distribution and the true
posterior. For LDA, the variational distribution is

q(β, θ, z) = ∏_{k=1}^K q(β_k | λ_k) ∏_{d=1}^D q(θ_d | γ_d) ∏_{n=1}^N q(z_dn | φ_dn),    (13)

where the variational parameters are λ_k (Dirichlet), γ_d (Dirichlet), and φ_dn (multinomial). We seek
the variational distribution (Eq. 13) that minimizes the KL divergence to the true posterior (Eq. 12).
This is equivalent to maximizing the lower bound of the log marginal likelihood of the data,

log p(w) ≥ E_q[log p(β, θ, z, w)] − E_q[log q(β, θ, z)] ≜ L(q),    (14)

where E_q[·] denotes the expectation with respect to the variational distribution q(β, θ, z). Setting
the gradient of the lower bound L(q) with respect to the variational parameters to zero gives the
following coordinate ascent algorithm [17]. For each document d ∈ {1, . . . , D}, we run local
variational inference using the following updates until convergence,

φ_dv^k ∝ exp{ Ψ(γ_dk) + Ψ(λ_{k,v}) − Ψ(∑_v λ_kv) }  for v ∈ {1, . . . , V},    (15)
γ_d = α + ∑_{v=1}^V n_dv φ_dv,    (16)

where Ψ(·) is the digamma function and n_dv is the number of occurrences of term v in document d.
Note that here we use φ_dv instead of φ_dn in Eq. 13, since instances of the same term v share the
same φ_dn. After finding the variational parameters for each document, we update the variational
Dirichlet for each topic,

λ_kv = η + ∑_{d=1}^D n_dv φ_dv^k.    (17)
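The local updates in Eqs. 15 and 16 can be sketched for a single document as follows. The digamma function is not in Python's standard library, so a small asymptotic-series implementation is included; counts, hyperparameters, and initial values are illustrative:

```python
import math

def digamma(x):
    """Psi(x) via recurrence plus an asymptotic series (adequate for x > 0)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0/12 - f * (1.0/120 - f/252))

def local_step(gamma_d, lam, n_d, alpha):
    """One pass of Eqs. 15-16 for a single document.

    gamma_d: length-K Dirichlet parameters for the document.
    lam:     K x V topic Dirichlet parameters lambda.
    n_d:     length-V term counts for the document.
    Returns (phi, gamma_d_new); phi is V x K with rows summing to 1.
    """
    K, V = len(lam), len(lam[0])
    lam_sums = [sum(row) for row in lam]
    phi = []
    for v in range(V):
        row = [math.exp(digamma(gamma_d[k]) + digamma(lam[k][v])
                        - digamma(lam_sums[k])) for k in range(K)]
        s = sum(row)
        phi.append([p / s for p in row])          # Eq. 15, normalized
    gamma_new = [alpha + sum(n_d[v] * phi[v][k] for v in range(V))
                 for k in range(K)]               # Eq. 16
    return phi, gamma_new

lam = [[1.0, 2.0, 1.0], [2.0, 1.0, 1.0]]          # K = 2 topics, V = 3 terms
phi, gamma_d = local_step([1.0, 1.0], lam, n_d=[3, 1, 0], alpha=0.5)
print(gamma_d)  # entries sum to K*alpha + total word count = 1 + 4 = 5
```

Iterating this step until γ_d stabilizes yields the converged local parameters used in Eq. 17.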
The whole coordinate ascent variational algorithm iterates over Eq. 15, 16 and 17 until convergence.
However, this also reveals the drawback of this algorithm?updating the topic parameter ? in Eq. 17
depends on the variational parameters ? from every document. This is especially inefficient for largescale datasets. Stochastic variational inference solves this problem using stochastic optimization.
Stochastic variational inference (SVI). Instead of using the coordinate ascent algorithm, SVI
optimizes the variational lower bound L(q) using stochastic optimization [22]. It draws random
samples from the corpus and use these samples to form the noisy estimate of the natural gradient [20].
Then the algorithm follows that noisy natural gradient with a decreasing step size until convergence.
The noisy gradient only depends on the sampled data and it is inexpensive to compute. This leads to
a much faster algorithm than the traditional coordinate ascent variational inference algorithm.
Let d be a random document index, d ∼ Unif(1, ..., D), and L_d(q) be the sampled lower bound. The
sampled lower bound L_d(q) has the same form as L(q) in Eq. 14 except that the sampled lower
bound uses a virtual corpus that only contains document d replicated D times. According to [22], for
LDA the noisy natural gradient with respect to the topic variational parameters is
g_d(λ_kv) ≜ −λ_kv + η + D n_dv φ_dv^k,
(18)
where the φ_dv^k are obtained from the local variational inference by iterating over Eqs. 15 and 16 until
convergence.³ With a step size ρ_t, SVI uses the following update: λ_kv ← λ_kv + ρ_t g_d(λ_kv). However,
the sampled natural gradient g_d(λ_kv) in Eq. 18 might have a large variance when the number of
documents is large. This could lead to slow convergence or a poor local mode.
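A minimal sketch of the SVI loop (Eq. 18 plus the step-size update); the corpus, initialization, and step-size schedule here are toy choices of ours:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
D, K, V = 100, 3, 8
alpha, eta = 0.1, 0.01
n = rng.integers(0, 5, size=(D, V)).astype(float)    # term counts n_dv
lam = rng.gamma(1.0, 1.0, size=(K, V))               # topic Dirichlets lambda_kv

def local_phi(n_d, lam, iters=30):
    """Local variational inference (Eqs. 15-16), run to approximate convergence."""
    gam_d = np.ones(K)
    for _ in range(iters):
        log_phi = digamma(gam_d)[:, None] + digamma(lam) - digamma(lam.sum(1))[:, None]
        phi = np.exp(log_phi - log_phi.max(0))
        phi /= phi.sum(0)
        gam_d = alpha + phi @ n_d
    return phi

for t in range(1, 201):
    d = rng.integers(D)                      # sample a document uniformly at random
    phi = local_phi(n[d], lam)
    g = -lam + eta + D * n[d] * phi          # Eq. 18: noisy natural gradient
    rho = (t + 10.0) ** -0.7                 # a decreasing step size (our ad hoc choice)
    lam = lam + rho * g                      # SVI update
```

Because g_d(λ) = −λ + η + D n_dv φ, the update is a convex combination of positive quantities whenever ρ_t < 1, so λ remains positive throughout.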
Control variate. Now we show how to construct control variates for the noisy gradient to reduce
its variance. According to Eq. 18, the noisy gradient g_d(λ_kv) is a function of the topic assignment
parameters φ_dv, which in turn depend on w_d, the words in document d, through the iterative updates
in Eqs. 15 and 16. This is different from the case in Eq. 11: in logistic regression, the gradient is an
analytical function of the training data (Eq. 11), while in LDA, the natural gradient directly depends
on the optimal local variational parameters (Eq. 18), which in turn depend on the training data through
the local variational inference (Eq. 15). However, by carefully exploring the structure of the iterations,
we can create effective control variates.
The key idea is to run Eqs. 15 and 16 only up to a fixed number of iterations, together with some
additional approximations to maintain analytical tractability. Starting the iteration with all γ_dk set to
the same value, we have φ_v^{k(0)} ∝ exp{Ψ(λ_kv) − Ψ(∑_v λ_kv)}.⁴ Note that φ_v^{k(0)} does not depend
on document d. Intuitively, φ_v^{k(0)} is the probability of term v belonging to topic k out of the K topics.
Next we use γ_dk − α to approximate exp(Ψ(γ_dk)) in Eq. 15.⁵ Plugging this approximation into
Eqs. 15 and 16 leads to the update,
φ_dv^{k(1)} = [(∑_{u=1}^V f_du φ_u^{k(0)}) φ_v^{k(0)}] / [∑_{k=1}^K (∑_{u=1}^V f_du φ_u^{k(0)}) φ_v^{k(0)}]
            ≈ [(∑_{u=1}^V f_du φ_u^{k(0)}) φ_v^{k(0)}] / [∑_{k=1}^K (∑_{u=1}^V f̄_u φ_u^{k(0)}) φ_v^{k(0)}],
(19)
where f_dv = n_dv / n_d is the empirical frequency of term v in document d. In addition, we replace f_du
with f̄_u ≜ (1/D) ∑_d f_du, the averaged frequency of term u in the corpus, making the denominator
of Eq. 19, m_v^{(1)} ≜ ∑_{k=1}^K (∑_{u=1}^V f̄_u φ_u^{k(0)}) φ_v^{k(0)}, independent of the document. This approximation
does not change the relative importance of the topics for term v. We define our control variate as
h_d(λ_kv) ≜ D n_dv φ_dv^{k(1)},
whose expectation is E_d[h_d(λ_kv)] = (D/m_v^{(1)}) {∑_{u=1}^V \overline{n_v f_u} φ_u^{k(0)}} φ_v^{k(0)}, where
\overline{n_v f_u} ≜ (1/D) ∑_d n_du f_dv = (1/D) ∑_d n_du n_dv / n_d. This depends on up to the second-order moments
of the data, which is usually sparse. We can continue to compute φ_dv^{k(2)} (or higher) given φ_dv^{k(1)}, which
turns out to use the third-order (or higher) moments. We omit the details here. Similar ideas can be
used in deriving control variates for hierarchical Dirichlet process [12, 13] and nonnegative matrix
factorization [14]. We outline these in the supplementary material.
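A sketch of the construction (ours; toy sizes, and the sparsification of footnote 4 is omitted), checking that the control variate actually reduces the variance of the noisy gradient over the corpus:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(3)
D, K, V = 200, 3, 8
alpha, eta = 0.1, 0.01
n = rng.integers(1, 6, size=(D, V)).astype(float)    # term counts (all positive for simplicity)
nd = n.sum(1)
f = n / nd[:, None]                                  # empirical frequencies f_dv = n_dv / n_d
fbar = f.mean(0)                                     # corpus-averaged frequencies
lam = rng.gamma(2.0, 1.0, size=(K, V))

# phi_v^{k(0)}: one step with all gamma_dk equal, normalized over topics
log0 = digamma(lam) - digamma(lam.sum(1))[:, None]
phi0 = np.exp(log0 - log0.max(0)); phi0 /= phi0.sum(0)

# Eq. 19 with the document-independent denominator m_v^{(1)}
m = ((phi0 @ fbar)[:, None] * phi0).sum(0)           # m_v = sum_k (sum_u fbar_u phi0_ku) phi0_kv
phi1 = (f @ phi0.T)[:, :, None] * phi0[None, :, :] / m[None, None, :]   # shape (D, K, V)

def noisy_grad(d, iters=20):
    """g_d(lambda) of Eq. 18 with converged local inference."""
    gam_d = np.ones(K)
    for _ in range(iters):
        lp = digamma(gam_d)[:, None] + log0
        phi = np.exp(lp - lp.max(0)); phi /= phi.sum(0)
        gam_d = alpha + phi @ n[d]
    return -lam + eta + D * n[d] * phi

g = np.stack([noisy_grad(d) for d in range(D)])      # every possible noisy gradient
h = D * n[:, None, :] * phi1                         # control variates h_d(lambda_kv)
hc = (h - h.mean(0)).reshape(D, -1)                  # centered: subtract E_d[h_d]
gc = (g - g.mean(0)).reshape(D, -1)

a = (gc * hc).sum() / (hc * hc).sum()                # empirically optimal scalar a*
var_plain = gc.var(0).sum()
var_cv = (gc - a * hc).var(0).sum()                  # variance with the control variate
```

Because a is the empirically optimal scalar, var_cv ≤ var_plain holds by construction; the size of the gap tracks the squared correlation between the noisy gradient and the control variate.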
³ Running to convergence is essential to ensure the natural gradient is valid in Eq. 18 [22].
⁴ In our experiments, we set φ_v^{k(0)} = 0 if φ_v^{k(0)} is less than 0.02. This leaves φ^{(0)} very sparse, since a term usually belongs to a small set of topics. For example, in the Nature data, only 6% of the entries are non-zero.
⁵ The scale of the approximation does not matter: C(γ_dk − α), where C is a constant, has the same effect as γ_dk − α. Other approximations to exp(Ψ(γ_dk)) can also be used, as long as they are linear in γ_dk.
[Figure 2: (a) optimum minus objective on training data; (b) test accuracy on testing data; x-axis: data points (×100K); methods: Variance Reduction and Standard SG with learning rates 1, 0.2, 0.05.]
Figure 2: Comparison of our approach with standard SG algorithms using different constant learning rates. The
figure was created using the geom_smooth function in ggplot2 with local polynomial regression fitting (loess). A
wider stripe indicates that the result fluctuates more. This figure is best viewed in color. (Decayed learning rates we
tested did not perform as well as constant ones and are not shown.) Legend "Variance Reduction-1" indicates the
algorithm with variance reduction using learning rate ρ_t = 1.0. (a) Optimum minus the objective on the training
data; the lower the better. (b) Test accuracy on testing data; the higher the better. From these results, we see
that variance reduction with ρ_t = 1.0 performs the best, while the standard SG algorithm with ρ_t = 1.0 learns
faster but bounces more (a wider stripe) and performs worse in the end. With ρ_t = 0.05, variance reduction
performs about the same as the standard algorithm and both converge slowly. These results indicate that with
variance reduction, a larger learning rate is possible, allowing faster convergence without sacrificing performance.
[Figure 3: Pearson's correlation coefficient vs. data points (×100K).]
Figure 3: Pearson's correlation coefficient for ρ_t = 1.0 as we run our algorithm. It is usually high, indicating
that the control variate is highly correlated with the noisy gradient, leading to a large variance reduction. Other
settings are similar.
4 Experiments
In this section, we conducted experiments on MAP estimation for logistic regression and stochastic
variational inference for LDA.⁶ In our experiments, we chose to estimate the optimal a* as a scalar,
as shown in Eq. 9, for simplicity.
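Eq. 9 itself is outside this excerpt; for a noisy estimator g and a control variate h with known mean, the variance-minimizing scalar is a* = Cov(g, h)/Var(h). A self-contained illustration with synthetic variables (all values below are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
h = rng.normal(size=N)                        # control variate with known mean E[h] = 0
g = 2.0 * h + rng.normal(scale=0.5, size=N)   # noisy estimator, strongly correlated with h

C = np.cov(g, h)
a = C[0, 1] / C[1, 1]                         # optimal scalar a* = Cov(g, h) / Var(h)
g_cv = g - a * (h - 0.0)                      # same expectation, reduced variance

rho2 = np.corrcoef(g, h)[0, 1] ** 2           # Var(g_cv) = (1 - rho^2) * Var(g)
```

The variance drops by the factor (1 − ρ²), where ρ is the correlation between g and h, so a highly correlated control variate yields a large reduction.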
4.1 Logistic regression
We evaluate our algorithm on stochastic gradient (SG) for logistic regression. For the standard SG
algorithm, we also evaluated the version with averaged output (ASG), although we did not find that it
outperforms the standard SG algorithm by much. The regularization added to Eq. 10 for the MAP
estimation is −(1/(2D)) wᵀw. Our dataset is covtype (D = 581,012, p = 54), obtained from the
LIBSVM data website.⁷ We separate out 5K examples as the test set. We test two types of learning rates,
constant and decayed. For constant rates, we explore ρ_t ∈ {0.01, 0.05, 0.1, 0.2, 0.5, 1}. For decayed
rates, we explore ρ_t ∈ {t^{−1/2}, t^{−0.75}, t^{−1}}. We use a mini-batch size of 100.
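The covtype data itself is not bundled here; the training loop (constant rate, mini-batch of 100, and the −(1/(2D)) wᵀw regularizer) can be sketched on synthetic data as follows, with toy dimensions and our own schedule:

```python
import numpy as np

rng = np.random.default_rng(9)
Dn, p = 5000, 10                        # synthetic stand-in for covtype (D = 581,012, p = 54)
w_true = rng.normal(size=p)
X = rng.normal(size=(Dn, p))
y = (rng.random(Dn) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(p)
rho, batch = 0.2, 100                   # constant learning rate, mini-batch size 100
for _ in range(2000):
    idx = rng.integers(0, Dn, size=batch)
    mu = 1.0 / (1.0 + np.exp(-X[idx] @ w))
    # stochastic gradient of the average log likelihood plus the -(1/(2D)) w^T w regularizer
    grad = X[idx].T @ (y[idx] - mu) / batch - w / Dn
    w += rho * grad
```

With a well-specified model, the iterate aligns closely with the generating weights even at a constant rate; the variance-reduced variant in the text allows the larger constant rates to be used safely.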
Results. We found that the decayed learning rates we tested did not work well compared with the
constant ones on this data, so we focus on the results using the constant rates. We plot three cases
in Figure 2, for ρ_t ∈ {0.05, 0.2, 1}, to show the trend by comparing the objective function on the
training data and the test accuracy on the testing data. (The best result for variance reduction is
obtained when ρ_t = 1.0, and for standard SGD when ρ_t = 0.2.) These contain the best results of
⁶ Code will be available on the authors' websites.
⁷ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets
[Figure 4: held-out log likelihood over time (in hours, 0-20) on Nature, New York Times, and Wikipedia; methods: Standard-100, Standard-500, Var-Reduction-100, Var-Reduction-500.]
Figure 4: Held-out log likelihood on three large corpora. (Higher numbers are better.) Legend "Standard-100"
indicates the stochastic algorithm in [10] with a batch size of 100. Our method consistently performs better
than the standard stochastic variational inference. A larger batch size tends to perform better.
each. With variance reduction, a large learning rate is possible, allowing faster convergence without
sacrificing performance. Figure 3 shows the mean of Pearson's correlation coefficient between the
control variate and the noisy gradient⁸, which is quite high: the control variate is highly correlated with
the noisy gradient, leading to a large variance reduction.
4.2 Stochastic variational inference for LDA
We evaluate our algorithm on stochastic variational inference for LDA. [10] has shown that the
adaptive learning rate algorithm for SVI performs better than manually tuned ones, so we use
their algorithm to estimate an adaptive learning rate. For LDA, we set the number of topics K = 100
and hyperparameters α = 0.1 and η = 0.01. We tested mini-batch sizes of 100 and 500.
Data sets. We analyzed three large corpora: Nature, New York Times, and Wikipedia. The Nature
corpus contains 340K documents and a vocabulary of 4,500 terms; the New York Times corpus
contains 1.8M documents and a vocabulary of 8,000 terms; the Wikipedia corpus contains
3.6M documents and a vocabulary of 7,700 terms.
Evaluation metric and results. To evaluate our models, we held out 10K documents from each
corpus and calculated their predictive likelihood. We follow the metric used in the recent topic modeling
literature [21, 22]. For a document w_d in D_test, we split it into halves, w_d = (w_d1, w_d2), and
computed the predictive log likelihood of the words in w_d2 conditioned on w_d1 and D_train. The
per-word predictive log likelihood is defined as
likelihood_pw ≜ ∑_{d∈D_test} log p(w_d2 | w_d1, D_train) / ∑_{d∈D_test} |w_d2|.
Here |·| is the number of words. A better predictive distribution given the first half gives higher
likelihood to the second half. We used the same strategy as in [22] to approximate its computation.
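A point-estimate sketch of this document-completion metric (ours; the paper instead uses the variational approximation of [22], and the topics below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(5)
K, V = 3, 8
beta = rng.dirichlet(np.ones(V), size=K)       # stand-in point-estimate topics (rows sum to 1)

def per_word_heldout_ll(docs, beta, em_iters=50):
    """Fit theta_d on the first half w_d1, score the second half w_d2;
    returns sum_d log p(w_d2 | theta_hat_d) / sum_d |w_d2|."""
    K = beta.shape[0]
    total_ll, total_words = 0.0, 0
    for w in docs:
        w1, w2 = w[: len(w) // 2], w[len(w) // 2 :]
        theta = np.ones(K) / K
        for _ in range(em_iters):                  # EM over theta with topics fixed
            r = theta[:, None] * beta[:, w1]       # responsibilities, shape (K, |w1|)
            r /= r.sum(0)
            theta = r.sum(1) / len(w1)
        total_ll += np.log(theta @ beta[:, w2]).sum()
        total_words += len(w2)
    return total_ll / total_words

docs = [rng.integers(0, V, size=rng.integers(10, 40)) for _ in range(20)]
ll = per_word_heldout_ll(docs, beta)               # higher (less negative) is better
```

A better predictive distribution for the second half yields a higher (less negative) per-word value, which is what Figure 4 reports.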
Figure 4 shows the results. On all three corpora, our algorithm gives better predictive distributions.
5 Discussions and future work
In this paper, we show that variance reduction with control variates can be used to improve stochastic
gradient optimization. We further demonstrate its usage on convex and non-convex problems,
showing improved performance on both. In future work, we would like to explore how to use
second-order methods (such as Newton's method) or better line search algorithms to further improve
the performance of stochastic optimization. This is because, for example, with variance reduction,
second-order methods are able to capture the local curvature much better.
Acknowledgement. We thank anonymous reviewers for their helpful comments. We also thank
Dani Yogatama for helping with some experiments on LDA. Chong Wang and Eric P. Xing are
supported by NSF DBI-0546594 and NIH 1R01GM093156.
⁸ Since the control variate and the noisy gradient are vectors, we use the mean of the Pearson's coefficients computed for each dimension between these two vectors.
References
[1] Spall, J. Introduction to stochastic search and optimization: Estimation, simulation, and control. John
Wiley and Sons, 2003.
[2] Bottou, L. Stochastic learning. In O. Bousquet, U. von Luxburg, eds., Advanced Lectures on Machine
Learning, Lecture Notes in Artificial Intelligence, LNAI 3176, pages 146?168. Springer Verlag, Berlin,
2004.
[3] Ross, S. M. Simulation. Elsevier, fourth edn., 2006.
[4] Nemirovski, A., A. Juditsky, G. Lan, et al. Robust stochastic approximation approach to stochastic
programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[5] Paisley, J., D. Blei, M. Jordan. Variational Bayesian inference with stochastic search. In International
Conference on Machine Learning. 2012.
[6] Lan, G. An optimal method for stochastic composite optimization. Mathematical Programming, 133:365–397, 2012.
[7] Chen, X., Q. Lin, J. Pena. Optimal regularized dual averaging methods for stochastic optimization. In
Advances in Neural Information Processing Systems (NIPS). 2012.
[8] Boyd, S., L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[9] Schaul, T., S. Zhang, Y. LeCun. No More Pesky Learning Rates. ArXiv e-prints, 2012.
[10] Ranganath, R., C. Wang, D. M. Blei, et al. An adaptive learning rate for stochastic variational inference. In
International Conference on Machine Learning. 2013.
[11] Hoffman, M., D. Blei, F. Bach. Online inference for latent Dirichlet allocation. In Neural Information
Processing Systems. 2010.
[12] Teh, Y., M. Jordan, M. Beal, et al. Hierarchical Dirichlet processes. Journal of the American Statistical
Association, 101(476):1566–1581, 2007.
[13] Wang, C., J. Paisley, D. Blei. Online variational inference for the hierarchical Dirichlet process. In
International Conference on Artificial Intelligence and Statistics. 2011.
[14] Seung, D., L. Lee. Algorithms for non-negative matrix factorization. In Neural Information Processing
Systems. 2001.
[15] Bishop, C. Pattern Recognition and Machine Learning. Springer New York., 2006.
[16] Jaakkola, T., M. Jordan. A variational approach to Bayesian logistic regression models and their extensions.
In International Workshop on Artificial Intelligence and Statistics. 1996.
[17] Blei, D., A. Ng, M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022,
2003.
[18] Blei, D., J. Lafferty. Topic models. In A. Srivastava, M. Sahami, eds., Text Mining: Theory and Applications.
Taylor and Francis, 2009.
[19] Jordan, M., Z. Ghahramani, T. Jaakkola, et al. Introduction to variational methods for graphical models.
Machine Learning, 37:183–233, 1999.
[20] Amari, S. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[21] Asuncion, A., M. Welling, P. Smyth, et al. On smoothing and inference for topic models. In Uncertainty in
Artificial Intelligence. 2009.
[22] Hoffman, M., D. Blei, C. Wang, et al. Stochastic Variational Inference. Journal of Machine Learning
Research, 2013.
Memory Limited, Streaming PCA
Constantine Caramanis
Dept. of Electrical and Computer Engineering
The University of Texas at Austin
[email protected]
Ioannis Mitliagkas
Dept. of Electrical and Computer Engineering
The University of Texas at Austin
[email protected]
Prateek Jain
Microsoft Research
Bangalore, India
[email protected]
Abstract
We consider streaming, one-pass principal component analysis (PCA), in the highdimensional regime, with limited memory. Here, p-dimensional samples are presented sequentially, and the goal is to produce the k-dimensional subspace that
best approximates these points. Standard algorithms require O(p²) memory;
meanwhile no algorithm can do better than O(kp) memory, since this is what the
output itself requires. Memory (or storage) complexity is most meaningful when
understood in the context of computational and sample complexity. Sample complexity for high-dimensional PCA is typically studied in the setting of the spiked
covariance model, where p-dimensional points are generated from a population
covariance equal to the identity (white noise) plus a low-dimensional perturbation
(the spike) which is the signal to be recovered. It is now well-understood that
the spike can be recovered when the number of samples, n, scales proportionally
with the dimension, p. Yet, all algorithms that provably achieve this have memory complexity O(p²). Meanwhile, algorithms with memory-complexity O(kp)
do not have provable bounds on sample complexity comparable to p. We present
an algorithm that achieves both: it uses O(kp) memory (meaning storage of any
kind) and is able to compute the k-dimensional spike with O(p log p) sample complexity, the first algorithm of its kind. While our theoretical analysis focuses
on the spiked covariance model, our simulations show that our algorithm is successful on much more general models for the data.
1 Introduction
Principal component analysis is a fundamental tool for dimensionality reduction, clustering, classification, and many more learning tasks. It is a basic preprocessing step for learning, recognition, and
estimation procedures. The core computational element of PCA is performing a (partial) singular
value decomposition, and much work over the last half century has focused on efficient algorithms
(e.g., Golub & Van Loan (2012) and references therein) and hence on computational complexity.
The recent focus on understanding high-dimensional data, where the dimensionality of the data
scales together with the number of available sample points, has led to an exploration of the sample
complexity of covariance estimation. This direction was largely influenced by Johnstone's spiked
covariance model, where data samples are drawn from a distribution whose (population) covariance
is a low-rank perturbation of the identity matrix Johnstone (2001). Work initiated there, and also
work done in Vershynin (2010a) (and references therein) has explored the power of batch PCA in
the p-dimensional setting with sub-Gaussian noise, and demonstrated that the singular value decomposition (SVD) of the empirical covariance matrix succeeds in recovering the principal components
(extreme eigenvectors of the population covariance) with high probability, given n = O(p) samples.
This paper brings the focus on another critical quantity: memory/storage. The only currently available algorithms with provable sample complexity guarantees either store all n = O(p) samples (note
that for more than a single pass over the data, the samples must all be stored) or explicitly form or
approximate the empirical p × p (typically dense) covariance matrix. All cases require as much as
O(p²) storage for exact recovery. In certain high-dimensional applications, where data points are
high resolution photographs, biometrics, video, etc., p often is of the order of 10¹⁰–10¹², making
the need for O(p2 ) memory prohibitive. At many computing scales, manipulating vectors of length
O(p) is possible, when storage of O(p2 ) is not. A typical desktop may have 10-20 GB of RAM, but
will not have more than a few TB of total storage. A modern smart-phone may have as much as a
GB of RAM, but has a few GB, not TB, of storage. In distributed storage systems, the scalability in
storage comes at the heavy cost of communication.
In this light, we consider the streaming data setting, where the samples x_t ∈ R^p are collected
sequentially, and unless we store them, they are irretrievably gone.¹ On the spiked covariance model
(and natural generalizations), we show that a simple algorithm requiring O(kp) storage (the best
possible) performs as well as batch algorithms (namely, SVD on the empirical covariance matrix),
with sample complexity O(p log p). To the best of our knowledge, this is the only algorithm with
both storage complexity and sample complexity guarantees.
We discuss connections to past work in detail in Section 2, introduce the model in Section 3, and
present the solution to the rank 1 case, the rank k case, and the perturbed-rank-k case in Sections 4.1,
4.2 and 4.3, respectively. In Section 5 we provide experiments that not only confirm the theoretical
results, but demonstrate that our algorithm works well outside the assumptions of our main theorems.
2 Related Work
Memory- and computation-efficient algorithms that operate on streaming data are plentiful in the
literature and many seem to do well in practice. However, there is no algorithm that provably
recovers the principal components in the same noise and sample-complexity regime as the batch
PCA algorithm does and maintains a provably light memory footprint. Because of the practical
relevance, there is renewed interest in this problem. The fact that it is an important unresolved issue
has been pointed out in numerous places, e.g., Warmuth & Kuzmin (2008); Arora et al. (2012).
Online-PCA for regret minimization is considered in several papers, most recently in Warmuth &
Kuzmin (2008). There the multiplicative weights approach is adapted to this problem, with experts
corresponding to subspaces. The goal is to control the regret, improving on the natural follow-the-leader algorithm that performs batch-PCA at each step. However, the algorithm can require
O(p²) memory, in order to store the multiplicative weights. A memory-light variant described in
Arora et al. (2012) typically requires much less memory, but there are no guarantees for this, and
moreover, for certain problem instances, its memory requirement is on the order of p².
Sub-sampling, dimensionality-reduction and sketching form another family of low-complexity and
low-memory techniques, see, e.g., Clarkson & Woodruff (2009); Nadler (2008); Halko et al. (2011).
These save on memory and computation by performing SVD on the resulting smaller matrix. The
results in this line of work provide worst-case guarantees over the pool of data, and typically require
a rapidly decaying spectrum, not required in our setting, to produce good bounds. More fundamentally, these approaches are not appropriate for data coming from a statistical model such as the
spiked covariance model. It is clear that subsampling approaches, for instance, simply correspond to
discarding most of the data, and for fundamental sample complexity reasons, cannot work. Sketching produces a similar effect: each column of the sketch is a random (+/−) sum of the data points.
If the data points are, e.g., independent Gaussian vectors, then so is each element of the sketch,
and thus this approach again runs against fundamental sample complexity constraints. Indeed, it is
straightforward to check that the guarantees presented in (Clarkson & Woodruff (2009); Halko et al.
(2011)) are not strong enough to guarantee recovery of the spike. This is not because the results are
weak; it is because they are geared towards worst-case bounds.
¹ This is similar to what is sometimes referred to as the single pass model.
Algorithms focused on sequential SVD (e.g., Brand (2002, 2006), Comon & Golub (1990),Li (2004)
and more recently Balzano et al. (2010); He et al. (2011)) seek to have the best subspace estimate
at every time (i.e., each time a new data sample arrives) but without performing full-blown SVD
at each step. While these algorithms indeed reduce both the computational and memory burden of
batch-PCA, there are no rigorous guarantees on the quality of the principal components or on the
statistical performance of these methods.
In a Bayesian mindset, some researchers have come up with expectation maximization approaches
Roweis (1998); Tipping & Bishop (1999), that can be used in an incremental fashion. The finite
sample behavior is not known.
Stochastic-approximation-based algorithms along the lines of Robbins & Monro (1951) are also
quite popular, due to their low computational and memory complexity, and excellent performance.
They go under a variety of names, including Incremental PCA (though the term Incremental has been
used in the online setting as well Herbster & Warmuth (2001)), Hebbian learning, and stochastic
power method Arora et al. (2012). The basic algorithms are some version of the following: upon
receiving data point xt at time t, update the estimate of the top k principal components via:
(t)
U (t+1) = Proj(U (t) + ?t xt x?
),
t U
(1)
where Proj(?) denotes the ?projection? that takes the SVD of the argument, and sets the top k
singular values to 1 and the rest to zero (see Arora et al. (2012) for discussion). While empirically
these algorithms perform well, to the best of our knowledge - and efforts - there is no associated
finite sample guarantee. The analytical challenge lies in the high variance at each step, which makes
direct analysis difficult.
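For concreteness, Eq. 1 can be run on the spiked model; the step-size schedule below is an arbitrary choice of ours, and, as the text notes, no finite sample guarantee is known for this iteration:

```python
import numpy as np

rng = np.random.default_rng(6)
p, k, sigma = 50, 3, 0.5
A = np.linalg.qr(rng.normal(size=(p, k)))[0]       # planted k-dimensional spike

def proj(M):
    """The Proj of Eq. 1: set the top-k singular values to 1 and the rest to zero."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

Uhat = proj(rng.normal(size=(p, k)))
for t in range(1, 5001):
    x = A @ rng.normal(size=k) + sigma * rng.normal(size=p)   # sample from the spiked model
    eta_t = 1.0 / (10.0 + t)                                  # ad hoc decaying step size
    Uhat = proj(Uhat + eta_t * np.outer(x, x @ Uhat))         # Eq. 1

overlap = np.linalg.norm(A.T @ Uhat) ** 2          # in [0, k]; near k means recovery
```

The iterate stays orthonormal by construction, and empirically the overlap with the spike typically grows, but the high per-step variance is exactly what makes a finite sample analysis of this update elusive.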
In summary, while much work has focused on memory-constrained PCA, there has as of yet been no
work that simultaneously provides sample complexity guarantees competitive with batch algorithms,
and also memory/storage complexity guarantees close to the minimal requirement of O(kp), the
memory required to store only the output. We present an algorithm that provably does both.
3 Problem Formulation and Notation
We consider the streaming model: at each time step t, we receive a point x_t ∈ R^p. Any point that is
not explicitly stored can never be revisited. Our goal is to compute the top k principal components
of the data: the k-dimensional subspace that offers the best squared-error estimate for the points. We
assume a probabilistic generative model, from which the data is sampled at each step t. Specifically,
xt = Azt + wt ,
(2)
where A ∈ R^{p×k} is a fixed matrix, z_t ∈ R^{k×1} is a multivariate normal random variable, i.e.,
z_t ∼ N(0_{k×1}, I_{k×k}),
and w_t ∈ R^{p×1} is the "noise" vector, also sampled from a multivariate normal distribution, i.e.,
w_t ∼ N(0_{p×1}, σ² I_{p×p}).
Furthermore, we assume that all 2n random vectors (z_t, w_t, 1 ≤ t ≤ n) are mutually independent.
In this regime, it is well-known that batch-PCA is asymptotically consistent (hence recovering A up
to unitary transformations) with the number of samples scaling as n = O(p) Vershynin (2010b). It is
interesting to note that in this high-dimensional regime, the signal-to-noise ratio quickly approaches
zero, as the signal, or "elongation" of the major axis, ‖Az‖₂, is O(1), while the noise magnitude,
‖w‖₂, scales as O(√p). The central goal of this paper is to provide finite sample guarantees for a
streaming algorithm that requires memory no more than O(kp) and matches the consistency results
of batch PCA in the sampling regime n = O(p) (possibly with additional log factors, or factors
depending on σ and k).
We denote matrices by capital letters (e.g., A) and vectors by lower-case bold-face letters (x). ‖x‖_q
denotes the ℓ_q norm of x; ‖x‖ denotes the ℓ₂ norm of x. ‖A‖ or ‖A‖₂ denotes the spectral norm of
A, while ‖A‖_F denotes the Frobenius norm of A. Without loss of generality (WLOG), we assume
that ‖A‖₂ = 1, where ‖A‖₂ = max_{‖x‖₂=1} ‖Ax‖₂ denotes the spectral norm of A. Finally, we
write ⟨a, b⟩ = aᵀb for the inner product between a and b. In proofs the constant C is used loosely and
its value may vary from line to line.
Algorithm 1 Block-Stochastic Power Method (left column) / Block-Stochastic Orthogonal Iteration (right column)
input {x_1, . . . , x_n}, Block size: B
1: q_0 ∼ N(0, I_{p×p}) (Initialization)            |  H^i ∼ N(0, I_{p×p}), 1 ≤ i ≤ k (Initialization)
2: q_0 ← q_0 / ‖q_0‖₂                              |  H = Q_0 R_0 (QR-decomposition)
3: for τ = 0, . . . , n/B − 1 do
4:    s_{τ+1} ← 0                                  |  S_{τ+1} ← 0
5:    for t = Bτ + 1, . . . , B(τ + 1) do
6:       s_{τ+1} ← s_{τ+1} + (1/B) ⟨q_τ, x_t⟩ x_t  |  S_{τ+1} ← S_{τ+1} + (1/B) x_t x_tᵀ Q_τ
7:    end for
8:    q_{τ+1} ← s_{τ+1} / ‖s_{τ+1}‖₂              |  S_{τ+1} = Q_{τ+1} R_{τ+1} (QR-decomposition)
9: end for
output
Algorithm and Guarantees
In this section, we present our proposed algorithm and its finite sample analysis. It is a block-wise
stochastic variant of the classical power-method. Stochastic versions of the power method already
exist in the literature; see Arora et al. (2012). The main impediment to the analysis of such stochastic
algorithms (as in (1)) is the large variance of each step, in the presence of noise. This motivates us
to consider a modified stochastic power method algorithm, that has a variance reduction step built
in. At a high level, our method updates only once in a "block" and within one block we average out
noise to reduce the variance.
Below, we first illustrate the main ideas of our method as well as our sample complexity proof for
the simpler rank-1 case. The rank-1 and rank-k algorithms are so similar, that we present them in
the same panel. We provide the rank-k analysis in Section 4.2. We note that, while our algorithm describes {x_1, . . . , x_n} as "input," we mean this in the streaming sense: the data are nowhere stored, and can never be revisited unless the algorithm explicitly stores them.
4.1
Rank-One Case
We first consider the rank-1 case, for which each sample x_t is generated using x_t = u z_t + w_t, where u ∈ R^p is the principal component that we wish to recover. Our algorithm is a block-wise method where all the n samples are divided in n/B blocks (for simplicity we assume that n/B is an integer). In the (τ + 1)-st block, we compute
s_{τ+1} = (1/B) Σ_{t=Bτ+1}^{B(τ+1)} x_t x_t⊤ q_τ.   (3)
Then, the iterate q_τ is updated using q_{τ+1} = s_{τ+1}/‖s_{τ+1}‖₂. Note that s_{τ+1} can be computed online, with O(p) operations per step. Furthermore, the storage requirement is also linear in p.
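The block update can be sketched in pure Python (our own illustrative sketch, not the paper's code; the planted spike, dimension, noise level, and block size below are arbitrary test values):

```python
import math, random

random.seed(1)

def block_power_method(stream, p, B):
    """Streaming rank-1 update of (3): O(p) memory, O(p) work per sample."""
    q = [random.gauss(0.0, 1.0) for _ in range(p)]
    nrm = math.sqrt(sum(v * v for v in q))
    q = [v / nrm for v in q]                           # random unit-norm start
    s, count = [0.0] * p, 0
    for x in stream:
        inner = sum(qi * xi for qi, xi in zip(q, x))   # <q_tau, x_t>
        for i in range(p):
            s[i] += inner * x[i] / B                   # s += (1/B) <q, x> x
        count += 1
        if count == B:                                 # block boundary: normalize
            nrm = math.sqrt(sum(v * v for v in s))
            q = [v / nrm for v in s]
            s, count = [0.0] * p, 0
    return q

# Synthetic rank-1 spiked stream: x_t = u z_t + w_t with u = e_1.
p, sigma, B, n = 50, 0.25, 1000, 6000
def sample():
    x = [random.gauss(0.0, sigma) for _ in range(p)]
    x[0] += random.gauss(0.0, 1.0)
    return x

q = block_power_method((sample() for _ in range(n)), p, B)
align = abs(q[0])                                      # |<q, u>| since u = e_1
print(align)
```

Each sample is touched once and discarded, and each update costs O(p) time and memory, as claimed above.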
4.1.1
Analysis
We now present the sample complexity analysis of our proposed method. Using O(σ⁴ p log(p)/ε²) samples, Algorithm 1 obtains a solution q_T of accuracy ε, i.e. ‖q_T − u‖₂ ≤ ε.
Theorem 1. Denote the data stream by x_1, . . . , x_n, where x_t ∈ R^p, ∀t, is generated by (2). Set the total number of iterations T = Ω( log(p/ε) / log((σ² + 0.75)/(σ² + 0.5)) ) and the block size B = Ω( (1 + 3(σ + σ²)√p)² log(T) / ε² ). Then, with probability 0.99, ‖q_T − u‖₂ ≤ ε, where q_T is the T-th iterate of Algorithm 1. That is, Algorithm 1 obtains an ε-accurate solution with number of samples (n) given by:
n = Ω̃( (1 + 3(σ + σ²)√p)² log(p/ε) / ( ε² log((σ² + 0.75)/(σ² + 0.5)) ) ).
Note that in the total sample complexity, we use the notation Ω̃(·) to suppress the extra log(T) factor for clarity of exposition, as T already appears in the expression linearly.
Proof. The proof decomposes the current iterate into the component of the current iterate, q_τ, in the direction of the true principal component (the spike) u, and the perpendicular component, showing that the former eventually dominates. Doing so hinges on three key components: (a) for large enough B, the empirical covariance matrix F_{τ+1} = (1/B) Σ_{t=Bτ+1}^{B(τ+1)} x_t x_t⊤ is close to the true covariance matrix M = uu⊤ + σ²I, i.e., ‖F_{τ+1} − M‖₂ is small. In the process, we obtain "tighter" bounds for |u⊤(F_{τ+1} − M)u| for fixed u; (b) with probability 0.99 (or any other constant probability), the initial point q_0 has a component of at least O(1/√p) magnitude along the true direction u; (c) after τ iterations, the error in estimation is at most O(γ^τ) where γ < 1 is a constant.
There are several results that we use repeatedly, which we collect here, and prove individually in the
full version of the paper (Mitliagkas et al. (2013)).
Lemmas 4, 5 and 6. Let B, T and the data stream {x_i} be as defined in the theorem. Then:
• (Lemma 4): With probability 1 − C/T, for C a universal constant, we have:
‖ (1/B) Σ_t x_t x_t⊤ − uu⊤ − σ²I ‖₂ ≤ ε.
• (Lemma 5): With probability 1 − C/T, for C a universal constant, we have:
u⊤ s_{τ+1} ≥ u⊤ q_τ (1 + σ²) (1 − ε/(4(1 + σ²))),
where s_τ = (1/B) Σ_{Bτ < t ≤ B(τ+1)} x_t x_t⊤ q_τ.
• (Lemma 6): Let q_0 be the initial guess for u, given by Steps 1 and 2 of Algorithm 1. Then, w.p. 0.99: |⟨q_0, u⟩| ≥ C₀/√p, where C₀ > 0 is a universal constant.
Step (a) is proved in Lemmas 4 and 5, while Lemma 6 provides the required result for the initial
vector q0 . Using these lemmas, we next complete the proof of the theorem. We note that both (a)
and (b) follow from well-known results; we provide them for completeness.
Let q_τ = √(1 − α_τ) u + √α_τ g_τ, 1 ≤ τ ≤ n/B, where g_τ is the component of q_τ that is perpendicular to u and √(1 − α_τ) is the magnitude of the component of q_τ along u. Note that g_τ may well change at each iteration; we only wish to show α_τ → 0.
Now, using Lemma 5, the following holds with probability at least 1 − C/T:
u⊤ s_{τ+1} ≥ √(1 − α_τ) (1 + σ²) (1 − ε/(4(1 + σ²))).
Next, we consider the component of s_{τ+1} that is perpendicular to u:
g_{τ+1}⊤ s_{τ+1} = g_{τ+1}⊤ ( (1/B) Σ_{t=Bτ+1}^{B(τ+1)} x_t x_t⊤ ) q_τ = g_{τ+1}⊤ (M + E_τ) q_τ,   (4)
where M = uu⊤ + σ²I and E_τ is the error matrix E_τ = (1/B) Σ_{t=Bτ+1}^{B(τ+1)} x_t x_t⊤ − M. Using Lemma 4, ‖E_τ‖₂ ≤ ε (w.p. ≥ 1 − C/T). Hence, w.p. ≥ 1 − C/T:
g_{τ+1}⊤ s_{τ+1} ≤ σ² g_{τ+1}⊤ q_τ + ‖g_{τ+1}‖₂ ‖E_τ‖₂ ‖q_τ‖₂ ≤ σ² √α_τ + ε.   (5)
Now, since q_{τ+1} = s_{τ+1}/‖s_{τ+1}‖₂,
α_{τ+1} = (g_{τ+1}⊤ q_{τ+1})² = (g_{τ+1}⊤ s_{τ+1})² / ( (u⊤ s_{τ+1})² + (g_{τ+1}⊤ s_{τ+1})² )
  ≤ (g_{τ+1}⊤ s_{τ+1})² / ( (1 − α_τ)(1 + σ² − ε/4)² + (g_{τ+1}⊤ s_{τ+1})² )    (i)
  ≤ (σ² √α_τ + ε)² / ( (1 − α_τ)(1 + σ² − ε/4)² + (σ² √α_τ + ε)² ),   (6)    (ii)
where (i) follows from (4) and (ii) follows from (5), along with the fact that x/(c + x) is an increasing function in x for c, x ≥ 0. Assuming √α_τ ≥ 2ε, using (6), and bounding the failure probability with a union bound, we get (w.p. ≥ 1 − τC/T)
α_{τ+1} ≤ α_τ (σ² + 1/2)² / ( (1 − α_τ)(σ² + 3/4)² + α_τ (σ² + 1/2)² )
        ≤ γ^{2τ+2} α_0 / ( 1 − (1 − γ^{2τ+2}) α_0 )    (i)
        ≤ C₁ γ^{2τ} p,   (7)    (ii)
where γ = (σ² + 1/2)/(σ² + 3/4) and C₁ > 0 is a global constant. Inequality (ii) follows from Lemma 6; to prove (i), we need the following lemma. It shows that in the recursion given by (7), α_τ decreases at a fast rate. The rate of decrease in α_τ might be initially (for small τ) sub-linear, but for large enough τ the rate is linear. We defer the proof to the full version of the paper (Mitliagkas et al. (2013)).
Lemma 2. If for any τ ≥ 0 and 0 < γ < 1 we have α_{τ+1} ≤ γ² α_τ / (1 − α_τ + γ² α_τ), then
α_{τ+1} ≤ γ^{2τ+2} α_0 / ( 1 − (1 − γ^{2τ+2}) α_0 ).
Hence, using the above equation, after T = O( log(p/ε) / log(1/γ) ) updates, with probability at least 1 − C, √α_T ≤ 2ε. The result now follows by noting that ‖u − q_T‖₂ ≤ √(2 α_T).
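The recursion in Lemma 2 is easy to check numerically: when the bound holds with equality, the substitution β_τ = α_τ/(1 − α_τ) turns it into exact geometric decay β_{τ+1} = γ²β_τ, which unrolls to the stated closed form (the check below is our own illustration, not from the paper):

```python
gamma, alpha0, T = 0.9, 0.99, 200

# Iterate the worst-case (equality) version of the Lemma 2 recursion.
alpha = alpha0
for _ in range(T):
    alpha = gamma**2 * alpha / (1.0 - alpha + gamma**2 * alpha)

# Closed form from Lemma 2, which should match exactly (up to float error).
closed = gamma**(2 * T) * alpha0 / (1.0 - (1.0 - gamma**(2 * T)) * alpha0)
print(alpha, closed)
```

Even starting from α₀ = 0.99, i.e. an initial iterate almost entirely orthogonal to u, the error collapses geometrically, which is the "fast rate" the text refers to.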
Remark: In Theorem 1, the probability of recovery is a constant and does not decay with p. One can correct this by either paying a price of O(log p) in storage, or in sample complexity: for the former, we can run O(log p) instances of Algorithm 1 in parallel; alternatively, we can run Algorithm 1 O(log p) times on fresh data each time, using the next block of data to evaluate the old solutions, always keeping the best one. Either approach guarantees a success probability of at least 1 − 1/p^{O(1)}.
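The repetition count behind this boosting argument is elementary: with m independent runs, each failing with probability at most 0.01, all runs fail with probability 0.01^m, so the smallest m with 0.01^m ≤ 1/p suffices (illustrative calculation of ours, not from the paper):

```python
def runs_needed(p, fail=0.01):
    """Smallest m with fail**m <= 1/p: m independent runs, keep the best one."""
    m, prob = 0, 1.0
    while prob > 1.0 / p:
        m += 1
        prob *= fail
    return m

p = 20_000
m = runs_needed(p)
print(m)          # a handful of repetitions already drive failure below 1/p
```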
4.2
General Rank-k Case
In this section, we consider the general rank-k PCA problem where each sample is assumed to be generated using the model of equation (2), where A ∈ R^{p×k} represents the k principal components that need to be recovered. Let A = UΛV⊤ be the SVD of A, where U ∈ R^{p×k} and Λ, V ∈ R^{k×k}. The matrices U and V are orthogonal, i.e., U⊤U = I and V⊤V = I, and Λ is a diagonal matrix with diagonal elements λ₁ ≥ λ₂ ≥ · · · ≥ λ_k. The goal is to recover the space spanned by A, i.e., span(U). Without loss of generality, we can assume that ‖A‖₂ = λ₁ = 1.
Similar to the rank-1 problem, our algorithm for the rank-k problem can be viewed as a streaming
variant of the classical orthogonal iteration used for SVD. But unlike the rank-1 case, we require
a more careful analysis as we need to bound spectral norms of various quantities in intermediate
steps and simple, crude analysis can lead to significantly worse bounds. Interestingly, the analysis
is entirely different from the standard analysis of the orthogonal iteration as there, the empirical
estimate of the covariance matrix is fixed while in our case it varies with each block.
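A minimal sketch of the rank-k block variant, using classical Gram-Schmidt for the QR step (our own illustration; all parameters below are arbitrary test values, not from the paper):

```python
import math, random

random.seed(2)

def gram_schmidt(cols):
    """Orthonormalize a list of p-vectors: the Q factor of a QR step."""
    out = []
    for v in cols:
        w = v[:]
        for u in out:
            c = sum(ui * wi for ui, wi in zip(u, w))
            w = [wi - c * ui for wi, ui in zip(w, u)]
        nrm = math.sqrt(sum(x * x for x in w))
        out.append([x / nrm for x in w])
    return out

def block_orthogonal_iteration(stream, p, k, B):
    Q = gram_schmidt([[random.gauss(0.0, 1.0) for _ in range(p)]
                      for _ in range(k)])
    S, count = [[0.0] * p for _ in range(k)], 0
    for x in stream:
        xq = [sum(qi * xi for qi, xi in zip(q, x)) for q in Q]   # x^T Q
        for j in range(k):
            for i in range(p):
                S[j][i] += xq[j] * x[i] / B                      # S += (1/B) x x^T Q
        count += 1
        if count == B:                                           # QR step per block
            Q = gram_schmidt(S)
            S, count = [[0.0] * p for _ in range(k)], 0
    return Q

# Spiked stream with span(U) = span(e1, e2) and equal spike strengths.
p, k, sigma, B, n = 30, 2, 0.25, 1000, 6000
def sample():
    x = [random.gauss(0.0, sigma) for _ in range(p)]
    x[0] += random.gauss(0.0, 1.0)
    x[1] += random.gauss(0.0, 1.0)
    return x

Q = block_orthogonal_iteration((sample() for _ in range(n)), p, k, B)
# Squared residual of e_j after projecting onto span(Q), for j = 0, 1.
res = max(1.0 - sum(Q[l][j] ** 2 for l in range(k)) for j in range(2))
print(res)
```

Note that, exactly as the text says, each block's empirical second-moment action x x⊤Q varies from block to block; the QR step only re-orthonormalizes the running estimate.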
For the general rank-k problem, we use the largest-principal-angle-based distance function between any two given subspaces:
dist(span(U), span(V)) = dist(U, V) = ‖U_⊥⊤ V‖₂ = ‖V_⊥⊤ U‖₂,
where U_⊥ and V_⊥ represent an orthogonal basis of the perpendicular subspace to span(U) and span(V), respectively. For the spiked covariance model, it is straightforward to see that this is equivalent to the usual PCA figure-of-merit, the expressed variance.
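In a toy example the distance reduces to a direct computation (our own illustration; here span(U) = span(e1, e2) in R³, so U_⊥ is the single column e3):

```python
import math

# dist(U, V) = || U_perp^T V ||_2; with U_perp = e3, U_perp^T V is a 1 x k row,
# and the spectral norm of a row vector is its Euclidean norm.
def dist_from_Uperp(V_cols):
    row = [v[2] for v in V_cols]                    # e3^T v for each column v of V
    return math.sqrt(sum(x * x for x in row))

V_same = [[1, 0, 0], [0, 1, 0]]                     # same subspace as span(U)
V_tilt = [[1, 0, 0], [0, 0, 1]]                     # contains the direction e3
print(dist_from_Uperp(V_same), dist_from_Uperp(V_tilt))
```

The distance is 0 when the subspaces coincide and 1 when span(V) contains a direction fully outside span(U), matching the largest-principal-angle interpretation.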
Theorem 3. Consider a data stream, where x_t ∈ R^p for every t is generated by (2), and the SVD of A ∈ R^{p×k} is given by A = UΛV⊤. Let, wlog, λ₁ = 1 ≥ λ₂ ≥ · · · ≥ λ_k > 0. Let
T = Ω( log(p/(kε)) / log( (σ² + 0.75λ_k²) / (σ² + 0.5λ_k²) ) ),
B = Ω̃( (1 + σ)² (k + σ²)(1 + σ²) k√p log(T) / ( λ_k⁴ ε² ) ).
Then, after T B-size block updates, w.p. 0.99, dist(U, Q_T) ≤ ε. Hence, the sufficient number of samples for ε-accurate recovery of all the top-k principal components is:
n = Ω̃( (1 + σ)² (k + σ²)(1 + σ²) k√p log(p/(kε)) / ( λ_k⁴ ε² log( (σ² + 0.75λ_k²) / (σ² + 0.5λ_k²) ) ) ).
Again, we use Ω̃(·) to suppress the extra log(T) factor.
The key part of the proof requires the following additional lemmas that bound the energy of the
current iterate along the desired subspace and its perpendicular space (Lemmas 8 and 9), and Lemma
10, which controls the quality of the initialization.
Lemmas 8, 9 and 10. Let the data stream, A, B, and T be as defined in Theorem 3, let σ² be the variance of the noise, let F_{τ+1} = (1/B) Σ_{Bτ < t ≤ B(τ+1)} x_t x_t⊤, and let Q_τ be the τ-th iterate of Algorithm 1. Then:
• (Lemma 8): ∀ v ∈ R^k with ‖v‖₂ = 1, w.p. 1 − 5C/T we have:
‖U⊤ F_{τ+1} Q_τ v‖₂ ≥ (λ_k² + σ² − ελ_k²/4) √(1 − ‖U_⊥⊤ Q_τ‖₂²).
• (Lemma 9): With probability at least 1 − 4C/T,
‖U_⊥⊤ F_{τ+1} Q_τ‖₂ ≤ σ² ‖U_⊥⊤ Q_τ‖₂ + λ_k² ε/2.
• (Lemma 10): Let Q_0 ∈ R^{p×k} be sampled uniformly at random as in Algorithm 1. Then, w.p. at least 0.99: σ_k(U⊤ Q_0) ≥ C √(1/(kp)).
We provide the proof of the lemmas and theorem in the full version (Mitliagkas et al. (2013)).
4.3
Perturbation-tolerant Subspace Recovery
While our results thus far assume A has rank exactly k, and k is known a priori, here we show that
both these can be relaxed; hence our results hold in a quite broad setting.
Let x_t = A z_t + w_t be the t-th step sample, with A = UΛV⊤ ∈ R^{p×r} and U ∈ R^{p×r}, where r ≤ k is the unknown true rank of A. We run Algorithm 1 with rank k to recover a subspace Q_T that is contained in U. The largest-principal-angle-based distance from the previous section can be used directly in our more general setting. That is, dist(U, Q_T) = ‖U_⊥⊤ Q_T‖₂ measures the component of Q_T "outside" the subspace U.
Now, our analysis can be easily modified to handle this case. Naturally, now the number of samples
we require increases according to r. In particular, if
n = Ω̃( (1 + σ)² (r + σ²)(1 + σ²) r√p log(p/(rε)) / ( λ_r⁴ ε² log( (σ² + 0.75λ_r²) / (σ² + 0.5λ_r²) ) ) ),
then dist(U, Q_T) ≤ ε. Furthermore, if we assume r ≤ C′k (for an appropriate constant C′ > 0), then the initialization step provides us a better distance, i.e., dist(U, Q_0) ≤ C′/√p rather than the dist(U, Q_0) ≤ C′/√(kp) bound obtained if r = k. This initialization step enables us to give a tighter sample complexity, as the r√p in the numerator above can be replaced by √(rp).
5
Experiments
In this section, we show that, as predicted by our theoretical results, our algorithm performs close to the optimal batch SVD. We provide the results from simulating the spiked covariance model, and demonstrate the phase transition in the probability of successful recovery that is inherent to the statistical problem. Then we stray from the analyzed model and performance metric and test our algorithm on real-world (and some very big) datasets, using the metric of explained variance.
In the experiments for Figures 1 (a)-(b), we draw data from the generative model of (2). Our results are averaged over at least 200 independent runs. Algorithm 1 uses the block size prescribed in Theorem 3, with the empirically tuned constant of 0.2. As expected, our algorithm exhibits linear scaling with respect to the ambient dimension p, the same as the batch SVD. The missing point on batch SVD's curve (Figure 1(a)) corresponds to p > 2.4 × 10⁴. Performing SVD on a dense p × p matrix either fails or takes a very long time on most modern desktop computers; in contrast, our streaming algorithm easily runs on this size problem. The phase transition plot in Figure 1(b)
[Figure 1 (plots omitted): four panels. (a) "Samples to retrieve spike (σ = 0.5, ε = 0.05)": n (samples) versus p (dimension) on log-log axes, comparing batch SVD against our streaming algorithm. (b) "Probability of success (n = 1000, ε = 0.05)": ambient dimension p versus noise standard deviation σ. (c) "NIPS bag-of-words dataset": explained variance versus k (number of components), comparing the optimal batch solution, our streaming algorithm, and the optimal using B samples. (d) "Our algorithm on large bag-of-words datasets": explained variance versus k for NY Times (300K samples, p = 103K) and PubMed (8.2M samples, p = 140K).]
Figure 1: (a) Number of samples required for recovery of a single component (k = 1) from the spiked covariance model, with noise standard deviation σ = 0.5 and desired accuracy ε = 0.05. (b) Fraction of trials in which Algorithm 1 successfully recovers the principal component (k = 1) in the same model, with ε = 0.05 and n = 1000 samples. (c) Explained variance by Algorithm 1 compared to the optimal batch SVD, on the NIPS bag-of-words dataset. (d) Explained variance by Algorithm 1 on the NY Times and PubMed datasets.
shows the empirical sample complexity on a large class of problems and corroborates the scaling
with respect to the noise variance we obtain theoretically.
Figures 1 (c)-(d) complement our complete treatment of the spiked covariance model with some out-of-model experiments. We used three bag-of-words datasets from Porteous et al. (2008). We evaluated our algorithm's performance with respect to the fraction-of-explained-variance metric: given the p × k matrix V output by the algorithm, and all the provided samples in matrix X, the fraction of explained variance is defined as Tr(V⊤XX⊤V)/Tr(XX⊤). To be consistent with our theory, for a dataset of n samples of dimension p, we set the number of blocks to T = ⌈log(p)⌉ and the size of blocks to B = ⌊n/T⌋ in our algorithm. The NIPS dataset is the smallest, with 1500 documents and 12K words, and allowed us to compare our algorithm with the optimal, batch SVD. We had the two algorithms work on the document space (p = 1500) and report the results in Figure 1(c). The dashed line represents the optimal using B samples. The figure is consistent with our theoretical result: our algorithm performs as well as the batch, with an added log(p) factor in the sample complexity.
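The metric itself is a two-line computation (toy numbers of ours, unrelated to the datasets above):

```python
def explained_variance_fraction(X_cols, V_cols):
    """Tr(V^T X X^T V) / Tr(X X^T), for V with orthonormal columns."""
    total = sum(x * x for col in X_cols for x in col)       # Tr(X X^T) = ||X||_F^2
    captured = 0.0
    for v in V_cols:
        for col in X_cols:
            c = sum(vi * xi for vi, xi in zip(v, col))      # v^T x for one sample
            captured += c * c                               # accumulates ||V^T X||_F^2
    return captured / total

X = [[3.0, 0.0], [0.0, 1.0]]                 # two samples (columns) in R^2
f = explained_variance_fraction(X, [[1.0, 0.0]])            # V = e1
print(f)                                      # 9 / 10
```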
Finally, in Figure 1(d), we show our algorithm's ability to tackle very large problems. Both the NY Times and PubMed datasets are of prohibitive size for traditional batch methods (the latter includes 8.2 million documents on a vocabulary of 141 thousand words), so we just report the performance of Algorithm 1. It was able to extract the top 7 components for each dataset in a few hours on a desktop computer. A second pass was made on the data to evaluate the results, and we saw 7-10 percent of the variance explained on spaces with p > 10⁴.
References
Arora, R., Cotter, A., Livescu, K., and Srebro, N. Stochastic optimization for PCA and PLS. In 50th Allerton Conference on Communication, Control, and Computing, Monticello, IL, 2012.
Balzano, L., Nowak, R., and Recht, B. Online identification and tracking of subspaces from highly incomplete information. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pp. 704-711, 2010.
Brand, M. Fast low-rank modifications of the thin singular value decomposition. Linear Algebra and its Applications, 415(1):20-30, 2006.
Brand, Matthew. Incremental singular value decomposition of uncertain data with missing values. Computer Vision - ECCV 2002, pp. 707-720, 2002.
Clarkson, Kenneth L. and Woodruff, David P. Numerical linear algebra in the streaming model. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pp. 205-214, 2009.
Comon, P. and Golub, G. H. Tracking a few extreme singular values and vectors in signal processing. Proceedings of the IEEE, 78(8):1327-1343, 1990.
Golub, Gene H. and Van Loan, Charles F. Matrix Computations, volume 3. JHU Press, 2012.
Halko, Nathan, Martinsson, Per-Gunnar, and Tropp, Joel A. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, 2011.
He, J., Balzano, L., and Lui, J. Online robust subspace tracking from partial information. arXiv preprint arXiv:1109.3827, 2011.
Herbster, Mark and Warmuth, Manfred K. Tracking the best linear predictor. The Journal of Machine Learning Research, 1:281-309, 2001.
Johnstone, Iain M. On the distribution of the largest eigenvalue in principal components analysis. Ann. Statist., 29(2):295-327, 2001.
Li, Y. On incremental and robust subspace learning. Pattern Recognition, 37(7):1509-1518, 2004.
Mitliagkas, Ioannis, Caramanis, Constantine, and Jain, Prateek. Memory limited, streaming PCA. arXiv preprint arXiv:1307.0032, 2013.
Nadler, Boaz. Finite sample approximation results for principal component analysis: a matrix perturbation approach. The Annals of Statistics, pp. 2791-2817, 2008.
Porteous, Ian, Newman, David, Ihler, Alexander, Asuncion, Arthur, Smyth, Padhraic, and Welling, Max. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 569-577, 2008.
Robbins, Herbert and Monro, Sutton. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400-407, 1951.
Roweis, Sam. EM algorithms for PCA and SPCA. Advances in Neural Information Processing Systems, pp. 626-632, 1998.
Rudelson, Mark and Vershynin, Roman. Smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics, 62(12):1707-1739, 2009.
Tipping, Michael E. and Bishop, Christopher M. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611-622, 1999.
Vershynin, R. How close is the sample covariance matrix to the actual covariance matrix? Journal of Theoretical Probability, pp. 1-32, 2010a.
Vershynin, Roman. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010b.
Warmuth, Manfred K. and Kuzmin, Dima. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9:2287-2320, 2008.
Near-Optimal Entrywise Sampling for Data Matrices
Dimitris Achlioptas
UC Santa Cruz
[email protected]
Zohar Karnin
Yahoo Labs
[email protected]
Edo Liberty
Yahoo Labs
[email protected]
Abstract
We consider the problem of selecting non-zero entries of a matrix A in order to produce a sparse sketch of it, B, that minimizes ‖A − B‖₂. For large m × n matrices, such that n ≫ m (for example, representing n observations over m attributes) we give sampling distributions that exhibit four important properties. First, they have closed forms computable from minimal information regarding A. Second, they allow sketching of matrices whose non-zeros are presented to the algorithm in arbitrary order as a stream, with O(1) computation per non-zero. Third, the resulting sketch matrices are not only sparse, but their non-zero entries are highly compressible. Lastly, and most importantly, under mild assumptions, our distributions are provably competitive with the optimal offline distribution. Note that the probabilities in the optimal offline distribution may be complex functions of all the entries in the matrix. Therefore, regardless of computational complexity, the optimal distribution might be impossible to compute in the streaming model.
1
Introduction
Given an m × n matrix A, it is often desirable to find a sparser matrix B that is a good proxy for A. Besides being a natural mathematical question, such sparsification has become a ubiquitous preprocessing step in a number of data analysis operations including approximate eigenvector computations [AM01, AHK06, AM07], semi-definite programming [AHK05, d'A08], and matrix completion problems [CR09, CT10].
A fruitful measure for the approximation of A by B is the spectral norm of A − B, where for any matrix C its spectral norm is defined as ‖C‖₂ = max_{‖x‖₂=1} ‖Cx‖₂. Randomization has been central in the context of matrix approximations and the overall problem is typically cast as follows: given a matrix A and a budget s, devise a distribution over matrices B such that the (expected) number of non-zero entries in B is at most s and ‖A − B‖₂ is as small as possible.
Our work is motivated by big data matrices that are generated by measurement processes. Each of the n matrix columns corresponds to an observation of m attributes. Thus, we expect n ≫ m.
Also we expect the total number of non-zero entries in A to exceed available memory. We assume
that the original data matrix A is accessed in the streaming model where we know only very basic
features of A a priori and the actual non-zero entries are presented to us one at a time in an arbitrary
order. The streaming model is especially important for tasks like recommendation engines where
user-item preferences become available one by one in an arbitrary order. But, it is also important in
cases when A exists in durable storage and random access of its entries is prohibitively expensive.
We establish that for such matrices the following approach gives provably near-optimal sparsification. Assign to each element A_ij of the matrix a weight that depends only on the elements in its row, q_ij = |A_ij|/‖A_i‖₁. Take ρ to be an (appropriate) distribution over the rows. Sample s i.i.d. locations from A using the distribution p_ij = ρ_i q_ij. Return B, which is the mean of s matrices, each containing a single non-zero entry A_ij/p_ij in the corresponding selected location (i, j).
containing a single non zero entry Aij pij in the corresponding selected location i, j .
As we will see, this simple form of the probabilities pij falls out naturally from generic optimization
considerations. The fact that each entry is kept with probability proportional to its magnitude, be1
sides being interesting on its own right, has a remarkably practical implication. Every non-zero in the
i-th row of B will take the form kij A i 1 s?i where kij is the number of times location i, j of
A was selected. Note that since we sample with replacement kij may be more than 1 but, typically,
kij
0, 1 . The result is a matrix B which is representable in O m log n
s log n s bits.
This is because there is no reason to store floating point matrix entry values. We use O m log n
bits to store1 all values A i 1 s?i and O s log n s bits to store the non zero index offsets. Note
that
kij
s and that some of the offsets may be zero. In a simple experiment we measured
the average number of bits per sample resulting from this approach (total size of the sketch divided
by the number of samples s). The results were between 5 and 22 bits per sample depending on the
matrix and s. It is important to note that the number of bits per sample was usually less than even
log2 n
log2 m , the minimal number of bits required to represent a pair i, j . Our experiments
show a reduction of disc space by a factor of between 2 and 5 relative to the compressed size of the
file representing the sample matrix B in the standard row-column-value list format.
Another insight of our work is that the distributions we propose are combinations of two L1-based distributions, and which distribution is more dominant depends on the sampling budget. When the number of samples s is small, ρ_i is nearly linear in ‖A_i‖_1, resulting in p_ij ∝ |A_ij|. However, as the number of samples grows, ρ_i tends towards ‖A_i‖_1², resulting in p_ij ∝ |A_ij|·‖A_i‖_1, a distribution we refer to as Row-L1 sampling. The dependence of the preferred distribution on the sample budget is also borne out in experiments, with sampling based on appropriately mixed distributions being consistently best. This highlights that the need to adapt the sampling distribution to the sample budget is a genuine phenomenon.
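To make the two limiting regimes concrete, the following pure-Python sketch (the toy matrix and function names are ours, not from the paper) computes the plain L1 distribution p_ij ∝ |A_ij| and the Row-L1 distribution p_ij ∝ |A_ij|·‖A_i‖_1:

```python
# Toy illustration of the two limiting distributions that the proposed
# (Bernstein) distribution interpolates between.

def l1_distribution(A):
    """p_ij proportional to |A_ij| (the small-budget limit)."""
    total = sum(abs(v) for row in A for v in row)
    return [[abs(v) / total for v in row] for row in A]

def row_l1_distribution(A):
    """p_ij proportional to |A_ij| * ||A_i||_1 (the large-budget limit)."""
    row_norms = [sum(abs(v) for v in row) for row in A]
    total = sum(z * z for z in row_norms)  # sum_ij |A_ij| * ||A_i||_1
    return [[abs(v) * z / total for v in row]
            for row, z in zip(A, row_norms)]

A = [[4.0, 0.0, 2.0],
     [1.0, 1.0, 0.0]]
p_l1 = l1_distribution(A)
p_row = row_l1_distribution(A)
```

Note how Row-L1 shifts probability mass towards entries that sit in rows with large L1 norm, whereas plain L1 weights every entry only by its own magnitude.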
2  Measure of Error and Related Work
We measure the difference between A and B with respect to the L2 (spectral) norm as it is highly revealing in the context of data analysis. Let us define a linear trend in the data of A as any tendency of the rows to align with a particular unit vector x. To examine the presence of such a trend, we need only multiply A with x: the i-th coordinate of Ax is the projection of the i-th row of A onto x. Thus, ‖Ax‖_2 measures the strength of linear trend x in A, and ‖A‖_2 measures the strongest linear trend in A. Thus, minimizing ‖A − B‖_2 minimizes the strength of the strongest linear trend of A not captured by B. In contrast, measuring the difference using an entry-wise norm, e.g., the Frobenius norm, can be completely uninformative. This is because the best strategy would be to always pick the largest s matrix entries from A, a strategy that can easily be "fooled". As a stark example, when the matrix entries are A_ij ∈ {0, 1}, the quality of approximation of A by B is completely independent of which elements of A we keep. This is clearly bad; as long as A contains even a modicum of structure, certain approximations will be far better than others.
By using the spectral norm to measure error we get a natural and sophisticated target: to minimize ‖A − B‖_2 is to make E = A − B a near-rotation, having only small variations in the amount by which it stretches different vectors. This idea that the error matrix E should be isotropic, thus packing as much Frobenius norm as possible for its L2 norm, motivated the first work on element-wise matrix sampling by Achlioptas and McSherry [AM07]. Concretely, to minimize ‖E‖_2 it is natural to aim for a matrix E that is both zero-mean, i.e., an unbiased estimator of A, and whose entries are formed by sampling the entries of A (and, thus, of E) independently. In the work of [AM07], E is a matrix of i.i.d. zero-mean random variables. The study of the spectral characteristics of such matrices goes back all the way to Wigner's famous semi-circle law [Wig58]. Specifically, to bound ‖E‖_2 in [AM07] a bound due to Alon, Krivelevich and Vu [AKV02] was used, a refinement of a bound by Juhász [Juh81] and Füredi and Komlós [FK81]. The most salient feature of that bound is that it depends on the maximum entry-wise variance σ² of A − B, and therefore the distribution optimizing the bound is the one in which the variance of all entries in E is the same. In turn, this means keeping each entry of A independently with probability p_ij ∝ A_ij² (up to a small wrinkle discussed below).
Several papers have since analyzed L2-sampling and variants [NDT09, NDT10, DZ11, GT09, AM07]. An inherent difficulty of L2-sampling based strategies is the need for special handling of small entries. This is because when each item A_ij is kept with probability p_ij ∝ A_ij², the resulting entry B_ij in the sample matrix has magnitude A_ij/p_ij ∝ A_ij⁻¹. Thus, if an extremely small element A_ij is accidentally picked, the largest entry of the sample matrix "blows up". In [AM07] this was addressed by sampling small entries with probability proportional to |A_ij| rather than A_ij². In the work of Gittens and Tropp [GT09], small entries are not handled separately and the bound derived depends on the ratio between the largest and the smallest non-zero magnitude.

¹ It is harmless to assume any value in the matrix is kept using O(log n) bits of precision. Otherwise, truncating the trailing bits can be shown to be negligible.
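A tiny numeric sketch of the blow-up (the magnitudes below are hypothetical toy values, not from the paper): under pure L2 sampling each picked entry is rescaled by 1/p_ij, so the rescaled value grows as the entry shrinks.

```python
# Under pure L2 sampling, p_ij ∝ A_ij^2, so a picked entry is rescaled to
# A_ij / p_ij ∝ 1 / A_ij: the smaller the entry, the larger its value in
# the sketch.

entries = [10.0, 1.0, 0.01]                 # toy entry magnitudes
sq_total = sum(a * a for a in entries)
p = [a * a / sq_total for a in entries]     # L2 sampling probabilities
sampled_value = [a / pi for a, pi in zip(entries, p)]
# sampled_value[i] == sq_total / entries[i], so the 0.01 entry, if picked,
# contributes a value 1000x larger than the 10.0 entry does.
```

This is exactly why [AM07] switches to L1-proportional probabilities for small entries and why [DZ11] truncates them.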
Random matrix theory has witnessed dramatic progress in the last few years and [AW02, RV07,
Tro12a, Rec11] provide a good overview of the results. This progress motivated Drineas and Zouzias
in [DZ11] to revisit L2-sampling using concentration results for sums of random matrices [Rec11],
as we do here. This is somewhat different from the original setting of [AM07] since now B is not
a random matrix with independent entries, but a sum of many single-element independent matrices,
each such matrix resulting by choosing a location of A with replacement. Their work improved
upon all previous L2-based sampling results and also upon the L1-sampling result of Arora, Hazan
and Kale [AHK06], discussed below, while admitting a remarkably compact proof. The issue of
small entries was handled in [DZ11] by deterministically discarding all sufficiently small entries, a
strategy that gives a strong mathematical guarantee (but see the discussion regarding deterministic
truncation in the experimental section).
A completely different tack at the problem, avoiding random matrix theory altogether, was taken by Arora et al. [AHK06]. Their approximation keeps the largest entries in A deterministically (specifically all A_ij with |A_ij| ≥ ε/√n, where the threshold ε needs to be known a priori) and randomly rounds the remaining smaller entries to sign(A_ij)·ε/√n or 0. They exploit the simple fact ‖A − B‖ = sup_{‖x‖=1, ‖y‖=1} xᵀ(A − B)y by noting that, as a scalar quantity, its concentration around its expectation can be established by standard Bernstein-Bennett type inequalities. A union bound then allows them to prove that with high probability, xᵀ(A − B)y ≤ ε for every x and y. The result of [AHK06] admits a relatively simple proof. However, it also requires a truncation that depends on the desired approximation ε. Rather interestingly, this time the truncation amounts to keeping every entry larger than some threshold.
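The rounding step described above is easy to check for unbiasedness by direct enumeration. The sketch below follows our reading of the scheme (the placement of the threshold at ε/√n is our reconstruction of the garbled original); for an entry below the threshold t, rounding to sign(a)·t with probability |a|/t and to 0 otherwise has expectation exactly a:

```python
import math

def round_entry_distribution(a, t):
    """Distribution of one entry under threshold-t rounding: entries at
    least t in magnitude are kept as-is; smaller ones are rounded to
    sign(a)*t with probability |a|/t and to 0 otherwise."""
    if abs(a) >= t:
        return [(a, 1.0)]
    sign = 1.0 if a >= 0 else -1.0
    return [(sign * t, abs(a) / t), (0.0, 1.0 - abs(a) / t)]

def expectation(dist):
    return sum(v * p for v, p in dist)

eps, n = 0.5, 100
t = eps / math.sqrt(n)   # threshold eps/sqrt(n), per our reading of [AHK06]
checks = [expectation(round_entry_distribution(a, t)) - a
          for a in (0.3, 0.01, -0.02, 0.0)]
```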
3  Our Approach
Following the discussion in Section 2 and in line with previous works, we: (i) measure the quality of B by ‖A − B‖_2, (ii) sample the entries of A independently, and (iii) require B to be an unbiased estimator of A. We are therefore left with the task of determining a good probability distribution p_ij from which to sample the entries of A in order to get B. As discussed in Section 2, prior art makes heavy use of beautiful results in the theory of random matrices. Specifically, each work proposes a specific sampling distribution and then uses results from random matrix theory to demonstrate that it has good properties. In this work we reverse the approach, taking it to its logical conclusion. We start from a cornerstone result in random matrix theory and work backwards to reverse-engineer near-optimal distributions with respect to the notion of probabilistic deviations captured by the inequality. The inequality we use is the Matrix-Bernstein inequality for sums of independent random matrices (see e.g., [Tro12b], Theorem 1.6). In the following, we often denote ‖A‖_2 as ‖A‖ to lighten notation.
Theorem 3.1 (Matrix Bernstein inequality). Consider a finite sequence {X_i} of i.i.d. random m × n matrices, where E[X_1] = 0 and ‖X_1‖ ≤ R. Let σ² = max{ ‖E[X_1·X_1ᵀ]‖, ‖E[X_1ᵀ·X_1]‖ }. For some fixed s ≥ 1, let X = (X_1 + … + X_s)/s. For all ε ≥ 0,

    Pr[ ‖X‖ ≥ ε ] ≤ (m + n)·exp( −s·ε² / (σ² + R·ε/3) ).
To get a feeling for our approach, fix any probability distribution p over the non-zero elements of A. Let B be a random m × n matrix with exactly one non-zero element, formed by sampling an element A_ij of A according to p and letting B_ij = A_ij/p_ij. Observe that for every (i, j), regardless of the choice of p, we have E[B_ij] = A_ij, and thus B is always an unbiased estimator of A. Clearly, the same is true if we repeat this s times, taking i.i.d. samples B_1, …, B_s, and let our matrix B be their average. With this approach in mind, the goal is now to find a distribution p minimizing E = ‖A − (B_1 + … + B_s)/s‖. Writing s·E = ‖(A − B_1) + … + (A − B_s)‖ we see that s·E is the operator norm of a sum of i.i.d. zero-mean random matrices X_i = A − B_i, i.e., exactly the setting of Theorem 3.1. The relevant parameters are

    σ² = max{ ‖E[(A − B_1)(A − B_1)ᵀ]‖, ‖E[(A − B_1)ᵀ(A − B_1)]‖ },    (1)
    R = max ‖A − B_1‖ over all possible realizations of B_1.    (2)
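The unbiasedness claim can be verified exactly, with no randomness, by enumerating the support of the one-sample sketch (the toy matrix and distribution below are ours):

```python
# Exact check by enumeration that a single sample B with B_ij = A_ij / p_ij
# is an unbiased estimator of A, for an arbitrary distribution p supported
# on the non-zeros of A.

def expected_sketch(A, p):
    m, n = len(A), len(A[0])
    E = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if p[i][j] > 0:
                # with probability p_ij the sketch equals (A_ij/p_ij) e_i e_j^T
                E[i][j] += p[i][j] * (A[i][j] / p[i][j])
    return E

A = [[3.0, 0.0], [1.0, 2.0]]
p = [[0.5, 0.0], [0.25, 0.25]]   # any distribution over the non-zeros of A
E = expected_sketch(A, p)        # equals A entrywise, whatever p is
```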
Equations (1) and (2) mark the starting point of our work. Our goal is to find probability distributions over the elements of A that optimize (1) and (2) simultaneously with respect to their functional form in Theorem 3.1, thus yielding the strongest possible bound on ‖A − B‖. A conceptual contribution of our work is the discovery that good distributions depend on the sample budget s, a fact also borne out in experiments. The fact that minimizing the deviation metric of Theorem 3.1, i.e., σ² + R·ε/3, suffices to bring out this dependence can be viewed as testament to the theorem's sharpness.
Theorem 3.1 is stated as a bound on the probability that the norm of the error matrix is greater than some target error ε given the number of samples s. In practice, the target error ε is typically not known in advance, but rather is the quantity to minimize, given the matrix A, the number of samples s, and the target confidence δ. Specifically, for any given distribution p on the elements of A, define

    ε_1(p) = inf{ ε : (m + n)·exp( −s·ε² / (σ_p² + R_p·ε/3) ) ≤ δ }.    (3)
Our goal in the rest of the paper is to seek the distribution p minimizing ε_1. Our result is an easily computable distribution p which comes within a factor of 3 of ε_1(p*) and, as a result, within a factor of 9 in terms of sample complexity (in practice we expect this to be even smaller, as the factor of 3 comes from consolidating bounds for a number of different worst-case matrices). To put this in perspective, note that the definition of p* does not place any restriction either on the access model for A while computing p*, or on the amount of time needed to compute p*. In other words, we are competing against an oracle which, in order to determine p*, has all of A in its purview at once and can spend an unbounded amount of computation to determine it.
In contrast, the only global information regarding A we require are the ratios between the L1 norms
of the rows of the matrix. Trivially, the exact L1 norms of the rows (and therefore their ratios) can
be computed in a single pass over the matrix, yielding a 2-pass algorithm. Slightly less trivially,
standard concentration arguments imply that these ratios can be estimated very well by sampling
only a small number of columns. In the setting of data analysis, though, it is in fact reasonable
to expect that good estimates of these ratios are available a priori. This is because different rows
correspond to different attributes and the ratios between the row norms reflect the ratios between the
average absolute values of the features. For example, if the matrix corresponds to text documents,
knowing the ratios amounts to knowing global word frequencies. Moreover these ratios do not need
to be known exactly to apply the algorithm, as even rough estimates of them give highly competitive
results. Indeed, even disregarding this issue completely and simply assuming that all ratios equal 1,
yields an algorithm that appears quite competitive in practice, as demonstrated by our experiments.
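The "standard concentration arguments" above amount to estimating each row's L1 mass from a uniformly sampled subset of columns. A minimal sketch of this estimator (toy data, seed and sample size are our choices):

```python
import random

def row_l1_from_columns(A, cols):
    """Estimate ||A_i||_1, up to a common scaling, from a subset of columns."""
    return [sum(abs(row[j]) for j in cols) for row in A]

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(1000)] for _ in range(3)]

exact = row_l1_from_columns(A, range(1000))            # all columns: exact
sample = row_l1_from_columns(A, random.sample(range(1000), 200))

# Normalize both to ratios; only the ratios matter to the algorithm.
exact_r = [z / sum(exact) for z in exact]
sample_r = [z / sum(sample) for z in sample]
```

The algorithm only consumes the normalized ratios, so a small column sample (here 20% of the columns) already gives usable estimates.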
4  Data Matrices and Statement of Results
Throughout, A_i and A^j will denote the i-th row and j-th column of A, respectively. Also, we use the notation ‖A‖_1 = Σ_{i,j} |A_ij| and ‖A‖_F² = Σ_{i,j} A_ij². Before we formally state our result we introduce a definition that expresses the class of matrices for which our results hold.

Definition 4.1. An m × n matrix A is a Data matrix if:
1. min_i ‖A_i‖_1 / max_j ‖A^j‖_1 ≥ 1.
2. ‖A‖_1² / ‖A‖_2² ≥ 30m.
3. m ≥ 30.
Regarding Condition 1, recall that we think of A as being generated by a measurement process of a fixed number of attributes (rows), each column corresponding to an observation. As a result, columns have bounded L1 norm, i.e., ‖A^j‖_1 ≤ constant. While this constant may depend on the type of object and its dimensionality, it is independent of the number of objects. On the other hand, ‖A_i‖_1 grows linearly with the number of columns (objects). As a result, we can expect Definition 4.1 to hold for all large enough data sets. Regarding Condition 2, it is easy to verify that unless the values of the entries of A exhibit unbounded variance as n grows, the ratio ‖A‖_1²/‖A‖_2² grows as Ω(n), and Condition 2 follows from n ≫ m. Condition 3 is trivial. All in all, out of the three conditions the essential one is Condition 1. The other two are merely technical and hold in all non-trivial cases where Condition 1 applies.
One last point is that to apply Theorem 3.1, the entries of A must be sampled with replacement. A simple way to achieve this in the streaming model was presented in [DKM06] that uses O(s) operations per matrix element and O(s) active memory. In Section D (see supplementary material) we discuss how to implement sampling with replacement far more efficiently, using O(log s) active memory, Õ(s) space, and O(1) operations per element. To simplify the exposition of our algorithm below, we describe it in the non-streaming setting. That is, we assume we know m and n and that we can compute numbers z_i = ‖A_i‖_1 as well as repeatedly sample entries from the matrix. We stress, however, that these conditions are not required and that the algorithm can be implemented efficiently in the streaming model as discussed in Section D.
Algorithm 1 Construct a sketch B of a data matrix A
1: Input: Data matrix A ∈ R^{m×n}, sampling budget s, acceptable failure probability δ
2: Set ρ ← ComputeRowDistribution(A, s, δ)
3: Sample s elements of A with replacement, each A_ij having probability p_ij = ρ_i·|A_ij|/‖A_i‖_1
4: For each sample ℓ = (i, j, A_ij), let B_ℓ be the matrix with B_ℓ(i, j) = A_ij/p_ij and zero elsewhere.
5: Output: B = (1/s)·Σ_{ℓ=1}^s B_ℓ.

6: function ComputeRowDistribution(A, s, δ)
7:   Obtain z such that z_i = ‖A_i‖_1 for i ≤ m
8:   Set α = √( log((m + n)/δ) / s )  and  β = log((m + n)/δ) / (3s)
9:   Define ρ_i(ζ) = ( α·z_i/(2ζ) + √( (α·z_i/(2ζ))² + β·z_i/ζ ) )²
10:  Find ζ_1 such that Σ_{i=1}^m ρ_i(ζ_1) = 1
11:  return ρ such that ρ_i = ρ_i(ζ_1) for i ≤ m
Steps 6-11 compute a distribution ρ over the rows. Assuming step 7 can be implemented efficiently (or skipped altogether in case the z_i are known a priori), we see that the running time of ComputeRowDistribution is independent of n. Specifically, finding ζ_1 in step 10 can be done efficiently by binary search because the function Σ_i ρ_i(ζ) is strictly decreasing in ζ. Conceptually, we see that the probability assigned to each element A_ij in Step 3 is simply the probability ρ_i of its row times its intra-row weight |A_ij|/‖A_i‖_1.
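The binary search just described can be sketched in a few lines of pure Python. The form of ρ_i(ζ) and of α, β below follows our reconstruction of Algorithm 1 from the garbled source; the inputs z, s, δ, n are toy values:

```python
import math

def compute_row_distribution(z, s, delta, n):
    """Sketch of ComputeRowDistribution: z[i] = ||A_i||_1, s = sample budget,
    delta = failure probability, n = number of columns. Returns rho with
    sum(rho) == 1, found by bisection on zeta, using the fact that
    sum_i rho_i(zeta) is strictly decreasing in zeta."""
    m = len(z)
    L = math.log((m + n) / delta)
    alpha, beta = math.sqrt(L / s), L / (3 * s)

    def rho(zeta):
        return [(alpha * zi / (2 * zeta)
                 + math.sqrt((alpha * zi / (2 * zeta)) ** 2
                             + beta * zi / zeta)) ** 2
                for zi in z]

    lo, hi = 1e-12, 1.0
    while sum(rho(hi)) > 1.0:      # grow hi until the row masses sum below 1
        hi *= 2.0
    for _ in range(200):           # bisect on the decreasing function
        mid = (lo + hi) / 2.0
        if sum(rho(mid)) > 1.0:
            lo = mid
        else:
            hi = mid
    return rho(hi)

rho = compute_row_distribution([6.0, 2.0, 2.0], s=50, delta=0.1, n=10)
```

As the text notes, rows with larger L1 mass receive larger ρ_i, and the search itself never touches the matrix entries, only the m row norms.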
We are now able to state our main result. We defer its proof to Section 5 and subsequent details to appendices (see supplementary material).

Theorem 4.2. If A is a Data matrix per Definition 4.1 and p is the probability distribution defined in Algorithm 1, then ε_1(p) ≤ 3·ε_1(p*), where p* is the minimizer of ε_1.
To compare our result with previous ones we first define several matrix metrics. We then state the bound implied by Theorem 4.2 on the minimal number of samples s_0 needed by our algorithm to achieve an approximation B to the matrix A such that ‖A − B‖ ≤ ε·‖A‖ with constant probability.

Stable rank: Denoted as sr and defined as ‖A‖_F²/‖A‖_2². This is a smooth analog for the algebraic rank, always bounded by it from above, and resilient to small perturbations of the matrix. For data matrices we expect it is small, even constant, and that it captures the "inherent dimensionality" of the data.

Numeric density: Denoted as nd and defined as ‖A‖_1²/‖A‖_F², this is a smooth analog of the number of non-zero entries nnz(A). For 0-1 matrices it equals nnz(A), but when there is variance in the magnitude of the entries it is smaller.

Numeric row density: Denoted as nrd and defined as Σ_i ‖A_i‖_1²/‖A‖_F². In practice, it is often close to the average numeric density of a single row, a quantity typically much smaller than n.
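The three metrics are straightforward to compute; the sketch below does so in pure Python, approximating ‖A‖_2² by power iteration on AᵀA (adequate when the top singular value is well separated, which is an assumption of this illustration):

```python
import math

def frob2(A):
    return sum(v * v for row in A for v in row)

def l1(A):
    return sum(abs(v) for row in A for v in row)

def spectral_norm_sq(A, iters=100):
    """||A||_2^2 via power iteration on A^T A."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]  # A v
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]  # A^T u
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return sum(x * x for x in u)

def metrics(A):
    f2 = frob2(A)
    sr = f2 / spectral_norm_sq(A)                                  # stable rank
    nd = l1(A) ** 2 / f2                                           # numeric density
    nrd = sum(sum(abs(v) for v in row) ** 2 for row in A) / f2     # numeric row density
    return sr, nd, nrd

# Sanity check on the all-ones 4 x 6 matrix: sr = 1, nd = 24, nrd = 6.
A = [[1.0] * 6 for _ in range(4)]
sr, nd, nrd = metrics(A)
```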
Theorem 4.3. Let A be a Data Matrix per Definition 4.1 and let B be the matrix returned by Algorithm 1 for δ = 1/10, ε > 0 and any

    s ≥ s_0 = O( nrd·sr·ε⁻²·log n + ( sr·nd·ε⁻²·log n )^{1/2} ).

With probability at least 9/10, ‖A − B‖ ≤ ε·‖A‖.
The proof of Theorem 4.3 is given in Appendix C (see supplementary material).

The third column of the table below shows the number of samples needed to guarantee that ‖A − B‖ ≤ ε·‖A‖ occurs with constant probability, in terms of the matrix metrics defined above. The fourth column presents the ratio of the samples needed by previous results divided by the samples needed by our method. (To simplify the expressions, we present the ratio between our bound and [AHK06] only when the result of [AHK06] gives superior bounds to [DZ11], i.e., we always compare our bound to the stronger of the two bounds implied by these works.) Holding ε and the stable rank constant, we readily see that our method requires roughly 1/√n the samples needed by [AHK06]. In the comparison with [DZ11] we see that the key parameter is the ratio nrd/n, a quantity typically much smaller than 1 for data matrices. As a point of reference for the assumptions, in the experimental Section 6 we provide the values of all relevant matrix metrics for all the real data matrices we worked with, wherein the ratio nrd/n is typically around 10⁻². By this discussion, one would expect that L2-sampling should fare better than L1-sampling in experiments. As we will see, quite the opposite is true. A potential explanation for this phenomenon is the relative looseness of the bound of [AHK06] for the performance of L1-sampling.
Citation     Method     Number of samples needed                        Improvement ratio of Theorem 4.3
[AM07]       L1, L2     sr·n·ε⁻²·polylog(n)                             (n/nrd)·polylog(n)
[DZ11]       L2         sr·n·ε⁻²·log n                                  n/nrd
[AHK06]      L1         (nd·n·ε⁻²)^{1/2}                                (n/(sr·log n))^{1/2}
This paper   Bernstein  nrd·sr·ε⁻²·log n + (sr·nd·ε⁻²·log n)^{1/2}      --

5  Proof outline
We start by iteratively replacing the objective functions (1) and (2) with simpler and simpler functions. Each replacement will incur a (small) loss in accuracy but will bring us closer to a function for which we can give a closed form solution. Recalling the definitions of α, β from Algorithm 1 and rewriting the requirement in (3) as a quadratic form in ε gives ε² − ε·β·R − α²·σ² ≥ 0. Our first step is to observe that for any c, d ≥ 0, the equation ε² − ε·c − d = 0 has one negative and one positive solution, and that the latter is at least (c + √d)/2 and at most c + √d. Therefore, if we define² ε_2 := α·σ + β·R, we see that 1/2 ≤ ε_1/ε_2 ≤ 1.
Our next simplification encompasses Conditions 2 and 3 of Definition 4.1. Let ε_3 := α·σ̃ + β·R̃, where

    σ̃² := max( max_i Σ_j A_ij²/p_ij , max_j Σ_i A_ij²/p_ij )    and    R̃ := max_{i,j} |A_ij|/p_ij.

Lemma 5.1. For every matrix A satisfying Conditions 2 and 3 of Definition 4.1, and for every probability distribution on the elements of A, |ε_2/ε_3 − 1| ≤ 1/30.

Lemma 5.1 is proved in Section A (see supplementary material) by showing that σ is close to σ̃ and R is close to R̃.
This allows us to optimize p with respect to ε_3 instead of ε_2. In minimizing ε_3 we see that there is freedom to use different rows to optimize σ̃ and R̃. At a cost of a factor of 2, we will couple the two minimizations by minimizing ε_4 := max(ε_5, ε_6), where

    ε_5 := max_i ( α·√(Σ_j A_ij²/p_ij) + β·max_j |A_ij|/p_ij ),
    ε_6 := max_j ( α·√(Σ_i A_ij²/p_ij) + β·max_i |A_ij|/p_ij ).    (4)

Note that the maximization of R̃ in ε_5 (and ε_6) is coupled with that of the σ̃-related term by constraining the optimization to consider only one row (column) at a time. Clearly, 1 ≤ ε_3/ε_4 ≤ 2.

Next we focus on ε_5, the first term in the maximization of ε_4. The following key lemma establishes that for all data matrices satisfying Condition 1 of Definition 4.1, by minimizing ε_5 we also minimize ε_4 = max(ε_5, ε_6).

Lemma 5.2. For every matrix satisfying Condition 1 of Definition 4.1, argmin_p ε_5 = argmin_p ε_4.

At this point we can derive in closed form the probability distribution p minimizing ε_5.

² Here and in the following, to lighten notation, we will omit all arguments, i.e., σ(p), R(p), from the objective functions ε_i we seek to optimize, as they are readily understood from context.
Lemma 5.3. The function ε_5 is minimized by p_ij = ρ_i·q_ij, where q_ij = |A_ij|/‖A_i‖_1. To define ρ_i, let z_i = ‖A_i‖_1 and define

    ρ_i(ζ) := ( α·z_i/(2ζ) + √( (α·z_i/(2ζ))² + β·z_i/ζ ) )².

Let ζ_1 > 0 be the unique solution to³ Σ_i ρ_i(ζ_1) = 1, and set ρ_i = ρ_i(ζ_1).
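The closed form admits a direct numeric sanity check. With p_ij = ρ_i·|A_ij|/z_i, the row term of ε_5 becomes α·z_i/√ρ_i + β·z_i/ρ_i, and ρ_i(ζ) is exactly the value that equalizes this across rows at level ζ. The sketch below verifies this under our reconstruction of the formula (α, β, z_i and ζ are placeholder toy values):

```python
import math

alpha, beta = 0.3, 0.05          # placeholder alpha, beta values
zs = [6.0, 2.5, 1.0]             # toy row L1 norms z_i
zeta = 2.0

objectives = []
for z in zs:
    t = alpha * z / (2 * zeta)
    rho = (t + math.sqrt(t * t + beta * z / zeta)) ** 2
    # per-row objective alpha*z/sqrt(rho) + beta*z/rho should equal zeta
    objectives.append(alpha * z / math.sqrt(rho) + beta * z / rho)
```

Algebraically, √ρ_i is the positive root of ζ·x² − α·z_i·x − β·z_i = 0, which is why every row attains the common value ζ.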
To prove Theorem 4.2, we see that Lemmas 5.2 and 5.3 combined imply that there is an efficient algorithm for minimizing ε_4 for every matrix A satisfying Condition 1 of Definition 4.1. If A also satisfies Conditions 2 and 3 of Definition 4.1, then it is possible to lower and upper bound the ratios ε_1/ε_2, ε_2/ε_3 and ε_3/ε_4. Combined, these bounds guarantee a lower and upper bound for ε_1/ε_4. In general, if c ≤ ε_4/ε_1 ≤ C, we can conclude that ε_1(argmin ε_4) ≤ (C/c)·min ε_1. Thus, calculating the constants shows ε_1(argmin ε_4) ≤ 3·min ε_1, yielding Theorem 4.2.
6  Experiments
We experimented with 4 matrices with different characteristics, summarized in the table below. See Section 4 for the definition of the different characteristics.

Measure   Synthetic   Enron     Images    Wikipedia
m         1.0e+2      1.3e+4    5.1e+3    4.4e+5
n         1.0e+4      1.8e+5    4.9e+5    3.4e+6
nnz(A)    5.0e+5      7.2e+5    2.5e+8    5.3e+8
‖A‖_1     1.8e+7      4.0e+9    6.5e+9    5.3e+9
‖A‖_F     3.2e+4      5.8e+6    2.0e+6    7.5e+5
‖A‖_2     8.7e+3      1.0e+6    1.8e+6    1.6e+5
sr        1.3e+1      3.2e+1    1.3e+0    2.1e+1
nd        3.1e+5      4.9e+5    1.1e+7    5.0e+7
nrd       3.2e+3      1.5e+3    2.3e+3    1.9e+4
Enron: Subject lines of emails in the Enron email corpus [Sty11]. Columns correspond to subject
lines, rows to words, and entries to tf-idf values. This matrix is extremely sparse to begin with.
Wikipedia: Term-document matrix of a fragment of Wikipedia in English. Entries are tf-idf values.
Images: A collection of images of buildings from Oxford [PCI 07]. Each column represents the wavelet transform of a single 128 × 128 pixel grayscale image.
Synthetic: This synthetic matrix simulates a collaborative filtering matrix. Each row corresponds to an item and each column to a user. Each user and each item was first assigned a random latent vector (i.i.d. Gaussian). Each value in the matrix is the dot product of the corresponding latent vectors plus additional Gaussian noise. We simulated the fact that some items are more popular than others by retaining each entry of each item i with probability 1 − i/m, where i ∈ {0, . . . , m − 1}.
6.1  Sampling techniques and quality measure
The experiments report the accuracy of sampling according to four different distributions. In Figure 1, Bernstein denotes the distribution of this paper, defined in Lemma 5.3. The Row-L1 distribution is a simplified version of the Bernstein distribution, where p_ij ∝ |A_ij|·‖A_i‖_1. L1 and L2 refer to p_ij ∝ |A_ij| and p_ij ∝ A_ij², respectively, as defined earlier in the paper. The case of L2 sampling was split into three sampling methods corresponding to different trimming thresholds. In the method referred to as L2, no trimming is made and p_ij ∝ A_ij². In the case referred to as L2 trim 0.1, p_ij ∝ A_ij² for any entry where A_ij² ≥ 0.1·E_ij[A_ij²], and p_ij = 0 otherwise. The sampling technique referred to as L2 trim 0.01 is analogous, with threshold 0.01·E_ij[A_ij²].

³ Notice that the function Σ_i ρ_i(ζ) is monotonically decreasing for ζ > 0, hence the solution is indeed unique.
Although to derive our sampling probability distributions we targeted minimizing ‖A − B‖_2, in experiments it is more informative to consider a more sensitive measure of quality of approximation. The reason is that for a number of values of s, the scaling of entries required for B to be an unbiased estimator of A results in ‖A − B‖ > ‖A‖, which would suggest that the all-zeros matrix is a better sketch for A than the sampled matrix. We will see that this is far from being the case. As a trivial example, consider the possibility B = 10A. Clearly, B is very informative of A although ‖A − B‖ = 9‖A‖. To avoid this pitfall, we measure ‖P_k^B·A‖_F / ‖A_k‖_F, where P_k^B is the projection on the top k left singular vectors of B. Thus, A_k = P_k^A·A is the optimal rank k approximation of A. Intuitively, this measures how well the top k left singular vectors of B capture A, compared to A's own (optimal) top-k left singular vectors. We also compute ‖A·Q_k^B‖_F / ‖A_k‖_F, where Q_k^B is the projection on the top k right singular vectors of B. Note that, for a given k, approximating the row-space is harder than approximating the column-space since it is of dimension n, which is significantly larger than m, a fact also borne out in the experiments. In the experiments we made sure to choose a sufficiently wide range of sample sizes so that at least the best method for each matrix goes from poor to near-perfect both in approximating the row and the column space. In all cases we report on k = 20, which is close to the upper end of what could be efficiently computed on a single machine for matrices of this size. The results for all smaller values of k are qualitatively indistinguishable.
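The quality measure is easy to illustrate on diagonal matrices, where the left singular vectors are standard basis vectors and the projection P_k^B reduces to keeping the k rows on which B is largest in magnitude. The sketch below is this special case only (a full implementation would use an SVD routine); note that it also reproduces the B = 10A point above:

```python
import math

def quality(A_diag, B_diag, k):
    """||P_k^B A||_F / ||A_k||_F for diagonal A, B given by their diagonals.
    For diagonal B, the top-k left singular vectors select the k rows where
    |B| is largest, so the projection is just a row selection."""
    top_B = sorted(range(len(B_diag)), key=lambda i: -abs(B_diag[i]))[:k]
    num = math.sqrt(sum(A_diag[i] ** 2 for i in top_B))
    den = math.sqrt(sum(sorted((a * a for a in A_diag), reverse=True)[:k]))
    return num / den

A_diag = [5.0, 4.0, 3.0, 2.0, 1.0]
perfect = quality(A_diag, A_diag, k=2)                    # B = A recovers A_k
scaled = quality(A_diag, [10 * a for a in A_diag], k=2)   # B = 10A: equally good
bad = quality(A_diag, [1.0, 1.0, 1.0, 4.0, 5.0], k=2)     # wrong top subspace
```

Scaling B leaves the measure unchanged because only the singular subspace of B enters, which is exactly why this measure is preferred over ‖A − B‖ in the experiments.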
Figure 1: Each vertical pair of plots corresponds to one matrix. Left to right: Wikipedia, Images, Enron, Synthetic. Each top plot shows the quality of the column-space approximation ratio, ‖P_k^B·A‖_F/‖A_k‖_F, while the bottom plots show the row-space approximation ratio ‖A·Q_k^B‖_F/‖A_k‖_F. The number of samples s is on the x-axis in log scale (x = log10 s).
6.2  Insights
The experiments demonstrate three main insights. First and most important, Bernstein-sampling is never worse than any of the other techniques and is often strictly better. A dramatic example of this is the Wikipedia matrix, for which it is far superior to all other methods. The second insight is that L1-sampling, i.e., simply taking p_ij = |A_ij|/‖A‖_1, performs rather well in many cases. Hence, if it is impossible to perform more than one pass over the matrix and one cannot even obtain an estimate of the ratios of the L1-weights of the rows, L1-sampling seems to be a highly viable option. The third insight is that for L2-sampling, discarding small entries may drastically improve the performance. However, it is not clear which threshold should be chosen in advance. In any case, in all of the example matrices, both L1-sampling and Bernstein-sampling proved to outperform or perform equally to L2-sampling, even with the correct trimming threshold.
References
[AHK05] Sanjeev Arora, Elad Hazan, and Satyen Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on, pages 339-348. IEEE, 2005.
[AHK06] Sanjeev Arora, Elad Hazan, and Satyen Kale. A fast random sampling algorithm for sparsifying matrices. In Proceedings of the 9th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, and 10th International Conference on Randomization and Computation, APPROX'06/RANDOM'06, pages 272-279, Berlin, Heidelberg, 2006. Springer-Verlag.
[AKV02] Noga Alon, Michael Krivelevich, and Van H. Vu. On the concentration of eigenvalues of random symmetric matrices. Israel Journal of Mathematics, 131:259-267, 2002.
[AM01] Dimitris Achlioptas and Frank McSherry. Fast computation of low rank matrix approximations. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pages 611-618. ACM, 2001.
[AM07] Dimitris Achlioptas and Frank McSherry. Fast computation of low-rank matrix approximations. J. ACM, 54(2), April 2007.
[AW02] Rudolf Ahlswede and Andreas Winter. Strong converse for identification via quantum channels. IEEE Transactions on Information Theory, 48(3):569-579, 2002.
[Ber07] Aleš Berkopec. HyperQuick algorithm for discrete hypergeometric distribution. Journal of Discrete Algorithms, 5(2):341-347, 2007.
[CR09] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[CT10] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053-2080, 2010.
[d'A08] Alexandre d'Aspremont. Subsampling algorithms for semidefinite programming. arXiv preprint arXiv:0803.1990, 2008.
[DKM06] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices I: approximating matrix multiplication. SIAM J. Comput., 36(1):132-157, July 2006.
[DZ11] Petros Drineas and Anastasios Zouzias. A note on element-wise matrix sparsification via a matrix-valued Bernstein inequality. Inf. Process. Lett., 111(8):385-389, 2011.
[FK81] Z. Füredi and J. Komlós. The eigenvalues of random symmetric matrices. Combinatorica, 1(3):233-241, 1981.
[GT09] Alex Gittens and Joel A. Tropp. Error bounds for random matrix approximation schemes. arXiv preprint arXiv:0911.4108, 2009.
[Juh81] F. Juhász. On the spectrum of a random graph. In Algebraic Methods in Graph Theory, Vol. I, II (Szeged, 1978), volume 25 of Colloq. Math. Soc. János Bolyai, pages 313-316. North-Holland, Amsterdam, 1981.
[NDT09] Nam H. Nguyen, Petros Drineas, and Trac D. Tran. Matrix sparsification via the Khintchine inequality, 2009.
[NDT10] Nam H. Nguyen, Petros Drineas, and Trac D. Tran. Tensor sparsification via a bound on the spectral norm of random tensors. arXiv preprint arXiv:1005.4732, 2010.
[PCI 07] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[Rec11] Benjamin Recht. A simpler approach to matrix completion. J. Mach. Learn. Res., 12:3413-3430, December 2011.
[RV07] Mark Rudelson and Roman Vershynin. Sampling from large matrices: an approach through geometric functional analysis. J. ACM, 54(4), July 2007.
[Sty11] Will Styler. The EnronSent corpus. Technical Report 01-2011, University of Colorado at Boulder Institute of Cognitive Science, Boulder, CO, 2011.
[Tro12a] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389-434, 2012.
[Tro12b] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389-434, 2012.
[Wig58] Eugene P. Wigner. On the distribution of the roots of certain symmetric matrices. Annals of Mathematics, 67(2):325-327, 1958.
purview:1 heidelberg:1 complex:1 main:2 linearly:1 big:1 noise:1 x1:5 referred:3 precision:1 deterministically:2 comput:1 third:4 wavelet:1 bij:3 theorem:15 bad:1 discarding:2 xt:2 specific:1 showing:1 offset:2 list:1 admits:1 x:1 disregarding:1 experimented:1 exists:1 essential:1 magnitude:4 budget:6 sparser:1 cx:1 simply:3 eij:2 amsterdam:1 scalar:1 recommendation:1 holland:1 applies:1 springer:1 corresponds:3 minimizer:1 satisfies:1 acm:4 goal:3 viewed:1 targeted:1 exposition:1 towards:1 specifically:5 tack:1 engineer:1 lemma:8 total:2 pas:3 tendency:1 experimental:2 e:2 formally:1 rudolf:1 mark:2 combinatorica:1 latter:1 phenomenon:2 avoiding:1 handling:1 |
4,461 | 5,037 | Large Scale Distributed Sparse Precision Estimation
Huahua Wang, Arindam Banerjee
Dept. of Computer Science & Engg, University of Minnesota, Twin Cities
{huwang,banerjee}@cs.umn.edu
Cho-Jui Hsieh, Pradeep Ravikumar, Inderjit S. Dhillon
Dept. of Computer Science, University of Texas, Austin
{cjhsieh,pradeepr,inderjit}@cs.utexas.edu
Abstract
We consider the problem of sparse precision matrix estimation in high dimensions
using the CLIME estimator, which has several desirable theoretical properties. We
present an inexact alternating direction method of multiplier (ADMM) algorithm
for CLIME, and establish rates of convergence for both the objective and optimality conditions. Further, we develop a large scale distributed framework for the
computations, which scales to millions of dimensions and trillions of parameters,
using hundreds of cores. The proposed framework solves CLIME in column-blocks and only involves elementwise operations and parallel matrix multiplications. We evaluate our algorithm on both shared-memory and distributed-memory
architectures, which can use block cyclic distribution of data and parameters to
achieve load balance and improve the efficiency in the use of memory hierarchies.
Experimental results show that our algorithm is substantially more scalable than
state-of-the-art methods and scales almost linearly with the number of cores.
1 Introduction
Consider a $p$-dimensional probability distribution with true covariance matrix $\Sigma_0 \in S_{++}^p$ and true precision (or inverse covariance) matrix $\Omega_0 = \Sigma_0^{-1} \in S_{++}^p$. Let $[R_1 \cdots R_n] \in \mathbb{R}^{p \times n}$ be $n$ independent and identically distributed random samples drawn from this $p$-dimensional distribution. The centered normalized sample matrix $A = [a_1 \cdots a_n] \in \mathbb{R}^{p \times n}$ can be obtained as $a_i = \frac{1}{\sqrt{n}}(R_i - \bar R)$, where $\bar R = \frac{1}{n}\sum_i R_i$, so that the sample covariance matrix can be computed as $C = AA'$. In recent years, considerable effort has been invested in obtaining an accurate estimate of the precision matrix $\hat\Omega$ based on the sample covariance matrix $C$ in the "low sample, high dimensions" setting, i.e., $n \ll p$, especially when the true precision $\Omega_0$ is assumed to be sparse [28]. Suitable estimators and corresponding statistical convergence rates have been established for a variety of settings,
including distributions with sub-Gaussian tails, polynomial tails [25, 3, 19]. Recent advances have
also established parameter-free methods which achieve minimax rates of convergence [4, 19].
Spurred by these advances in the statistical theory of precision matrix estimation, there has been
considerable recent work on developing computationally efficient optimization methods for solving
the corresponding statistical estimation problems: see [1, 8, 14, 21, 13], and references therein.
While these methods are able to efficiently solve problems up to a few thousand variables, ultra-large-scale problems with millions of variables remain a challenge. Note further that in precision matrix estimation, the number of parameters scales quadratically with the number of variables, so that with a million dimensions $p = 10^6$, the total number of parameters to be estimated is a trillion, $p^2 = 10^{12}$. The focus of this paper is on designing an efficient distributed algorithm for precision
matrix estimation under such ultra-large-scale dimensional settings.
We focus on the CLIME statistical estimator [3], which solves the following linear program (LP):
$$\min \|\hat\Omega\|_1 \quad \text{s.t.} \quad \|C\hat\Omega - I\|_\infty \le \lambda, \qquad (1)$$
where $\lambda > 0$ is a tuning parameter. The CLIME estimator not only has strong statistical guarantees [3], but also comes with inherent computational advantages. First, the LP in (1) does not explicitly enforce positive definiteness of $\hat\Omega$, which can be a challenge to handle efficiently in high dimensions. Secondly, it can be seen that (1) can be decomposed into $p$ independent LPs, one for each column of $\hat\Omega$. This separable structure has motivated solvers for (1) which solve the LP
column-by-column using interior point methods [3, 28] or the alternating direction method of multipliers (ADMM) [18]. However, these solvers do not scale well to ultra-high-dimensional problems:
they are not designed to run on hundreds to thousands of cores, and in particular require the entire
sample covariance matrix C to be loaded into the memory of a single machine, which is impractical
even for moderate sized problems.
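Since (1) decomposes into independent LPs, a single column can in principle be handed to an off-the-shelf LP solver, which is essentially what the column-by-column solvers above do. A minimal sketch of one column's LP with SciPy's `linprog`, using the standard split $x = x^+ - x^-$ (the synthetic data and all problem sizes here are illustrative; this is not the solver used in [3, 28]):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
p, n = 8, 50
A = rng.standard_normal((p, n)) / np.sqrt(n)
C = A @ A.T + 0.1 * np.eye(p)    # well-conditioned stand-in for a sample covariance
lam = 0.3
e = np.zeros(p); e[0] = 1.0      # first column of the identity

# minimize 1'(x+ + x-)  s.t.  -lam <= C(x+ - x-) - e <= lam,  x+, x- >= 0
c_obj = np.ones(2 * p)
A_ub = np.block([[C, -C], [-C, C]])
b_ub = np.concatenate([lam + e, lam - e])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
x = res.x[:p] - res.x[p:]        # recovered column of the precision estimate
```

Solving all $p$ columns this way is exactly the workload that the rest of the paper parallelizes.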
In this paper, we present an efficient CLIME-ADMM variant along with a scalable distributed framework for the computations [2, 26]. The proposed CLIME-ADMM algorithm can scale up to millions
of dimensions, and can use up to thousands of cores in a shared-memory or distributed-memory architecture. The scalability of our method relies on the following key innovations. First, we propose
an inexact ADMM [27, 12] algorithm targeted to CLIME, where each step is either elementwise
parallel or involves suitable matrix multiplications. We show that the rates of convergence of the
objective to the optimum as well as residuals of constraint violation are both O(1/T ). Second, we
solve (1) in column-blocks of the precision matrix at a time, rather than one column at a time. Since
(1) already decomposes columnwise, solving multiple columns together in blocks might not seem
worthwhile. However, as we show, our CLIME-ADMM working with column-blocks uses matrix-matrix multiplications which, building on existing literature [15, 5, 11] and the underlying low rank
and sparse structure inherent in the precision matrix estimation problem, can be made substantially
more efficient than repeated matrix-vector multiplications. Moreover, matrix multiplication can be
further simplified as block-by-block operations, which allows choosing optimal block sizes to minimize cache misses, leading to high scalability and performance [16, 5, 15]. Lastly, since the core
computations can be parallelized, CLIME-ADMM scales almost linearly with the number of cores.
We experiment with shared-memory and distributed-memory architectures to illustrate this point.
Empirically, CLIME-ADMM is shown to be much faster than existing methods for precision estimation, and scales well to high-dimensional problems, e.g., we estimate a precision matrix of one million dimensions and one trillion parameters in 11 hours by running the algorithm on 400 cores.
Our framework can be positioned as a part of the recent surge of effort in scaling up machine learning algorithms [29, 22, 6, 7, 20, 2, 23, 9] to ?Big Data?. Scaling up machine learning algorithms
through parallelization and distribution has been heavily explored on various architectures, including shared-memory architectures [22], distributed memory architectures [23, 6, 9] and GPUs [24].
Since MapReduce [7] is not efficient for optimization algorithms, [6] proposed a parameter server
that can be used to parallelize gradient descent algorithms for unconstrained optimization problems.
However, this framework is ill-suited for the constrained optimization problems we consider here,
because gradient descent methods require the projection at each iteration which involves all variables and thus ruins the parallelism. In other recent related work based on ADMM, [23] introduce
graph projection block splitting (GPBS) to split data into blocks so that examples and features can
be distributed among multiple cores. Our framework uses a more general blocking scheme (block
cyclic distribution), which provides more options in choosing the optimal block size to improve the
efficiency in the use of memory hierarchies and minimize cache misses [16, 15, 5]. ADMM has
also been used to solve constrained optimization in a distributed framework [9] for graphical model
inference, but they consider local constraints, in contrast to the global constraints in our framework.
Notation: A matrix is denoted by a bold face upper case letter, e.g., $A$. An element of a matrix is denoted by an upper case letter with row index $i$ and column index $j$, e.g., $A_{ij}$ is the $ij$-th element of $A$. A block of a matrix is denoted by a bold face lower case letter indexed by $ij$, e.g., $a_{ij}$. $\tilde A_{ij}$ represents a collection of blocks of matrix $A$ on the $ij$-th core (see block cyclic distribution in Section 4). $A'$ refers to the transpose of $A$. Matrix norms used are all elementwise norms, e.g., $\|A\|_1 = \sum_{i=1}^p \sum_{j=1}^n |A_{ij}|$, $\|A\|_2^2 = \sum_{i=1}^p \sum_{j=1}^n A_{ij}^2$, $\|A\|_\infty = \max_{1 \le i \le p,\, 1 \le j \le n} |A_{ij}|$. The matrix inner product is defined elementwise, e.g., $\langle A, B\rangle = \sum_{i=1}^p \sum_{j=1}^n A_{ij} B_{ij}$. $X \in \mathbb{R}^{p \times k}$ denotes $k$ ($1 \le k \le p$) columns of the precision matrix $\hat\Omega$, and $E \in \mathbb{R}^{p \times k}$ denotes the same $k$ columns of the identity matrix $I \in \mathbb{R}^{p \times p}$. Let $\lambda_{\max}(C)$ be the largest eigenvalue of the covariance matrix $C$.
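The elementwise norms and inner product above are straightforward to compute; a small NumPy check (the matrices are illustrative):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, -4.0]])
B = np.array([[2.0, 0.0], [1.0, -1.0]])

norm_1 = np.abs(A).sum()       # elementwise L1 norm ||A||_1
norm_2_sq = (A ** 2).sum()     # squared elementwise L2 (Frobenius) norm ||A||_2^2
norm_inf = np.abs(A).max()     # elementwise infinity norm ||A||_inf
inner = (A * B).sum()          # elementwise matrix inner product <A, B>
```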
Algorithm 1 Column Block ADMM for CLIME
1: Input: $C, \lambda, \rho, \eta$
2: Output: $X$
3: Initialization: $X^0 = Z^0 = Y^0 = V^0 = \tilde V^0 = 0$
4: for $t = 0$ to $T-1$ do
5:    X-update: $X^{t+1} = \mathrm{soft}(X^t - V^t, \frac{1}{\eta})$, where $\mathrm{soft}(X, \tau)_{ij} = X_{ij} - \tau$ if $X_{ij} > \tau$; $X_{ij} + \tau$ if $X_{ij} < -\tau$; $0$ otherwise
6:    Mat-Mul: $U^{t+1} = CX^{t+1}$ (sparse) or $U^{t+1} = A(A'X^{t+1})$ (low rank)
7:    Z-update: $Z^{t+1} = \mathrm{box}(U^{t+1} + Y^t, E, \lambda)$, where $\mathrm{box}(X, E, \lambda)_{ij} = E_{ij} + \lambda$ if $X_{ij} - E_{ij} > \lambda$; $X_{ij}$ if $|X_{ij} - E_{ij}| \le \lambda$; $E_{ij} - \lambda$ if $X_{ij} - E_{ij} < -\lambda$
8:    Y-update: $Y^{t+1} = Y^t + U^{t+1} - Z^{t+1}$
9:    Mat-Mul: $\tilde V^{t+1} = CY^{t+1}$ (sparse) or $\tilde V^{t+1} = A(A'Y^{t+1})$ (low rank)
10:   V-update: $V^{t+1} = \frac{\rho}{\eta}(2\tilde V^{t+1} - \tilde V^t)$
11: end for

2 Column Block ADMM for CLIME
In this section, we propose an algorithm to estimate the precision matrix in terms of column blocks
instead of column-by-column. Assuming a column block contains $k$ ($1 \le k \le p$) columns, the sparse precision matrix estimation amounts to solving $\lceil p/k \rceil$ independent linear programs. Denoting by $X \in \mathbb{R}^{p \times k}$ $k$ columns of $\hat\Omega$, (1) can be written as
$$\min \|X\|_1 \quad \text{s.t.} \quad \|CX - E\|_\infty \le \lambda, \qquad (2)$$
which can be rewritten in the following equality-constrained form:
$$\min \|X\|_1 \quad \text{s.t.} \quad \|Z - E\|_\infty \le \lambda,\ CX = Z. \qquad (3)$$
Through the splitting variable $Z \in \mathbb{R}^{p \times k}$, the infinity norm constraint becomes a box constraint and is separated from the $\ell_1$ norm objective. We use ADMM to solve (3). The augmented Lagrangian of (3) is
$$L_\rho = \|X\|_1 + \rho\langle Y, CX - Z\rangle + \frac{\rho}{2}\|CX - Z\|_2^2, \qquad (4)$$
where $Y \in \mathbb{R}^{p \times k}$ is a scaled dual variable and $\rho > 0$. ADMM yields the following iterates [2]:
$$X^{t+1} = \mathop{\mathrm{argmin}}_{X}\ \|X\|_1 + \frac{\rho}{2}\|CX - Z^t + Y^t\|_2^2, \qquad (5)$$
$$Z^{t+1} = \mathop{\mathrm{argmin}}_{\|Z - E\|_\infty \le \lambda}\ \frac{\rho}{2}\|CX^{t+1} - Z + Y^t\|_2^2, \qquad (6)$$
$$Y^{t+1} = Y^t + CX^{t+1} - Z^{t+1}. \qquad (7)$$
As a Lasso problem, (5) can be solved using existing Lasso algorithms, but that will lead to a double-loop algorithm. (5) does not have a closed-form solution since $C$ in the quadratic penalty term makes $X$ coupled. We decouple $X$ by linearizing the quadratic penalty term and adding a proximal term as follows:
$$X^{t+1} = \mathop{\mathrm{argmin}}_{X}\ \|X\|_1 + \eta\langle V^t, X\rangle + \frac{\eta}{2}\|X - X^t\|_2^2, \qquad (8)$$
where $V^t = \frac{\rho}{\eta} C(Y^t + CX^t - Z^t)$ and $\eta > 0$. (8) is usually called an inexact ADMM update. Using (7), $V^t = \frac{\rho}{\eta} C(2Y^t - Y^{t-1})$. Let $\tilde V^t = CY^t$; we have $V^t = \frac{\rho}{\eta}(2\tilde V^t - \tilde V^{t-1})$. (8) has the following closed-form solution:
$$X^{t+1} = \mathrm{soft}\big(X^t - V^t, \tfrac{1}{\eta}\big), \qquad (9)$$
where soft denotes the soft-thresholding operator defined in Step 5 of Algorithm 1.
Let $U^{t+1} = CX^{t+1}$. (6) is a box-constrained quadratic program which has the following closed-form solution:
$$Z^{t+1} = \mathrm{box}(U^{t+1} + Y^t, E, \lambda), \qquad (10)$$
where box denotes the projection onto the infinity norm constraint $\|Z - E\|_\infty \le \lambda$ and is defined in Step 7 of Algorithm 1. In particular, if $\|U^{t+1} + Y^t - E\|_\infty \le \lambda$, then $Z^{t+1} = U^{t+1} + Y^t$ and thus $Y^{t+1} = Y^t + U^{t+1} - Z^{t+1} = 0$.
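Both closed forms, (9) and (10), are simple elementwise operations; a minimal NumPy sketch (not the paper's C implementation):

```python
import numpy as np

def soft(X, tau):
    """Elementwise soft-thresholding: the closed-form X-update (9)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def box(X, E, lam):
    """Projection onto {Z : ||Z - E||_inf <= lam}: the closed-form Z-update (10)."""
    return np.clip(X, E - lam, E + lam)
```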
The ADMM algorithm for CLIME is summarized in Algorithm 1. In Algorithm 1, while steps 5, 7, 8 and 10 amount to elementwise operations which cost $O(pk)$ operations, steps 6 and 9 involve matrix multiplication, which is the most computationally intensive part and costs $O(p^2 k)$ operations. The memory requirement includes $O(pn)$ for $A$ and $O(pk)$ for the other six variables.
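Putting the steps together, here is a toy-scale dense sketch of Algorithm 1 for one column block, using the low-rank mat-mul of Section 3.2 and the heuristic choice $\eta = 1.01\,\rho\,\lambda_{\max}^2(C)$ (all sizes and parameter values are illustrative; the paper's actual implementation is in C with OpenMP/MPI):

```python
import numpy as np

def soft(X, tau):
    # elementwise soft-thresholding, closed form of the X-update (9)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def box(X, E, lam):
    # projection onto {Z : ||Z - E||_inf <= lam}, closed form of the Z-update (10)
    return np.clip(X, E - lam, E + lam)

def clime_admm(A, E, lam, rho=1.0, T=1000):
    """Run Algorithm 1 on one column block; A is the p x n centered sample matrix."""
    C = A @ A.T
    eta = 1.01 * rho * np.linalg.eigvalsh(C)[-1] ** 2   # eta >= rho * lambda_max(C)^2
    X = np.zeros_like(E); Y = np.zeros_like(E)
    V = np.zeros_like(E); Vt = np.zeros_like(E)
    for _ in range(T):
        X = soft(X - V, 1.0 / eta)           # step 5
        U = A @ (A.T @ X)                    # step 6 (low-rank mat-mul)
        Z = box(U + Y, E, lam)               # step 7
        Y = Y + U - Z                        # step 8
        Vt_new = A @ (A.T @ Y)               # step 9
        V = (rho / eta) * (2 * Vt_new - Vt)  # step 10
        Vt = Vt_new
    return X, Z

rng = np.random.default_rng(1)
p, n, k = 12, 30, 3
A = rng.standard_normal((p, n)) / np.sqrt(n)
E = np.eye(p)[:, :k]                         # first k columns of the identity
X, Z = clime_admm(A, E, lam=0.3)
```

By construction $Z$ is always feasible for the box constraint, while the equality residual $\|CX - Z\|$ shrinks at the $O(1/T)$ rate established below.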
As the following results show, Algorithm 1 has an $O(1/T)$ convergence rate for both the objective
function and the residuals of optimality conditions. The proof technique is similar to [26]. [12]
shows a similar result as Theorem 2 but uses a different proof technique. For proofs, please see
Appendix A in the supplement.
Theorem 1 Let $\{X^t, Z^t, Y^t\}$ be generated by Algorithm 1 and $\bar X^T = \frac{1}{T}\sum_{t=1}^T X^t$. Assume $X^0 = Z^0 = Y^0 = 0$ and $\eta \ge \rho\lambda_{\max}^2(C)$. For any $CX = Z$, we have
$$\|\bar X^T\|_1 - \|X\|_1 \le \frac{\eta \|X\|_2^2}{2T}. \qquad (11)$$
Theorem 2 Let $\{X^t, Z^t, Y^t\}$ be generated by Algorithm 1 and $\{X^*, Z^*, Y^*\}$ be a KKT point for the Lagrangian of (3). Assume $X^0 = Z^0 = Y^0 = 0$ and $\eta \ge \rho\lambda_{\max}^2(C)$. We have
$$\|CX^T - Z^T\|_2^2 + \|Z^T - Z^{T-1}\|_2^2 + \|X^T - X^{T-1}\|_{\frac{\eta}{\rho} I - C^2}^2 \le \frac{\|Y^*\|_2^2 + \frac{\eta}{\rho}\|X^*\|_2^2}{T}. \qquad (12)$$

3 Leveraging Sparse, Low-Rank Structure
In this section, we consider a few possible directions that can further leverage the underlying structure of the problem; specifically sparse and low-rank structure.
3.1 Sparse Structure
As we detail here, there could be sparsity in the intermediate iterates or in the sample covariance matrix itself (or a perturbed version thereof), which can be exploited to make our CLIME-ADMM variant more efficient.
Iterate Sparsity: As the iterations progress, the soft-thresholding operation will yield a sparse $X^{t+1}$, which can help speed up step 6, $U^{t+1} = CX^{t+1}$, via sparse matrix multiplication. Further, the box-thresholding operation will yield a sparse $Y^{t+1}$. In the ideal case, if $\|U^{t+1} + Y^t - E\|_\infty \le \lambda$ in step 7, then $Z^{t+1} = U^{t+1} + Y^t$, and thus $Y^{t+1} = Y^t + U^{t+1} - Z^{t+1} = 0$. More generally, $Y^{t+1}$ will become sparse as the iterations proceed, which can help speed up step 9, $\tilde V^{t+1} = CY^{t+1}$.
Sample Covariance Sparsity: We show that one can "perturb" the sample covariance to obtain a sparse and coarsened matrix, solve CLIME with this perturbed matrix, and yet have strong statistical guarantees. The statistical guarantees for CLIME [3], including convergence in spectral, matrix $L_1$, and Frobenius norms, only require from the sample covariance matrix $C$ a deviation bound of the form $\|C - \Sigma_0\|_\infty \le c\sqrt{\log p / n}$, for some constant $c$. Accordingly, if we perturb the matrix $C$ with a perturbation matrix $\Delta$ so that the perturbed matrix $(C + \Delta)$ continues to satisfy the deviation bound, the statistical guarantees for CLIME would hold even if we used the perturbed matrix $(C + \Delta)$. The following theorem (for details, please see Appendix B in the supplement) illustrates some perturbations $\Delta$ that satisfy this property:
Theorem 3 Let the original random variables $R_i$ be sub-Gaussian, with sample covariance $C$. Let $\Delta$ be a random perturbation matrix, where $\Delta_{ij}$ are independent sub-exponential random variables. Then, for positive constants $c_1, c_2, c_3$, $P\big(\|C + \Delta - \Sigma_0\|_\infty \ge c_1\sqrt{\log p / n}\big) \le c_2 p^{-c_3}$.
As a special case, one can thus perturb elements $C_{ij}$ with suitable constants $\Delta_{ij}$ satisfying $|\Delta_{ij}| \le c\sqrt{\log p / n}$, so that the perturbed matrix is sparse, i.e., if $|C_{ij}| \le c\sqrt{\log p / n}$, then it can be safely truncated to 0. Thus, in practice, even if the sample covariance matrix is only close to a sparse matrix [21, 13], or if it is close to being block diagonal [21, 13], the complexity of matrix multiplication in steps 6 and 9 can be significantly reduced via the above perturbations.
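A sketch of such a truncation, taking $c = 1$ (the constant here is an illustrative choice, not one prescribed by Theorem 3):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 100, 40
A = rng.standard_normal((p, n)) / np.sqrt(n)
C = A @ A.T

# Zero out entries below the deviation-bound scale sqrt(log p / n).
thresh = np.sqrt(np.log(p) / n)
C_sparse = np.where(np.abs(C) > thresh, C, 0.0)
```

The truncated matrix differs from $C$ by at most the threshold in each entry, so the entrywise deviation bound is perturbed only by a constant factor.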
3.2 Low Rank Structure
Although one can use sparse structures of matrices participating in the matrix multiplication to
accelerate the algorithm, the implementation requires substantial work since dynamic sparsity of
X and Y is unknown upfront and static sparsity of the sample covariance matrix may not exist.
Since the method will operate in a low-sample setting, we can alternatively use the low rank of the
sample covariance matrix to reduce the complexity of matrix multiplication. Since $C = AA'$ and $p \gg n$, $CX = A(A'X)$, and thus the computational complexity of matrix multiplication reduces from $O(p^2 k)$ to $O(npk)$, which can achieve significant speedup for small $n$. We use such low-rank
multiplications for the experiments in Section 5.
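The identity $CX = A(A'X)$ is easy to check numerically (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, k = 500, 50, 20
A = rng.standard_normal((p, n)) / np.sqrt(n)
X = rng.standard_normal((p, k))

C = A @ A.T                  # forming C alone costs O(p^2 n) time and O(p^2) memory
U_full = C @ X               # O(p^2 k) per multiplication
U_lowrank = A @ (A.T @ X)    # O(npk), never forms the p x p matrix
```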
4 Scalable Parallel Computation Framework
In this section, we elaborate on scalable frameworks for CLIME-ADMM in both shared-memory and distributed-memory architectures.
In a shared-memory architecture (e.g., a single machine), data A is loaded to the memory and shared
by $q$ cores, as shown in Figure 1(a). Assume the $p \times p$ precision matrix $\hat\Omega$ is evenly divided into $l = p/k$ ($\ge q$) column blocks, e.g., $X^1, \cdots, X^j, \cdots, X^l$, and thus each column block contains $k$
columns. The column blocks are assigned to q cores cyclically, which means the j-th column block
is assigned to the mod(j, q)-th core. The q cores can solve q column blocks in parallel without communication and synchronization, which can be simply implemented via multithreading. Meanwhile,
another q column blocks are waiting in their respective queues. Figure 1(a) gives an example of how
to solve 8 column blocks on 4 cores in a shared-memory environment. While the 4 cores are solving
the first 4 column blocks, the next 4 column blocks are waiting in queues (red arrows).
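The cyclic assignment is just $\mathrm{mod}(j, q)$; for the 8-blocks-on-4-cores example of Figure 1(a), a sketch:

```python
# Cyclic assignment of l column blocks to q cores: block j goes to core mod(j, q).
l, q = 8, 4
assignment = {core: [j for j in range(l) if j % q == core] for core in range(q)}
# core 0 first solves block 0 while block 4 waits in its queue, and so on.
```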
Although the shared-memory framework is free from communication and synchronization, the limited resources prevent it from scaling up to datasets with millions of dimensions, which cannot be loaded into the memory of a single machine or solved by tens of cores in a reasonable time. As more
memory and computing power are needed for high dimensional datasets, we implement a framework for CLIME-ADMM in a distributed-memory architecture, which automatically distributes
data among machines, parallelizes computation, and manages communication and synchronization
among machines, as shown in Figure 1(b). Assume $q$ processes are formed as an $r \times c$ process grid and the $p \times p$ precision matrix $\hat\Omega$ is evenly divided into $l = p/k$ ($\ge q$) column blocks, e.g., $X^j$, $1 \le j \le l$. We solve a column block $X^j$ at a time in the process grid. Assume the data matrix $A$ has been evenly distributed into the process grid and $\tilde A_{ij}$ is the data on the $ij$-th core, i.e., $A$ is a collection of $\tilde A_{ij}$ under a mapping scheme, which we will discuss later. Figure 1(b) illustrates that the $2 \times 2$ process grid is computing the first column block $X^1$ while the second column block $X^2$ is waiting in queues (red lines), assuming $X^1, X^2$ are distributed into the process grid in the same way as $A$, and $\tilde X^1_{ij}$ is the block of $X^1$ assigned to the $ij$-th core.
A typical issue in parallel computation is load imbalance, which is mainly caused by computational disparity among cores and leads to unsatisfactory speedups. Since each step in CLIME-ADMM is a basic operation like matrix multiplication, the distribution of sub-matrices over processes has a major impact on load balance and scalability. The following discussion focuses on the matrix multiplication in step 6 of Algorithm 1. Other steps can be easily incorporated into the framework. The matrix multiplication $U = A(A'X^1)$ can be decomposed into two steps, i.e., $W = A'X^1$ and $U = AW$, where $A \in \mathbb{R}^{p \times n}$, $X^1 \in \mathbb{R}^{p \times k}$, $W \in \mathbb{R}^{n \times k}$ and $U \in \mathbb{R}^{p \times k}$. Dividing the matrices $A, X$ evenly into $r \times c$ large consecutive blocks as in [23] will lead to load imbalance. First, since the sparse structure of $X$ changes over time (Section 3.1), large consecutive blocks may assign dense blocks to some processes and sparse blocks to the other processes. Second, there will be no blocks in some processes after the multiplication using large blocks, since $W$ is a small matrix compared to $A, X$; e.g., $p$ could be millions while $n, k$ are hundreds. Third, large blocks may not fit in the cache, leading to cache misses. Therefore, we use block cyclic data distribution, which uses small nonconsecutive blocks and thus can largely achieve load balance and scalability. A matrix is first divided into consecutive blocks of size $p_b \times n_b$. Then blocks are distributed into the process
[Figure 1: CLIME-ADMM on shared-memory and distributed-memory architectures. (a) Shared-Memory; (b) Distributed-Memory; (c) Block Cyclic.]
grid cyclically. Figure 1(c) illustrates how to distribute the matrix to a $2 \times 2$ process grid. $A$ is divided into $3 \times 2$ consecutive blocks, where each block is of size $p_b \times n_b$. Blocks of the same color will be assigned to the same process. Green blocks will be assigned to the upper left process, i.e., $\tilde A_{11} = \{a_{11}, a_{13}, a_{31}, a_{33}, a_{51}, a_{53}\}$ in Figure 1(b). The distribution of $X^1$ can be done in a similar way, except the block size should be $p_b \times k_b$, where $p_b$ is to guarantee that the matrix multiplication $A'X^1$ works. In particular, we denote $p_b \times n_b \times k_b$ as the block size for matrix multiplication.
To distribute the data in a block cyclic manner, we use a parallel I/O scheme, where processes can
access the data in parallel and only read/write the assigned blocks.
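A sketch of the owner mapping in block cyclic distribution (the $6 \times 4$ block layout is read off the grid of blocks drawn in Figure 1(c) and is assumed here for illustration):

```python
# Block cyclic distribution: block (i, j) of the matrix is owned by process
# (i mod r, j mod c) in an r x c process grid.
def owner(i, j, r, c):
    return (i % r, j % c)

r, c = 2, 2
# Blocks owned by the upper-left process (0, 0) in a 6 x 4 grid of blocks:
mine = [(i, j) for i in range(6) for j in range(4) if owner(i, j, r, c) == (0, 0)]
```

With 0-based indexing, the blocks owned by process (0, 0) are exactly the ones labeled $a_{11}, a_{13}, a_{31}, a_{33}, a_{51}, a_{53}$ in the figure.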
5 Experimental Results
In this section, we present experimental results to compare CLIME-ADMM with existing algorithms and show its scalability. In all experiments, we use the low rank property of the
sample covariance matrix and do not assume any other special structures. Our algorithm is
implemented in a shared-memory architecture using OpenMP (http://openmp.org/wp/) and a
distributed-memory architecture using OpenMPI (http://www.open-mpi.org) and ScaLAPACK [15]
(http://www.netlib.org/scalapack/).
5.1 Comparison with Existing Algorithms
We compare CLIME-ADMM with three other methods for estimating the inverse covariance matrix,
including CLIME and Tiger in the package flare^1, and divide-and-conquer QUIC (DC-QUIC) [13]. The comparisons are run on an Intel Xeon E5540 2.83GHz CPU with 32GB main memory.
We test the efficiency of the above methods on both synthetic and real datasets. For synthetic
datasets, we generate the underlying graphs with random nonzero patterns in the same way as in [14]. We control the sparsity of the underlying graph to be 0.05, and generate random graphs with various dimensions. Since each estimator has different parameters to control the sparsity, we set them
individually to recover the graph with sparsity 0.05, and compare the time to get the solution. The
column block size k for CLIME-ADMM is 100. Figure 2(a) shows that CLIME-ADMM is the most
scalable estimator for large graphs. We compare the precision and recall for different methods on
recovering the ground truth graph structure. We run each method using different parameters (which control the sparsity of the solution), and plot the precision and recall for each solution in Figure
2(b). As Tiger is parameter tuning free and achieves the minimax optimal rate [19], it achieves the
best performance in terms of recall. The other three methods have similar performance. CLIME
can also be free of parameter tuning and achieve the optimal minimax rate by solving an additional
linear program which is similar to (1) [4]. We refer the readers to [3, 4, 19] for detailed comparisons
between the two models CLIME and Tiger, which is not the focus of this paper.
We further test the efficiency of the above algorithms on two real datasets, Leukemia and Climate
(see Table 1). Leukemia is gene expression data provided by [10], and the pre-processing was done
by [17]. The Climate dataset is the temperature data in year 2001 recorded by NCEP/NCAR Reanalysis data^2 and preprocessed by [13]. Since the ground truth for real datasets is unknown, we test the
time taken for each method to recover graphs with 0.1 and 0.01 sparsity. The results are presented
in Table 1. Although Tiger is faster than CLIME-ADMM on small dimensional dataset Leukemia,
1. The interior point method in [3] is written in R and is extremely slow. Therefore, we use flare, which is implemented in C with an R interface. http://cran.r-project.org/web/packages/flare/index.html
2. www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html
[Figure 2: Synthetic datasets. (a) Runtime; (b) Precision and recall.]
[Figure 3: Shared-Memory. (a) Speedup S_kcol; (b) Speedup S_qcore.]
[Figure 4: Distributed-Memory. (a) Speedup S_kcol; (b) Speedup S_qcore.]
it does not scale to the high-dimensional dataset as well as CLIME-ADMM does, which is mainly due
to the fact that ADMM is not competitive with other methods on small problems but has superior
scalability on big datasets [2]. DC-QUIC runs faster than other methods for small sparsity but
dramatically slows down when sparsity increases. DC-QUIC essentially works on a block-diagonal
matrix by thresholding the off-diagonal elements of the sample covariance matrix. A small sparsity
generally leads to small diagonal blocks, which helps DC-QUIC to make a giant leap forward in the
computation. A block-diagonal structure in the sample covariance matrix can be easily incorporated
into the matrix multiplication in CLIME-ADMM to achieve a sharp computational gain. On a single
core, CLIME-ADMM is faster than flare ADMM. We also show the results of CLIME-ADMM on 8
cores, showing CLIME-ADMM achieves a linear speedup (more results will be seen in Section 5.2).
Note Tiger can estimate the sparse precision matrix column-by-column in parallel, while CLIME-ADMM solves CLIME in column-blocks in parallel.
5.2 Scalability of CLIME-ADMM
We evaluate the scalability of CLIME-ADMM in shared-memory and distributed-memory architectures in terms of two kinds of speedups. The first speedup is defined as the time on 1 core $T_{1core}$ over the time on $q$ cores $T_{qcore}$, i.e., $S_{qcore} = T_{1core}/T_{qcore}$. The second speedup comes from the use of column blocks. Assume the total time for solving CLIME column-by-column ($k = 1$) is $T_{1col}$, which is considered the baseline. The speedup of solving CLIME in column blocks of size $k$ over a single column is defined as $S_{kcol} = T_{1col}/T_{kcol}$. The experiments are done on synthetic data which is
generated in the same way as in Section 5.1. The number of samples is fixed to be n = 200.
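Both speedups are simple ratios; e.g., using the Leukemia row of Table 1:

```python
# S_qcore = T_1core / T_qcore and S_kcol = T_1col / T_kcol are both of this form.
def speedup(t_base, t_new):
    return t_base / t_new

# Leukemia at sparsity 0.1 in Table 1: 48.64 s on 1 core vs 6.27 s on 8 cores.
s_qcore = speedup(48.64, 6.27)   # close to the ideal 8x
```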
Shared-memory: We estimate a precision matrix with $p = 10^4$ dimensions on a server with 20
cores and 64G memory. We use OpenMP to parallelize column blocks. We run the algorithm on
different number of cores q = 1, 5, 10, 20, and with different column block size k. The speedup
$S_{kcol}$ is plotted in Figure 3(a), which shows the results for three different numbers of cores. When $k \le 20$, the speedups keep increasing with increasing number of columns $k$ in each block. For $k \ge 20$, the speedups are maintained on 1 core and 5 cores, but decrease on 10 and 20 cores. The
total number of columns in the shared-memory is k ? q. For a fixed k, more columns are involved in
the computation when more cores are used, leading to more memory consumption and competition
for the usage of shared cache. The speedup Sqcore is plotted in Figure 3(b), where T1core is the time
on a single core. The ideal linear speedups are achieved on 5 cores for all block sizes $k$. On 10 cores, while small and medium column block sizes can maintain the ideal linear speedups, the large column block sizes fail to scale linearly. The failure to achieve a linear speedup propagates to small and medium column block sizes on 20 cores, although their speedups are larger than those of the large column block sizes. As more and more column blocks are participating in the computation, the speed-ups
decrease possibly because of the competition for resources (e.g., L2 cache) in the shared-memory
environment.
Table 1: Comparison of runtime (sec) on real datasets.
Dataset                | sparsity | CLIME-ADMM (1 core) | CLIME-ADMM (8 cores) | DC-QUIC     | Tiger   | flare CLIME
Leukemia (1255 x 72)   | 0.1      | 48.64               | 6.27                 | 93.88       | 34.56   | 142.5
Leukemia (1255 x 72)   | 0.01     | 44.98               | 5.83                 | 21.59       | 17.10   | 87.60
Climate (10512 x 1464) | 0.1      | 4.76 hours          | 0.6 hours            | 10.51 hours | > 1 day | > 1 day
Climate (10512 x 1464) | 0.01     | 4.46 hours          | 0.56 hours           | 2.12 hours  | > 1 day | > 1 day
Table 2: Effect (runtime in sec) of using different numbers of cores in a node with p = 10^6. Using one core per node is the most efficient, as there is no resource sharing with other cores.

node × core | k = 1 | k = 5 | k = 10 | k = 50 | k = 100 | k = 500 | k = 1000
100 × 1     | 0.56  | 1.26  | 2.59   | 6.98   | 13.97   | 62.35   | 136.96
25 × 4      | 1.02  | 2.40  | 3.42   | 8.25   | 16.44   | 84.08   | 180.89
200 × 1     | 0.37  | 0.68  | 1.12   | 3.48   | 6.76    | 33.95   | 70.59
50 × 4      | 0.74  | 1.44  | 2.33   | 4.49   | 8.33    | 48.20   | 103.87
Distributed-memory We estimate a precision matrix with one million dimensions (p = 10^6), which contains one trillion parameters (p^2 = 10^12). The experiments are run on a cluster with 400 computing nodes. We use 1 core per node to avoid the competition for resources that we observed in the shared-memory case. For q cores, we use the process grid 2q × 2 since p ≫ n. The block size p_b × n_b × k_b for matrix multiplication is 10 × 10 × 1 for k ≤ 10 and 10 × 10 × 10 for k > 10. Since the column block CLIME problems are totally independent, we report the speedups on solving a single column block. The speedup S_k^col is plotted in Figure 4(a); the speedups are larger and more stable than in the shared-memory environment. The speedup keeps increasing up to a certain point as the column block size increases. For any column block size, the speedup also increases as the number of cores increases. The speedup S_q^core is plotted in Figure 4(b), where T_1^core is the time on 50 cores. A single column (k = 1) fails to achieve linear speedups when hundreds of cores are used. However, with a column block of size k > 1, the ideal linear speedups are achieved with increasing numbers of cores. Note that due to distributed memory, the larger column block sizes also scale linearly, unlike in the shared-memory setting, where the speedups were limited by resource sharing. As we have seen, the best k depends on the size of the process grid, the block size in matrix multiplication, the cache size, and probably the sparsity pattern of the matrices. In Table 2, we compare the performance of 1 core per node to that of 4 cores per node, which mixes the effects of the shared-memory and distributed-memory architectures. For small column block sizes (k = 1, 5), the use of multiple cores in a node is almost two times slower than the use of a single core in a node. For other column block sizes, it is still 30% slower. Finally, we ran CLIME-ADMM on 400 cores with one core per node and block size k = 500, and the entire computation took about 11 hours.
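The block cyclic distribution used for load balancing (as in ScaLAPACK-style libraries) assigns block (I, J) of the global matrix to process (I mod P_r, J mod P_c) on a P_r × P_c process grid. A minimal sketch of the ownership rule, with illustrative sizes rather than the paper's exact configuration:

```python
def block_cyclic_owner(i, j, pb, nb, Pr, Pc):
    """Return the (row, col) coordinates of the process that owns
    global matrix entry (i, j) under a 2D block cyclic distribution
    with pb x nb blocks on a Pr x Pc process grid."""
    return ((i // pb) % Pr, (j // nb) % Pc)

# Example: 10x10 blocks on a 2x2 grid.  Consecutive block rows/columns
# alternate between grid rows/columns, which balances the load even when
# the work is concentrated in a submatrix.
print(block_cyclic_owner(0, 0, 10, 10, 2, 2),
      block_cyclic_owner(10, 0, 10, 10, 2, 2),
      block_cyclic_owner(20, 0, 10, 10, 2, 2))  # (0, 0) (1, 0) (0, 0)
```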
6 Conclusions
In this paper, we presented a large-scale distributed framework for the estimation of sparse precision matrices using CLIME. Our framework can scale to millions of dimensions and run on hundreds of machines. The framework is based on inexact ADMM, which decomposes the constrained optimization problem into elementary matrix multiplications and elementwise operations. Convergence rates for both the objective and the optimality conditions are established. The proposed framework solves CLIME in column blocks and uses block cyclic distribution to achieve load balancing. We evaluate our algorithm on both shared-memory and distributed-memory architectures. Experimental results show that our algorithm is substantially more scalable than state-of-the-art methods and scales almost linearly with the number of cores. The presented framework can be useful for a variety of other large-scale constrained optimization problems and will be explored in future work.
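The elementwise operations in ℓ1-penalized ADMM solvers of this kind are typically dominated by soft-thresholding, the proximal operator of the ℓ1 norm. A generic sketch of that operator (not the authors' exact update rule):

```python
def soft_threshold(x, tau):
    """Elementwise soft-thresholding, the prox of tau * ||.||_1:
    each entry is shrunk toward zero by tau and clipped at zero."""
    return [max(abs(v) - tau, 0.0) * (1.0 if v > 0 else -1.0) for v in x]

# Entries with magnitude below tau are zeroed; others shrink by tau.
print(soft_threshold([3.0, -0.5, 1.2], 1.0))
```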
Acknowledgments
H. W. and A. B. acknowledge the support of NSF via IIS-0953274, IIS-1029711, IIS-0916750, and IIS-0812183, and the technical support from the University of Minnesota Supercomputing Institute. H. W. acknowledges the support of a DDF (2013-2014) from the University of Minnesota. C.-J. H. and I. S. D. were supported by NSF grants CCF-1320746 and CCF-1117055. C.-J. H. also acknowledges the support of an IBM PhD fellowship. P. R. acknowledges the support of NSF via IIS-1149803 and DMS-1264033, and ARO via W911NF-12-1-0390.
Optimistic Concurrency Control for
Distributed Unsupervised Learning
Xinghao Pan^1, Joseph Gonzalez^1, Stefanie Jegelka^1, Tamara Broderick^{1,2}, Michael I. Jordan^{1,2}
^1 Department of Electrical Engineering and Computer Science, and ^2 Department of Statistics
University of California, Berkeley
Berkeley, CA USA 94720
{xinghao,jegonzal,stefje,tab,jordan}@eecs.berkeley.edu
Abstract
Research on distributed machine learning algorithms has focused primarily on one
of two extremes?algorithms that obey strict concurrency constraints or algorithms
that obey few or no such constraints. We consider an intermediate alternative in
which algorithms optimistically assume that conflicts are unlikely and if conflicts
do arise a conflict-resolution protocol is invoked. We view this ?optimistic concurrency control? paradigm as particularly appropriate for large-scale machine
learning algorithms, particularly in the unsupervised setting. We demonstrate our
approach in three problem areas: clustering, feature learning and online facility location. We evaluate our methods via large-scale experiments in a cluster computing
environment.
1 Introduction
The desire to apply machine learning to increasingly larger datasets has pushed the machine learning
community to address the challenges of distributed algorithm design: partitioning and coordinating
computation across the processing resources. In many cases, when computing statistics of iid data or
transforming features, the computation factors according to the data and coordination is only required
during aggregation. For these embarrassingly parallel tasks, the machine learning community has
embraced the map-reduce paradigm, which provides a template for constructing distributed algorithms
that are fault tolerant, scalable, and easy to study.
However, in pursuit of richer models, we often introduce statistical dependencies that require more
sophisticated algorithms (e.g., collapsed Gibbs sampling or coordinate ascent) which were developed
and studied in the serial setting. Because these algorithms iteratively transform a global state,
parallelization can be challenging and often requires frequent and complex coordination.
Recent efforts to distribute these algorithms can be divided into two primary approaches. The mutual exclusion approach, adopted by [1] and [2], guarantees a serializable execution, preserving the theoretical properties of the serial algorithm, but at the expense of parallelism and costly locking overhead. Alternatively, in the coordination-free approach, proposed by [3] and [4], processors communicate frequently without coordination, minimizing the cost of contention but leading to stochasticity, data corruption, and potentially complex analysis to prove algorithm correctness.
In this paper we explore a third approach, optimistic concurrency control (OCC) [5], which offers the performance gains of the coordination-free approach while at the same time ensuring a serializable execution and preserving the theoretical properties of the serial algorithm. Like the coordination-free approach, OCC exploits the infrequency of data-corrupting operations. However, instead of allowing occasional data corruption, OCC detects data-corrupting operations and applies correcting computation. As a consequence, OCC automatically ensures correctness, and analysis is only necessary to guarantee optimal scaling performance.
We apply OCC to distributed nonparametric unsupervised learning, including but not limited to clustering, and implement distributed versions of the DP-Means [6], BP-Means [7], and online facility location (OFL) algorithms. We demonstrate how to analyze OCC in the context of the
DP-Means algorithm and evaluate the empirical scalability of the OCC approach on all three of the
proposed algorithms. The primary contributions of this paper are:
1. Concurrency control approach to distributing unsupervised learning algorithms.
2. Reinterpretation of online nonparametric clustering in the form of facility location with
approximation guarantees.
3. Analysis of optimistic concurrency control for unsupervised learning.
4. Application to feature modeling and clustering.
2 Optimistic Concurrency Control
Many machine learning algorithms iteratively transform some global state (e.g., model parameters or
variable assignment) giving the illusion of serial dependencies between each operation. However,
due to sparsity, exchangeability, and other symmetries, it is often the case that many, but not all, of
the state-transforming operations can be computed concurrently while still preserving serializability:
the equivalence to some serial execution where individual operations have been reordered.
This opportunity for serializable concurrency forms the foundation of distributed database systems.
For example, two customers may concurrently make purchases exhausting the inventory of unrelated
products, but if they try to purchase the same product then we may need to serialize their purchases
to ensure sufficient inventory. One solution (mutual exclusion) associates locks with each product
type and forces each purchase of the same product to be processed serially. This might work for an
unpopular, rare product, but if we are interested in selling a popular product for which we have a large inventory, the serialization overhead could lead to unnecessarily slow response times. To address this problem, the database community has adopted optimistic concurrency control (OCC) [5], in which the system tries to satisfy customers' requests without locking and corrects transactions that could
lead to negative inventory (e.g., by forcing the customer to checkout again).
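The inventory example can be made concrete with a small sketch of the OCC pattern: purchases execute optimistically without locks, and a serial validation step rejects any transaction that would drive inventory negative (illustrative toy code, not a database implementation):

```python
class Inventory:
    def __init__(self, stock):
        self.stock = stock  # product -> remaining count

    def validate_and_commit(self, purchases):
        """Serially validate optimistically executed purchases; reject
        those that would make inventory negative (the rare conflict)."""
        accepted, rejected = [], []
        for product, qty in purchases:
            if self.stock.get(product, 0) >= qty:
                self.stock[product] -= qty
                accepted.append((product, qty))
            else:
                rejected.append((product, qty))  # e.g., force re-checkout
        return accepted, rejected

inv = Inventory({"popular": 100, "rare": 1})
# Two customers concurrently buy the rare product without locking;
# only the first survives validation.
ok, bad = inv.validate_and_commit([("popular", 2), ("rare", 1), ("rare", 1)])
print(ok, bad)
```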
Optimistic concurrency control exploits situations where most operations can execute concurrently
without conflicting or violating serialization invariants. For example, given sufficient inventory the
order in which customers are satisfied is immaterial and concurrent operations can be executed
serially to yield the same final result. However, in the rare event that inventory is nearly depleted
two concurrent purchases may not be serializable since the inventory can never be negative. By
shifting the cost of concurrency control to rare events we can admit more costly concurrency control
mechanisms (e.g., re-computation) in exchange for an efficient, simple, coordination-free execution
for the majority of the events.
Formally, to apply OCC we must define a set of transactions (i.e., operations or collections of
operations), a mechanism to detect when a transaction violates serialization invariants (i.e., cannot
be executed concurrently), and a method to correct (e.g., rollback) transactions that violate the
serialization invariants. Optimistic concurrency control is most effective when the cost of validating
concurrent transactions is small and conflicts occur infrequently.
Machine learning algorithms are ideal for optimistic concurrency control. The conditional independence structure and sparsity in our models and data often lead to sparse parameter updates, substantially reducing the chance of conflicts. Similarly, symmetry in our models often provides the
flexibility to reorder serial operations while preserving algorithm invariants. Because the models
encode the dependency structure, we can easily detect when an operation violates serial invariants
and correct by rejecting the change and rerunning the computation. Alternatively, we can exploit the
semantics of the operations to resolve the conflict by accepting a modified update. As a consequence
OCC allows us to easily construct provably correct and efficient distributed algorithms without the
need to develop new theoretical tools to analyze complex non-deterministic distributed behavior.
2.1 The OCC Pattern for Machine Learning
Optimistic concurrency control can be distilled to a simple pattern (meta-algorithm) for the design
and implementation of distributed machine learning systems. We begin by evenly partitioning N
data points (and the corresponding computation) across the P available processors. Each processor
maintains a replicated view of the global state and serially applies the learning algorithm as a sequence
of operations on its assigned data and the global state. If an operation mutates the global state in a
way that preserves the serialization invariants then the operation is accepted locally and its effect on
the global state, if any, is eventually replicated to other processors.
However, if an operation could potentially conflict with operations on other processors then it is
sent to a unique serializing processor where it is rejected or corrected and the resulting global state
change is eventually replicated to the rest of the processors. Meanwhile the originating processor
either tentatively accepts the state change (if a rollback operator is defined) or proceeds as though the
operation has been deferred to some point in the future.
While it is possible to execute this pattern asynchronously with minimal coordination, for simplicity
we adopt the bulk-synchronous model of [8] and divide the computation into epochs. Within an
epoch t, b data points B(p, t) are evenly assigned to each of the P processors. Any state changes
or serialization operations are transmitted at the end of the epoch and processed before the next
epoch. While potentially slower than an asynchronous execution, the bulk-synchronous execution is
deterministic and can be easily expressed using existing systems like Hadoop or Spark [9].
3 OCC for Unsupervised Learning
Much of the existing literature on distributed machine learning algorithms has focused on classification
and regression problems, where the underlying model is continuous. In this paper we apply the OCC
pattern to machine learning problems that have a more discrete, combinatorial flavor, in particular
unsupervised clustering and latent feature learning problems. These problems exhibit symmetry
via their invariance to both data permutation and cluster or feature permutation. Together with the
sparsity of interacting operations in their existing serial algorithms, these problems offer a unique
opportunity to develop OCC algorithms.
The K-means algorithm provides a paradigm example; here the inferential goal is to partition the
data. Rather than focusing solely on K-means, however, we have been inspired by recent work
in which a general family of K-means-like algorithms have been obtained by taking Bayesian
nonparametric (BNP) models based on combinatorial stochastic processes such as the Dirichlet
process, the beta process, and hierarchical versions of these processes, and subjecting them to small-variance asymptotics, where the posterior probability under the BNP model is transformed into a
cost function that can be optimized [7]. The algorithms considered to date in this literature have
been developed and analyzed in the serial setting; our goal is to explore distributed algorithms for
optimizing these cost functions that preserve the structure and analysis of their serial counterparts.
3.1 OCC DP-Means
We first consider the DP-means algorithm (Alg. 1) introduced by [6]. Like the K-means algorithm,
DP-Means alternates between updating the cluster assignment z_i for each point x_i and recomputing the centroids C = {μ_k}_{k=1}^K associated with the clusters. However, DP-Means differs in that the number of clusters is not fixed a priori. Instead, if the distance from a given data point to all existing cluster centroids is greater than a parameter λ, then a new cluster is created. While the second phase
is trivially parallel, the process of introducing clusters in the first phase is inherently serial. However,
clusters tend to be introduced infrequently, and thus DP-Means provides an opportunity for OCC.
In Alg. 3 we present an OCC parallelization of the DP-Means algorithm in which each iteration
of the serial DP-Means algorithm is divided into N/(P b) bulk-synchronous epochs. The data is
evenly partitioned {x_i}_{i∈B(p,t)} across processor-epochs into blocks of size b = |B(p, t)|. During each epoch t, each processor p evaluates the cluster membership of its assigned data {x_i}_{i∈B(p,t)} using the cluster centers C from the previous epoch and optimistically proposes a new set of cluster centers Ĉ. At the end of each epoch the proposed cluster centers Ĉ are serially validated using Alg. 2.
Algorithm 1: Serial DP-means
  Input: data {x_i}_{i=1}^N, threshold λ
  C ← ∅
  while not converged do
    for i = 1 to N do
      μ* ← argmin_{μ∈C} ‖x_i − μ‖
      if ‖x_i − μ*‖ > λ then
        z_i ← x_i; C ← C ∪ {x_i}        // New cluster
      else z_i ← μ*                      // Use nearest
    for μ ∈ C do                         // Recompute centers
      μ ← Mean({x_i | z_i = μ})
  Output: cluster centers C

Algorithm 2: DPValidate
  Input: set of proposed cluster centers Ĉ
  C ← ∅
  for x ∈ Ĉ do
    μ* ← argmin_{μ∈C} ‖x − μ‖
    if ‖x − μ*‖ < λ then                 // Reject
      Ref(x) ← μ*                        // Roll back assignments
    else C ← C ∪ {x}
  Output: accepted cluster centers C

Algorithm 3: Parallel DP-means
  Input: data {x_i}_{i=1}^N, threshold λ
  Input: epoch size b and P processors
  Input: partitioning B(p, t) of the data {x_i}_{i∈B(p,t)} to processor-epochs where b = |B(p, t)|
  C ← ∅
  while not converged do
    for epoch t = 1 to N/(Pb) do
      Ĉ ← ∅                              // New candidate centers
      for p ∈ {1, ..., P} do in parallel
        for i ∈ B(p, t) do               // Process local data
          μ* ← argmin_{μ∈C} ‖x_i − μ‖
          if ‖x_i − μ*‖ > λ then         // Optimistic transaction
            z_i ← Ref(x_i); Ĉ ← Ĉ ∪ {x_i}
          else z_i ← μ*                  // Always safe
      C ← C ∪ DPValidate(Ĉ)              // Serially validate clusters
      for μ ∈ C do                       // Recompute centers
        μ ← Mean({x_i | z_i = μ})
  Output: accepted cluster centers C

Figure 1: The Serial DP-Means algorithm and distributed implementation using the OCC pattern.
The validation process accepts cluster centers that are not covered by (i.e., not within λ of) already accepted cluster centers. When a cluster center is rejected, we update its reference to point to the already accepted center, thereby correcting the original point assignment.
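To make the assignment and validation steps of Algs. 1-3 concrete, here is a minimal Python sketch on a low-dimensional toy (illustrative code, not the authors' implementation; one epoch of proposals followed by serial validation):

```python
import math

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def dp_means_assign(points, centers, lam):
    """One assignment pass: points farther than lam from every existing
    center are optimistically proposed as new centers."""
    assignments, proposed = [], []
    for x in points:
        d = min((dist(x, c) for c in centers), default=float("inf"))
        if d > lam:
            proposed.append(x)       # optimistic new cluster
            assignments.append(x)
        else:
            assignments.append(min(centers, key=lambda c: dist(x, c)))
    return assignments, proposed

def dp_validate(proposed, centers, lam):
    """Alg. 2 analogue: serially accept proposals not within lam of an
    already accepted center; the rest would be rolled back."""
    for x in proposed:
        if min((dist(x, c) for c in centers), default=float("inf")) >= lam:
            centers.append(x)
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
_, proposed = dp_means_assign(pts, [], lam=1.0)
centers = dp_validate(proposed, [], lam=1.0)
print(len(centers))  # 2: one center per well-separated group
```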
3.2 OCC Facility Location
The DP-Means objective turns out to be equivalent to the classic Facility Location (FL) objective:
J(C) = Σ_{x∈X} min_{μ∈C} ‖x − μ‖² + λ²|C|,
which selects the set of cluster centers (facilities) μ ∈ C that minimizes the shortest distance ‖x − μ‖² to each point (customer) x as well as the penalized cost λ²|C| of the clusters. However, while DP-Means allows the clusters to be arbitrary points (e.g., C ⊂ R^D), FL constrains the clusters to be points C ⊆ F in a set of candidate locations F. Hence, we obtain a link between combinatorial Bayesian models and FL, allowing us to apply algorithms with known approximation bounds to Bayesian-inspired nonparametric models. As we will see in Section 4, our OCC algorithm provides constant-factor approximations for both FL and DP-means.
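The FL/DP-means objective above can be evaluated directly; a small sketch (helper names are ours):

```python
def fl_objective(points, centers, lam):
    """J(C) = sum_x min_{mu in C} ||x - mu||^2 + lam^2 * |C|."""
    total = lam ** 2 * len(centers)
    for x in points:
        total += min(sum((u - v) ** 2 for u, v in zip(x, c)) for c in centers)
    return total

pts = [(0.0,), (1.0,), (10.0,)]
# Distances: 0.25 + 0.25 + 0; cluster penalty: 2^2 * 2 = 8 -> J = 8.5
print(fl_objective(pts, [(0.5,), (10.0,)], lam=2.0))  # 8.5
```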
Facility location has been studied intensely. We build on the online facility location (OFL) algorithm described by Meyerson [10]. The OFL algorithm processes each data point x serially in a single pass by either adding x to the set of clusters with probability min(1, min_{μ∈C} ‖x − μ‖²/λ²) or assigning x to the nearest existing cluster. Using OCC we are able to construct a distributed OFL algorithm (Alg. 4) which is nearly identical to the OCC DP-Means algorithm (Alg. 3) but which provides strong approximation bounds. The OCC OFL algorithm differs only in that clusters are introduced and validated stochastically: the validation process ensures that new clusters are accepted with probability equal to that under the serial algorithm.
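A sketch of Meyerson's serial single-pass OFL rule as just described (randomized; the fixed seed below is only to make the toy run repeatable):

```python
import random

def online_facility_location(points, lam, rng):
    """Single-pass OFL: open a new facility at x with probability
    min(1, d^2 / lam^2), where d is the distance to the nearest open
    facility; otherwise x is assigned to that facility."""
    centers = []
    for x in points:
        d2 = min((sum((u - v) ** 2 for u, v in zip(x, c)) for c in centers),
                 default=float("inf"))
        if rng.random() < min(1.0, d2 / lam ** 2):
            centers.append(x)
    return centers

rng = random.Random(0)
centers = online_facility_location([(0.0,), (0.1,), (9.0,), (9.2,)],
                                   lam=1.0, rng=rng)
# Points far from every open facility are opened with probability 1,
# so the two well-separated groups each contribute at least one center.
print(centers)
```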
3.3 OCC BP-Means
BP-means is an algorithm for learning collections of latent binary features, providing a way to define
groupings of data points that need not be mutually exclusive or exhaustive like clusters.
Algorithm 4: Parallel OFL
Input: Same as DP-Means
for epoch t = 1 to N/(P b) do
    C̃ ← ∅
    for p ∈ {1, ..., P} do in parallel
        for i ∈ B(p, t) do
            d ← min_{μ∈C} ‖x_i − μ‖
            with probability min(d², λ²)/λ²:
                C̃ ← C̃ ∪ {(x_i, d)}
    C ← C ∪ OFLValidate(C̃)
Output: Accepted cluster centers C

Algorithm 5: OFLValidate
Input: Set of proposed cluster centers C̃
for (x, d) ∈ C̃ do
    d̂ ← min_{μ∈C} ‖x − μ‖
    with probability min(d̂², d²)/d²:
        C ← C ∪ {x}    // Accept
Output: Accepted cluster centers C

Figure 2: The OCC algorithm for Online Facility Location (OFL).
As with serial DP-means, there are two phases in serial BP-means (Alg. 6). In the first phase,
each data point x_i is labeled with binary assignments from a collection of features (z_ik = 0 if x_i
doesn't belong to feature k; otherwise z_ik = 1) to construct a representation x_i ≈ Σ_k z_ik f_k. In the
second phase, parameter values (the feature means f_k ∈ C̄) are updated based on the assignments.
The first step also includes the possibility of introducing an additional feature. While the second
phase is trivially parallel, the inherently serial nature of the first phase combined with the infrequent
introduction of new features points to the usefulness of OCC in this domain.
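The first (assignment) phase can be sketched in a few lines. The snippet below is an illustrative greedy variant chosen for brevity, not necessarily the exact coordinate updates BP-means uses; `assign_features` and its single-pass toggle rule are assumptions of this sketch.

```python
import numpy as np

def assign_features(x, F):
    """Greedy sketch of the assignment phase: switch feature k on (z_k = 1)
    if doing so shrinks the residual ||x - sum_k z_k f_k||."""
    z = np.zeros(len(F), dtype=int)
    r = np.asarray(x, dtype=float).copy()  # current residual
    for k, f in enumerate(F):
        if np.linalg.norm(r - f) < np.linalg.norm(r):
            z[k] = 1
            r = r - f
    return z, r
```

A large residual norm after this pass is exactly the signal that a new feature should be proposed.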
The OCC parallelization for BP-means follows the same basic structure as OCC DP-means. Each
transaction operates on a data point x_i in two phases. In the first, analysis phase, the optimal
representation Σ_k z_ik f_k is found. If x_i is not well represented (i.e., ‖x_i − Σ_k z_ik f_k‖ > ε), the
difference is proposed as a new feature in the second validation phase. At the end of epoch t,
the proposed features {f_i^new} are serially validated to obtain a set of accepted features C̄. For
each proposed feature f_i^new, the validation process first finds the optimal representation
f_i^new ≈ Σ_{f_k∈C̄} z_ik f_k using newly accepted features. If f_i^new is not well represented, the
difference f_i^new − Σ_{f_k∈C̄} z_ik f_k is added to C̄ and accepted as a new feature.
Finally, to update the feature means, let F be the K-row matrix of feature means. The feature-means
update F ← (Z^T Z)^{-1} Z^T X can be evaluated as a single transaction by computing the sums
Z^T Z = Σ_i z_i z_i^T (where z_i is a K × 1 column vector, so z_i z_i^T is a K × K matrix) and
Z^T X = Σ_i z_i x_i^T in parallel.
We present the pseudocode for the OCC parallelization of BP-means in Appendix A.
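The feature-means update decomposes into per-point sums, which is what makes it a single cheap transaction: each term z_i z_i^T and z_i x_i^T depends on one data point only, so the two sums could be accumulated independently on each processor and combined at the end. A minimal numpy sketch (hypothetical, not the paper's code):

```python
import numpy as np

def update_feature_means(Z, X):
    """F = (Z^T Z)^{-1} Z^T X via sums of per-point outer products."""
    Z = np.asarray(Z, dtype=float)
    X = np.asarray(X, dtype=float)
    ZtZ = sum(np.outer(z, z) for z in Z)            # K x K
    ZtX = sum(np.outer(z, x) for z, x in zip(Z, X))  # K x D
    return np.linalg.solve(ZtZ, ZtX)
```

This is just the least-squares solution for the feature means given fixed binary assignments Z.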
4 Analysis of Correctness and Scalability
In contrast to the coordination-free pattern in which scalability is trivial and correctness often requires
strong assumptions or holds only in expectation, the OCC pattern leads to simple proofs of correctness
and challenging scalability analysis. However, in many cases it is preferable to have algorithms that
are correct and probably fast rather than fast and possibly correct. We first establish serializability:
Theorem 4.1 (Serializability). The distributed DP-means, OFL, and BP-means algorithms are
serially equivalent to DP-means, OFL and BP-means, respectively.
The proof (Appendix B) of Theorem 4.1 is relatively straightforward and is obtained by constructing
a permutation function that describes an equivalent serial execution for each distributed execution.
The proof can easily be extended to many other machine learning algorithms.
Serializability allows us to easily extend important theoretical properties of the serial algorithm to the
distributed setting. For example, by invoking serializability, we can establish the following result for
the OCC version of the online facility location (OFL) algorithm:
Theorem 4.2. If the data is randomly ordered, then the OCC OFL algorithm provides a constant-factor approximation for the DP-means objective. If the data is adversarially ordered, then OCC
OFL provides a log-factor approximation to the DP-means objective.
The proof (Appendix B) of Theorem 4.2 is first derived in the serial setting then extended to
the distributed setting through serializability. In contrast to divide-and-conquer schemes, whose
approximation bounds commonly depend multiplicatively on the number of levels [11], Theorem 4.2 is
unaffected by distributed processing and has no communication or coarsening tradeoffs. Furthermore,
to retain the same factors as a batch algorithm on the full data, divide-and-conquer schemes need a
large number of preliminary centers at lower levels [11, 12]. In that case, the communication cost
can be high, since all proposed clusters are sent at the same time, as opposed to the OCC approach.
We address the communication overhead (the number of rejections) for our scheme next.
Scalability The scalability of the OCC algorithms depends on the number of transactions that
are rejected during validation (i.e., the rejection rate). While a general scalability analysis can be
challenging, it is often possible to gain some insight into the asymptotic dependencies by making
simplifying assumptions. In contrast to the coordination-free approach, we can still safely apply OCC
algorithms in the absence of a scalability analysis or when simplifying assumptions do not hold.
To illustrate the techniques employed in OCC scalability analysis we study the DP-Means algorithm,
whose scalability limiting factor is determined by the number of points that must be serially validated.
We show that the communication cost only depends on the number of clusters and processing
resources and does not directly depend on the number of data points. The proof is in Appendix C.
Theorem 4.3 (DP-Means Scalability). Assume N data points are generated iid to form a random
number (KN) of well-spaced clusters of diameter γ: γ is an upper bound on the distances within
clusters and a lower bound on the distance between clusters. Then the expected number of serially
validated points is bounded above by P b + E[KN] for P processors and b points per epoch.
Under the separation assumptions of the theorem, the number of clusters present in N data points,
KN , is exactly equal to the number of clusters found by DP-Means in N data points; call this latter
quantity kN . The experimental results in Figure 3 suggest that the bound of P b + kN may hold
more generally beyond the assumptions above. Since the master must process at least kN points, the
overhead caused by rejections is P b and independent of N .
5 Evaluation
For our experiments, we generated synthetic data for clustering (DP-means and OFL) and feature
modeling (BP-means). The cluster and feature proportions were generated nonparametrically as
described below. All data points were generated in R^16 space. We fixed the threshold parameter λ = 1.
Clustering: The cluster proportions and indicators were generated simultaneously using the stick-breaking procedure for Dirichlet processes: "sticks" are "broken" on-the-fly to generate new clusters
as necessary. For our experiments, we used a fixed concentration parameter α = 1. Cluster means
were sampled μ_k ∼ N(0, I_16), and data points were generated as x_i ∼ N(μ_{z_i}, (1/4) I_16).
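The clustering data generation described above can be sketched as follows. This is a hypothetical numpy reimplementation of on-the-fly stick-breaking (function and variable names are choices made here), using concentration alpha = 1, N(0, I) cluster means, and noise with covariance (1/4) I:

```python
import numpy as np

def generate_dp_clusters(n, alpha=1.0, dim=16, rng=None):
    """Draw n points from a Dirichlet-process mixture via stick-breaking,
    creating new clusters lazily as they are first needed."""
    rng = rng or np.random.default_rng(0)
    sticks, means, remaining = [], [], 1.0  # cluster weights, means, unbroken mass
    X, z = [], []
    for _ in range(n):
        u, acc, k = rng.random(), 0.0, 0
        while True:
            if k == len(sticks):  # fell off the end: break a new stick
                b = rng.beta(1.0, alpha)
                sticks.append(remaining * b)
                remaining *= 1.0 - b
                means.append(rng.normal(0.0, 1.0, size=dim))
            acc += sticks[k]
            if u < acc:
                break
            k += 1
        z.append(k)
        X.append(rng.normal(means[k], 0.5))  # std 1/2, i.e. covariance (1/4) I
    return np.array(X), np.array(z)
```

Because sticks are broken only when a draw lands beyond the existing mass, the number of clusters grows slowly (roughly logarithmically) with n, matching the motivation for validating new clusters infrequently.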
Feature modeling: We use the stick-breaking procedure of [13] to generate feature weights. Unlike with Dirichlet processes, we are unable to perform stick-breaking on-the-fly with Beta processes. Instead, we generate enough features so that with high probability (> 0.9999) the remaining non-generated features will have negligible weights (< 0.0001). The concentration parameter was
also fixed at α = 1. We generated feature means f_k ∼ N(0, I_16) and data points
x_i ∼ N(Σ_k z_ik f_k, (1/4) I_16).
5.1 Simulated experiments
To test the efficiency of our algorithms, we simulated the first iteration (one complete pass over all
the data, where most clusters / features are created and thus greatest coordination is needed) of each
algorithm in MATLAB. The number of data points, N , was varied from 256 to 2560 in intervals of
256. We also varied P b, the number of data points processed in one epoch, from 16 to 256 in powers
of 2. For each value of N and P b, we empirically measured kN , the number of accepted clusters /
(a) OCC DP-means    (b) OCC OFL    (c) OCC BP-means
Figure 3: Simulated distributed DP-means, OFL and BP-means: the expected number of data points proposed but
not accepted as new clusters / features is independent of the size of the data set.
features, and MN, the number of proposed clusters / features. This was repeated 400 times to obtain
the empirical average Ê[MN − kN] of the number of rejections.
For OCC DP-means, we observe that Ê[MN − kN] is bounded above by P b (Fig. 3a), and that this
bound is independent of the data set size, even when the assumptions of Thm 4.3 are violated. (We
also verified that similar empirical results are obtained when the assumptions are not violated; see
Appendix C.) The same behavior is observed for the other two OCC algorithms (Fig. 3b and Fig. 3c).
5.2 Distributed implementation and experiments
We also implemented¹ the distributed algorithms in Spark [9], an open-source cluster computing
system. The DP-means and BP-means algorithms were initialized by pre-processing a small number
of data points (1/16 of the first P b points); this reduces the number of data points sent to the master
on the first epoch, while still preserving serializability of the algorithms. Our Spark implementations
were tested on Amazon EC2 by processing a fixed data set on 1, 2, 4, 8 m2.4xlarge instances. Ideally,
to process the same amount of data, an algorithm and implementation with perfect scaling would
take half the runtime on 8 machines as it would on 4, and so on. The plots in Figure 4 show this
comparison by dividing all runtimes by the runtime on one machine.
DP-means: We ran the distributed DP-means algorithm on 2^27 ≈ 134M data points, using λ = 2.
The block size b was chosen to keep P b = 2^23 ≈ 8M constant. The algorithm was run for 5 iterations
(complete pass over all data in 16 epochs). We were able to get perfect scaling (Figure 4a) in all but
the first iteration, when the master has to perform the most synchronization of proposed centers.
OFL: The distributed OFL algorithm was run on 2^20 ≈ 1M data points, using λ = 2. Unlike
DP-means and BP-means, OFL is a single-pass algorithm and we did not perform any initialization
clustering. The block size b was chosen such that P b = 2^16 ≈ 66K data points are processed each
epoch, which gives us 16 epochs. Figure 4b shows that we get no scaling in the first epoch, where all
P b data points are sent to the master. Scaling improves in the later epochs, as the master's workload
decreases with fewer proposals but the workers' workload increases with more centers.
BP-means: Distributed BP-means was run on 2^23 ≈ 8M data points, with ε = 1; the block size was
chosen such that P b = 2^19 ≈ 0.5M is constant. Five iterations were run, with 16 epochs per iteration.
As with DP-means, we were able to achieve nearly perfect scaling; see Figure 4c.
6 Related work
Others have proposed alternatives to mutual exclusion and coordination-free parallelism for machine
learning algorithm design. [14] proposed transforming the underlying model to expose additional
parallelism while preserving the marginal posterior. However, such constructions can be challenging
or infeasible and many hinder mixing or convergence. Likewise, [15] proposed a reparameterization of
the underlying model to expose additional parallelism through conditional independence. Additional
¹ Code will be made available at our project page https://amplab.cs.berkeley.edu/projects/ccml/.
(a) OCC DP-means    (b) OCC OFL    (c) OCC BP-means
Figure 4: Normalized runtime for distributed algorithms. The runtime of each iteration / epoch is divided by that
using 1 machine (P = 8). Ideally, the runtime with 2, 4, 8 machines (P = 16, 32, 64) should be respectively
1/2, 1/4, 1/8 of the runtime using 1 machine. OCC DP-means and BP-means obtain nearly perfect scaling for all
iterations. OCC OFL rejects a lot initially, but quickly gets better in later epochs.
work similar in spirit to ours using OCC-like techniques includes [16], who proposed an approximate
parallel sampling algorithm for the IBP which is made exact by introducing an additional Metropolis-Hastings step, and [17], who proposed a look-ahead strategy in which future samples are computed
optimistically based on the likely outcomes of current samples.
There has been substantial work on scalable clustering algorithms [18, 19, 20]. Several authors
[11, 21, 22, 12] have proposed streaming approximation algorithms that rely on hierarchical divide-and-conquer schemes. The approximation factors in these algorithms are multiplicative in the
hierarchy and demand a careful tradeoff between communication and approximation quality which is
obviated in the OCC framework. Several methods [12, 25, 21] first collect and then re-cluster a set
of centers, and therefore need to communicate all intermediate centers. Our approach avoids these
stages, since a center causes no rejections in the epochs after it is established: the rejection rate does
not grow with K. Finally, the OCC framework can easily integrate and exploit many of the ideas in
the cited works.
7 Discussion
In this paper we have shown how optimistic concurrency control can be usefully employed in the
design of distributed machine learning algorithms. As opposed to previous approaches, this preserves
correctness, in most cases at a small cost. We established the equivalence of our distributed OCC DP-means, OFL and BP-means algorithms to their serial counterparts, thus preserving their theoretical
properties. In particular, the strong approximation guarantees of serial OFL translate immediately to
the distributed algorithm. Our theoretical analysis ensures OCC DP-means achieves high parallelism
without sacrificing correctness. We implemented and evaluated all three OCC algorithms on a
distributed computing platform and demonstrate strong scalability in practice.
We believe that there is much more to do in this vein. Indeed, machine learning algorithms have many
properties that distinguish them from classical database operations and may allow going beyond
the classic formulation of OCC. In particular we may be able to partially or probabilistically accept
non-serializable operations in a way that preserves underlying algorithm invariants. Laws of large
numbers and concentration theorems may provide tools for designing such operations. Moreover, the
conflict detection mechanism can be treated as a control knob, allowing us to softly switch between
stable, theoretically sound algorithms and potentially faster coordination-free algorithms.
Acknowledgments
This research is supported in part by NSF CISE Expeditions award CCF-1139158 and DARPA XData Award
FA8750-12-2-0331, and gifts from Amazon Web Services, Google, SAP, Blue Goji, Cisco, Clearstory Data,
Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Intel, Microsoft, NetApp, Oracle, Samsung,
Splunk, VMware and Yahoo!. This material is also based upon work supported in part by the Office of Naval
Research under contract/grant number N00014-11-1-0688. X. Pan's work is also supported in part by a DSO
National Laboratories Postgraduate Scholarship. T. Broderick's work is supported by a Berkeley Fellowship.
References
[1] J. Gonzalez, Y. Low, A. Gretton, and C. Guestrin. Parallel Gibbs sampling: From colored fields to thin
junction trees. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics
(AISTATS), pages 324–332, 2011.
[2] Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, and J. M. Hellerstein.
Distributed GraphLab: A framework for machine learning and data mining in the cloud. In Proceedings of
the 38th International Conference on Very Large Data Bases (VLDB), Istanbul, 2012.
[3] Benjamin Recht, Christopher Re, Stephen J. Wright, and Feng Niu. Hogwild: A lock-free approach to
parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems (NIPS)
24, pages 693–701, Granada, 2011.
[4] Amr Ahmed, Mohamed Aly, Joseph Gonzalez, Shravan Narayanamurthy, and Alexander J. Smola. Scalable
inference in latent variable models. In Proceedings of the 5th ACM International Conference on Web
Search and Data Mining (WSDM), 2012.
[5] Hsiang-Tsung Kung and John T Robinson. On optimistic methods for concurrency control. ACM
Transactions on Database Systems (TODS), 6(2):213–226, 1981.
[6] Brian Kulis and Michael I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In
Proceedings of 29th International Conference on Machine Learning (ICML), Edinburgh, 2012.
[7] Tamara Broderick, Brian Kulis, and Michael I. Jordan. MAD-bayes: MAP-based asymptotic derivations
from Bayes. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
[8] Leslie G. Valiant. A bridging model for parallel computation. Communications of the ACM, 33(8):103–111,
1990.
[9] Matei Zaharia, Mosharaf Chowdhury, Michael J Franklin, Scott Shenker, and Ion Stoica. Spark: Cluster
computing with working sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud
Computing, 2010.
[10] A. Meyerson. Online facility location. In Proceedings of the 42nd Annual Symposium on Foundations of
Computer Science (FOCS), Las Vegas, 2001.
[11] A. Meyerson, N. Mishra, R. Motwani, and L. O'Callaghan. Clustering data streams: Theory and practice.
IEEE Transactions on Knowledge and Data Engineering, 15(3):515–528, 2003.
[12] N. Ailon, R. Jaiswal, and C. Monteleoni. Streaming k-means approximation. In Advances in Neural
Information Processing Systems (NIPS) 22, Vancouver, 2009.
[13] John Paisley, David Blei, and Michael I Jordan. Stick-breaking Beta processes and the Poisson process. In
Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.
[14] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed inference for Latent Dirichlet Allocation.
In Advances in Neural Information Processing Systems (NIPS) 20, Vancouver, 2007.
[15] D. Lovell, J. Malmaud, R. P. Adams, and V. K. Mansinghka. ClusterCluster: Parallel Markov chain Monte
Carlo for Dirichlet process mixtures. ArXiv e-prints, April 2013.
[16] F. Doshi-Velez, D. Knowles, S. Mohamed, and Z. Ghahramani. Large scale nonparametric Bayesian
inference: Data parallelisation in the Indian Buffet process. In Advances in Neural Information Processing
Systems (NIPS) 22, Vancouver, 2009.
[17] Tianbing Xu and Alexander Ihler. Multicore Gibbs sampling in dense, unstructured graphs. In Proceedings
of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS). 2011.
[18] I. Dhillon and D.S. Modha. A data-clustering algorithm on distributed memory multiprocessors. In
Workshop on Large-Scale Parallel KDD Systems, 2000.
[19] A. Das, M. Datar, A. Garg, and S. Ragarajam. Google news personalization: Scalable online collaborative
filtering. In Proceedings of the 16th World Wide Web Conference, Banff, 2007.
[20] A. Ene, S. Im, and B. Moseley. Fast clustering using MapReduce. In Proceedings of the 17th ACM
SIGKDD Conference on Knowledge Discovery and Data Mining, San Diego, 2011.
[21] M. Shindler, A. Wong, and A. Meyerson. Fast and accurate k-means for large datasets. In Advances in
Neural Information Processing Systems (NIPS) 24, Granada, 2011.
[22] Moses Charikar, Liadan O'Callaghan, and Rina Panigrahy. Better streaming algorithms for clustering
problems. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing (STOC), 2003.
[23] Mihai Bădoiu, Sariel Har-Peled, and Piotr Indyk. Approximate clustering via core-sets. In Proceedings of
the 34th Annual ACM Symposium on Theory of Computing (STOC), 2002.
[24] D. Feldman, A. Krause, and M. Faulkner. Scalable training of mixture models via coresets. In Advances in
Neural Information Processing Systems (NIPS) 24, Granada, 2011.
[25] B. Bahmani, B. Moseley, A. Vattani, R. Kumar, and S. Vassilvitskii. Scalable kmeans++. In Proceedings
of the 38th International Conference on Very Large Data Bases (VLDB), Istanbul, 2012.
Distributed Submodular Maximization:
Identifying Representative Elements in Massive Data
Baharan Mirzasoleiman
ETH Zurich
Amin Karbasi
ETH Zurich
Rik Sarkar
University of Edinburgh
Andreas Krause
ETH Zurich
Abstract
Many large-scale machine learning problems (such as clustering, non-parametric
learning, kernel machines, etc.) require selecting, out of a massive data set, a
manageable yet representative subset. Such problems can often be reduced to
maximizing a submodular set function subject to cardinality constraints. Classical
approaches require centralized access to the full data set; but for truly large-scale
problems, rendering the data centrally is often impractical. In this paper, we consider the problem of submodular function maximization in a distributed fashion.
We develop a simple, two-stage protocol, GreeDi, that is easily implemented using MapReduce-style computations. We theoretically analyze our approach, and
show that, under certain natural conditions, performance close to the (impractical)
centralized approach can be achieved. In our extensive experiments, we demonstrate the effectiveness of our approach on several applications, including sparse
Gaussian process inference and exemplar-based clustering, on tens of millions of
data points using Hadoop.
1 Introduction
Numerous machine learning algorithms require selecting representative subsets of manageable size
out of large data sets. Applications range from exemplar-based clustering [1], to active set selection
for large-scale kernel machines [2], to corpus subset selection for the purpose of training complex
prediction models [3]. Many such problems can be reduced to the problem of maximizing a submodular set function subject to cardinality constraints [4, 5].
Submodularity is a property of set functions with deep theoretical and practical consequences. Submodular maximization generalizes many well-known problems, e.g., maximum weighted matching, max coverage, and finds numerous applications in machine learning and social networks, such
as influence maximization [6], information gathering [7], document summarization [3] and active
learning [8, 9]. A seminal result of Nemhauser et al. [10] states that a simple greedy algorithm produces solutions competitive with the optimal (intractable) solution. In fact, assuming nothing but
submodularity, no efficient algorithm produces better solutions in general [11, 12].
Data volumes are increasing faster than the ability of individual computers to process them. Distributed and parallel processing is therefore necessary to keep up with modern massive datasets.
The greedy algorithms that work well for centralized submodular optimization, however, are unfortunately sequential in nature; therefore they are poorly suited for parallel architectures. This
mismatch makes it inefficient to apply classical algorithms directly to distributed setups.
In this paper, we develop a simple, parallel protocol called GreeDi for distributed submodular
maximization. It requires minimal communication, and can be easily implemented in MapReduce-style parallel computation models [13]. We theoretically characterize its performance, and show that
under some natural conditions, for large data sets the quality of the obtained solution is competitive
with the best centralized solution. Our experimental results demonstrate the effectiveness of our
approach on a variety of submodular maximization problems. We show that for problems such as
exemplar-based clustering and active set selection, our approach leads to parallel solutions that are
very competitive with those obtained via centralized methods (98% in exemplar based clustering
and 97% in active set selection). We implement our approach in Hadoop, and show how it enables
sparse Gaussian process inference and exemplar-based clustering on data sets containing tens of
millions of points.
2 Background and Related Work
Due to the rapid increase in data set sizes, and the relatively slow advances in sequential processing
capabilities of modern CPUs, parallel computing paradigms have received much interest. Inhabiting
a sweet spot of resiliency, expressivity and programming ease, the MapReduce style computing
model [13] has emerged as prominent foundation for large scale machine learning and data mining
algorithms [14, 15]. MapReduce works by distributing the data to independent machines, where
it is processed in parallel by map tasks that produce key-value pairs. The output is shuffled, and
combined by reduce tasks. Hereby, each reduce task processes inputs that share the same key. Their
output either comprises the ultimate result, or forms the input to another MapReduce computation.
The problem of centralized maximization of submodular functions has received much interest, starting with the seminal work of [10]. Recent work has focused on providing approximation guarantees
for more complex constraints. See [5] for a recent survey. The work in [16] considers an algorithm
for online distributed submodular maximization with an application to sensor selection. However,
their approach requires k stages of communication, which is unrealistic for large k in a MapReduce
style model. The authors in [4] consider the problem of submodular maximization in a streaming
model; however, their approach is not applicable to the general distributed setting. There has also
been new improvements in the running time of the greedy solution for solving SET-COVER when
the data is large and disk resident [17]. However, this approach is not parallelizable by nature.
Recently, specific instances of distributed submodular maximization have been studied. Such scenarios often occur in large-scale graph mining problems where the data itself is too large to be
stored on one machine. Chierichetti et al. [18] address the MAX-COVER problem and provide a
(1 − 1/e − ε) approximation to the centralized algorithm, however at the cost of passing over the data
set many times. Their result is further improved by Blelloch et al. [19]. Lattanzi et al. [20] address
more general graph problems by introducing the idea of filtering, namely, reducing the size of the
input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on
a single machine. This idea is, in spirit, similar to our distributed method GreeDi. In contrast, we
provide a more general framework, and analyze in which settings performance competitive with the
centralized setting can be obtained.
3 The Distributed Submodular Maximization Problem
We consider the problem of selecting subsets out of a large data set, indexed by V (called ground
set). Our goal is to maximize a non-negative set function f : 2^V → ℝ₊, where, for S ⊆ V, f(S)
quantifies the utility of set S, capturing, e.g., how well S represents V according to some objective.
We will discuss concrete instances of functions f in Section 3.1. A set function f is naturally
associated with a discrete derivative

    Δf(e|S) := f(S ∪ {e}) − f(S),    (1)

where S ⊆ V and e ∈ V, which quantifies the increase in utility obtained when adding e to set S. f
is called monotone iff for all e and S it holds that Δf(e|S) ≥ 0. Further, f is submodular iff for all
A ⊆ B ⊆ V and e ∈ V \ B the following diminishing returns condition holds:

    Δf(e|A) ≥ Δf(e|B).    (2)
Throughout this paper, we focus on such monotone submodular functions. For now, we adopt the
common assumption that f is given in terms of a value oracle (a black box) that computes f(S) for
any S ⊆ V. In Section 4.5, we will discuss the setting where f(S) itself depends on the entire data
set V, and not just the selected subset S. Submodular functions contain a large class of functions
that naturally arise in machine learning applications (c.f., [5, 4]). The simplest example of such
functions are modular functions, for which the inequality (2) holds with equality.

The focus of this paper is on maximizing a monotone submodular function (subject to some constraint) in a distributed manner. Arguably, the simplest form of constraint is a cardinality constraint. More precisely, we are interested in the following optimization problem:

    max_{S ⊆ V} f(S)  s.t.  |S| ≤ k.    (3)
We will denote by A^c[k] the subset of size at most k that achieves the above maximization, i.e.,
the best centralized solution. Unfortunately, problem (3) is NP-hard for many classes of submodular functions [12]. However, a seminal result by Nemhauser et al. [10] shows that a simple
greedy algorithm provides a (1 − 1/e) approximation to (3). This greedy algorithm starts with
the empty set S_0, and at each iteration i, it chooses an element e ∈ V that maximizes (1), i.e.,
S_i = S_{i−1} ∪ {arg max_{e ∈ V} Δf(e|S_{i−1})}. Let A^{gc}[k] denote this greedy-centralized solution of size
at most k. For several classes of monotone submodular functions, it is known that (1 − 1/e) is the
best approximation guarantee that one can hope for [11, 12, 21]. Moreover, the greedy algorithm
can be accelerated using lazy evaluations [22].
In many machine learning applications where the ground set |V | is large (e.g., cannot be stored
on a single computer), running a standard greedy algorithm or its variants (e.g., lazy evaluation)
in a centralized manner is infeasible. Hence, in those applications we seek a distributed solution,
e.g., one that can be implemented using MapReduce-style computations (see Section 5). From the
algorithmic point of view, however, the above greedy method is in general difficult to parallelize,
since at each step, only the object with the highest marginal gain is chosen and every subsequent
step depends on the preceding ones. More precisely, the problem we are facing in this paper is the
following. Let the ground set V be partitioned into V_1, V_2, . . . , V_m, i.e., V = V_1 ∪ V_2 ∪ · · · ∪ V_m and
V_i ∩ V_j = ∅ for i ≠ j. We can think of V_i as a subset of elements (e.g., images) on machine i. The
questions we are trying to answer in this paper are: how to distribute V among m machines, which
algorithm should run on each machine, and how to merge the resulting solutions.
3.1 Example Applications Suitable for Distributed Submodular Maximization
In this part, we discuss two concrete problem instances, with their corresponding submodular objective functions f , where the size of the datasets often requires a distributed solution for the underlying
submodular maximization.
Active Set Selection in Sparse Gaussian Processes (GPs): Formally, a GP is a joint probability distribution over a (possibly infinite) set of random variables X_V, indexed by our ground set
V, such that every (finite) subset X_S for S = {e_1, . . . , e_s} is distributed according to a multivariate normal distribution, i.e., P(X_S = x_S) = N(x_S; µ_S, Σ_{S,S}), where µ_S = (µ_{e_1}, . . . , µ_{e_s}) and
Σ_{S,S} = [K_{e_i,e_j}] (1 ≤ i, j ≤ s) are the prior mean vector and prior covariance matrix, respectively.
The covariance matrix is parametrized via a (positive definite kernel) function K. For example, a
commonly used kernel function in practice, where elements of the ground set V are embedded in a
Euclidean space, is the squared exponential kernel K_{e_i,e_j} = exp(−|e_i − e_j|²₂ / h²). In GP regression,
each data point e ∈ V is considered a random variable. Upon observations y_A = x_A + n_A (where
n_A is a vector of independent Gaussian noise with variance σ²), the predictive distribution of a new
data point e ∈ V is a normal distribution P(X_e | y_A) = N(µ_{e|A}, σ²_{e|A}), where

    µ_{e|A} = µ_e + Σ_{e,A}(Σ_{A,A} + σ²I)⁻¹(x_A − µ_A),
    σ²_{e|A} = σ²_e − Σ_{e,A}(Σ_{A,A} + σ²I)⁻¹ Σ_{A,e}.    (4)

Note that evaluating (4) is computationally expensive, as it requires a matrix inversion. Instead, most
efficient approaches for making predictions in GPs rely on choosing a small, so-called active set
of data points. For instance, in the Informative Vector Machine (IVM) one seeks a set S such that
the information gain, f(S) = I(Y_S; X_V) = H(X_V) − H(X_V | Y_S) = ½ log det(I + σ⁻² Σ_{S,S}), is
maximized. It can be shown that this choice of f is monotone submodular [21]. For medium-scale
problems, the standard greedy algorithms provide good solutions. In Section 5, we will show how
GreeDi can choose near-optimal subsets out of a data set of 45 million vectors.
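As an illustration, the information gain objective can be evaluated directly from the kernel matrix. The sketch below (with illustrative toy points and bandwidth, not tied to any dataset in the paper) also checks monotonicity and diminishing returns numerically:

```python
import numpy as np

def info_gain(K, S, sigma=1.0):
    """f(S) = 1/2 * log det(I + sigma^{-2} K_{S,S}) for an index set S."""
    idx = sorted(S)
    if not idx:
        return 0.0
    K_SS = K[np.ix_(idx, idx)]
    _, logdet = np.linalg.slogdet(np.eye(len(idx)) + K_SS / sigma ** 2)
    return 0.5 * logdet

# Squared exponential kernel on a few 1-d points, bandwidth h = 0.75.
x = np.array([0.0, 0.5, 1.0, 3.0])
K = np.exp(-np.abs(x[:, None] - x[None, :]) ** 2 / 0.75 ** 2)

# Monotone: adding a point never decreases the information gain.
assert info_gain(K, {0, 3}) >= info_gain(K, {0})
# Submodular: the gain of adding point 3 shrinks as the set grows.
gain_small = info_gain(K, {0, 3}) - info_gain(K, {0})
gain_large = info_gain(K, {0, 1, 3}) - info_gain(K, {0, 1})
assert gain_large <= gain_small + 1e-12
```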
Exemplar Based Clustering: Suppose we wish to select a set of exemplars that best represent a
massive data set. One approach for finding such exemplars is solving the k-medoid problem [23],
which aims to minimize the sum of pairwise dissimilarities between exemplars and elements of the
dataset. More precisely, let us assume that for the data set V we are given a distance function d : V ×
V → ℝ (not necessarily assumed symmetric, nor obeying the triangle inequality) such that d(·, ·) encodes dissimilarity between elements of the underlying set V. Then, the loss function for k-medoid
can be defined as follows: L(S) = (1/|V|) Σ_{e ∈ V} min_{υ ∈ S} d(e, υ). By introducing an auxiliary element
e_0 (e.g., = 0) we can turn L into a monotone submodular function: f(S) = L({e_0}) − L(S ∪ {e_0}).
In words, f measures the decrease in the loss associated with the set S versus the loss associated
with just the auxiliary element. It is easy to see that for a suitable choice of e_0, maximizing f is
equivalent to minimizing L. Hence, the standard greedy algorithm provides a very good solution.
But again, the problem becomes computationally challenging when we have a large data set and we
wish to extract a small set of exemplars. Our distributed solution GreeDi addresses this challenge.
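The construction above can be sketched as follows (toy one-dimensional data and a hypothetical far-away auxiliary exemplar e0; d is squared Euclidean distance as in our experiments):

```python
import numpy as np

def loss(V, S):
    """k-medoid loss L(S): average distance of each point to its closest exemplar."""
    d = lambda a, b: float(np.sum((a - b) ** 2))
    return float(np.mean([min(d(e, c) for c in S) for e in V]))

def f(V, S, e0):
    """Monotone submodular utility: f(S) = L({e0}) - L(S + {e0})."""
    return loss(V, [e0]) - loss(V, list(S) + [e0])

V = [np.array([v]) for v in (0.0, 0.1, 5.0, 5.1)]   # two tight clusters
e0 = np.array([100.0])                              # auxiliary exemplar

# Picking one exemplar per cluster beats a single exemplar, and maximizing
# f is the same as minimizing the loss L.
assert f(V, [V[0], V[2]], e0) > f(V, [V[0]], e0)
assert loss(V, [V[0], V[2], e0]) < loss(V, [V[0], e0])
```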
3.2 Naive Approaches Towards Distributed Submodular Maximization
One way of implementing the greedy algorithm in parallel would be the following. We proceed
in rounds. In each round, all machines (in parallel) compute the marginal gains of all elements
in their sets V_i. They then communicate their candidates to a central processor, who identifies the
globally best element, which is in turn communicated to the m machines. This element is then
taken into account when selecting the next element, and so on. Unfortunately, this approach requires
synchronization after each of the k rounds. In many applications, k is quite large (e.g., tens of
thousands or more), rendering this approach impractical for MapReduce style computations.
An alternative approach for large k would be to greedily select k/m elements on each machine
independently (without synchronization), and then merge them to obtain a solution of size k. This
approach is much more communication efficient, and can be easily implemented, e.g., using a single
MapReduce stage. Unfortunately, many machines may select redundant elements, and the merged
solution may suffer from diminishing returns.
In Section 4, we introduce an alternative protocol, GreeDi, which requires little communication,
while at the same time yielding a solution competitive with the centralized one, under certain natural
additional assumptions.
4 The GreeDi Approach for Distributed Submodular Maximization
In this section we present our main results. We first provide our distributed solution GreeDi for
maximizing submodular functions under cardinality constraints. We then show how we can make
use of the geometry of data inherent in many practical settings in order to obtain strong data-dependent bounds on the performance of our distributed algorithm.
4.1 An Intractable, yet Communication Efficient Approach
Before we introduce GreeDi, we first consider an intractable, but communication-efficient parallel
protocol to illustrate the ideas. This approach, shown in Alg. 1, first distributes the ground set V to
m machines. Each machine then finds the optimal solution, i.e., a set of cardinality at most k, that
maximizes the value of f in each partition. These solutions are then merged, and the optimal subset
of cardinality k is found in the combined set. We call this solution f(A^d[m, k]).
As the optimum centralized solution A^c[k] achieves the maximum value of the submodular function,
it is clear that f(A^c[k]) ≥ f(A^d[m, k]). Further, for the special case of selecting a single element,
k = 1, we have A^c[1] = A^d[m, 1]. In general, however, there is a gap between the distributed and
the centralized solution. Nonetheless, as the following theorem shows, this gap cannot be more than
1/min(m, k). Furthermore, this is the best result one can hope for under our two-round model.

Theorem 4.1. Let f be a monotone submodular function and let k > 0. Then, f(A^d[m, k]) ≥
(1/min(m, k)) · f(A^c[k]). In contrast, for any value of m and k, there is a data partition and a monotone
submodular function f such that f(A^c[k]) = min(m, k) · f(A^d[m, k]).
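The k = 1 case is easy to verify directly: each partition keeps its best element, so the merged set always contains the global maximizer. A brute-force sketch (with a toy modular objective and an arbitrary partition, both purely illustrative):

```python
from itertools import combinations

def exact_max(f, ground, k):
    """Brute-force optimum of f over subsets of size at most k."""
    best = set()
    for r in range(k + 1):
        for S in combinations(ground, r):
            if f(set(S)) > f(best):
                best = set(S)
    return best

# Toy modular objective: the value of a set is the sum of element weights.
w = {0: 3.0, 1: 1.0, 2: 7.0, 3: 2.0, 4: 5.0}
f = lambda S: sum(w[e] for e in S)

parts = [[0, 1], [2, 3], [4]]                 # arbitrary partition, m = 3
B = set().union(*(exact_max(f, p, 1) for p in parts))
assert exact_max(f, B, 1) == exact_max(f, list(w), 1) == {2}
```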
Algorithm 1 Exact Distrib. Submodular Max.
Input: Set V, # of partitions m, constraint k.
Output: Set A^d[m, k].
1: Partition V into m sets V_1, V_2, . . . , V_m.
2: In each partition V_i find the optimum set A^c_i[k] of cardinality k.
3: Merge the resulting sets: B = ∪_{i=1}^m A^c_i[k].
4: Find the optimum set of cardinality k in B. Output this solution A^d[m, k].

Algorithm 2 Greedy Dist. Subm. Max. (GreeDi)
Input: Set V, # of partitions m, constraints l, κ.
Output: Set A^{gd}[m, κ, l].
1: Partition V into m sets V_1, V_2, . . . , V_m.
2: Run the standard greedy algorithm on each set V_i. Find a solution A^{gc}_i[κ].
3: Merge the resulting sets: B = ∪_{i=1}^m A^{gc}_i[κ].
4: Run the standard greedy algorithm on B until l elements are selected. Return A^{gd}[m, κ, l].
The proof of all the theorems can be found in the supplement. The above theorem fully characterizes the performance of two-round distributed algorithms in terms of the best centralized solution.
A similar result in fact also holds for non-negative (not necessarily monotone) functions. Due to
space limitations, the result is reported in the appendix. In practice, we cannot run Alg. 1. In particular, there is no efficient way to identify the optimum subset A^c_i[k] in set V_i, unless P = NP. In the
following, we introduce our efficient approximation GreeDi.
4.2 Our GreeDi Approximation
Our main efficient distributed method GreeDi is shown in Algorithm 2. It parallels the intractable
Algorithm 1, but replaces the selection of optimal subsets by a greedy algorithm. Due to the approximate nature of the greedy algorithm, we allow the algorithms to pick sets slightly larger than k. In
particular, GreeDi is a two-round algorithm that takes the ground set V, the number of partitions
m, and the cardinality constraints l (final solution) and κ (intermediate outputs). It first distributes
the ground set over m machines. Then each machine separately runs the standard greedy algorithm,
namely, it sequentially finds an element e ∈ V_i that maximizes the discrete derivative shown in
(1). Each machine i (in parallel) continues adding elements to the set A^{gc}_i[κ] until it reaches κ
elements. Then the solutions are merged: B = ∪_{i=1}^m A^{gc}_i[κ], and another round of greedy selection
is performed over B, which this time selects l elements. We denote this solution by A^{gd}[m, κ, l]: the
greedy solution for parameters m, κ and l. The following result parallels Theorem 4.1.
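The two rounds of Algorithm 2 are straightforward to simulate (a schematic Python sketch with a toy coverage objective; the helper names and the random partition are illustrative only):

```python
import random

def greedy(f, ground, budget):
    """Standard greedy: repeatedly add the element of largest marginal gain."""
    S = set()
    for _ in range(min(budget, len(ground))):
        S.add(max((e for e in ground if e not in S),
                  key=lambda e: f(S | {e}) - f(S)))
    return S

def greedi(f, V, m, kappa, l, seed=0):
    """Two-round GreeDi: m local greedy runs of size kappa, then one greedy
    run of size l on the union B of the local solutions (Algorithm 2)."""
    rng = random.Random(seed)
    parts = [[] for _ in range(m)]
    for e in V:                           # uniformly random partition of V
        parts[rng.randrange(m)].append(e)
    B = set().union(*(greedy(f, p, kappa) for p in parts))
    return greedy(f, B, l)

# Toy coverage objective: each element covers 5 random items out of 30.
rng = random.Random(1)
cover = {e: frozenset(rng.sample(range(30), 5)) for e in range(100)}
f = lambda S: len(frozenset().union(*(cover[e] for e in S))) if S else 0

k = 5
sol = greedi(f, list(cover), m=4, kappa=k, l=k)
assert len(sol) == k and f(sol) >= 5  # first pick alone already covers 5 items
```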
Theorem 4.2. Let f be a monotone submodular function and let l, κ, k > 0. Then

    f(A^{gd}[m, κ, l]) ≥ [(1 − e^{−κ/k})(1 − e^{−l/κ}) / min(m, k)] · f(A^c[k]).

For the special case of κ = l = k, the result of Theorem 4.2 simplifies to f(A^{gd}[m, κ, k]) ≥ [(1 − 1/e)² / min(m, k)] · f(A^c[k]).
From Theorem 4.1, it is clear that in general one cannot hope to eliminate the dependency of the
distributed solution on min(k, m). However, as we show below, in many practical settings, the
ground set V and f exhibit rich geometrical structure that can be used to prove stronger results.
4.3 Performance on Datasets with Geometric Structure
In practice, we can hope to do much better than the worst case bounds shown above by exploiting
underlying structures often present in real data and important set functions. In this part, we assume
that a metric d exists on the data elements, and analyze performance of the algorithm on functions
that change gracefully with change in the input. We refer to these as Lipschitz functions. More
formally, a function f : 2^V → ℝ is λ-Lipschitz if, for equal-sized sets S = {e_1, e_2, . . . , e_k} and
S′ = {e′_1, e′_2, . . . , e′_k} and for any matching of elements M = {(e_1, e′_1), (e_2, e′_2), . . . , (e_k, e′_k)},
the difference between f(S) and f(S′) is bounded by the total of the distances between respective
elements: |f(S) − f(S′)| ≤ λ Σ_i d(e_i, e′_i). It is easy to see that the objective functions from both
examples in Section 3.1 are λ-Lipschitz for suitable kernels/distance functions. Two sets S and S′
are ε-close with respect to f if |f(S) − f(S′)| ≤ ε. Sets that are close with respect to f can be
thought of as good candidates to approximate the value of f over each other; thus one such set is
a good representative of the other. Our goal is to find sets that are suitably close to A^c[k]. At an
element v ∈ V, let us define its α-neighborhood to be the set of elements within a distance α from
v (i.e., α-close to v): N_α(v) = {w : d(v, w) ≤ α}. We can in general consider α-neighborhoods of
points of the metric space.
Our algorithm GreeDi partitions V into sets V_1, V_2, . . . , V_m for parallel processing. In this subsection, we assume that GreeDi performs the partition by assigning elements uniformly randomly to
the machines. The following theorem says that if the α-neighborhoods are sufficiently dense and f
is a λ-Lipschitz function, then this method can produce a solution close to the centralized solution:

Theorem 4.3. If for each e_i ∈ A^c[k], |N_α(e_i)| ≥ km log(k/δ^{1/m}), and algorithm GreeDi assigns
elements uniformly randomly to m processors, then with probability at least (1 − δ),

    f(A^{gd}[m, κ, l]) ≥ (1 − e^{−κ/k})(1 − e^{−l/κ})(f(A^c[k]) − λαk).
4.4 Performance Guarantees for Very Large Data Sets
Suppose that our data set is a finite sample drawn from an underlying infinite set, according to some
unknown probability distribution. Let A^c[k] be an optimal solution in the infinite set such that around
each e_i ∈ A^c[k], there is a neighborhood of radius at least α*, where the probability density is at
least β at all points, for some constants α* and β. This implies that the solution consists of elements
coming from reasonably dense and therefore representative regions of the data set.

Let us consider g : ℝ → ℝ, the growth function of the metric. g(α) is defined to be the volume of a
ball of radius α centered at a point in the metric space. This means, for e_i ∈ A^c[k] the probability of
a random element being in N_α(e_i) is at least βg(α), and the expected number of α-neighbors of e_i
satisfies E[|N_α(e_i)|] ≥ nβg(α). As a concrete example, Euclidean metrics of dimension D have
g(α) = O(α^D). Note that for simplicity we are assuming the metric to be homogeneous, so that the
growth function is the same at every point. For heterogeneous spaces, we require g to be a uniform
lower bound on the growth function at every point.
In these circumstances, the following theorem guarantees that if the data set V is sufficiently large
and f is a λ-Lipschitz function, then GreeDi produces a solution close to the centralized solution.

Theorem 4.4. For n ≥ (8km log(k/δ^{1/m})) / (β g(ε/(λk))), where ε/(λk) ≤ α*, if the algorithm GreeDi assigns
elements uniformly randomly to m processors, then with probability at least (1 − δ),

    f(A^{gd}[m, κ, l]) ≥ (1 − e^{−κ/k})(1 − e^{−l/κ})(f(A^c[k]) − ε).
4.5 Handling Decomposable Functions
So far, we have assumed that the objective function f is given to us as a black box, which we can
evaluate for any given set S independently of the data set V. In many settings, however, the objective
f depends itself on the entire data set. In such a setting, we cannot use GreeDi as presented above,
since we cannot evaluate f on the individual machines without access to the full set V. Fortunately,
many such functions have a simple structure, which we call decomposable. More precisely, we call
a monotone submodular function f decomposable if it can be written as a sum of (non-negative)
monotone submodular functions as follows: f(S) = (1/|V|) Σ_{i ∈ V} f_i(S). In other words, there is a
separate monotone submodular function associated with every data point i ∈ V. We require that
each f_i can be evaluated without access to the full set V. Note that the exemplar based clustering
application we discussed in Section 3.1 is an instance of this framework, among many others.
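For instance, the exemplar based clustering utility decomposes into one term per data point, so each machine can evaluate the restricted objective from its own shard (a schematic sketch; the helper names are hypothetical):

```python
import numpy as np

def f_i(x_i, S, e0):
    """Per-point term: decrease in point i's distance to its nearest exemplar
    when S is added alongside the auxiliary exemplar e0."""
    d = lambda a, b: float(np.sum((a - b) ** 2))
    return d(x_i, e0) - min(d(x_i, c) for c in list(S) + [e0])

def f_restricted(X_D, S, e0):
    """f_D(S) = (1/|D|) sum_{i in D} f_i(S): evaluable from shard X_D alone."""
    return float(np.mean([f_i(x, S, e0) for x in X_D]))

X = [np.array([v]) for v in (0.0, 0.1, 5.0, 5.1)]
e0 = np.array([100.0])          # auxiliary exemplar, far from the data
S = [X[0], X[2]]

# The full objective is the average of the per-point terms, so it equals the
# average of equal-sized shard evaluations.
full = f_restricted(X, S, e0)
halves = 0.5 * (f_restricted(X[:2], S, e0) + f_restricted(X[2:], S, e0))
assert abs(full - halves) < 1e-9
```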
Let us define the evaluation of f restricted to D ⊆ V as follows: f_D(S) = (1/|D|) Σ_{i ∈ D} f_i(S). Then,
in the remainder of this section, our goal is to show that assigning each element of the data set
randomly to a machine and running GreeDi will provide a solution that is with high probability
close to the optimum solution. For this, let us assume the f_i's are bounded, and without loss of
generality 0 ≤ f_i(S) ≤ 1 for 1 ≤ i ≤ |V|, S ⊆ V. Similar to Section 4.3, we assume that
GreeDi performs the partition by assigning elements uniformly randomly to the machines. These
machines then each greedily optimize f_{V_i}. The second stage of GreeDi optimizes f_U, where
U ⊆ V is chosen uniformly at random, of size ⌈n/m⌉. Then, we can show the following result.
Theorem 4.5. Let m, k, δ > 0, ε < 1/4 and let n_0 be an integer such that for n ≥ n_0 we have
ln(n)/n ≤ ε²/(mk). For n ≥ max(n_0, m log(δ/4m)/ε²), and under the assumptions of Theorem 4.4, we have, with probability at least 1 − δ,

    f(A^{gd}[m, κ, l]) ≥ (1 − e^{−κ/k})(1 − e^{−l/κ})(f(A^c[k]) − 2ε).

The above result demonstrates why GreeDi performs well on decomposable submodular functions
with massive data, even when they are evaluated locally on each machine. We will report our experimental results on exemplar-based clustering in the next section.
5 Experiments
In our experimental evaluation we wish to address the following questions: 1) how well does
GreeDi perform compared to a centralized solution, 2) how good is the performance of GreeDi
when using decomposable objective functions (see Section 4.5), and finally 3) how well does
GreeDi scale to massive data sets. To this end, we run GreeDi on two scenarios: exemplar based
clustering and active set selection in GPs. Further experiments are reported in the supplement.
We compare the performance of our GreeDi method (using different values of α = κ/k) to the
following naive approaches: a) random/random: in the first round each machine simply outputs k
randomly chosen elements from its local data points, and in the second round k out of the merged mk
elements are again randomly chosen as the final output. b) random/greedy: each machine outputs
k randomly chosen elements from its local data points, then the standard greedy algorithm is run
over the mk elements to find a solution of size k. c) greedy/merge: in the first round k/m elements are
chosen greedily from each machine and in the second round they are merged to output a solution
of size k. d) greedy/max: in the first round each machine greedily finds a solution of size k and in
the second round the solution with the maximum value is reported. For data sets where we are able
to find the centralized solution, we report the ratio f(A^{dist}[k])/f(A^{gc}[k]), where A^{dist}[k] is the
distributed solution (in particular A^{gd}[m, αk, k] = A^{dist}[k] for GreeDi).
Exemplar based clustering. Our exemplar based clustering experiment involves GreeDi applied
to the clustering utility f(S) (see Sec. 3.1) with d(x, x′) = ‖x − x′‖². We performed our experiments
on a set of 10,000 Tiny Images [24]. Each 32 by 32 RGB pixel image was represented by a 3,072
dimensional vector. We subtracted from each vector the mean value, normalized it to unit norm, and
used the origin as the auxiliary exemplar. Fig. 1a compares the performance of our approach to the
benchmarks with the number of exemplars set to k = 50 and varying number of partitions m. It can
be seen that GreeDi significantly outperforms the benchmarks and provides a solution that is very
close to the centralized one. Interestingly, even for very small α = κ/k < 1, GreeDi performs
very well. Since the exemplar based clustering utility function is decomposable, we repeated the
experiment for the more realistic case where the function evaluation in each machine was restricted
to the local elements of the dataset in that particular machine (rather than the entire dataset). Fig. 1b
shows similar qualitative behavior for decomposable objective functions.
Large scale experiments with Hadoop. As our first large scale experiment, we applied GreeDi
to the whole dataset of 80,000,000 Tiny Images [24] in order to select a set of 64 exemplars. Our
experimental infrastructure was a cluster of 10 quad-core machines running Hadoop with the number
of reducers set to m = 8000. Hereby, each machine carried out a set of reduce tasks in sequence.
We first partitioned the images uniformly at random across reducers. Each reducer separately performed
the lazy greedy algorithm on its own set of 10,000 images (≈123MB) to extract 64 images with
the highest marginal gains w.r.t. the local elements of the dataset in that particular partition. We
then merged the results and performed another round of lazy greedy selection on the merged results
to extract the final 64 exemplars. Function evaluation in the second stage was performed w.r.t. a
randomly selected subset of 10,000 images from the entire dataset. The maximum running time per
reduce task was 2.5 hours. As Fig. 1c shows, GreeDi highly outperforms the other distributed
benchmarks and can scale well to very large datasets. Fig. 1d shows a set of cluster exemplars
discovered by GreeDi, where each column in Fig. 1h shows the 8 nearest images to exemplars 9 and
16 (shown with red borders) in Fig. 1d.
Active set selection. Our active set selection experiment involves GreeDi applied to the information gain f(S) (see Sec. 3.1) with Gaussian kernel, h = 0.75 and σ = 1. We used the Parkinsons
Telemonitoring dataset [25] consisting of 5,875 bio-medical voice measurements with 22 attributes
[Figure 1: plots omitted. Panels: (a) Tiny Images 10K, (b) Tiny Images 10K, (c) Tiny Images 80M, (d) cluster exemplars, (e) Parkinsons Telemonitoring, (f) Parkinsons Telemonitoring, (g) Yahoo! front page, (h) nearest images to exemplars. Each plot compares GreeDi (α = 1, α = 4/m, α = 2/m) against the random/random, random/greedy, greedy/merge, and greedy/max baselines.]
Figure 1: Performance of GreeDi compared to the other benchmarks. a) and b) show the mean and standard
deviation of the ratio of distributed vs. centralized solution for global and local objective functions with budget
k = 50 and varying the number m of partitions, for a set of 10,000 Tiny Images. c) shows the distributed
solution with m = 8000 and varying k for local objective functions on the whole dataset of 80,000,000 Tiny
Images. e) shows the ratio of distributed vs. centralized solution with m = 10 and varying k for Parkinsons
Telemonitoring. f) shows the same ratio with k = 50 and varying m on the same dataset, and g) shows the
distributed solution for m = 32 with varying budget k on Yahoo! Webscope data. d) shows a set of cluster
exemplars discovered by GreeDi, and each column in h) shows 8 images nearest to exemplars 9 and 16 in d).
from people with early-stage Parkinson's disease. We normalized the vectors to zero mean and unit
norm. Fig. 1f compares the performance of GreeDi to the benchmarks with fixed k = 50 and varying
number of partitions m. Similarly, Fig. 1e shows the results for fixed m = 10 and varying k. We
find that GreeDi significantly outperforms the benchmarks.
Large scale experiments with Hadoop. Our second large scale experiment consists of 45,811,883
user visits from the Featured Tab of the Today Module on Yahoo! Front Page [26]. For each visit,
both the user and each of the candidate articles are associated with a feature vector of dimension 6.
Here, we used the normalized user features. Our experimental setup was a cluster of 5 quad-core machines running Hadoop with the number of reducers set to m = 32. Each reducer performed the lazy
greedy algorithm on its own set of 1,431,621 vectors (≈34MB) in order to extract 128 elements with
the highest marginal gains w.r.t. the local elements of the dataset in that particular partition. We then
merged the results and performed another round of lazy greedy selection on the merged results to extract the final active set of size 128. The maximum running time per reduce task was 2.5 hours. Fig.
1g shows the performance of GreeDi compared to the benchmarks. We note again that GreeDi
significantly outperforms the other distributed benchmarks and can scale well to very large datasets.
6 Conclusion
We have developed an efficient distributed protocol, GreeDi, for maximizing a submodular function
subject to cardinality constraints. We have theoretically analyzed the performance of our method and
showed that under certain natural conditions it performs very close to the centralized (albeit impractical
for massive data sets) greedy solution. We have also demonstrated the effectiveness of our approach
through extensive large scale experiments using Hadoop. We believe our results provide an important step towards solving submodular optimization problems in very large-scale, real applications.
Acknowledgments. This research was supported by SNF 200021-137971, DARPA MSEE
FA8650-11-1-7156, ERC StG 307036, a Microsoft Faculty Fellowship, an ETH Fellowship,
Scottish Informatics and Computer Science Alliance.
References
[1] Delbert Dueck and Brendan J. Frey. Non-metric affinity propagation for unsupervised image categorization. In ICCV, 2007.
[2] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning
(Adaptive Computation and Machine Learning). 2006.
[3] Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In North American chapter of the Assoc. for Comp. Linguistics/Human Lang. Tech., 2011.
[4] Ryan Gomes and Andreas Krause. Budgeted nonparametric learning from data streams. In Proc. International Conference on Machine Learning (ICML), 2010.
[5] Andreas Krause and Daniel Golovin. Submodular function maximization. In Tractability: Practical
Approaches to Hard Problems. Cambridge University Press, 2013.
[6] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social
network. In Proceedings of the ninth ACM SIGKDD, 2003.
[7] Andreas Krause and Carlos Guestrin. Submodularity and its applications in optimized information gathering. ACM Transactions on Intelligent Systems and Technology, 2011.
[8] Andrew Guillory and Jeff Bilmes. Active semi-supervised learning using submodular functions. In
Uncertainty in Artificial Intelligence (UAI), Barcelona, Spain, July 2011. AUAI.
[9] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning
and stochastic optimization. Journal of Artificial Intelligence Research, 2011.
[10] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for
maximizing submodular set functions - I. Mathematical Programming, 1978.
[11] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a submodular
set function. Math. Oper. Research, 1978.
[12] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 1998.
[13] Jeffrey Dean and Sanjay Ghemawat. Mapreduce: Simplified data processing on large clusters. In OSDI,
2004.
[14] Cheng-Tao Chu, Sang Kyun Kim, Yi-An Lin, YuanYuan Yu, Gary Bradski, and Andrew Y. Ng. Mapreduce for machine learning on multicore. In NIPS, 2006.
[15] Jaliya Ekanayake, Shrideep Pallickara, and Geoffrey Fox. Mapreduce for data intensive scientific analyses. In Proc. of the 4th IEEE Inter. Conf. on eScience.
[16] Daniel Golovin, Matthew Faulkner, and Andreas Krause. Online distributed sensor selection. In IPSN,
2010.
[17] Graham Cormode, Howard Karloff, and Anthony Wirth. Set cover algorithms for very large datasets. In
Proc. of the 19th ACM intern. conf. on Inf. knowl. manag.
[18] Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In Proceedings of the
19th international conference on World wide web, 2010.
[19] Guy E. Blelloch, Richard Peng, and Kanat Tangwongsan. Linear-work greedy parallel approximate set
cover and variants. In SPAA, 2011.
[20] Silvio Lattanzi, Benjamin Moseley, Siddharth Suri, and Sergei Vassilvitskii. Filtering: a method for
solving graph problems in mapreduce. In SPAA, 2011.
[21] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proc.
of Uncertainty in Artificial Intelligence (UAI), 2005.
[22] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. Optimization
Techniques, LNCS, pages 234-243, 1978.
[23] Leonard Kaufman and Peter J Rousseeuw. Finding groups in data: an introduction to cluster analysis,
volume 344. Wiley-Interscience, 2009.
[24] Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for
nonparametric object and scene recognition. IEEE Trans. Pattern Anal. Mach. Intell., 2008.
[25] Athanasios Tsanas, Max A Little, Patrick E McSharry, and Lorraine O Ramig. Enhanced classical dysphonia measures and sparse regression for telemonitoring of Parkinson's disease progression. In IEEE
Int. Conf. Acoust. Speech Signal Process., 2010.
[26] Yahoo! Academic Relations. R6A, Yahoo! Front Page Today Module User Click Log Dataset, version 1.0, 2012.
[27] Tore Opsahl and Pietro Panzarasa. Clustering in weighted networks. Social networks, 2009.
Green's Function Method for Fast On-line Learning
Algorithm of Recurrent Neural Networks
Guo-Zheng Sun, Hsing-Hen Chen and Yee-Chun Lee
Institute for Advanced Computer Studies
and
Laboratory for Plasma Research,
University of Maryland
College Park, MD 20742
Abstract
The two well known learning algorithms of recurrent neural networks are
the back-propagation (Rumelhart et al., Werbos) and the forward propagation (Williams and Zipser). The main drawback of back-propagation is its
off-line backward path in time for error cumulation. This violates the on-line
requirement in many practical applications. Although the forward propagation algorithm can be used in an on-line manner, the annoying drawback is
the heavy computation load required to update the high dimensional sensitivity matrix (O(N⁴) operations for each time step). Therefore, to develop a fast
forward algorithm is a challenging task. In this paper we propose a forward
learning algorithm which is one order faster (only O(N³) operations for each
time step) than the sensitivity matrix algorithm. The basic idea is that instead
of integrating the high dimensional sensitivity dynamic equation we solve
forward in time for its Green's function to avoid the redundant computations,
and then update the weights whenever the error is to be corrected.
A numerical example of classifying state trajectories using a recurrent
network is presented. It substantiates the faster speed of the proposed algorithm compared with Williams and Zipser's algorithm.
I. Introduction.
In order to deal with sequential signals, recurrent neural networks are often put forward as a
useful model. A particularly pressing issue concerning recurrent networks is the search for an
efficient on-line training algorithm. Error back-propagation method (Rumelhart, Hinton, and
Williams [1]) was originally proposed to handle feedforward networks. This method can be applied to train recurrent networks if one unfolds the time sequence of mappings into a multilayer
feed-forward net, each layer with identical weights. Due to the nature of backward path, it is
basically an off-line method. Pineda [2] generalized it to recurrent networks with hidden neurons. However, he is mostly interested in time-independent fixed-point type of behaviors. Pearlmutter [3] proposed a scheme to learn temporal trajectories which involves equations to be
solved backward in time. It is essentially a generalized version of error back-propagation to the
problem of learning a target state trajectory. The viable on-line method to date is the RTRL
(Real Time Recurrent Learning) algorithm (Williams and Zipser [4]), which propagates a sensitivity matrix forward in time. The main drawback of this algorithm is its high cost of computation. It needs O(N⁴) operations per time step. Therefore, a faster (less than O(N⁴)
operations) on-line algorithm appears to be desirable.
Toomarian and Barhen [5] proposed an O(N2) on-line algorithm. They derived the same
equations as Pearlmutter's back-propagation using adjoint-operator approach. They then tried
to convert the backward path into a forward path by adding a Delta function to its source term.
But this is not correct. The problem is not merely because it "precludes straightforward numerical implementation" as they acknowledged later [6]. Even in theory, the result is not correct.
The mistake is in their using a not well defined equity of the Delta function integration. Briefly
speaking, the equity ∫_{t0}^{tf} δ(t − tf) f(t) dt = f(tf) is not right if the function f(t) is discontinuous at t = tf. The value of the left-side integral depends on the distribution of the function f(t) and
therefore is not uniquely defined. If we deal with the discontinuity carefully by splitting the time
interval from t0 to tf into two segments, t0 to tf − ε and tf − ε to tf, and let ε → 0, we will find out
that adding a Delta function to the source term does not affect the basic property of the adjoint
equation. Namely, it still has to be solved backward in time.
Recently, Toomarian and Barhen [6] modified their adjoint-operator approach and proposed
an alternative O(N³) on-line training algorithm. Although, in nature, their result is very similar
to what we presented in this paper, it will be seen that our approach is more straightforward and
can be easily implemented numerically.
Schmidhuber[7] proposed an O(N3 ) algorithm which is a combination of back propagation
(within each data block of size N) and forward propagation (between blocks). It is therefore not
truly an on-line algorithm.
Sun, Chen and Lee [8] studied this problem, using a more general approach - variational approach, in which a constrained optimization problem with Lagrangian multipliers was considered. The dynamic equation of the Lagrangian multiplier was derived, which is exactly the
same as the adjoint equation [5]. By taking advantage of the linearity of this equation an O(N³) on-line
algorithm was derived. But, the numerical implementation of the algorithm, especially the numerical instabilities are not addressed in the paper.
In this paper we will present a new approach to this problem - the Green's function method.
The advantages of this method are the simple mathematical formulation and easy numerical
implementation. One numerical example of trajectory classification is presented to substantiate
the faster speed of the proposed algorithm. The numerical results are benchmarked with Williams and Zipser's algorithm.
II. Green's Function Approach.
(a) Definition of the Problem
Consider a fully recurrent network with neural activity represented by an N-dimensional vector x(t). The dynamic equations can be written in general as a set of first order differential equations:
ẋ(t) = F(x(t), w, I(t))    (1)
where W is a matrix representing the set of weights and all other adjustable parameters, I(t) is a
vector representing the neuron units clamped by external input signals at time t. For a simple
network connected by first order weights the nonlinear function F may look like
F(x(t), w, I(t)) = −x(t) + g(w · x) + I(t)    (2)
where the scalar function g(u) could be, for instance, the Sigmoid function g(u) = 1/(1 + e^{−u}).
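As a concrete illustration, Eqs. (1)-(2) can be integrated with a simple forward-Euler step (a sketch; the step size, network size, and random weights are illustrative assumptions, not values from the paper):

```python
import numpy as np

def g(u):
    # sigmoid activation g(u) = 1 / (1 + e^(-u))
    return 1.0 / (1.0 + np.exp(-u))

def F(x, w, I):
    # dynamics of Eq. (2): F(x, w, I) = -x + g(w . x) + I
    return -x + g(w @ x) + I

def euler_step(x, w, I, dt=0.1):
    # one forward-Euler step of Eq. (1): x <- x + dt * F(x, w, I)
    return x + dt * F(x, w, I)

rng = np.random.default_rng(0)
N = 4
w = rng.standard_normal((N, N)) * 0.1   # weight matrix
I = rng.standard_normal(N) * 0.1        # constant clamped input
x = np.zeros(N)
for _ in range(100):
    x = euler_step(x, w, I)
```

With the decay term −x and small weights, the state relaxes toward a fixed point x* = g(w · x*) + I.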
Suppose that part of the state neurons {x_i | i ∈ M} are measurable and part of the neurons {x_i | i ∈
H} are hidden. For the measurable units we may have desired output x̄(t). In order to train
the network, an objective functional (or an error measure functional) is often given to be
E(x, x̄) = ∫_{t0}^{tf} e(x(t), x̄(t)) dt    (3)
where the functional E depends on the weights w implicitly through the measurable neurons {x_i | i ∈ M}. A typical error function is
e(x(t), x̄(t)) = (x(t) − x̄(t))²    (4)
The gradient descent learning is to modify the weights according to
Δw ∝ −∂E/∂w = −η ∫_{t0}^{tf} (∂e/∂x · ∂x/∂w) dt    (5)
In order to evaluate the integral in Eq. (5) one needs to know both ∂e/∂x and ∂x/∂w. The
first term can be easily obtained by taking the derivative of the given error function
e(x(t), x̄(t)). For the second term one needs to solve the differential equation
(d/dt)(∂x/∂w) = (∂F/∂x) · (∂x/∂w) + ∂F/∂w    (6)
which is easily derived by taking the derivative of Eq. (1) with respect to w. The well known forward algorithm of recurrent networks [4] is to solve Equation (6) forward in time and make the
weight correction at the end (t = tf) of the input sequence. (This algorithm was developed independently by several researchers, but due to the page limitation we could not refer to all related
papers and now simply call it Williams and Zipser's algorithm) The on-line learning is to make
weight correction whenever an error is to be corrected during the input sequence
Δw(t) = −η (∂e/∂x · ∂x/∂w)    (7)
The proof of convergence of the on-line learning algorithm will be addressed elsewhere.
The main drawback of this forward algorithm is that it requires O(N⁴) operations per time
step to update the matrix ∂x/∂w. The goal of our Green's function approach is to find an on-line algorithm which requires a smaller computation load.
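For contrast with the method developed below, the forward-sensitivity update of Eq. (6) can be sketched directly. The full sensitivity tensor P[k, i, j] = ∂x_k/∂w_ij has N³ entries, and propagating it through ∂F/∂x costs O(N⁴) per Euler step, which is the cost this paper sets out to remove. (A sketch with illustrative sizes; F is the network of Eq. (2) with the input term omitted.)

```python
import numpy as np

def g(u):
    return 1.0 / (1.0 + np.exp(-u))

def sensitivity_step(x, w, P, dt=0.1):
    # one Euler step of Eq. (6): dP/dt = (dF/dx) P + dF/dw
    u = w @ x
    gp = g(u) * (1.0 - g(u))                       # g'(w . x)
    dFdx = -np.eye(len(x)) + gp[:, None] * w       # dF_k/dx_l
    dFdw = np.zeros_like(P)                        # dF_k/dw_ij, Eq. (9):
    idx = np.arange(len(x))                        # g'(sum_l w_kl x_l) delta_ki x_j
    dFdw[idx, idx, :] = gp[:, None] * x[None, :]
    # the O(N^4) contraction: (dF/dx)[k,l] P[l,i,j]
    return P + dt * (np.einsum('kl,lij->kij', dFdx, P) + dFdw)

rng = np.random.default_rng(1)
N = 3
w = rng.standard_normal((N, N)) * 0.1
x = rng.standard_normal(N) * 0.1
P = np.zeros((N, N, N))                            # initial condition dx/dw(t0) = 0
P = sensitivity_step(x, w, P)
```

The einsum contraction is the expensive part; everything else is at most O(N³).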
(b). Green's Function Solution
First let us analyze the computational complexity when integrating Eq. (6) directly. Rewrite
Eq. (6) as
L · (∂x/∂w) = ∂F/∂w    (8)
where the linear operator L is defined as L = d/dt − ∂F/∂x.
Two types of redundancy will be seen from Eq. (8). First, the operator L does not depend on w
explicitly, which means that what we did in solving for ∂x/∂w is to repeatedly solve the identical differential equation for each component of w. This is redundant. It is especially wasteful
when higher order connection weights are used. The second redundancy is in the special form
of ∂F/∂w for neural computations where the same activity function (say, the Sigmoid function) is
used for every neuron, so that
∂F_k/∂w_ij = g′(Σ_l w_kl x_l) δ_ki x_j    (9)
where δ_ki is the Kronecker delta function. It is seen from Eq. (9) that among the N³ components of
this third order tensor most of them, N²(N−1), are zero (when k ≠ i) and need not be computed
repeatedly. In the original forward learning scheme, we did not pay attention to this redundancy.
Our Green's function approach is able to avoid the redundancy by solving for the low dimensional Green's function. We then construct the solution of Eq. (8) by the dot product of ∂F/∂w with the Green's function, which can in turn be reduced to a scalar product due to Eq. (9).
The Green's function of the operator L is defined as a dual-time tensor function G(t, τ) which
satisfies the following equation
(d/dt) G(t, τ) − (∂F/∂x) · G(t, τ) = δ(t − τ)    (10)
It is well known that, if the solution of Eq. (10) is known, the solution of the original equation
Eq. (6) (or (8)) can be constructed using the source term ∂F/∂w through the integral
(∂x/∂w)(t) = ∫_{t0}^{t} G(t, τ) · (∂F/∂w)(τ) dτ    (11)
To find the Green's function solution we first introduce a tensor function V(t) that satisfies
the homogeneous form of Eq. (10)
(d/dt) V(t) − (∂F/∂x) · V(t) = 0,    V(t0) = 1    (12)
The solution of Eq. (10), i.e. the Green's function, can then be constructed as
G(t, τ) = V(t) · V⁻¹(τ) H(t − τ)    (13)
where H(t − τ) is the Heaviside function defined as H(t − τ) = 1 for t ≥ τ and 0 for t < τ.
Using the well known equalities
(d/dt) H(t − τ) = δ(t − τ)    and    f(t, τ) δ(t − τ) = f(τ, τ) δ(t − τ)
one can easily verify that the constructed Green's function shown in Eq. (13) is correct, that is,
it satisfies Eq. (10). Substituting G(t, τ) from Eq. (13) into Eq. (11) we obtain the solution of
Eq. (6) as,
(∂x/∂w)(t) = V(t) · ∫_{t0}^{t} V⁻¹(τ) · (∂F/∂w)(τ) dτ    (14)
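As a check, the construction G(t, τ) = V(t)V⁻¹(τ)H(t − τ) of Eq. (13) can be verified against Eq. (10); a sketch of the computation, using Eq. (12) and the two equalities for H and δ:

```latex
\frac{d}{dt}\left[V(t)V^{-1}(\tau)H(t-\tau)\right]
  = \dot{V}(t)V^{-1}(\tau)H(t-\tau) + V(t)V^{-1}(\tau)\,\delta(t-\tau)
  = \frac{\partial F}{\partial x}\,G(t,\tau) + V(\tau)V^{-1}(\tau)\,\delta(t-\tau)
```

since V̇ = (∂F/∂x)V by Eq. (12), and f(t, τ)δ(t − τ) = f(τ, τ)δ(t − τ) lets t be replaced by τ in the coefficient of the delta. As V(τ)V⁻¹(τ) = 1, rearranging gives exactly Eq. (10).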
We note that this formal solution not only satisfies Eq. (6) but also satisfies the required initial
condition
(∂x/∂w)(t0) = 0.    (15)
The "on-line" weight correction at time t is obtained easily from Eq. (5)
Δw = −η (∂e/∂x · ∂x/∂w) = −η ( (∂e/∂x) · V(t) · ∫_{t0}^{t} V⁻¹(τ) · (∂F/∂w)(τ) dτ )    (16)
(c) Implementation
To implement Eq. (16) numerically we will introduce two auxiliary memories. First, we define U(t) to be the inverse of the matrix V(t), i.e. U(t) = V⁻¹(t). It is easy to see that the dynamic
equation of U(t) is
(d/dt) U(t) + U(t) · (∂F/∂x) = 0,    U(t0) = 1    (17)
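The dynamic equation (17) for U follows from differentiating the identity U(t)V(t) = 1 and substituting Eq. (12); a brief sketch:

```latex
0 = \frac{d}{dt}\left[U(t)V(t)\right]
  = \dot{U}V + U\dot{V}
  = \dot{U}V + U\,\frac{\partial F}{\partial x}\,V
\quad\Longrightarrow\quad
\dot{U} + U\,\frac{\partial F}{\partial x} = 0,
```

after multiplying by V⁻¹ on the right; the initial condition U(t0) = 1 mirrors V(t0) = 1.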
Secondly, we define a third order tensor Π_ijk that satisfies
(d/dt) Π = U(t) · (∂F/∂w),    Π(t0) = 0    (18)
then the weight correction in Eq. (16) becomes
Δw = −η (v(t) · Π(t))    (19)
where the vector v(t) is the solution of the linear equation
v(t) · U(t) = ∂e/∂x    (20)
In discrete time, Eqs. (17) - (20) become:
U_ij(t) = U_ij(t−1) − Δt Σ_k U_ik(t−1) ∂F_k/∂x_j,    U_ij(0) = δ_ij    (21)

Π_ijk(t) = Π_ijk(t−1) + Δt Σ_l U_il(t−1) ∂F_l/∂w_jk,    Π_ijk(0) = 0    (22)
Σ_i v_i(t) U_ij(t) = ∂e/∂x_j    (23)
Δw_ij = −η Σ_k v_k(t) Π_kij(t)    (24)
To summarize the procedure of the Green's function method, we need to simultaneously integrate Eq. (21) and Eq. (22) for U(t) and Π(t) forward in time, starting from U_ij(0) = δ_ij and
Π_ijk(0) = 0. Whenever error messages are generated, we solve Eq. (23) for v(t) and update
the weights according to Eq. (24).
The memory size required by this algorithm is simply N² + N³, for storing U(t) and Π(t).
The speed of the algorithm is analyzed as follows. From Eq. (21) and Eq. (22) we see that the
updates of U(t) and Π both need N³ operations per time step. To solve for v(t) and update w,
we also need N³ operations per time step. So, the on-line updating of the weights needs 4N³
operations in total per time step. This is one order of magnitude faster than the current forward learning
scheme.
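One update cycle of Eqs. (21)-(24) can be sketched as follows (an illustrative sketch, not the authors' code: the network is Eq. (2) with no external input, the error is Eq. (4) against a fixed target, and the sizes are arbitrary; every operation here costs at most O(N³)):

```python
import numpy as np

def g(u):
    return 1.0 / (1.0 + np.exp(-u))

def online_step(x, w, U, Pi, target, dt=0.1, eta=0.05):
    N = len(x)
    u = w @ x
    gp = g(u) * (1.0 - g(u))                          # g'(w . x)
    dFdx = -np.eye(N) + gp[:, None] * w
    # Eq. (22) uses U(t-1), so accumulate Pi before updating U:
    # Pi_ijk += dt * U_ij g'_j x_k  (the delta_ki sparsity of Eq. (9))  -- O(N^3)
    Pi = Pi + dt * (U[:, :, None] * gp[None, :, None] * x[None, None, :])
    # Eq. (21): U(t) = U(t-1) - dt * U(t-1) . dF/dx                     -- O(N^3)
    U = U - dt * (U @ dFdx)
    # state update, Eqs. (1)-(2), no external input in this sketch
    x = x + dt * (-x + g(u))
    dedx = 2.0 * (x - target)                         # de/dx for e = (x - target)^2
    # Eq. (23): solve v . U = de/dx, i.e. U^T v = de/dx                 -- O(N^3)
    v = np.linalg.solve(U.T, dedx)
    # Eq. (24): dw_jk = -eta * sum_i v_i Pi_ijk                         -- O(N^3)
    w = w - eta * np.einsum('i,ijk->jk', v, Pi)
    return x, w, U, Pi

rng = np.random.default_rng(2)
N = 3
w = rng.standard_normal((N, N)) * 0.1
x, U, Pi = np.zeros(N), np.eye(N), np.zeros((N, N, N))
target = np.full(N, 0.4)
for _ in range(20):
    x, w, U, Pi = online_step(x, w, U, Pi, target)
```

Compared with the O(N⁴) sensitivity propagation, the heaviest operations are one N×N matrix product, one N³ tensor accumulation, and one linear solve.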
III. Numerical Simulation
We present in this section numerical examples to demonstrate the proposed learning algorithm
and benchmark it against Williams & Zipser's algorithm.
Fig. 1: Phase space trajectories. Three different shapes of 2-D trajectory, each shown in one column (Class 1, Class 2, Class 3) with three examples. Recurrent neural networks are trained to recognize the different shapes of trajectory.
We consider the trajectory classification problem. The input data are the time series of two
dimensional coordinate pairs {x(t), y(t)} sampled along three different types of trajectories in
the phase space. The sampling is taken uniformly with Δt = 2π/160. The trajectory equations are
Class 1:   x(t) = sin(t + β) |sin(t)|,    y(t) = cos(t + β) |sin(t)|
Class 2:   x(t) = sin(0.5t + β) sin(1.5t),    y(t) = cos(0.5t + β) sin(1.5t)
Class 3:   x(t) = sin(t + β) sin(2t),    y(t) = cos(t + β) sin(2t)
where β is a uniformly distributed random parameter. When β is changed, these trajectories
are distorted accordingly. Nine examples (three for each class) are shown in Fig. 1. The neural
net used here is a fully recurrent first-order network with dynamics
S_i(t+1) = S_i(t) + Tanh( Σ_{j=1}^{N+6} W_ij (S ⊕ I)_j(t) )    (25)
where S and I are vectors of state and input neurons, the symbol ⊕ represents concatenation,
and N is the number of state neurons. Six input neurons are used to represent the normalized vector {1,
x(t), y(t), x(t)², y(t)², x(t)y(t)}. The neural network structure is shown in Fig. 2.
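The three trajectory classes and the six-component input vector can be generated as follows (a sketch; the per-time-step Euclidean normalization of the feature vector is an assumption, since the paper only says the vector is normalized):

```python
import numpy as np

def trajectory(cls, beta, n=160):
    # sample one 2-D trajectory, t = 0, dt, 2*dt, ... with dt = 2*pi/160
    t = np.arange(n) * (2.0 * np.pi / 160.0)
    if cls == 0:
        x, y = np.sin(t + beta) * np.abs(np.sin(t)), np.cos(t + beta) * np.abs(np.sin(t))
    elif cls == 1:
        x, y = np.sin(0.5 * t + beta) * np.sin(1.5 * t), np.cos(0.5 * t + beta) * np.sin(1.5 * t)
    else:
        x, y = np.sin(t + beta) * np.sin(2.0 * t), np.cos(t + beta) * np.sin(2.0 * t)
    return x, y

def input_features(x, y):
    # the six input components {1, x, y, x^2, y^2, xy}, normalized per time step
    feats = np.stack([np.ones_like(x), x, y, x ** 2, y ** 2, x * y], axis=1)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

x, y = trajectory(0, beta=0.3)
feats = input_features(x, y)
```

Drawing β uniformly at random distorts each trajectory while preserving its class, which is how the 150 training patterns per iteration are generated below.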
Fig. 2: Recurrent neural network for trajectory classification. State(t+1) is computed from State(t) and Input(t); the state neurons S_1, ..., S_N are checked at the end of the input sequence, with error = Target − S.
For recognition, each trajectory data sequence needs to be fed to the input neurons and the
state neurons evolve according to the dynamics in Eq. (25). At the end of input series we check
the last three state neurons and classify the input trajectory according to the "winner-take-all"
rule. For training, we assign the desired final output for the three trajectory classes to (1,0,0),
(0,1,0) and (0,0,1) respectively. Meanwhile, we need to simultaneously integrate Eq. (21) for
U(t) and Eq. (22) for Π(t). At the end, we calculate the error from Eq. (4) and solve Eq. (23)
for v(t) using the LU decomposition algorithm. Finally, we update the weights according to Eq. (24).
Since the classification error is generated at the end of input sequence, this learning does not
have to be on-line. We present this example only to compare the speeds of the proposed fast
algorithm against the Williams and Zipser's. We run the two algorithms for the same number
of iterations and compare the CPU time used. The results are shown in Table 1, where in each
one iteration we present 150 training patterns, 50 for each class. These patterns are chosen by
randomly selecting β values. It is seen that the CPU time ratio is O(1/N), indicating the Green's
function algorithm is one order faster in N.
Another issue to be considered is the error convergent rate (or learning rate, as usually
called). Although the two algorithms calculate the same weight correction as in Eq. (7), due to
different numerical schemes the outcomes may be different. As a result, the error convergence
rates are slightly different even if the same learning rate η is used. In all numerical simulations
we have conducted the learning results are very good (in testing, the recognition is perfect, no
single misclassification was found). But, during training the error convergence rates are different. The numerical experiments show that the proposed fast algorithm converges slower than
the Williams and Zipser's for the small size neural nets but faster for the large size neural net.
Simulation                        Fast Algorithm    Williams & Zipser's    Ratio
N = 4   (200 iterations)          1607.4            5020.8                 1:3
N = 8   (50 iterations)           1981.7            10807.0                1:5
N = 12  (50 iterations)           5947.6            45503.0                1:8
Table 1. The CPU time (in seconds) comparison, implemented on a DEC3100 Workstation,
for learning the trajectory classification example.
IV. Conclusion
The Green's function has been used to develop a faster on-line learning algorithm for recurrent neural networks. This algorithm requires O(N³) operations for each time step, which is one
order faster than the Williams and Zipser's algorithm. The memory required is O(N³).
One feature of this algorithm is its straightforward formula, which can be easily implemented
numerically. A numerical example of trajectory classification has been used to demonstrate the
speed of this fast algorithm compared to Williams and Zipser's algorithm.
References
[1] D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error
propagation. In Parallel Distributed Processing, Vol. 1, MIT Press, 1986. P. Werbos, Beyond Regression: New tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Harvard
University, 1974.
[2] F. Pineda, Generalization of back-propagation to recurrent neural networks. Phys. Rev.
Letters, 19(59):2229, 1987.
[3] B. Pearlmutter, Learning state space trajectories in recurrent neural networks. Neural
Computation,1(2):263, 1989.
[4] R. Williams and D. Zipser, A learning algorithm for continually running fully recurrent
neural networks. Tech. Report ICS Report 8805, UCSD, La Jolla, CA 92093, November 1988.
[5] N. Toomarian, J. Barhen and S. Gulati, "Application of Adjoint Operators to Neural
Learning", Appl. Math. Lett., 3(3), 13-18, 1990.
[6] N. Toomarian and J. Barhen, "Adjoint-Functions and Temporal Learning Algorithms in
Neural Networks", Advances in Neural Information Processing Systems 3, p. 113-120, Ed. by
R. Lippmann, J. Moody and D. Touretzky, Morgan Kaufmann, 1991.
[7] J. H. Schmidhuber, "An O(N³) Learning Algorithm for Fully Recurrent Networks", Tech
Report FKI-151-91, Institut für Informatik, Technische Universität München, May 1991.
[8] Guo-Zheng Sun, Hsing-Hen Chen and Yee-Chun Lee, "A Fast On-line Learning Algorithm for Recurrent Neural Networks", Proceedings of International Joint Conference on Neural Networks, Seattle, Washington, page 11-13, June 1991.
4,465 | 5,040 | Simultaneous Rectification and Alignment via Robust
Recovery of Low-rank Tensors
Xiaoqin Zhang, Di Wang
Institute of Intelligent System and Decision
Wenzhou University
[email protected], [email protected]
Zhengyuan Zhou
Department of Electrical Engineering
Stanford University
[email protected]
Yi Ma
Visual computing group
Microsoft Research Asia
[email protected]
Abstract
In this work, we propose a general method for recovering low-rank three-order
tensors, in which the data can be deformed by some unknown transformation and
corrupted by arbitrary sparse errors. Since the unfolding matrices of a tensor are
interdependent, we introduce auxiliary variables and relax the hard equality constraints by the augmented Lagrange multiplier method. To improve the computational efficiency, we introduce a proximal gradient step to the alternating direction
minimization method. We have provided proof for the convergence of the linearized version of the problem which is the inner loop of the overall algorithm.
Both simulations and experiments show that our methods are more efficient and
effective than previous work. The proposed method can be easily applied to simultaneously rectify and align multiple images or videos frames. In this context,
the state-of-the-art algorithms "RASL" and "TILT" can be viewed as two special
cases of our work, and yet each only performs part of the function of our method.
1 Introduction
In recent years, with the advances in sensorial and information technology, massive amounts of
high-dimensional data are available to us. It has become an increasingly pressing challenge to develop efficient and effective computational tools that can automatically extract the hidden structures
and hence useful information from such data. Many revolutionary new tools have been developed
that enable people to recover low-dimensional structures in the form of sparse vectors or low-rank
matrices in high dimensional data. Nevertheless, instead of vectors and matrices, many practical
data are given in their natural form as higher-order tensors, such as videos, hyper-spectral images,
and 3D range data. These data are often subject to all types of geometric deformation or corruptions
due to change of viewpoints, illuminations or occlusions. The true intrinsic structures of the data
will not be fully revealed unless these nuisance factors are undone in the processing stage.
In the literature, it has been shown that for matrix data, if the data is a deformed or corrupted version
of an intrinsically low-rank matrix, one can recover the rectified low-rank structure despite different
types of deformation (linear or nonlinear) and severe corruptions. Such concepts and methods have
been successfully applied to rectify the so-called low-rank textures [1] and to align multiple correlated images (such as video frames or human faces) [2, 3, 4, 5, 6]. However, when applied to the data
of higher-order tensorial form, such as videos or 3D range data, these tools are only able to harness
one type of low-dimensional structure at a time, and are not able to exploit the low-dimensional
tensorial structures in the data. For instance, the previous work of TILT rectifies a low-rank textural
region in a single image [1] while RASL aligns multiple correlated images [6]. They are highly
complementary to each other: they exploit spatial and temporal linear correlations respectively in
a given sequence of images. A natural question arises: can we simultaneously harness all such
low-dimensional structures in an image sequence by viewing it as a three-order tensor?
Actually, many existing visual data can be naturally viewed as three-order (or even higher-order)
tensors (e.g. color images, videos, hyper-spectral images, high-dynamical range images, 3D range
data etc.). Important structures or useful information will very often be lost if we process them as
a 1D signal or a 2D matrix. For tensorial data, however, one major challenge lies in an appropriate
definition of the rank of a tensor, which corresponds to the notion of intrinsic "dimension" or "degree
of freedom" for the tensorial data. Traditionally, there are two definitions of tensor rank, which are
based on PARAFAC decomposition [7] and Tucker decomposition [8] respectively. Similar to the
definition of matrix rank, the rank of a tensor based on PARAFAC decomposition is defined as the
minimum number of rank-one decompositions of a given tensor. However, this definition of rank is
a nonconvex and nonsmooth function on the tensor space, and direct minimization of this function
is an NP-hard problem. An alternative definition of tensor rank is based on the so-called Tucker
decomposition, which results in a vector of the ranks of a set of matrices unfolded from the tensor.
Due to the recent breakthroughs in the recovery of low-rank matrices [9], the latter definition has
received increasing attention. Gandy et al. [10] adopt the sum of the ranks of the different unfolding
matrices as the rank of the tensor data, which is in turn approximated by the sum of their nuclear
norms. They then apply the alternating direction method (ADM) to solve the tensor completion
problem with Gaussian observation noise. Instead of directly adding up the ranks of the unfolding
matrices, a weighted sum of the ranks of the unfolding matrices is introduced by Liu et al. [12] and
they also proposed several optimization algorithms to estimate missing values for tensorial visual
data (such as color images). In [13], three different strategies have been developed to extend the
trace-norm regularization to tensors: (1) tensors treated as matrices; (2) traditional constrained optimization of low rank tensors as in [12]; (3) a mixture of low-rank tensors. The above-mentioned
work all addresses the tensor completion problem in which the locations of the missing entries are
known, and moreover, observation noise is assumed to be Gaussian. However, in practice, a fraction
of the tensorial entries can be arbitrarily corrupted by some large errors, and the number and the
locations of the corrupted entries are unknown. Li et al. [14] have extended the Robust Principal
Component Analysis [9] from recovering a low-rank matrix to the tensor case. More precisely, they
have proposed a method to recover a low-rank tensor with sparse errors. However, there are two
issues that limit the practicality of such methods: (1) The tensorial data are assumed to be well
aligned and rectified. (2) The optimization method can be improved in both accuracy and efficiency,
which will be discussed and validated in Section 4.
Inspired by the previous work and motivated by the above observations, we propose a more general
method for the recovery of low-rank tensorial data, especially three-order tensorial data, since our
main interests are visual data. The main contributions of our work are three-fold: (1) The data samples in the tensor do not need to be well-aligned or rectified, and can be arbitrarily corrupted with a
small fraction of errors. (2) This framework can simultaneously perform rectification and alignment
when applied to imagery data such as image sequences and video frames. In particular, existing
work of RASL and TILT can be viewed as two special cases of our method. (3) To resolve the interdependence among the nuclear norms of the unfolding matrices, we introduce auxiliary variables
and relax the hard equality constraints using the augmented Lagrange multiplier method. To further
improve the efficiency, we introduce a proximal gradient step to the alternating direction minimization method. The optimization is more efficient and effective than the previous work [6, 14], and
the convergence (of the linearized version) is guaranteed (the proof is shown in the supplementary
material).
2 Basic Tensor Algebra
We provide a brief notational summary here. Lowercase letters (a, b, c, ...) denote scalars; bold lowercase letters (a, b, c, ...) denote vectors; capital letters (A, B, C, ...) denote matrices; calligraphic letters ($\mathcal{A}, \mathcal{B}, \mathcal{C}, \dots$) denote tensors. In the following subsections, the tensor algebra and the tensor rank are briefly introduced.
Figure 1: Illustration of unfolding a 3-order tensor (with dimensions $I_1 \times I_2 \times I_3$) into its mode-1, mode-2 and mode-3 unfolding matrices $A_{(1)}$, $A_{(2)}$, $A_{(3)}$.
2.1 Tensor Algebra
We denote an N-order tensor as $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, where $I_n$ ($n = 1, 2, \ldots, N$) is a positive integer. Each element in this tensor is represented as $a_{i_1 \cdots i_n \cdots i_N}$, where $1 \le i_n \le I_n$. Each order of a tensor is associated with a "mode". By unfolding a tensor along a mode, the tensor's unfolding matrix corresponding to this mode is obtained. For example, the mode-$n$ unfolding matrix $A_{(n)} \in \mathbb{R}^{I_n \times (\prod_{i \ne n} I_i)}$ of $\mathcal{A}$, written $A_{(n)} = \mathrm{unfold}_n(\mathcal{A})$, consists of the $I_n$-dimensional mode-$n$ column vectors obtained by varying the $n$th-mode index $i_n$ while keeping the indices of the other modes fixed. Fig. 1 shows an illustration of unfolding a 3-order tensor. The inverse operation of the mode-$n$ unfolding is the mode-$n$ folding, which restores the original tensor $\mathcal{A}$ from the unfolding matrix $A_{(n)}$, written $\mathcal{A} = \mathrm{fold}_n(A_{(n)})$. The mode-$n$ rank $r_n$ of $\mathcal{A}$ is defined as the rank of the mode-$n$ unfolding matrix: $r_n = \mathrm{rank}(A_{(n)})$. The mode-$n$ product of a tensor and a matrix forms a new tensor. The mode-$n$ product of tensor $\mathcal{A}$ and matrix $U \in \mathbb{R}^{J_n \times I_n}$ is denoted $\mathcal{A} \times_n U \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J_n \times I_{n+1} \times \cdots \times I_N}$, and its elements are calculated by:

$$(\mathcal{A} \times_n U)_{i_1 \cdots i_{n-1} j_n i_{n+1} \cdots i_N} = \sum_{i_n} a_{i_1 \cdots i_n \cdots i_N} \, u_{j_n i_n}. \tag{1}$$

The scalar product of two tensors $\mathcal{A}$ and $\mathcal{B}$ of the same dimensions is defined as $\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1} \sum_{i_2} \cdots \sum_{i_N} a_{i_1 \cdots i_N} b_{i_1 \cdots i_N}$, and the Frobenius norm of $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is defined as $\|\mathcal{A}\|_F = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}$. The $\ell_0$ norm $\|\mathcal{A}\|_0$ is defined to be the number of non-zero entries in $\mathcal{A}$, and the $\ell_1$ norm is $\|\mathcal{A}\|_1 = \sum_{i_1, \ldots, i_N} |a_{i_1 \cdots i_N}|$. Observe that $\|\mathcal{A}\|_F = \|A_{(k)}\|_F$, $\|\mathcal{A}\|_0 = \|A_{(k)}\|_0$ and $\|\mathcal{A}\|_1 = \|A_{(k)}\|_1$ for any $1 \le k \le N$.
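As a concrete sketch of these operations (our own illustration, not code from the paper; the column ordering of the unfolding below is one common convention), the mode-n unfolding, its inverse folding, and the mode-n product of Eq. (1) can be written in NumPy:

```python
import numpy as np

def unfold(A, n):
    """Mode-n unfolding: move axis n to the front, then flatten the rest."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: restore the tensor of the given shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(A, U, n):
    """Mode-n product A x_n U, computed via unfolding as in Eq. (1)."""
    shape = list(A.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(A, n), n, tuple(shape))

A = np.random.randn(3, 4, 5)
assert np.allclose(fold(unfold(A, 1), 1, A.shape), A)
# The norm identity ||A||_F = ||A_(k)||_F holds for every mode k:
for k in range(3):
    assert np.isclose(np.linalg.norm(A), np.linalg.norm(unfold(A, k)))
```

The same identities hold for the $\ell_0$ and $\ell_1$ norms, since unfolding only rearranges entries.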
2.2 Tensor Rank
Traditionally, there are two definitions of tensor rank, which are based on PARAFAC decomposition
[7] and Tucker decomposition [8], respectively.
As stated in [7], in analogy to the SVD, the rank of a tensor $\mathcal{A}$ can be defined as the minimum number $r$ for decomposing the tensor into rank-one components as follows:

$$\mathcal{A} = \sum_{j=1}^{r} \lambda_j \, u_j^{(1)} \circ u_j^{(2)} \circ \cdots \circ u_j^{(N)} = \mathcal{D} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}, \tag{2}$$

where $\circ$ denotes the outer product, $\mathcal{D} \in \mathbb{R}^{r \times r \times \cdots \times r}$ is an N-order diagonal tensor whose $j$th diagonal element is $\lambda_j$, and $U^{(n)} = [u_1^{(n)}, \ldots, u_r^{(n)}]$. The above decomposition model is called PARAFAC. However, this rank definition is a highly nonconvex and discontinuous function on the tensor space. In general, direct minimization of such a function is NP-hard.
Another kind of rank definition considers the mode-$n$ ranks $r_n$ of tensors, which is inspired by the Tucker decomposition [8]. The tensor $\mathcal{A}$ can be decomposed as follows:

$$\mathcal{A} = \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}, \tag{3}$$

where $\mathcal{G} = \mathcal{A} \times_1 U^{(1)\top} \times_2 U^{(2)\top} \cdots \times_N U^{(N)\top}$ is the core tensor controlling the interaction between the $N$ mode matrices $U^{(1)}, \ldots, U^{(N)}$. In the sense of the Tucker decomposition, an appropriate definition of tensor rank should satisfy the following condition: a low-rank tensor is a low-rank matrix when unfolded appropriately. This means the rank of a tensor can be represented by the ranks of the tensor's unfolding matrices. As illustrated in [8], the orthonormal column vectors of $U^{(n)}$ span the column space of the mode-$n$ unfolding matrix $A_{(n)}$ ($1 \le n \le N$), so that if $U^{(n)} \in \mathbb{R}^{I_n \times r_n}$, $n = 1, \ldots, N$, then the rank of the mode-$n$ unfolding matrix $A_{(n)}$ is $r_n$. Accordingly, we call $\mathcal{A}$ a rank-$(r_1, \ldots, r_N)$ tensor. We adopt this tensor rank definition in this paper.
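Under this definition, the rank-(r_1, ..., r_N) of a tensor can be read off its unfoldings. The following sketch (our illustration, not the paper's code) builds a rank-(2, 2, 2) tensor via a random Tucker construction and verifies its mode-n ranks:

```python
import numpy as np

def unfold(A, n):
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def tucker_rank(A, tol=1e-10):
    """Rank-(r_1, ..., r_N): the matrix rank of each mode-n unfolding."""
    return tuple(np.linalg.matrix_rank(unfold(A, n), tol=tol)
                 for n in range(A.ndim))

# Random 2x2x2 core and orthonormal factor matrices give, with
# probability one, a rank-(2, 2, 2) tensor of size 6x7x8.
G = np.random.randn(2, 2, 2)
U = [np.linalg.qr(np.random.randn(d, 2))[0] for d in (6, 7, 8)]
L = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
assert tucker_rank(L) == (2, 2, 2)
```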
3 Low-rank Structure Recovery for Tensors
In this section, we first formulate the problem of recovering low-rank tensors despite deformation
and corruption, and then introduce an iterative optimization method to solve the low-rank recovery
problem. Finally, the relationship between our work and the previous work is discussed to show
why our work can simultaneously realize rectification and alignment.
3.1 Problem Formulation
Without loss of generality, in this paper we focus on 3-order tensors to study the low-rank recovery problem. Most practical data and applications we experiment with belong to this class of tensors. Consider a low-rank 3-order data tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$. In real applications, the data are inevitably corrupted by noise or errors. Rather than modeling the noise with a small Gaussian, we model it with an additive sparse error term $\mathcal{E}$ which fulfills the following conditions: (1) only a small fraction of entries are corrupted; (2) the errors are large in magnitude; (3) the number and the locations of the corrupted entries are unknown.
Based on the above assumptions, the original tensor data A can be represented as
$$\mathcal{A} = \mathcal{L} + \mathcal{E}, \tag{4}$$
where L is a low-rank tensor. In this paper, the notion of low-rankness will become clear once we
introduce our objective function in a few paragraphs. The ultimate goal of this work is to recover L
from the erroneous observations A.
An explicit assumption in Eq. (4) is that it requires the tensor to be well aligned. For real data
such as video and face images, the image frames (face images) should be well aligned to ensure
that the three-order tensor of the image stack has low rank. However, for most practical data,
precise alignments are not always guaranteed and even small misalignments will break the low-rank
structure of the data. To compensate for possible misalignments, we adopt a set of transformations
$\gamma_1^{-1}, \ldots, \gamma_{I_3}^{-1} \in \mathbb{R}^p$ ($p$ is the dimension of the transformations) which act on the two-dimensional slices (matrices) of the tensor data¹. Based on the set of transformations $\Gamma = \{\gamma_1, \ldots, \gamma_{I_3}\}$, Eq. (4) can be changed to

$$\mathcal{A} \circ \Gamma = \mathcal{L} + \mathcal{E}, \tag{5}$$

where $\mathcal{A} \circ \Gamma$ means applying the transformation $\gamma_i$ to each matrix $\mathcal{A}(:, :, i)$, $i = 1, \ldots, I_3$.
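As a toy illustration of the notation $\mathcal{A} \circ \Gamma$ (our own sketch; the paper's transformations are parametric image warps such as affine or projective maps, while here integer shifts stand in for simplicity):

```python
import numpy as np

def apply_transforms(A, gammas):
    """A o Gamma: apply the i-th slice transformation to A[:, :, i]."""
    out = np.empty_like(A)
    for i in range(A.shape[2]):
        out[:, :, i] = gammas[i](A[:, :, i])
    return out

# Toy per-slice transformations: circular shifts by 0, 1, 2 rows.
A = np.random.randn(5, 5, 3)
gammas = [lambda X, s=s: np.roll(X, s, axis=0) for s in (0, 1, 2)]
B = apply_transforms(A, gammas)
assert B.shape == A.shape
```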
When both corruption and misalignment are modeled, the low-rank structure recovery for tensors can be formalized as follows:

$$\min_{\mathcal{L}, \mathcal{E}, \Gamma} \; \mathrm{rank}(\mathcal{L}) + \lambda \|\mathcal{E}\|_0, \quad \text{s.t.} \; \mathcal{A} \circ \Gamma = \mathcal{L} + \mathcal{E}. \tag{6}$$

The above optimization problem is not directly tractable for the following two reasons: (1) both the rank and the $\ell_0$-norm are nonconvex and discontinuous; (2) the equality constraint $\mathcal{A} \circ \Gamma = \mathcal{L} + \mathcal{E}$ is highly nonlinear due to the domain transformation $\Gamma$.
To relax limitation (1), we first recall the tensor rank definition in Section 2.2. In our work, we adopt the rank definition based on the Tucker decomposition, which can be represented as follows: $\mathcal{L}$ is a rank-$(r_1, r_2, r_3)$ tensor, where $r_i$ is the rank of the unfolding matrix $L_{(i)}$. In this way, the tensor rank can be converted to calculating the ranks of a set of matrices. We know that the nuclear (or trace) norm is the convex envelope of the matrix rank: $\|L_{(i)}\|_* = \sum_{k=1}^{m} \sigma_k(L_{(i)})$, where $\sigma_k(L_{(i)})$ is the $k$th singular value of the matrix $L_{(i)}$. Therefore, we define the nuclear norm of a three-order tensor as follows:

$$\|\mathcal{L}\|_* = \sum_{i=1}^{N} \alpha_i \|L_{(i)}\|_*, \quad N = 3. \tag{7}$$

We assume $\sum_{i=1}^{N} \alpha_i = 1$ to make the definition consistent with the matrix case. The rank of $\mathcal{L}$ is replaced by $\|\mathcal{L}\|_*$ to obtain a convex relaxation of the optimization problem. It is well known that

¹In most applications, a three-order tensor can be naturally partitioned into a set of matrices (such as image frames in a video), and the transformations are applied to these matrices.
the $\ell_1$-norm is a good convex surrogate of the $\ell_0$-norm. We hence replace $\|\mathcal{E}\|_0$ with $\|\mathcal{E}\|_1$, and the optimization problem in (6) becomes

$$\min_{\mathcal{L}, \mathcal{E}, \Gamma} \; \sum_{i=1}^{3} \alpha_i \|L_{(i)}\|_* + \lambda \|\mathcal{E}\|_1, \quad \text{s.t.} \; \mathcal{A} \circ \Gamma = \mathcal{L} + \mathcal{E}. \tag{8}$$
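For illustration only (the equal weights $\alpha_i = 1/3$ and the value of $\lambda$ below are placeholders, not the paper's settings), the relaxed objective of problem (8) can be evaluated directly from the unfoldings:

```python
import numpy as np

def unfold(A, n):
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def tensor_nuclear_norm(L, alphas=(1/3, 1/3, 1/3)):
    """Weighted sum of nuclear norms of the mode-n unfoldings, Eq. (7)."""
    return sum(a * np.linalg.svd(unfold(L, n), compute_uv=False).sum()
               for n, a in enumerate(alphas))

def objective(L, E, lam, alphas=(1/3, 1/3, 1/3)):
    """Objective of problem (8): tensor nuclear norm plus l1 error term."""
    return tensor_nuclear_norm(L, alphas) + lam * np.abs(E).sum()

# Sanity check: a tensor with one entry of value 2 has a single singular
# value of 2 in every unfolding, so its weighted nuclear norm is 2.
L = np.zeros((4, 5, 6))
L[0, 0, 0] = 2.0
assert np.isclose(tensor_nuclear_norm(L), 2.0)
```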
For limitation (2), linearization with respect to the transformation parameters $\Gamma$ is a popular way to approximate the above constraint when the change in $\Gamma$ is small or incremental. Accordingly, the first-order approximation to the above problem is as follows:

$$\min_{\mathcal{L}, \mathcal{E}, \Delta\Gamma} \; \sum_{i=1}^{3} \alpha_i \|L_{(i)}\|_* + \lambda \|\mathcal{E}\|_1, \quad \text{s.t.} \; \mathcal{A} \circ \Gamma + \mathrm{fold}_3\Big( \sum_{i=1}^{n} J_i \, \Delta\gamma_i \, \epsilon_i^\top \Big) = \mathcal{L} + \mathcal{E}, \tag{9}$$

where $J_i$ represents the Jacobian of $\mathcal{A}(:, :, i)$ with respect to the transformation parameters $\gamma_i$, and $\epsilon_i$ denotes the standard basis for $\mathbb{R}^n$.
3.2 Optimization Algorithm
Although the problem in (9) is convex, it is still difficult to solve due to the interdependent nuclear norm terms. To remove these interdependencies and to optimize these terms independently, we introduce three auxiliary matrices $\{M_i, i = 1, 2, 3\}$ to replace $\{L_{(i)}, i = 1, 2, 3\}$, and the optimization problem changes to

$$\min_{\mathcal{L}, \mathcal{E}, \Delta\Gamma} \; \sum_{i=1}^{3} \alpha_i \|M_i\|_* + \lambda \|\mathcal{E}\|_1, \quad \text{s.t.} \; \mathcal{A} \circ \Gamma + \Delta\widetilde{\Gamma} = \mathcal{L} + \mathcal{E}, \; L_{(i)} = M_i, \; i = 1, 2, 3, \tag{10}$$

where we define $\Delta\widetilde{\Gamma} \doteq \mathrm{fold}_3\big( \sum_{i=1}^{n} J_i \, \Delta\gamma_i \, \epsilon_i^\top \big)$ for simplicity.
To relax the above equality constraints, we apply the Augmented Lagrange Multiplier (ALM) method [15] to the above problem, and obtain the following augmented Lagrangian function:

$$f_\mu(M_i, \mathcal{L}, \mathcal{E}, \Delta\widetilde{\Gamma}, \mathcal{Y}, Q_i) = \sum_{i=1}^{3} \alpha_i \|M_i\|_* + \lambda \|\mathcal{E}\|_1 - \langle \mathcal{Y}, \mathcal{T} \rangle + \frac{1}{2\mu} \|\mathcal{T}\|_F^2 + \sum_{i=1}^{3} \Big( -\langle Q_i, O_i \rangle + \frac{1}{2\mu_i} \|O_i\|_F^2 \Big), \tag{11}$$

where we define $\mathcal{T} = \mathcal{L} + \mathcal{E} - \mathcal{A} \circ \Gamma - \Delta\widetilde{\Gamma}$ and $O_i = L_{(i)} - M_i$. $\mathcal{Y}$ and $Q_i$ are the Lagrange multiplier tensor and matrices respectively, $\langle \cdot, \cdot \rangle$ denotes the inner product of matrices or tensors, and $\mu$ and $\mu_i$ are positive scalars. To have fewer parameters, we set $\mu = \mu_i$, $i = 1, 2, 3$, and $\mu_i$ is replaced by $\mu$ in the following sections, including the experiments and the supplementary material.
A typical iterative minimization process based on the alternating direction method of multipliers (ADMM) [15, 16] can be written explicitly as

$$\begin{cases} [M_i^{k+1}, \mathcal{L}^{k+1}, \mathcal{E}^{k+1}] := \arg\min_{M_i, \mathcal{L}, \mathcal{E}} f_\mu(M_i, \mathcal{L}, \mathcal{E}, \Delta\widetilde{\Gamma}^k, \mathcal{Y}^k, Q_i^k); \\ \Delta\widetilde{\Gamma}^{k+1} := \arg\min_{\Delta\widetilde{\Gamma}} f_\mu(M_i^{k+1}, \mathcal{L}^{k+1}, \mathcal{E}^{k+1}, \Delta\widetilde{\Gamma}, \mathcal{Y}^k, Q_i^k); \\ \mathcal{Y}^{k+1} := \mathcal{Y}^k - \mathcal{T}^{k+1}/\mu; \\ Q_i^{k+1} := Q_i^k - (L_{(i)}^{k+1} - M_i^{k+1})/\mu, \quad i = 1, 2, 3. \end{cases} \tag{12}$$
However, minimizing the augmented Lagrangian function $f_\mu(M_i, \mathcal{L}, \mathcal{E}, \Delta\widetilde{\Gamma}^k, \mathcal{Y}^k, Q_i^k)$ with respect to $M_i$, $\mathcal{L}$ and $\mathcal{E}$ using ADMM is expensive in practice, and moreover, global convergence cannot be guaranteed. Therefore, we propose to solve the above problem by taking one proximal gradient step:

$$\begin{cases} M_i^{k+1} := \arg\min_{M_i} \; \alpha_i \|M_i\|_* + \frac{1}{2\mu\eta_1} \big\| M_i - \big( M_i^k - \eta_1 (M_i^k - L_{(i)}^k + \mu Q_i^k) \big) \big\|_F^2, \quad i = 1, 2, 3; \\[4pt] \mathcal{L}^{k+1} := \arg\min_{\mathcal{L}} \; \frac{1}{2\mu\eta_1} \Big\| \mathcal{L} - \Big( \mathcal{L}^k - \eta_1 \big( \sum_{i=1}^{3} \big( \mathcal{L}^k - \mathrm{fold}_i(M_i^k + \mu Q_i^k) \big) + \mathcal{T}^k - \mu \mathcal{Y}^k \big) \Big) \Big\|_F^2; \\[4pt] \mathcal{E}^{k+1} := \arg\min_{\mathcal{E}} \; \lambda \|\mathcal{E}\|_1 + \frac{1}{2\mu\eta_1} \big\| \mathcal{E} - \big( \mathcal{E}^k - \eta_1 (\mathcal{T}^k - \mu \mathcal{Y}^k) \big) \big\|_F^2; \\[4pt] \Delta\widetilde{\Gamma}^{k+1} := \arg\min_{\Delta\widetilde{\Gamma}} \; \frac{1}{2\mu\eta_2} \big\| \Delta\widetilde{\Gamma} - \big( \Delta\widetilde{\Gamma}^k - \eta_2 (\Delta\widetilde{\Gamma}^k - \mathcal{L}^{k+1} - \mathcal{E}^{k+1} + \mathcal{A} \circ \Gamma + \mu \mathcal{Y}^k) \big) \big\|_F^2. \end{cases} \tag{13}$$
In detail, the solutions of each term are obtained as follows.
- For the term $M_i^{k+1}$:
$$M_i^{k+1} = U_i \, D_{\alpha_i \mu \eta_1}(\Sigma) \, V_i^\top,$$
where $U_i \Sigma V_i^\top$ is the SVD of $M_i^k - \eta_1 (M_i^k - L_{(i)}^k + \mu Q_i^k)$, and $D_\tau(\cdot)$ is the shrinkage operator $D_\tau(x) = \mathrm{sgn}(x) \max(|x| - \tau, 0)$.
- For the term $\mathcal{L}^{k+1}$:
$$\mathcal{L}^{k+1} = \mathcal{L}^k - \eta_1 \Big( \sum_{i=1}^{3} \big( \mathcal{L}^k - \mathrm{fold}_i(M_i^k + \mu Q_i^k) \big) + \mathcal{T}^k - \mu \mathcal{Y}^k \Big).$$
- For the term $\mathcal{E}^{k+1}$:
$$\mathcal{E}^{k+1} = D_{\lambda \mu \eta_1} \big( \mathcal{E}^k - \eta_1 (\mathcal{T}^k - \mu \mathcal{Y}^k) \big).$$
- For the term $\Delta\widetilde{\Gamma}^{k+1}$:
$$\Delta\widetilde{\Gamma}^{k+1} = \Delta\widetilde{\Gamma}^k - \eta_2 \big( \Delta\widetilde{\Gamma}^k - \mathcal{L}^{k+1} - \mathcal{E}^{k+1} + \mathcal{A} \circ \Gamma + \mu \mathcal{Y}^k \big).$$
Here, since $\Delta\widetilde{\Gamma}^{k+1}$ is a tensor, we can transform it back to its original parameter form as
$$\Delta\Gamma^{k+1} = \sum_{i=1}^{n} J_i^+ \, (\Delta\widetilde{\Gamma}^{k+1})_{(3)}^\top \, \epsilon_i \epsilon_i^\top,$$
where $J_i^+ = (J_i^\top J_i)^{-1} J_i^\top$ is the pseudo-inverse of $J_i$ and $(\Delta\widetilde{\Gamma}^{k+1})_{(3)}$ is the mode-3 unfolding matrix of the tensor $\Delta\widetilde{\Gamma}^{k+1}$.
- For the terms $\mathcal{Y}^{k+1}$ and $Q_i^{k+1}$:
$$\mathcal{Y}^{k+1} = \mathcal{Y}^k - \mathcal{T}^{k+1}/\mu; \qquad Q_i^{k+1} = Q_i^k - (L_{(i)}^{k+1} - M_i^{k+1})/\mu, \quad i = 1, 2, 3.$$
The global convergence of the above optimization process is guaranteed by the following theorem.

Theorem 1. The sequence $\{M_i^k, \mathcal{L}^k, \mathcal{E}^k, \Delta\widetilde{\Gamma}^k, \mathcal{Y}^k, Q_i^k, \; i = 1, 2, 3\}$ generated by the above proximal gradient descent scheme with $\eta_1 < 1/5$ and $\eta_2 < 1$ converges to the optimal solution of Problem (10).

Proof. The proof of convergence can be found in the supplementary material.
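The two building blocks of these closed-form updates — the element-wise shrinkage operator $D_\tau$ and its singular-value counterpart applied to the unfoldings $M_i$ — can be sketched as follows (an illustrative NumPy snippet, not the authors' implementation):

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding (shrinkage) operator D_tau, applied element-wise."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: shrink the singular values of a matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

X = np.random.randn(8, 6)
Y = svt(X, 0.5)
# svt reduces every singular value by 0.5 (clipping at zero):
s_orig = np.linalg.svd(X, compute_uv=False)
s_new = np.linalg.svd(Y, compute_uv=False)
assert np.allclose(s_new, np.maximum(s_orig - 0.5, 0.0))
```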
As we see in Eq. (10), the optimization problem is similar to the problems addressed in [6, 1]. However, the proposed work differs from these earlier works in the following respects:

1. RASL and TILT can be viewed as two special cases of our work. Consider the mode-3 unfolding matrix $A_{(3)}$ in the bottom row of Fig. 1. Suppose the tensor is formed by stacking a set of images along the third mode. Setting $\alpha_1 = 0$, $\alpha_2 = 0$ and $\alpha_3 = 1$, our method reduces to RASL. For the mode-1 and mode-2 unfolding matrices (see Fig. 1), if we set $\alpha_1 = 0.5$, $\alpha_2 = 0.5$ and $\alpha_3 = 0$, our method reduces to TILT. In this sense, our formulation is more general, as it tends to simultaneously perform rectification and alignment.
2. Our work vs. RASL: In the image alignment applications, RASL treats each image as a
vector and does not make use of any spatial structure within each image. In contrast, as
shown in Fig. 1, in our work, the low-rank constraint on the mode-1 and mode-2 unfolding
matrices effectively harnesses the spatial structures within images.
3. Our work vs. TILT: TILT deals with only one image and harnesses spatial low-rank structures to rectify the image. However, TILT ignores the temporal correlation among multiple
images. Our work combines the merits of RASL and TILT, and thus can extract more
structural information in the visual data.
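As a minimal, self-contained sketch of how the updates in (13) compose — our own illustration, not the authors' released code — consider the special case with no domain transformations ($\Gamma$ fixed to the identity and $\Delta\widetilde{\Gamma} = 0$, so the constraint reduces to $\mathcal{A} = \mathcal{L} + \mathcal{E}$ as in [14]); the parameter values lam, mu, eta and iters are placeholders:

```python
import numpy as np

def unfold(A, n):
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def recover(A, lam=0.3, mu=1.0, eta=0.15, iters=400, alphas=(1/3, 1/3, 1/3)):
    """Proximal-gradient ALM sketch for A = L + E, L low-rank, E sparse."""
    L, E, Y = np.zeros_like(A), np.zeros_like(A), np.zeros_like(A)
    M = [unfold(L, i) for i in range(3)]   # auxiliary unfoldings M_i
    Q = [np.zeros_like(m) for m in M]      # multipliers for L_(i) = M_i
    for _ in range(iters):
        T = L + E - A                      # constraint residual
        # One proximal gradient step per variable, as in (13):
        M_new = [svt(M[i] - eta * (M[i] - unfold(L, i) + mu * Q[i]),
                     alphas[i] * mu * eta) for i in range(3)]
        L_new = L - eta * (sum(L - fold(M[i] + mu * Q[i], i, A.shape)
                               for i in range(3)) + T - mu * Y)
        E_new = shrink(E - eta * (T - mu * Y), lam * mu * eta)
        M, L, E = M_new, L_new, E_new
        # Multiplier updates:
        T = L + E - A
        Y = Y - T / mu
        Q = [Q[i] - (unfold(L, i) - M[i]) / mu for i in range(3)]
    return L, E
```

The multiplier steps drive the residual $\mathcal{L} + \mathcal{E} - \mathcal{A}$ toward zero while the shrinkage steps keep the unfoldings low-rank and the error sparse.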
4 Experimental Results
In this section, we compare the proposed algorithm with two algorithms: RASL [6] and Li's work [14] (TILT [1] is not adopted for comparison because it can deal with only one sample). We choose them for comparison because: (1) they represent the latest work that addresses similar problems as ours; (2) the effectiveness and efficiency of our optimization method for the recovery of low-rank tensors can be validated by comparing our work with RASL and Li's work. These algorithms are tested with several synthetic and real-world datasets, and the results are both qualitatively and quantitatively analyzed.
Figure 2: Results on synthetic data. Left: reconstruction error vs. the fraction c (%) of corrupted entries; right: running time (seconds) vs. c (%), for Li's work, RASL+APG, RASL+APGP, RASL+ALM, RASL+IALM, and our work.
(a) original data
(b) RASL
(c) Li's work
(d) Our work
Figure 3: Results on the first data set.
Results on Synthetic Data. This part tests the above three algorithms with synthetic data. To make a fair comparison, some implementation details are clarified as follows: (1) Since domain transformations are not considered in Li's work, we assume the synthetic data are well aligned. (2) To eliminate the influence of different optimization methods, RASL is implemented with the following four optimization methods: APG (Accelerated Proximal Gradient), APGP (Accelerated Proximal Gradient with partial SVDs), ALM (Augmented Lagrange Multiplier) and IALM (Inexact Augmented Lagrange Multiplier)². Moreover, since RASL is applied to one mode of the tensor, to make it more competitive, we apply RASL to each mode of the tensor and take the mode that has the minimal reconstruction error.

For synthetic data, we first randomly generate two data tensors: (1) a pure low-rank tensor $\mathcal{L}_o \in \mathbb{R}^{50 \times 50 \times 50}$ whose rank is (10, 10, 10); (2) an error tensor $\mathcal{E} \in \mathbb{R}^{50 \times 50 \times 50}$ in which only a fraction $c$ of the entries are non-zero (to ensure the error is sparse, the maximal value of $c$ is set to 40%). The testing tensor is then obtained as $\mathcal{A} = \mathcal{L}_o + \mathcal{E}$. All three algorithms are applied to recover the low-rank structure of $\mathcal{A}$, denoted $\mathcal{L}_r$. The reconstruction error is defined as $\mathrm{error} = \|\mathcal{L}_o - \mathcal{L}_r\|_F / \|\mathcal{L}_o\|_F$. The result of a single run is a random variable, because the data are randomly generated, so the experiment is repeated 50 times to generate statistical averages.
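The synthetic setup and the error metric can be sketched as follows (our reading of the setup: the paper does not specify the generator for the rank-(10, 10, 10) tensor or the error magnitudes, so a random Tucker construction and uniform gross errors are assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed generator for a rank-(10, 10, 10) ground-truth tensor L_o.
G = rng.standard_normal((10, 10, 10))
U1, U2, U3 = (np.linalg.qr(rng.standard_normal((50, 10)))[0] for _ in range(3))
L_o = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

# Sparse gross errors on a fraction c of the entries (magnitudes assumed).
c = 0.10
E = np.zeros(L_o.size)
idx = rng.choice(L_o.size, size=int(c * L_o.size), replace=False)
E[idx] = rng.uniform(-5, 5, size=idx.size)
A = L_o + E.reshape(L_o.shape)

def rel_error(L_o, L_r):
    """Reconstruction error ||L_o - L_r||_F / ||L_o||_F from the paper."""
    return np.linalg.norm(L_o - L_r) / np.linalg.norm(L_o)

assert rel_error(L_o, L_o) == 0.0
```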
The left column of Fig. 2 shows the reconstruction error, from which we can see that our work achieves the most accurate reconstruction among all the algorithms. Even when 40% of the entries are corrupted, the reconstruction error of our work is about 0.08. As shown in the right column of Fig. 2, compared with Li's work and RASL+ALM, our work achieves about a 3-4 times speed-up. Moreover, the result shows that the average running time of our work is higher than RASL+APG, RASL+APGP and RASL+IALM. However, these three methods only optimize over a single mode, while our work optimizes over all three modes, and the number of variables involved in (10) is about three times that in RASL. The above results demonstrate the effectiveness and efficiency of our proposed optimization method for low-rank tensor recovery.
Results on Real World Data. In this part, we apply all three algorithms (RASL here is solved by ALM, which gives the best results) to several real-world datasets. The first dataset contains 16 images of the side of a building, taken from various viewpoints by a perspective camera, and with various occlusions due to tree branches. Fig. 3 illustrates the low-rank recovery results on this dataset, in which Fig. 3(a) shows the original images and Fig. 3(b)-(d) show the results of the three algorithms. Compared with RASL, we can see that our work and Li's work not only remove the
²For more detail, please refer to http://perception.csl.illinois.edu/matrix-rank/sample code.html
(a) original data
(b) RASL
(c) Li's work
(d) Our work
Figure 4: Results on the second data set.
(a) original data
(b) RASL
(c) Li's work
(d) Our work
Figure 5: Results on the third data set.
branches from the windows, but also rectify the window positions. Moreover, the result obtained by our work is noticeably sharper than Li's work.
The second dataset contains 100 images of the handwritten number "3", with a fair amount of diversity. For example, as shown in Fig. 4(a), the number "3" in column 1 and row 6 is barely recognizable. The results of the three algorithms on this dataset are shown in Fig. 4(b)-(d). We can see that our work achieves better performance than the other two algorithms in terms of human perception: the 3's are clearer and their poses are upright.
The third dataset contains 140 frames of a video showing Al Gore talking. As shown in Fig. 5, the face alignment results obtained by our work are significantly better than those obtained by the other two algorithms. The reason is that the human face has rich spatial low-rank structure due to symmetry, and our method simultaneously harnesses both temporal and spatial low-rank structures for rectification and alignment.
5 Conclusion
We have in this paper proposed a general low-rank recovery framework for arbitrary tensor data, which can simultaneously perform rectification and alignment. We have adopted a proximal-gradient-based alternating direction method to solve the optimization problem, and have shown that the convergence of our algorithm is guaranteed. By comparing our work with the state-of-the-art methods through extensive simulations and experiments, we have demonstrated the effectiveness and efficiency of our method.
6 Acknowledgment
This work is partly supported by NSFC (Grant Nos. 61100147, 61203241 and 61305035),
Zhejiang Provincial Natural Science Foundation (Grants Nos. LY12F03016, LQ12F03004 and
LQ13F030009).
References
[1] Z. Zhang, A. Ganesh, X. Liang, and Y. Ma, "TILT: Transform-Invariant Low-rank Textures", International Journal of Computer Vision, 99(1): 1-24, 2012.
[2] G. Huang, V. Jain, and E. Learned-Miller, "Unsupervised joint alignment of complex images", International Conference on Computer Vision, pp. 1-8, 2007.
[3] E. Learned-Miller, "Data Driven Image Models Through Continuous Joint Alignment", IEEE Trans. on Pattern Analysis and Machine Intelligence, 28(2): 236-250, 2006.
[4] M. Cox, S. Lucey, S. Sridharan, and J. Cohn, "Least Squares Congealing for Unsupervised Alignment of Images", International Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
[5] A. Vedaldi, G. Guidi, and S. Soatto, "Joint Alignment Up to (Lossy) Transformations", International Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
[6] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images", IEEE Trans. on Pattern Analysis and Machine Intelligence, 34(11): 2233-2246, 2012.
[7] J. Kruskal, "Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics", Linear Algebra and its Applications, 18(2): 95-138, 1977.
[8] T. Kolda and B. Bader, "Tensor decompositions and applications", SIAM Review, 51(3): 455-500, 2009.
[9] E. Candes, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?", Journal of the ACM, 2011.
[10] S. Gandy, B. Recht, and I. Yamada, "Tensor Completion and Low-N-Rank Tensor Recovery via Convex Optimization", Inverse Problems, 2011.
[11] M. Signoretto, L. Lathauwer, and J. Suykens, "Nuclear Norms for Tensors and Their Use for Convex Multilinear Estimation", Linear Algebra and Its Applications, 2010.
[12] J. Liu, P. Musialski, P. Wonka, and J. Ye, "Tensor Completion for Estimating Missing Values in Visual Data", IEEE Trans. on Pattern Analysis and Machine Intelligence, 35(1): 208-220, 2013.
[13] R. Tomioka, K. Hayashi, and H. Kashima, "Estimation of low-rank tensors via convex optimization", Technical report, arXiv:1010.0789, 2011.
[14] Y. Li, J. Yan, Y. Zhou, and J. Yang, "Optimum Subspace Learning and Error Correction for Tensors", European Conference on Computer Vision, pp. 790-803, 2010.
[15] Z. Lin, M. Chen, L. Wu, and Y. Ma, "The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices", Technical Report UILU-ENG-09-2215, UIUC, 2009.
[16] J. Yang and X. Yuan, "Linearized augmented lagrangian and alternating direction methods for nuclear norm minimization", Mathematics of Computation, 82(281): 301-329, 2013.
Phase Retrieval using Alternating Minimization
Praneeth Netrapalli
Department of ECE
The University of Texas at Austin
Austin, TX 78712
[email protected]
Prateek Jain
Microsoft Research India
Bangalore, India
[email protected]
Sujay Sanghavi
Department of ECE
The University of Texas at Austin
Austin, TX 78712
[email protected]
Abstract
Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers). Over the last two decades, a popular generic empirical approach to the many variants of this problem has been one of alternating minimization; i.e. alternating between estimating the missing phase information, and the candidate solution. In this paper, we show that a simple alternating minimization algorithm geometrically converges to the solution of one such problem: finding a vector x from y, A, where y = |A^T x| and |z| denotes a vector of element-wise magnitudes of z, under the assumption that A is Gaussian.
Empirically, our algorithm performs similarly to recently proposed convex techniques for this variant (which are based on "lifting" to a convex matrix problem) in sample complexity and robustness to noise. However, our algorithm is much more efficient and can scale to large problems. Analytically, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the only known theoretical guarantee for alternating minimization for any variant of phase retrieval problems in the non-convex setting.
1 Introduction
In this paper we are interested in recovering a complex^1 vector x* ∈ C^n from magnitudes of its linear measurements. That is, for a_i ∈ C^n, if

    y_i = |<a_i, x*>|,   for i = 1, . . . , m,        (1)

then the task is to recover x* using y and the measurement matrix A = [a_1 a_2 . . . a_m].
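To make the model in (1) concrete, the measurements can be simulated in a few lines. The following is a minimal NumPy sketch of our own (the sizes n and m are illustrative, not taken from the paper); it also shows the global phase ambiguity inherent to the problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 128
x_star = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))  # columns a_1, ..., a_m

y = np.abs(A.conj().T @ x_star)   # y_i = |<a_i, x*>|; all phase information is discarded

# The measurements are invariant to a global phase of x*, which is why
# x* can only ever be recovered up to multiplication by some e^{i*theta}.
y_rotated = np.abs(A.conj().T @ (np.exp(0.7j) * x_star))
print(np.allclose(y, y_rotated))  # True
```

This invariance is the complex analogue of the missing sign in the real case.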
The above problem arises in many settings where it is harder / infeasible to record the phase of measurements, while recording the magnitudes is significantly easier. This problem, known as phase
retrieval, is encountered in several applications in crystallography, optics, spectroscopy and tomography [14]. Moreover, the problem is broadly studied in the following two settings:
(i) The measurements in (1) correspond to the Fourier transform (the number of measurements
here is equal to n) and there is some a priori information about the signal.
^1 Our results also cover the real case, i.e. where all quantities are real.
(ii) The set of measurements y is overcomplete (i.e., m > n), while some a priori information
about the signal may or may not be available.
In the first case, various types of a priori information about the underlying signal, such as positivity,
magnitude information on the signal [11], sparsity [25] and so on have been studied. In the second
case, algorithms for various measurement schemes such as Fourier oversampling [21], multiple
random illuminations [4, 28] and wavelet transform [28] have been suggested.
By and large, the most well known methods for solving this problem are the error reduction algorithms due to Gerchberg and Saxton [13] and Fienup [11], and variants thereof. These algorithms
are alternating projection algorithms that iterate between the unknown phases of the measurements
and the unknown underlying vector. Though the empirical performance of these algorithms has been
well studied [11, 19], and they are used in many applications [20], there are not many theoretical
guarantees regarding their performance.
More recently, a line of work [7, 6, 28] has approached this problem from a different angle, based
on the realization that recovering x* is equivalent to recovering the rank-one matrix x* (x*)^T, i.e., its outer product. Inspired by the recent literature on trace norm relaxation of the rank constraint, they design SDPs to solve this problem. Refer to Section 1.1 for more details.
In this paper we go back to the empirically more popular ideology of alternating minimization;
we develop a new alternating minimization algorithm, for which we show that (a) empirically, it
noticeably outperforms convex methods, and (b) analytically, a natural resampled version of this
algorithm requires O(n log^3 n) i.i.d. random Gaussian measurements to geometrically converge to
the true vector.
Our contribution:
- The iterative part of our algorithm is implicit in previous work [13, 11, 28, 4]; the novelty in our algorithmic contribution is the initialization step which makes it more likely for the iterative procedure to succeed - see Figures 1 and 2.
- Our analytical contribution is the first theoretical guarantee regarding the convergence of alternating minimization for the phase retrieval problem in a non-convex setting.
- When the underlying vector is sparse, we design another algorithm that achieves a sample complexity of O((x*_min)^{-4} (log n + log^3 k)), where k is the sparsity and x*_min is the minimum non-zero entry of x*. This algorithm also runs over C^n and scales much better than SDP based methods.
Besides being an empirically better algorithm for this problem, our work is also interesting in a
broader sense: there are many problems in machine learning where the natural formulation of a
problem is non-convex; examples include rank constrained problems, applications of EM algorithms
etc., and alternating minimization has good empirical performance. However, the methods with the
best (or only) analytical guarantees involve convex relaxations (e.g., by relaxing the rank constraint
and penalizing the trace norm). In most of these settings, correctness of alternating minimization is
an open question. We believe that our results in this paper are of interest, and may have implications,
in this larger context.
The rest of the paper is organized as follows: In section 1.1, we briefly review related work. We
clarify our notation in Section 2. We present our algorithm in Section 3 and the main results in
Section 4. We present our results for the sparse case in Section 5. Finally, we present experimental
results in Section 6.
1.1
Related Work
Phase Retrieval via Non-Convex Procedures: In spite of the huge amount of work it has attracted,
phase retrieval has been a long standing open problem. Early work in this area focused on using
holography to capture the phase information along with magnitude measurements [12]. However,
computational methods for reconstruction of the signal using only magnitude measurements received a lot of attention due to their applicability in resolving spurious noise, fringes, optical system
aberrations and so on and difficulties in the implementation of interferometer setups [9]. Though
such methods have been developed to solve this problem in various practical settings [8, 20], our
theoretical understanding of this problem is still far from complete. Many papers have focused on
determining conditions under which (1) has a unique solution - see [24] and references therein.
However, the uniqueness results of these papers do not resolve the algorithmic question of how to
find the solution to (1).
Since the seminal work of Gerchberg and Saxton [13] and Fienup [11], many iterated projection
algorithms have been developed targeted towards various applications [1, 10, 2]. [21] first suggested
the use of multiple magnitude measurements to resolve the phase problem. This approach has been
successfully used in many practical applications - see [9] and references therein. Following the
empirical success of these algorithms, researchers were able to explain its success in some of the
instances [29] using Bregman's theory of iterated projections onto convex sets [3]. However, many
instances, such as the one we consider in this paper, are out of reach of this theory since they involve
magnitude constraints which are non-convex. To the best of our knowledge, there are no theoretical
results on the convergence of these approaches in a non-convex setting.
Phase Retrieval via Convex Relaxation: An interesting recent approach for solving this problem
formulates it as one of finding the rank-one solution to a system of linear matrix equations. The
papers [7, 6] then take the approach of relaxing the rank constraint by a trace norm penalty, making
the overall algorithm a convex program (called PhaseLift) over n x n matrices. Another recent line
of work [28] takes a similar but different approach: it uses an SDP relaxation (called PhaseCut) that
is inspired by the classical SDP relaxation for the max-cut problem. To date, these convex methods
are the only ones with analytical guarantees on statistical performance [5, 28] (i.e. the number m of
measurements required to recover x? ) ? under an i.i.d. random Gaussian model on the measurement
vectors ai . However, by ?lifting? a vector problem to a matrix one, these methods lead to a much
larger representation of the state space, and higher computational cost as a result.
Sparse Phase Retrieval: A special case of the phase retrieval problem which has received a lot
of attention recently is when the underlying signal x* is known to be sparse. Though this problem
is closely related to the compressed sensing problem, lack of phase information makes this harder.
However, the ℓ1 regularization approach of compressed sensing has been successfully used in this
setting as well. In particular, if x* is sparse, then the corresponding lifted matrix x* (x*)^T is also
sparse. [22, 18] use this observation to design ℓ1-regularized SDP algorithms for phase retrieval
of sparse vectors. For random Gaussian measurements, [18] shows that ℓ1-regularized PhaseLift
recovers x* correctly if the number of measurements is O(k^2 log n). By the results of [23], this
result is tight up to logarithmic factors for ℓ1 and trace norm regularized SDP relaxations.
Alternating Minimization (a.k.a. ALS): Alternating minimization has been successfully applied
to many applications in the low-rank matrix setting. For example, clustering, sparse PCA, nonnegative matrix factorization, signed network prediction etc. - see [15] and references therein.
However, despite empirical success, for most of the problems, there are no theoretical guarantees
regarding its convergence except to a local minimum. The only exceptions are the results in [16, 15]
which give provable guarantees for alternating minimization for the problems of matrix sensing and
matrix completion.
2 Notation
We use bold capital letters (A, B etc.) for matrices, bold small case letters (x, y etc.) for vectors and non-bold letters (α, U etc.) for scalars. For every complex vector w ∈ C^n, |w| ∈ R^n denotes its element-wise magnitude vector. w^T and A^T denote the Hermitian transpose of the vector w and the matrix A respectively. e_1, e_2, etc. denote the canonical basis vectors in C^n. z̄ denotes the complex conjugate of the complex number z. In this paper we use the standard Gaussian (or normal) distribution over C^n; a is said to be distributed according to this distribution if a = a_1 + i a_2, where a_1 and a_2 are independent and are distributed according to N(0, I). We also define Ph(z) := z / |z| for every z ∈ C, and dist(w_1, w_2) := sqrt(1 - |<w_1, w_2>|^2 / (||w_1||^2 ||w_2||^2)) for every w_1, w_2 ∈ C^n. Finally, we use the shorthand wlog for without loss of generality and whp for with high probability.
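Two of these definitions, Ph and dist, are used repeatedly later, so it may help to see them as executable checks. A small sketch of our own (not from the paper); note in particular that dist is invariant to a global phase:

```python
import numpy as np

def ph(z):
    # Ph(z) = z / |z|, applied element-wise
    return z / np.abs(z)

def dist(w1, w2):
    # dist(w1, w2) = sqrt(1 - |<w1, w2>|^2 / (||w1||^2 ||w2||^2))
    c = np.abs(np.vdot(w1, w2)) / (np.linalg.norm(w1) * np.linalg.norm(w2))
    return np.sqrt(max(0.0, 1.0 - c ** 2))

w = np.array([1 + 1j, 2 - 1j, 0.5j])
print(dist(w, np.exp(0.3j) * w))                         # ~0: a global phase does not matter
print(dist(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 1.0: orthogonal vectors are maximally far
```

This phase invariance is exactly why dist, rather than ||w_1 - w_2||_2, is the right error measure for phase retrieval.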
3 Algorithm
In this section, we present our alternating minimization based algorithm for solving the phase retrieval problem. Let A ? Cn?m be the measurement matrix, with ai as its ith column; similarly let
Algorithm 1 AltMinPhase
input: A, y, t_0
1: Initialize x^0 <- top singular vector of sum_i y_i^2 a_i a_i^T
2: for t = 0, ..., t_0 - 1 do
3:   C_{t+1} <- Diag(Ph(A^T x^t))
4:   x^{t+1} <- argmin_{x ∈ R^n} ||A^T x - C_{t+1} y||_2
5: end for
output: x^{t_0}
y be the vector of recorded magnitudes. Then,

    y = |A^T x*|.
Recall that, given y and A, the goal is to recover x*. If we had access to the true phase c* of A^T x* (i.e., c*_i = Ph(<a_i, x*>)) and m >= n, then our problem reduces to one of solving a system of linear equations:

    C* y = A^T x*,

where C* := Diag(c*) is the diagonal matrix of phases. Of course we do not know C*, hence one approach to recovering x* is to solve:

    argmin_{C, x} ||A^T x - C y||^2,        (2)

where x ∈ C^n and C ∈ C^{m x m} is a diagonal matrix with each diagonal entry of magnitude 1. Note that the above problem is not convex since C is restricted to be a diagonal phase matrix and hence, one cannot use standard convex optimization methods to solve it.
Instead, our algorithm uses the well-known alternating minimization: alternatingly update x and C so as to minimize (2). Note that given C, the vector x can be obtained by solving the following least squares problem: min_x ||A^T x - C y||^2. Since the number of measurements m is larger than the dimensionality n and since each entry of A is sampled from independent Gaussians, A is invertible with probability 1. Hence, the above least squares problem has a unique solution. On the other hand, given x, the optimal C is given by C = Diag(Ph(A^T x)).
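Putting these two updates together with the spectral initialization of step 1 gives a compact implementation of Algorithm 1. The following NumPy sketch is our own illustration (problem sizes, seed, and iteration count are arbitrary choices, not the paper's experimental settings):

```python
import numpy as np

def alt_min_phase(A, y, t0=100):
    """Algorithm 1: spectral initialization, then alternate phase and least squares updates."""
    n, m = A.shape
    # Step 1: top eigenvector of S = (1/m) * sum_i y_i^2 a_i a_i^H  (S is Hermitian PSD)
    S = (A * y**2) @ A.conj().T / m
    x = np.linalg.eigh(S)[1][:, -1]           # eigh returns eigenvalues in ascending order
    for _ in range(t0):
        u = A.conj().T @ x
        c = u / np.maximum(np.abs(u), 1e-12)  # C_{t+1} = Diag(Ph(A^T x^t)), guarding division by 0
        # x^{t+1} = argmin_x || A^T x - C_{t+1} y ||_2
        x, *_ = np.linalg.lstsq(A.conj().T, c * y, rcond=None)
    return x

rng = np.random.default_rng(0)
n, m = 8, 400                                 # heavily overdetermined, so AltMin succeeds whp
x_star = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
y = np.abs(A.conj().T @ x_star)

x_hat = alt_min_phase(A, y)
# success up to a global phase: |<x_hat, x_star>| / ||x_hat|| should be close to 1
print(np.abs(np.vdot(x_hat, x_star)) / np.linalg.norm(x_hat))
```

Each iteration costs one matrix-vector product and one least squares solve; no SVD is needed after the initialization.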
While the above algorithm is simple and intuitive, it is known that with bad initial points, the solution might not converge to x*. In fact, this algorithm with a uniformly random initial point has been empirically evaluated for example in [28], where it performs worse than SDP based methods. Moreover, since the underlying problem is non-convex, standard analysis techniques fail to guarantee convergence to the global optimum, x*. Hence, the key challenges here are: a) a good initialization step for this method, b) establishing this method's convergence to x*.
We address the first key challenge in our AltMinPhase algorithm (Algorithm 1) by initializing x as the largest singular vector of the matrix S = (1/m) sum_i y_i^2 a_i a_i^T. Theorem 4.1 shows that when A is sampled from the standard complex normal distribution, this initialization is accurate. In particular, if m >= C_1 n log^3 n for large enough C_1 > 0, then whp we have ||x^0 - x*||_2 <= 1/100 (or any other constant).
Theorem 4.2 addresses the second key challenge and shows that a variant of AltMinPhase (see Algorithm 2) actually converges to the global optimum x* at a linear rate. See Section 4 for a detailed analysis of our algorithm.
We would like to stress that not only does a natural variant of our proposed algorithm have rigorous theoretical guarantees, it also is effective practically, as each of its iterations is fast, has a closed form solution and does not require SVD computation. AltMinPhase has similar statistical complexity to that of PhaseLift and PhaseCut while being much more efficient computationally. In particular, for accuracy ε, we only need to solve each least squares problem up to accuracy O(ε). Now, since the measurement matrix A is sampled from a Gaussian with m > Cn, it is well conditioned. Hence, using the conjugate gradient method, each such step takes O(mn log(1/ε)) time. When m = O(n) and we have geometric convergence, the total time taken by the algorithm is O(n^2 log^2(1/ε)). SDP based methods on the other hand require Ω(n^3 / √ε) time. Moreover, our initialization step increases the likelihood of successful recovery as opposed to a random initialization (which has been considered so far in prior work). Refer to Figure 1 for an empirical validation of these claims.
Figure 1: Sample and Time complexity of various methods for Gaussian measurement matrices A.
Figure 1(a) compares the number of measurements required for successful recovery by various methods. We note that our initialization improves sample complexity over that of random initialization
(AltMin (random init)) by a factor of 2. AltMinPhase requires a similar number of measurements as
PhaseLift and PhaseCut. Figure 1(b) compares the running time of various algorithms on log-scale.
Note that AltMinPhase is almost two orders of magnitude faster than PhaseLift and PhaseCut.
4 Main Results: Analysis
In this section we describe the main contribution of this paper: provable statistical guarantees for the
success of alternating minimization in solving the phase recovery problem. To this end, we consider
the setting where each measurement vector ai is iid and is sampled from the standard complex
normal distribution. We would like to stress that all the existing guarantees for phase recovery also
use exactly the same setting [6, 5, 28]. Table 1 presents a comparison of the theoretical guarantees
of Algorithm 2 as compared to PhaseLift and PhaseCut.
Method         | Sample complexity                       | Comp. complexity
Algorithm 2    | O(n (log^3 n + log(1/ε) log log(1/ε)))  | O(n^2 (log^3 n + log^2(1/ε) log log(1/ε)))
PhaseLift [5]  | O(n)                                    | O(n^3 / ε^2)
PhaseCut [28]  | O(n)                                    | O(n^3 / √ε)
Table 1: Comparison of Algorithm 2 with PhaseLift and PhaseCut: Though the sample complexity
of Algorithm 2 is off by log factors from that of PhaseLift and PhaseCut, it is O (n) better than them
in computational complexity. Note that we can solve the least squares problem in each iteration approximately by using the conjugate gradient method, which requires only O(mn) time.
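The conjugate gradient remark can be made concrete: the least squares step min_x ||A^T x - Cy||_2 is equivalent to the normal equations (A A^H) x = A (C y), and A A^H is Hermitian positive definite whp for Gaussian A with m > n, so plain CG applies without ever forming A A^H. A hand-rolled sketch of our own (not the authors' code; the right-hand side and sizes are illustrative):

```python
import numpy as np

def cg(matvec, b, iters=200, tol=1e-12):
    # Conjugate gradient for a Hermitian positive-definite linear operator.
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / np.vdot(p, Ap).real      # p^H (A A^H) p is real and positive
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(4)
n, m = 16, 200
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)       # stands in for A @ (C y)
matvec = lambda v: A @ (A.conj().T @ v)                        # v -> (A A^H) v, matrix-free
x_cg = cg(matvec, b)
x_direct = np.linalg.solve(A @ A.conj().T, b)
print(np.allclose(x_cg, x_direct, atol=1e-6))                  # True
```

Because each matvec costs O(mn) and the Gaussian A is well conditioned, a constant number of CG iterations per accuracy digit suffices, which is the source of the O(mn log(1/ε)) per-step cost quoted above.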
Our proof for convergence of alternating minimization can be broken into two key results. We first show that if m >= C n log^3 n, then whp the initialization step used by AltMinPhase returns x^0 which is at most a constant distance away from x*. Furthermore, that constant can be controlled by using more samples (see Theorem 4.1).
We then show that if x^t is a fixed vector such that dist(x^t, x*) < c (small enough) and A is sampled independently of x^t with m > Cn (C large enough), then whp x^{t+1} satisfies: dist(x^{t+1}, x*) < (3/4) dist(x^t, x*) (see Theorem 4.2). Note that our analysis critically requires x^t to be "fixed" and be independent of the sample matrix A. Hence, we cannot re-use the same A in each iteration; instead, we need to resample A in every iteration. Using these results, we prove the correctness of Algorithm 2, which is a natural resampled version of AltMinPhase.
We now present the two results mentioned above. For our proofs, wlog, we assume that ||x*||_2 = 1. Our first result guarantees a good initial vector.

Theorem 4.1. There exists a constant C_1 such that if m > (C_1 / c^2) n log^3 n, then in Algorithm 2, with probability greater than 1 - 4/m^2, we have:

    ||x^0 - x*||_2 < c.
Algorithm 2 AltMinPhase with Resampling
input: A, y, ε
1: t_0 <- c log(1/ε)
2: Partition y and (the corresponding columns of) A into t_0 + 1 equal disjoint sets: (y^0, A^0), (y^1, A^1), ..., (y^{t_0}, A^{t_0})
3: x^0 <- top singular vector of sum_l (y_l^0)^2 a_l^0 (a_l^0)^T
4: for t = 0, ..., t_0 - 1 do
5:   C_{t+1} <- Diag(Ph((A^{t+1})^T x^t))
6:   x^{t+1} <- argmin_{x ∈ R^n} ||(A^{t+1})^T x - C_{t+1} y^{t+1}||_2
7: end for
output: x^{t_0}
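The resampling is easy to mimic empirically: partition the columns of one large measurement matrix into t_0 + 1 disjoint sets and use a fresh set in every iteration. A self-contained NumPy sketch of our own (sizes, seed and iteration count are illustrative; the paper's constants c are not computed here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t0, block = 8, 12, 400
m = (t0 + 1) * block
x_star = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
y = np.abs(A.conj().T @ x_star)

# Step 2: partition the columns of A (and the entries of y) into t0 + 1 disjoint sets
As = np.split(A, t0 + 1, axis=1)
ys = np.split(y, t0 + 1)

# Step 3: spectral initialization from the first set only
S = (As[0] * ys[0]**2) @ As[0].conj().T / block
x = np.linalg.eigh(S)[1][:, -1]

# Steps 4-7: each iteration uses its own fresh, independent (A^{t+1}, y^{t+1})
for t in range(t0):
    At, yt = As[t + 1], ys[t + 1]
    u = At.conj().T @ x
    c = u / np.maximum(np.abs(u), 1e-12)
    x, *_ = np.linalg.lstsq(At.conj().T, c * yt, rcond=None)

print(np.abs(np.vdot(x, x_star)))   # close to 1: recovery up to a global phase
```

The disjointness of the sets is exactly what makes x^t independent of the matrix used in iteration t + 1, which the analysis below requires.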
The second result proves geometric decay of the error, assuming a good initialization.

Theorem 4.2. There exist constants c, ĉ and c̃ such that in iteration t of Algorithm 2, if dist(x^t, x*) < c and the number of columns of A^t is greater than ĉ n log(1/δ), then, with probability more than 1 - δ, we have:

    dist(x^{t+1}, x*) < (3/4) dist(x^t, x*),   and   ||x^{t+1} - x*||_2 < c̃ dist(x^t, x*).
Proof. For simplicity of notation in the proof of the theorem, we will use A for A^{t+1}, C for C_{t+1}, x for x^t, x^+ for x^{t+1}, and y for y^{t+1}. Now consider the update in the (t+1)th iteration:

    x^+ = argmin_{x' ∈ R^n} ||A^T x' - C y||_2 = (A A^T)^{-1} A C y = (A A^T)^{-1} A D A^T x*,        (3)

where D is a diagonal matrix with D_{ll} = Ph((a_l^T x) conj(a_l^T x*)). Now (3) can be rewritten as:

    x^+ = (A A^T)^{-1} A D A^T x* = x* + (A A^T)^{-1} A (D - I) A^T x*,        (4)

that is, x^+ can be viewed as a perturbation of x* and the goal is to bound the error term (the second term above). We break the proof into two main steps:

1. ∃ a constant c_1 such that |<x*, x^+>| >= 1 - c_1 dist(x, x*) (see Lemma A.2), and
2. |<z, x^+>| <= (5/9) dist(x, x*), for all z s.t. z^T x* = 0 (see Lemma A.4).

Assuming the above two bounds and choosing c < 1/(100 c_1), we can prove the theorem:

    dist(x^+, x*)^2 < (25/81) dist(x, x*)^2 / (1 - c_1 dist(x, x*))^2 <= (9/16) dist(x, x*)^2,

proving the first part of the theorem. The second part follows easily from (3) and Lemma A.2.
Intuition and key challenge: If we look at step 6 of Algorithm 2, we see that, for the measurements,
we use magnitudes calculated from x* and phases calculated from x. Intuitively, this means that we are trying to push x^+ towards x* (since we use its magnitudes) and x (since we use its phases) at the same time. The key intuition behind the success of this procedure is that the push towards x* is stronger than the push towards x, when x is close to x*. The key lemma that captures this effect is stated below:
Lemma 4.3. Let w_1 and w_2 be two independent standard complex Gaussian random variables^2. Let

    U = |w_1| w_2 (Ph(1 + (sqrt(1 - α^2) w_2) / (α |w_1|)) - 1).

Fix δ > 0. Then, there exists a constant γ > 0 such that if sqrt(1 - α^2) < γ, then: E[U] <= (1 + δ) sqrt(1 - α^2).

^2 z is a standard complex Gaussian if z = z_1 + i z_2 where z_1 and z_2 are independent standard normal random variables.
Algorithm 3 SparseAltMinPhase
input: A, y, k
1: S <- top-k argmax_{j ∈ [n]} sum_{i=1}^m |a_{ij} y_i|   {Pick indices of k largest absolute value inner products}
2: Apply Algorithm 2 on A_S, y_S and output the resulting vector with elements in S^c set to zero.
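Step 1 above is just a top-k selection on per-coordinate scores, which is simple to check numerically. A sketch with illustrative sizes of our own choosing (here a_{ij} denotes the j-th entry of the i-th measurement vector a_i, i.e. A[j, i] when a_i is the i-th column of A):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 64, 3, 12000
support = np.array([5, 17, 40])
x_star = np.zeros(n, dtype=complex)
x_star[support] = 1 / np.sqrt(k)              # k-sparse with equal-magnitude entries

A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
y = np.abs(A.conj().T @ x_star)

# Step 1 of Algorithm 3: score_j = sum_i |a_ij y_i| = sum_i |a_ij| y_i; keep the top k
scores = np.abs(A) @ y
S_hat = np.sort(np.argsort(scores)[-k:])
print(S_hat)                                   # whp equals the true support
```

Coordinates in the support are positively correlated with the magnitudes y_i, so their scores concentrate around a strictly larger mean, which is the content of Lemma 5.1 below.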
Method             | Sample complexity                       | Comp. complexity
Algorithm 3        | O(k (k log n + log(1/ε) log log(1/ε)))  | O(k^2 (kn log n + log^2(1/ε) log log(1/ε)))
ℓ1-PhaseLift [18]  | O(k^2 log n)                            | O(n^3 / ε^2)

Table 2: Comparison of Algorithm 3 with ℓ1-PhaseLift when x*_min = Θ(1/√k). Note that the complexity of Algorithm 3 is dominated by the support finding step. If k = O(1), Algorithm 3 runs in quasi-linear time.
See Appendix A for a proof of the above lemma and how we use it to prove Theorem 4.2.

Combining Theorems 4.1 and 4.2, and a simple observation that ||x^T - x*||_2 < c̃ dist(x^T, x*) for a constant c̃, we can establish the correctness of Algorithm 2.

Theorem 4.4. Suppose the measurement vectors in (1) are independent standard complex normal vectors. For every ε > 0, there exists a constant c such that if m > c n (log^3 n + log(1/ε) log log(1/ε)), then, with probability greater than 1 - ε, Algorithm 2 outputs x^{t_0} such that ||x^{t_0} - x*||_2 < ε.
5 Sparse Phase Retrieval
In this section, we consider the case where x* is known to be sparse, with sparsity k. A natural and practical question to ask here is: can the sample and computational complexity of the recovery algorithm be improved when k << n?

Recently, [18] studied this problem for Gaussian A and showed that for ℓ1-regularized PhaseLift, m = O(k^2 log n) samples suffice for exact recovery of x*. However, the computational complexity of this algorithm is still O(n^3 / ε^2).
In this section, we provide a simple extension of our AltMinPhase algorithm, which we call SparseAltMinPhase, for the case of sparse x*. The main idea behind our algorithm is to first recover the support of x*. Then, the problem reduces to phase retrieval of a k-dimensional signal. We then solve the reduced problem using Algorithm 2. The pseudocode for SparseAltMinPhase is presented in Algorithm 3. Table 2 provides a comparison of Algorithm 3 with ℓ1-regularized PhaseLift in terms of sample complexity as well as computational complexity.
The following lemma shows that if the number of measurements is large enough, step 1 of SparseAltMinPhase recovers the support of x* correctly.

Lemma 5.1. Suppose x* is k-sparse with support S and ||x*||_2 = 1. If a_i are standard complex Gaussian random vectors and m > (c / (x*_min)^4) log(n/δ), then Algorithm 3 recovers S with probability greater than 1 - δ, where x*_min is the minimum non-zero entry of x*.
The key step of our proof is to show that if j ∈ supp(x*), then the random variable Z_j = sum_i |a_{ij} y_i| has significantly higher mean than in the case when j ∉ supp(x*). Now, by applying appropriate concentration bounds, we can ensure that min_{j ∈ supp(x*)} |Z_j| > max_{j ∉ supp(x*)} |Z_j|, and hence our algorithm never picks up an element outside the true support set supp(x*). See Appendix B for a detailed proof of the above lemma.
The correctness of Algorithm 3 now is a direct consequence of Lemma 5.1 and Theorem 4.4. For the special case where each non-zero value in x* is from {-1/√k, +1/√k}, we have the following corollary:

Corollary 5.2. Suppose x* is k-sparse with non-zero elements ±1/√k. If the number of measurements m > c (k^2 log(n/δ) + k log^2 k + k log(1/ε)), then Algorithm 3 will recover x* up to accuracy ε with probability greater than 1 - δ.
Figure 2: (a) & (b): Sample and time complexity for successful recovery using random Gaussian illumination filters. Similar to Figure 1, we observe that AltMinPhase requires a similar number of filters (J) as PhaseLift and PhaseCut, but is computationally much more efficient. We also see that AltMinPhase performs better than AltMin (random init). (c): Recovery error ||x - x*||_2 incurred by various methods with increasing amount of noise (σ). AltMinPhase and PhaseCut perform comparably while PhaseLift incurs significantly larger error.
6 Experiments
In this section, we present experimental evaluation of AltMinPhase (Algorithm 1) and compare its
performance with the SDP based methods PhaseLift [6] and PhaseCut [28]. We also empirically
demonstrate the advantage of our initialization procedure over random initialization (denoted by
AltMin (random init)), which has thus far been considered in the literature [13, 11, 28, 4]. AltMin
(random init) is the same as AltMinPhase except that step 1 of Algorithm 1 is replaced with: x0 ← a uniformly random vector from the unit sphere.
We first choose x* uniformly at random from the unit sphere. In the noiseless setting, a trial is said to succeed if the output x satisfies ‖x − x*‖₂ < 10⁻². For a given dimension, we do a linear search
for smallest m (number of samples) such that empirical success ratio over 20 runs is at least 0.8. We
implemented our methods in Matlab, while we obtained the code for PhaseLift and PhaseCut from
the authors of [22] and [28] respectively.
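As a concrete illustration of this protocol, here is a minimal real-valued sketch of an alternating-minimization phase retrieval loop with spectral initialization. It is our own illustrative reimplementation, not the authors' Matlab code; the function name, iteration count, and the 6n sample budget are our choices.

```python
import numpy as np

def altmin_phase(A, y, iters=500):
    """Sketch of alternating minimization for real-valued phase retrieval
    from y = |A x*|. Init: top eigenvector of (1/m) sum_i y_i^2 a_i a_i^T
    (a spectral method). Loop: estimate signs, then solve least squares."""
    m, _ = A.shape
    M = (A * (y ** 2)[:, None]).T @ A / m
    _, V = np.linalg.eigh(M)
    x = V[:, -1] * np.linalg.norm(y) / np.sqrt(m)  # rough scale estimate
    for _ in range(iters):
        p = np.sign(A @ x)                         # "phase" (sign) step
        x, *_ = np.linalg.lstsq(A, p * y, rcond=None)
    return x

rng = np.random.default_rng(0)
n = 16
m = 6 * n                                          # illustrative sample budget
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_star)
x_hat = altmin_phase(A, y)
# Success criterion from the experiments, up to the global sign ambiguity.
err = min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star))
```

In the noiseless setting the sign estimates typically stabilize after a few iterations, at which point the least-squares step returns x* up to a global sign.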
We now present results from our experiments in three different settings.
Independent Random Gaussian Measurements: Each measurement vector ai is generated from
the standard complex Gaussian distribution. This measurement scheme was first suggested by [6]
and to date, this is the only scheme with theoretical guarantees.
Multiple Random Illumination Filters: We now present our results for the setting where the measurements are obtained using multiple illumination filters; this setting was suggested by [4]. In particular, choose J vectors z(1), …, z(J) and compute the following discrete Fourier transforms:
x̂(u) = DFT(x* ∘ z(u)),
where ∘ denotes component-wise multiplication. Our measurements will then be the magnitudes of the components of the vectors x̂(1), …, x̂(J). The above measurement scheme can be implemented by modulating the light beam or by the use of masks; see [4] for more details.
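A small sketch of this measurement model in code; the filter count J, the random complex filters, and the global-phase check are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, J = 64, 4
x_star = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Z = rng.standard_normal((J, n)) + 1j * rng.standard_normal((J, n))  # filters z^(u)

# Measurements: magnitudes of DFT(x* o z^(u)); the phases are discarded.
Y = np.abs(np.fft.fft(Z * x_star, axis=1))

# A global phase on x* is unobservable: the scalar e^{i phi} passes through
# the linear DFT and is removed by the magnitude.
Y_rot = np.abs(np.fft.fft(Z * (x_star * np.exp(1j * 0.7)), axis=1))
```

This invariance is exactly why recovery is only possible up to a global phase.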
We again perform the same experiments as in the previous setting. Figures 2 (a) and (b) present the
results. We again see that the measurement complexity of AltMinPhase is similar to that of PhaseCut
and PhaseLift, but AltMinPhase is orders of magnitude faster than PhaseLift and PhaseCut.
Noisy Phase Retrieval: Finally, we study our method in the following noisy measurement scheme:
y_i = |⟨a_i, x* + w_i⟩| for i = 1, …, m,    (5)
where w_i is the noise in the i-th measurement and is sampled from N(0, σ²). We fix n = 64 and m = 6n. We then vary the amount of noise added, σ, and measure the ℓ₂ error in recovery, i.e., ‖x − x*‖₂, where x is the recovered vector. Figure 2(c) compares the performance of various methods with varying amounts of noise. We observe that our method outperforms PhaseLift and has similar recovery error as PhaseCut.
Acknowledgments
S. Sanghavi would like to acknowledge support from NSF grants 0954059, 1302435, ARO grant
W911NF-11-1-0265 and a DTRA YIP award.
References
[1] J. Abrahams and A. Leslie. Methods used in the structure determination of bovine mitochondrial F1 ATPase. Acta Crystallographica Section D: Biological Crystallography, 52(1):30-42, 1996.
[2] H. H. Bauschke, P. L. Combettes, and D. R. Luke. Hybrid projection-reflection method for phase retrieval. JOSA A, 20(6):1025-1034, 2003.
[3] L. Bregman. Finding the common point of convex sets by the method of successive projection (Russian). In Dokl. Akad. Nauk SSSR, volume 162, pages 487-490, 1965.
[4] E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6(1):199-225, 2013.
[5] E. J. Candes and X. Li. Solving quadratic equations via PhaseLift when there are about as many equations as unknowns. arXiv preprint arXiv:1208.6247, 2012.
[6] E. J. Candes, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 2012.
[7] A. Chai, M. Moscoso, and G. Papanicolaou. Array imaging using intensity-only measurements. Inverse Problems, 27(1):015005, 2011.
[8] J. C. Dainty and J. R. Fienup. Phase retrieval and image reconstruction for astronomy. Image Recovery: Theory and Application, ed. by H. Stark, Academic Press, San Diego, pages 231-275, 1987.
[9] H. Duadi, O. Margalit, V. Mico, J. A. Rodrigo, T. Alieva, J. Garcia, and Z. Zalevsky. Digital holography and phase retrieval. Source: Holography, Research and Technologies. InTech, 2011.
[10] V. Elser. Phase retrieval by iterated projections. JOSA A, 20(1):40-55, 2003.
[11] J. R. Fienup et al. Phase retrieval algorithms: a comparison. Applied Optics, 21(15):2758-2769, 1982.
[12] D. Gabor. A new microscopic principle. Nature, 161(4098):777-778, 1948.
[13] R. W. Gerchberg and W. O. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35:237, 1972.
[14] N. E. Hurt. Phase Retrieval and Zero Crossings: Mathematical Methods in Image Reconstruction, volume 52. Kluwer Academic Print on Demand, 2001.
[15] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. arXiv preprint arXiv:1212.0467, 2012.
[16] R. H. Keshavan. Efficient algorithms for collaborative filtering. PhD Thesis, Stanford University, 2012.
[17] W. V. Li and A. Wei. Gaussian integrals involving absolute value functions. In Proceedings of the Conference in Luminy, 2009.
[18] X. Li and V. Voroninski. Sparse signal recovery from quadratic measurements via convex programming. arXiv preprint arXiv:1209.4785, 2012.
[19] S. Marchesini. Invited article: A unified evaluation of iterative projection algorithms for phase retrieval. Review of Scientific Instruments, 78(1):011301, 2007.
[20] J. Miao, P. Charalambous, J. Kirz, and D. Sayre. Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens. Nature, 400(6742):342-344, 1999.
[21] D. Misell. A method for the solution of the phase problem in electron microscopy. Journal of Physics D: Applied Physics, 6(1):L6, 1973.
[22] H. Ohlsson, A. Y. Yang, R. Dong, and S. S. Sastry. Compressive phase retrieval from squared output measurements via semidefinite programming. arXiv preprint arXiv:1111.6323, 2011.
[23] S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar, and B. Hassibi. Simultaneously structured models with application to sparse and low-rank matrices. arXiv preprint arXiv:1212.3753, 2012.
[24] J. L. Sanz. Mathematical considerations for the problem of Fourier transform phase retrieval from magnitude. SIAM Journal on Applied Mathematics, 45(4):651-664, 1985.
[25] Y. Shechtman, Y. C. Eldar, A. Szameit, and M. Segev. Sparsity based sub-wavelength imaging with partially incoherent light via quadratic compressed sensing. arXiv preprint arXiv:1104.4406, 2011.
[26] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389-434, 2012.
[27] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[28] I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. arXiv preprint arXiv:1206.0102, 2012.
[29] D. C. Youla and H. Webb. Image restoration by the method of convex projections: Part 1: Theory. Medical Imaging, IEEE Transactions on, 1(2):81-94, 1982.
Machine Teaching for Bayesian Learners
in the Exponential Family
Xiaojin Zhu
Department of Computer Sciences, University of Wisconsin-Madison
Madison, WI, USA 53706
[email protected]
Abstract
What if there is a teacher who knows the learning goal and wants to design good
training data for a machine learner? We propose an optimal teaching framework
aimed at learners who employ Bayesian models. Our framework is expressed as
an optimization problem over teaching examples that balance the future loss of the
learner and the effort of the teacher. This optimization problem is in general hard.
In the case where the learner employs conjugate exponential family models, we
present an approximate algorithm for finding the optimal teaching set. Our algorithm optimizes the aggregate sufficient statistics, then unpacks them into actual
teaching examples. We give several examples to illustrate our framework.
1 Introduction
Consider the simple task of learning a threshold classifier in 1D (Figure 1). There is an unknown threshold θ ∈ [0, 1]. For any item x ∈ [0, 1], its label y is white if x < θ and black otherwise. After seeing n training examples the learner's estimate is θ̂. What is the error |θ̂ − θ|? The answer depends on the learning paradigm. If the learner receives iid noiseless training examples where x ∼ uniform[0, 1], then with large probability |θ̂ − θ| = O(1/n). This is because the inner-most white and black items are 1/(n + 1) apart on average. If the learner performs active learning and an oracle provides noiseless labels, then the error reduces faster: |θ̂ − θ| = O(1/2ⁿ), since the optimal strategy is binary search. However, a helpful teacher can simply teach with n = 2 items (θ − ε/2, white), (θ + ε/2, black) to achieve an arbitrarily small error ε. The key difference is that an active learner still needs to explore the boundary, while a teacher can guide.
Figure 1: Passive learning "waits" (error O(1/n)); active learning "explores" (error O(1/2ⁿ)); teaching "guides". Teaching can require far fewer examples than passive or active learning.
Figure 1: Teaching can require far fewer examples than passive or active learning
We impose the restriction that teaching be conducted only via teaching examples (rather than somehow directly giving the parameter ? to the learner). What, then, are the best teaching examples?
Understanding the optimal teaching strategies is important for both machine learning and education:
(i) When the learner is a human student (as modeled in cognitive psychology), optimal teaching
theory can design the best lessons for education. (ii) In cyber-security the teacher may be an adversary attempting to mislead a machine learning system via "poisonous training examples." Optimal
teaching quantifies the power and limits of such adversaries. (iii) Optimal teaching informs robots
as to the best ways to utilize human teaching, and vice versa.
1
Our work builds upon three threads of research. The first thread is the teaching dimension theory by
Goldman and Kearns [10] and its extensions in computer science (e.g., [1, 2, 11, 12, 14, 25]). Our
framework allows for probabilistic, noisy learners with infinite hypothesis space, arbitrary loss functions, and the notion of teaching effort. Furthermore, in Section 3.2 we will show that the original
teaching dimension is a special case of our framework. The second thread is the research on representativeness and pedagogy in cognitive science. Tenenbaum and Griffiths were the first to note that representative data is one that maximizes the posterior probability of the target model [22]. Their
work on Gaussian distributions, and later work by Rafferty and Griffiths on multinomial distributions [19], find representative data by matching sufficient statistics. Our framework can be viewed
as a generalization. Specifically, their work corresponds to the specific choice (to be defined in Section 2) of loss() = KL divergence and effort() being either zero or an indicator function to fix the
data set size at n. We made it explicit that these functions can have other designs. Importantly, we
also show that there are non-trivial interactions between loss() and effort(), such as not-teaching-at-all in Example 4, or non-brute-force-teaching in Example 5. An interesting variant studied in
cognitive science is when the learner expects to be taught [20, 8]. We defer the discussion on this
variant, known as "collusion" in computational teaching theory, and its connection to information
theory to section 5. In addition, our optimal teaching framework may shed light on the optimality
of different methods of teaching humans [9, 13, 17, 18]. The third thread is the research on better
ways of training machine learners, such as curriculum learning or easy-to-hard ordering of training items [3, 15, 16], and optimal reward design in reinforcement learning [21]. Interactive systems
have been built which employ or study teaching heuristics [4, 6]. Our framework provides a unifying
optimization view that balances the future loss of the learner and the effort of the teacher.
2 Optimal Teaching for General Learners
We start with a general framework for teaching and gradually specialize the framework in later sections. Our framework consists of three entities: the world, the learner, and the teacher. (i) The world is defined by a target model θ*. Future test items for the learner will be drawn iid from this model. This is the same as in standard machine learning. (ii) The learner has to learn θ* from training data. Without loss of generality let θ* ∈ Θ, the hypothesis space of the learner (if not, we can always admit approximation error and define θ* to be the distribution in Θ closest to the world distribution). The learner is the same as in standard machine learning (learners who anticipate to be taught are discussed in section 5). The training data, however, is provided by a teacher. (iii) The teacher is the new entity in our framework. It is almost omnipotent: it knows the world θ*, the learner's hypothesis space Θ, and importantly how the learner learns given any training data.¹ However, it can only teach the learner by providing teaching (or, from the learner's perspective, training) examples. The teacher's goal is to design a teaching set D so that the learner learns θ* as accurately and effortlessly as possible. In this paper, we consider batch teaching where the teacher presents D to the learner all at once, and the teacher can use any item in the example domain.
Being completely general, we leave many details unspecified. For instance, the world's model can be supervised p(x, y; θ*) or unsupervised p(x; θ*); the learner may or may not be probabilistic; and when it is, Θ can be parametric or nonparametric. Nonetheless, we can already propose a generic optimization problem for optimal teaching:
min_D loss(f̂_D, θ*) + effort(D).    (1)
The function loss() measures the learner's deviation from the desired θ*. The quantity f̂_D represents the state of the learner after seeing the teaching set D. The function effort() measures the difficulty the teacher experiences when teaching with D. Despite its appearance, the optimal teaching problem (1) is completely different from regularized parameter estimation in machine learning. The desired parameter θ* is known to the teacher. The optimization is instead over the teaching set D. This can be a difficult combinatorial problem; for instance we need to optimize over the cardinality of D. Neither is the effort function a regularizer. The optimal teaching problem (1) so far is rather
abstract. For the sake of concreteness we next focus on a rich family of learners, namely Bayesian
models. However, we note that our framework can be adapted to other types of learners, as long as
we know how they react to the teaching set D.
¹ This is a strong assumption. It can be relaxed in future work, where the teacher has to estimate the state of the learner by "probing" it with tests.
3 Optimal Teaching for Bayesian Learners
We focus on Bayesian learners because they are widely used in both machine learning and cognitive science [7, 23, 24] and because of their predictability: they react to any teaching examples in D by performing Bayesian updates.² Before teaching, a Bayesian learner's state is captured by its prior distribution p0(θ). Given D, the learner's likelihood function is p(D | θ). Both the prior and the likelihood are assumed to be known to the teacher. The learner's state after seeing D is the posterior distribution f̂_D ≡ p(θ | D) = (∫_Θ p0(θ) p(D | θ) dθ)⁻¹ p0(θ) p(D | θ).
3.1 The KL Loss and Various Effort Functions, with Examples
The choice of loss() and effort() is problem-specific and depends on the teaching goal. In this paper, we will use the Kullback-Leibler divergence so that loss(f̂_D, θ*) = KL(δ_{θ*} ‖ p(θ | D)), where δ_{θ*} is a point mass distribution at θ*.³ This loss encourages the learner's posterior to concentrate around the world model θ*. With the KL loss, it is easy to verify that the optimal teaching problem (1) can be equivalently written as

min_D − log p(θ* | D) + effort(D).    (2)

We remind the reader that this is not a MAP estimate problem. Instead, the intuition is to find a good teaching set D to make θ* "stand out" in the posterior distribution.
The effort() function reflects resource constraints on the teacher and the learner: how hard is it to create the teaching examples, to deliver them to the learner, and to have the learner absorb them? For most of the paper we use the cardinality of the teaching set, effort(D) = c|D|, where c is a positive per-item cost. This assumes that the teaching effort is proportional to the number of teaching items, which is reasonable in many problems. We will demonstrate a few other effort functions in the examples below.
How good is any teaching set D? We hope D guides the learner's posterior toward the world's θ*, but we also hope D takes little effort to teach. The proper quality measure is the objective value (2), which balances the loss() and effort() terms.
Definition 1 (Teaching Impedance). The Teaching Impedance (TI) of a teaching set D is the objective value − log p(θ* | D) + effort(D). The lower the TI, the better.
We now give examples to illustrate our optimal teaching framework for Bayesian learners.
Example 1 (Teaching a 1D threshold classifier). The classification task is the same as in Figure 1, with x ∈ [0, 1] and y ∈ {−1, 1}. The parameter space is Θ = [0, 1]. The world has a threshold θ* ∈ Θ. Let the learner's prior be uniform, p0(θ) = 1. The learner's likelihood function is p(y = 1 | x, θ) = 1 if x ≥ θ and 0 otherwise.
The teacher wants the learner to arrive at a posterior p(θ | D) peaked at θ* by designing a small D. As discussed above, this can be formulated as (2) with the KL loss() and the cardinality effort() functions: min_D − log p(θ* | D) + c|D|. For any teaching set D = {(x1, y1), …, (xn, yn)}, the learner's posterior is simply p(θ | D) = uniform[max_{i: yi = −1}(xi), min_{i: yi = 1}(xi)], namely uniform over the version space consistent with D. The optimal teaching problem becomes min_{n, x1, y1, …, xn, yn} − log (1 / (min_{i: yi = 1}(xi) − max_{i: yi = −1}(xi))) + cn. One solution is the limiting case with a teaching set of size two, D = {(θ* − ε/2, −1), (θ* + ε/2, 1)} as ε → 0, since the Teaching Impedance TI = log(ε) + 2c approaches −∞. In other words, the teacher teaches by two examples arbitrarily close to, but on the opposite sides of, the decision boundary as in Figure 1 (right).
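A quick numerical check of this limiting argument; the values of θ* and c below are arbitrary illustrative choices:

```python
import numpy as np

theta_star, c = 0.37, 1.0

def teaching_impedance(D, c):
    """TI for the 1D threshold learner: the posterior is uniform on
    [largest negative item, smallest positive item], so -log p(theta*|D)
    equals the log of that interval's width."""
    lo = max(x for x, y in D if y == -1)
    hi = min(x for x, y in D if y == +1)
    assert lo < theta_star <= hi            # D must be consistent with theta*
    return np.log(hi - lo) + c * len(D)

def two_point_set(eps):
    return [(theta_star - eps / 2, -1), (theta_star + eps / 2, +1)]

tis = [teaching_impedance(two_point_set(e), c) for e in (1e-1, 1e-3, 1e-6)]
```

As ε shrinks the TI decreases without bound, matching TI = log(ε) + 2c.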
Example 2 (Learner cannot tell small differences apart). Same as Example 1, but the learner has poor perception (e.g., children or robots) and cannot distinguish similar items very well. We may encode this in effort() as, for example, effort(D) = min_{xi, xj ∈ D} c / |xi − xj|. That is, the teaching examples require more effort to learn if any two items are too close. With two teaching examples as in Example 1, TI = log(ε) + c/ε. It attains its minimum at ε = c. The optimal teaching set is D = {(θ* − c/2, −1), (θ* + c/2, 1)}.
² Bayesian learners typically assume that the training data is iid; optimal teaching intentionally violates this assumption because the designed teaching examples in D will typically be non-iid. However, the learners are oblivious to this fact and will perform Bayesian update as usual.
³ If we allow the teacher to be uncertain about the world θ*, we may encode the teacher's own belief as a distribution p̃(θ) and replace δ_{θ*} with p̃(θ).
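The minimizer ε = c is easy to confirm numerically; the sketch below uses an arbitrary c:

```python
import numpy as np

c = 0.05
eps = np.linspace(1e-4, 1.0, 200001)
ti = np.log(eps) + c / eps          # TI of the two-point set from Example 1
eps_best = eps[np.argmin(ti)]       # lands at eps close to c
```

Setting the derivative 1/ε − c/ε² to zero gives ε = c analytically, which the grid search reproduces.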
Example 3 (Teaching to pick one model out of two). There are two Gaussian distributions, θA = N(−1/4, 1/2) and θB = N(1/4, 1/2). The learner has Θ = {θA, θB}, and we want to teach it the fact that the world is using θ* = θA. Let the learner have equal prior p0(θA) = p0(θB) = 1/2. The learner observes examples x ∈ R, and its likelihood function is p(x | θ) = N(x | θ). Let D = {x1, …, xn}. With these specific parameters, the KL loss can be shown to be − log p(θ* | D) = log(1 + Π_{i=1}^n exp(xi)).
For this example, let us suppose that teaching with extreme item values is undesirable (note xi → −∞ minimizes the KL loss). We combine cardinality and range preferences in effort(D) = cn + Σ_{i=1}^n I(|xi| ≤ d), where the indicator function I(z) = 0 if z is true, and +∞ otherwise. In other words, the teaching items must be in some interval [−d, d]. This leads to the optimal teaching problem min_{n, x1, …, xn} log(1 + Π_{i=1}^n exp(xi)) + cn + Σ_{i=1}^n I(|xi| ≤ d). This is a mixed integer program (even harder: the number of variables has to be optimized as well). We first relax n to real values. By inspection, the solution is to let all xi = −d and let n minimize TI = log(1 + exp(−dn)) + cn. The minimum is achieved at n = (1/d) log(d/c − 1). We then round n and force nonnegativity: n = max(0, [(1/d) log(d/c − 1)]). This D is sensible: θ* = θA is the model on the left, and showing the learner n copies of −d lends the most support to that model. Note, however, that n = 0 for certain combinations of c, d (e.g., when c ≥ d): the effort of teaching outweighs the benefit. The teacher may choose to not teach at all and maintain the status quo (prior p0) of the learner!
3.2 Teaching Dimension is a Special Case
In this section we provide a comparison to one of the most influential teaching models, namely the original teaching dimension theory [10]. It may seem that our optimal teaching setting (2) is more restrictive than theirs, since we make strong assumptions about the learner (that it is Bayesian, and the form of the prior and likelihood). Their query learning setting in fact makes equally strong assumptions, in that the learner updates its version space to be consistent with all teaching items. Indeed, we can cast their setting as a Bayesian learning problem, showing that their problem is a special case of (2). Corresponding to the concept class C = {c} in [10], we define the conditional probability P(y = 1 | x, θ_c) = 1 if c(x) = +, and 0 if c(x) = −, and the joint distribution P(x, y | θ_c) = P(x) P(y | x, θ_c), where P(x) is uniform over the domain X. The world has θ* = θ_{c*} corresponding to the target concept c* ∈ C. The learner has Θ = {θ_c | c ∈ C}. The learner's prior is p0(θ) = uniform(Θ) = 1/|C|, and its likelihood function is P(x, y | θ_c). The learner's posterior after teaching with D is

P(θ_c | D) = 1 / (number of concepts in C consistent with D) if c is consistent with D, and 0 otherwise.    (3)

The teaching dimension TD(c*) is the minimum cardinality of D that uniquely identifies the target concept. We can formulate this using our optimal teaching framework:

min_D − log P(θ_{c*} | D) + λ|D|,    (4)

where we used the cardinality effort() function (and renamed the cost λ for clarity). We can make sure that the loss term is minimized to 0, corresponding to successfully identifying the target concept, if λ < 1/TD(c*). But since TD(c*) is unknown beforehand, we can set λ ≤ 1/|C|, since |C| ≥ TD(c*) (one can at least eliminate one concept from the version space with each well-designed teaching item). The solution D to (4) is then a minimum teaching set for the target concept c*, and |D| = TD(c*).
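To connect (3) and (4) back to the definition, here is a brute-force teaching-dimension computation for a tiny threshold concept class; the class and the six-item domain are our own toy construction:

```python
from itertools import combinations

X = list(range(1, 7))                               # item domain
# Thresholds: c_t(x) = +1 iff x >= t, for t = 1, ..., 7 (t = 7 labels all -1).
concepts = [tuple(1 if x >= t else -1 for x in X) for t in range(1, 8)]

def consistent(concept, D):
    return all(concept[X.index(x)] == y for x, y in D)

def teaching_dim(target):
    """Smallest teaching set, labeled by `target`, that leaves `target` as
    the only consistent concept, i.e., posterior (3) puts mass 1 on it."""
    for k in range(len(X) + 1):
        for items in combinations(X, k):
            D = [(x, target[X.index(x)]) for x in items]
            if sum(consistent(c, D) for c in concepts) == 1:
                return k

td_interior = teaching_dim(concepts[3])             # threshold t = 4
td_edge = teaching_dim(concepts[0])                 # threshold t = 1 (all +1)
```

An interior threshold needs two items straddling the boundary, while the all-positive concept is pinned down by a single labeled item.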
4 Optimal Teaching for Bayesian Learners in the Exponential Family
While we have proposed an optimization-based framework for teaching any Bayesian learner and
provided three examples, it is not clear if there is a unified approach to solve the optimization
problem (2). In this section, we further restrict ourselves to a subset of Bayesian learners whose
prior and likelihood are in the exponential family and are conjugate. For this subset of Bayesian learners, finding the optimal teaching set D naturally decomposes into two steps: In the first step one solves a convex optimization problem to find the optimal aggregate sufficient statistics for D. In the second step one "unpacks" the aggregate sufficient statistics into actual teaching examples. We present an approximate algorithm for doing so.
We recall that an exponential family distribution (see e.g. [5]) takes the form p(x | θ) = h(x) exp(θᵀT(x) − A(θ)), where T(x) ∈ R^D is the D-dimensional sufficient statistics of x, θ ∈ R^D is the natural parameter, A(θ) is the log partition function, and h(x) modifies the base measure. For a set D = {x1, …, xn}, the likelihood function under the exponential family takes a similar form p(D | θ) = (Π_{i=1}^n h(xi)) exp(θᵀs − nA(θ)), where we define

s ≡ Σ_{i=1}^n T(xi)    (5)
to be the aggregate sufficient statistics over D. The corresponding conjugate prior is the exponential family distribution with natural parameters (λ1, λ2) ∈ R^D × R: p(θ | λ1, λ2) = h0(θ) exp(λ1ᵀθ − λ2 A(θ) − A0(λ1, λ2)). The posterior distribution is p(θ | D, λ1, λ2) = h0(θ) exp((λ1 + s)ᵀθ − (λ2 + n) A(θ) − A0(λ1 + s, λ2 + n)). The posterior has the same form as the prior but with natural parameters (λ1 + s, λ2 + n). Note that the data D enters the posterior only via the aggregate sufficient statistics s and cardinality n. If we further assume that effort(D) can be expressed in n and s, then we can write our optimal teaching problem (2) as

min_{n,s} −θ*ᵀ(λ1 + s) + A(θ*)(λ2 + n) + A0(λ1 + s, λ2 + n) + effort(n, s),    (6)
P
where n ? Z?0 and s ? {t ? RD | ?{xi }i?I such that t = i?I T (xi )}. We relax the problem
to n ? R and s ? RD , resulting in a lower bound of the original objective.4 Since the log partition
function A0 () is convex in its parameters, we have a convex optimization problem (6) at hand if we
design effort(n, s) to be convex, too. Therefore, the main advantage of using the exponential family
distribution and conjugacy is this convex formulation, which we use to efficiently optimize over n
and s. This forms the first step in finding D.
However, we cannot directly teach with the aggregate sufficient statistics. We first turn n back into
an integer by max(0, [n]), where [·] denotes rounding.5 We then need to find n teaching examples
whose aggregate sufficient statistics is s. The difficulty of this second "unpacking" step depends
on the form of the sufficient statistics T(x). For some exponential family distributions unpacking
is trivial. For example, the exponential distribution has T(x) = x. Given n and s we can easily
unpack the teaching set D = {x1, . . . , xn} by x1 = . . . = xn = s/n. The Poisson distribution
has T(x) = x as well, but the items x need to be integers. This is still relatively easy to achieve
by rounding x1, . . . , xn and making adjustments to make sure they still sum to s. The univariate
Gaussian distribution has T(x) = (x, x²) and unpacking is harder: given n = 3, s = (3, 5) it
may not be immediately obvious that we can unpack into {x1 = 0, x2 = 1, x3 = 2} or even
{x1 = 1/2, x2 = (5 + √13)/4, x3 = (5 − √13)/4}. Clearly, unpacking is not unique.
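Both unpackings can be checked directly; a quick sketch (T(x) = (x, x²)):

```python
from math import sqrt

def agg_stats(xs):
    # aggregate sufficient statistics for a univariate Gaussian: T(x) = (x, x^2)
    return (sum(xs), sum(x * x for x in xs))

# two different teaching sets with identical aggregate statistics s = (3, 5)
set_a = [0.0, 1.0, 2.0]
set_b = [1 / 2, (5 + sqrt(13)) / 4, (5 - sqrt(13)) / 4]
stats_a = agg_stats(set_a)
stats_b = agg_stats(set_b)
```

Both sets have first moment 3 and second moment 5, so the learner cannot distinguish them.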
In this paper, we use an approximate unpacking algorithm. We initialize the n teaching examples
by xi ~ iid p(x | θ*), i = 1 . . . n.6 We then improve the examples by solving an unconstrained
optimization problem to match the examples' aggregate sufficient statistics to the given s:

    min_{x1,...,xn}  ‖s − Σ_{i=1}^n T(xi)‖².    (7)
4 For higher solution quality we may impose certain convex constraints on s based on the structure of T(x).
For example, the univariate Gaussian has T(x) = (x, x²). Let s = (s1, s2). It is easy to show that s must satisfy
the constraint s2 ≥ s1²/n.
5 Better results can be obtained by comparing the objective of (6) under several integers around n and picking
the smallest one.
6 As we will see later, such iid samples from the target distribution are not great teaching examples for two
main reasons: (i) we really should compensate for the learner's prior by aiming not at the target distribution
but overshooting a bit in the opposite direction of the prior; (ii) randomness in the samples also prevents them
from achieving the aggregate sufficient statistics.
This problem is non-convex in general but can be solved up to a local minimum. The gradient is
∂/∂xj = −2(s − Σ_i T(xi))⊤ T′(xj). Additional post-processing, such as enforcing x to be integers,
is then carried out if necessary. The complete algorithm is summarized in Algorithm 1.
Algorithm 1 Approximately optimal teaching for Bayesian learners in the exponential family
input: target θ*; learner information T(), A(), A0(), λ1, λ2; effort()
Step 1: Solve for the aggregate sufficient statistics n, s by convex optimization (6)
Step 2: Unpacking: n ← max(0, [n]); find x1, . . . , xn by (7)
output: D = {x1, . . . , xn}
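As an illustration of the step-2 unpacking (7), here is a minimal gradient-descent sketch of ours for the univariate Gaussian case T(x) = (x, x²); the deterministic initialization, step size, and iteration count are our choices, standing in for the iid initialization described above:

```python
def unpack(s, n, step=0.005, iters=20000):
    """Gradient descent on ||s - sum_i T(x_i)||^2 with T(x) = (x, x^2).
    The gradient w.r.t. x_j is -2 (s - sum_i T(x_i))^T T'(x_j), with T'(x) = (1, 2x)."""
    xs = [0.1 * i for i in range(n)]        # deterministic stand-in for iid draws
    for _ in range(iters):
        r0 = s[0] - sum(xs)                 # residual in the first moment
        r1 = s[1] - sum(x * x for x in xs)  # residual in the second moment
        xs = [x + 2 * step * (r0 + 2 * r1 * x) for x in xs]
    return xs

xs = unpack(s=(3.0, 5.0), n=3)
m1, m2 = sum(xs), sum(x * x for x in xs)    # should match s = (3, 5)
```

Descent converges to one of the many exact unpackings; which one depends on the initialization, consistent with the non-uniqueness noted above.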
We illustrate Algorithm 1 with several examples.
Example 4 (Teaching the mean of a univariate Gaussian). The world consists of a Gaussian
N(x; μ*, σ²), where σ² is fixed and known to the learner while μ* is to be taught. In exponential
family form, p(x | θ) = h(x) exp(θT(x) − A(θ)) with T(x) = x alone (since σ² is fixed),
θ = μ/σ², A(θ) = μ²/(2σ²) = σ²θ²/2, and h(x) = (1/√(2πσ²)) exp(−x²/(2σ²)). Its conjugate prior
(which is the learner's initial state) is Gaussian with the form
p(θ | λ1, λ2) = h0(θ) exp(λ1θ − λ2σ²θ²/2 − A0(λ)), where A0(λ1, λ2) = λ1²/(2σ²λ2) − (1/2) log(σ²λ2).
To find a good teaching set D, in step 1 we first find its optimal cardinality n and aggregate sufficient
statistics s = Σ_{i∈D} xi using (6). The optimization problem becomes

    min_{n,s}  −θ*s + (λ1 + s)²/(2σ²(λ2 + n)) + (σ²θ*²/2) n − (1/2) log(σ²(λ2 + n)) + effort(n, s)    (8)
where θ* = μ*/σ². The result is more intuitive if we rewrite the conjugate prior in its standard form
μ ~ N(μ | μ0, σ0²), with the relations λ1 = μ0σ²/σ0² and λ2 = σ²/σ0². With this notation, the optimal aggregate
sufficient statistics is

    s = (σ²/σ0²)(μ* − μ0) + μ* n.    (9)

Note an interesting fact here: the average of teaching examples s/n is not the target μ*, but should
compensate for the learner's initial belief μ0. This is the "overshoot" discussed earlier. Putting (9)
back in (8), the optimization over n is min_n −(1/2) log(σ²(σ²/σ0² + n)) + effort(n). Consider any
differentiable effort function (w.r.t. the relaxed n) with derivative effort′(n); the optimal n is the solution
to n − 1/(2 effort′(n)) + σ²/σ0² = 0. For example, with the cardinality effort(n) = cn we have n = 1/(2c) − σ²/σ0².
In step 2 we unpack n and s into D. We discretize n by max(0, [n]). Another interesting fact is that
the optimal teaching strategy may be to not teach at all (n = 0). This is the case when the learner
has literally a narrow mind to start with: σ0² < 2cσ² (recall σ0² is the learner's prior variance on the
mean). Intuitively, the learner is too stubborn to change its prior belief by much, and such minuscule
change does not justify the teaching effort.
Having picked n, unpacking s is trivial since T(x) = x. For example, we can let D be x1 = . . . =
xn = s/n as discussed earlier, without employing optimization (7). Yet another interesting fact is
that such an alarming teaching set (with n identical examples) is likely to contradict the world's
model variance σ², but the discrepancy does not affect teaching because the learner fixes σ².
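These closed-form expressions are easy to verify numerically; a small sketch with illustrative values of our choosing (μ* = 1, σ² = σ0² = 1, μ0 = 0, c = 0.1; not taken from the paper):

```python
def teach_gaussian_mean(mu_star, sigma2, mu0, sigma02, c):
    """Step 1 of Example 4 with effort(n) = c*n: the relaxed optimum
    n = 1/(2c) - sigma2/sigma02 (clipped and rounded), and the aggregate
    statistic s = (sigma2/sigma02)(mu_star - mu0) + mu_star*n."""
    n = max(0, round(1 / (2 * c) - sigma2 / sigma02))
    s = (sigma2 / sigma02) * (mu_star - mu0) + mu_star * n
    return n, s

n, s = teach_gaussian_mean(mu_star=1.0, sigma2=1.0, mu0=0.0, sigma02=1.0, c=0.1)
examples = [s / n] * n                   # trivial unpacking, since T(x) = x
# learner's posterior mean: (sigma2/sigma02 * mu0 + s) / (sigma2/sigma02 + n)
post_mean = (1.0 * 0.0 + s) / (1.0 + n)
```

Here n = 1/(2·0.1) − 1 = 4 and s = 5, so each teaching example is 1.25: the teacher overshoots past μ* = 1, away from the prior mean 0, and the learner's posterior mean lands exactly on μ*.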
Example 5 (Teaching a multinomial distribution). The world is a multinomial distribution
θ* = (θ1*, . . . , θK*) of dimension K. The learner starts with a conjugate Dirichlet prior
p(θ | β) = (Γ(Σ_k βk) / Π_k Γ(βk)) Π_{k=1}^K θk^{βk−1}. Each teaching item is x ∈ {1, . . . , K}. The teacher needs to decide the total
number of teaching items n and the split s = (s1, . . . , sK), where n = Σ_{k=1}^K sk.
In step 1, the sufficient statistics are s1, . . . , sK−1, but for clarity we write (6) using s and the standard
parameters:

    min_s  −log Γ(Σ_{k=1}^K (βk + sk)) + Σ_{k=1}^K log Γ(βk + sk) − Σ_{k=1}^K (βk + sk − 1) log θk* + effort(s).    (10)
This is an integer program; we relax s to R^K_{≥0}, making it a continuous optimization problem with
nonnegativity constraints. Assuming a differentiable effort(), the optimal aggregate sufficient statistics
can be readily solved with the gradient

    ∂/∂sk = −ψ(Σ_{k=1}^K (βk + sk)) + ψ(βk + sk) − log θk* + ∂effort(s)/∂sk,

where ψ() is the digamma function. In step 2, unpacking is again trivial: we simply let
sk ← [sk] for k = 1 . . . K.
Let us look at a concrete problem. Let the teaching target be θ* = (1/10, 3/10, 6/10). Let the
learner's prior Dirichlet parameters be quite different: β = (6, 3, 1). If we say that teaching requires no effort by setting effort(s) = 0, then the optimal teaching set D found by Algorithm 1 is s = (317, 965, 1933), as implemented with Matlab fmincon. The MLE from D is
(0.099, 0.300, 0.601) and is very close to θ*. In fact, in our experiments, fmincon stopped because it exceeded the default function evaluation limit. Otherwise, the counts would grow even
higher with MLE → θ*. This is "brute-force teaching": using unlimited data to overwhelm the
prior in the learner.
But if we say teaching is costly by setting effort(s) = 0.3 Σ_{k=1}^K sk, the optimal D found by Algorithm 1 is instead s = (0, 2, 8) with merely ten items. Note that it did not pick (1, 3, 6), which
also has ten items and whose MLE is θ*: this is again to compensate for the biased prior Dir(β)
in the learner. Our optimal teaching set (0, 2, 8) has Teaching Impedance T I = 2.65. In contrast,
the set (1, 3, 6) has T I = 4.51 and the previous set (317, 965, 1933) has T I = 956.25 due to its
size. We can also attempt to sample teaching sets of size ten from multinomial(10, θ*). In 100,000
simulations with such random teaching sets the average T I = 4.97 ± 1.88 (standard deviation),
minimum T I = 2.65, and maximum T I = 18.7. In summary, our optimal teaching set (0, 2, 8) is
very good.
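To make this concrete, the following sketch of ours (not the paper's Matlab code) reproduces the numbers above by brute-force search over small integer count vectors; the grid bound of 15 per coordinate is an arbitrary choice, and the objective is (10) plus the linear effort, which matches the Teaching Impedance values reported:

```python
from math import lgamma, log

def impedance(s, beta, theta_star, cost=0.3):
    """Objective (10) plus linear effort cost*sum(s): the negative log
    Dirichlet posterior density of theta_star (up to constants) after
    observing counts s, plus the teaching effort."""
    total = sum(b + si for b, si in zip(beta, s))
    val = -lgamma(total) + sum(lgamma(b + si) for b, si in zip(beta, s))
    val -= sum((b + si - 1) * log(t) for b, si, t in zip(beta, s, theta_star))
    return val + cost * sum(s)

beta, theta_star = (6, 3, 1), (0.1, 0.3, 0.6)
best = min(((i, j, k) for i in range(16) for j in range(16) for k in range(16)),
           key=lambda s: impedance(s, beta, theta_star))
ti_best = impedance(best, beta, theta_star)      # ~2.65 at s = (0, 2, 8)
ti_alt = impedance((1, 3, 6), beta, theta_star)  # ~4.51
```

The search recovers s = (0, 2, 8) with objective ≈ 2.65 and evaluates (1, 3, 6) at ≈ 4.51, matching the values in the text.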
We remark that one can teach complex models using simple ones as building blocks. For instance,
with the machinery in Example 5 one can teach the learner a full generative model for a Naïve Bayes
classifier. Let the target Naïve Bayes classifier have K classes with class probability p(y = k) = πk*.
Let v be the vocabulary size. Let the target class conditional probability be p(x = i | y = k) =
θki* for word type i = 1 . . . v and label k = 1 . . . K. Then the aggregate sufficient statistics are
n1 . . . nK, m11 . . . m1v, . . . , mK1 . . . mKv, where nk is the number of documents with label k, and
mki is the number of times word i appears in all documents with label k. The optimal choice of
these n's and m's for teaching can be solved separately as in Example 5 as long as effort() can be
separated. The unpacking step is easy: we know we need nk teaching documents with label k. These
nk documents together need mki counts of word type i. They can evenly split those counts. In the
end, each teaching document with label k will have the bag-of-words (mk1/nk, . . . , mkv/nk), subject to
rounding.
Example 6 (Teaching a multivariate Gaussian). Now we consider the general case of
teaching both the mean and the covariance of a multivariate Gaussian. The world
has the target μ* ∈ R^D and Σ* ∈ R^{D×D}. The likelihood is N(x | μ, Σ).
The learner starts with a Normal-Inverse-Wishart (NIW) conjugate prior

    p(μ, Σ | μ0, κ0, ν0, Λ0) = [|Λ0|^{ν0/2} κ0^{D/2} / (2^{ν0D/2} (2π)^{D/2} π^{D(D−1)/4} Π_{i=1}^D Γ((ν0 + 1 − i)/2))]
                               × |Σ|^{−(ν0+D+2)/2} exp(−(1/2) tr(Σ⁻¹Λ0) − (κ0/2)(μ − μ0)⊤ Σ⁻¹ (μ − μ0)).
Given data x1, . . . , xn ∈ R^D, the aggregate sufficient statistics are s = Σ_{i=1}^n xi and S = Σ_{i=1}^n xi xi⊤.
The posterior is NIW p(μ, Σ | μn, κn, νn, Λn) with parameters

    μn = (κ0 μ0 + s)/(κ0 + n),  κn = κ0 + n,  νn = ν0 + n,
    Λn = Λ0 + S + (κ0 n/(κ0 + n)) μ0 μ0⊤ − (κ0/(κ0 + n))(μ0 s⊤ + s μ0⊤) − (1/(κ0 + n)) s s⊤.

We formulate the optimal aggregate sufficient statistics problem by putting the posterior into (6). Note that S by definition needs to be
positive semi-definite. In addition, with the Cauchy-Schwarz inequality one can show that Sii ≥ si²/n
for i = 1 . . . D. Step 1 is thus the following SDP:
    min_{n,s,S}  (νn D/2) log 2 + Σ_{i=1}^D log Γ((νn + 1 − i)/2) − (νn/2) log|Λn| − (D/2) log κn + (νn/2) log|Σ*|
                 + (1/2) tr(Σ*⁻¹ Λn) + (κn/2)(μ* − μn)⊤ Σ*⁻¹ (μ* − μn) + effort(n, s, S)    (11)

    s.t.  S ⪰ 0;  Sii ≥ si²/n, ∀i.    (12)
In step 2, we unpack s and S by initializing x1, . . . , xn ~ iid N(μ*, Σ*). Again, such iid samples are
typically not good teaching examples. We improve them with the optimization (7), where T(x) is the
(D + D²)-dimensional vector formed by the elements of x and xx⊤, and similarly the aggregate sufficient
statistics vector s is formed by the elements of s and S.
We illustrate the results on a concrete problem in D = 3. The target Gaussian has μ* = (0, 0, 0) and
Σ* = I. The target mean is visualized in each plot of Figure 2 as a black dot. The learner's initial
state is captured by the NIW with parameters μ0 = (1, 1, 1), κ0 = 1, ν0 = 2 + 10⁻⁵, Λ0 = 10⁻⁵ I.
Note the learner's prior mean μ0 is different than μ*, and is shown by the red dot in Figure 2. The
red dot has a stem extending to the z = 0 plane for better visualization. We used an "expensive"
effort function effort(n, s, S) = n. Algorithm 1 decides to use n = 4 teaching examples with
s = (−1, −1, −1) and

    S = [ 4.63  −1    −1
          −1    4.63  −1
          −1    −1    4.63 ].

These unpack into D = {x1, . . . , x4}, visualized by the
four empty blue circles. The three panels of Figure 2 show unpacking results starting from different
initial seeds sampled from N(μ*, Σ*). These teaching examples form a tetrahedron (edges added for
clarity). This is sensible: in fact, one can show that the minimum teaching set for a D-dimensional
Gaussian is the D + 1 points at the vertices of a D-dimensional tetrahedron. Importantly, the mean
of D, (−1/4, −1/4, −1/4), shown as the solid blue dot with a stem, is offset from the target μ* and
to the opposite side of the learner's prior μ0. This again shows that D compensates for the learner's
prior. Our optimal teaching set D has T I = 1.69. In contrast, teaching sets with four iid random
samples from the target N(μ*, Σ*) have worse TI. In 100,000 simulations such random teaching
sets have average T I = 9.06 ± 3.34, minimum T I = 1.99, and maximum T I = 35.51.
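The "overshoot" can be verified from the reported numbers using only the standard NIW mean update μn = (κ0 μ0 + s)/(κ0 + n); a quick sketch:

```python
# learner prior: mu0 = (1,1,1), kappa0 = 1; reported teaching stats: n = 4, s = (-1,-1,-1)
mu0, kappa0 = [1.0, 1.0, 1.0], 1.0
n, s = 4, [-1.0, -1.0, -1.0]

mu_n = [(kappa0 * m + si) / (kappa0 + n) for m, si in zip(mu0, s)]  # NIW posterior mean
avg_example = [si / n for si in s]                                  # mean of the teaching set
```

The posterior mean μn is (0, 0, 0) = μ*, even though the teaching examples themselves average (−1/4, −1/4, −1/4): the sum s overshoots in the direction opposite the prior mean so that the prior bias is exactly cancelled.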
Figure 2: Teaching a multivariate Gaussian
5 Discussions and Conclusion
What if the learner anticipates teaching? Then the teaching set may be further reduced. For example, the task in Figure 1 may only require a single teaching example D = {x1 = θ*}, and the
learner can figure out that this x1 encodes the decision boundary. Smart learning behaviors similar to this have been observed in humans by Shafto and Goodman [20]. In fact, this is known as
"collusion" in computational teaching theory (see e.g. [10]), and has strong connections to compression in information theory. In one extreme of collusion, the teacher and the learner agree upon an
information-theoretical coding scheme beforehand. Then, the teaching set D is not used in a traditional machine learning training-set sense, but rather as source coding. For example, x1 itself would
be a floating-point encoding of θ* up to machine precision. In contrast, the present paper assumes
that the learner does not collude.
We introduced an optimal teaching framework that balances teaching loss and effort. We hope this
paper provides a "stepping stone" for follow-up work, such as 0-1 loss() for classification, non-Bayesian learners, uncertainty in the learner's state, and teaching materials beyond training items.
Acknowledgments
We thank Bryan Gibson, Robert Nowak, Stephen Wright, Li Zhang, and the anonymous reviewers
for suggestions that improved this paper. This research is supported in part by National Science
Foundation grants IIS-0953219 and IIS-0916038.
References
[1] D. Angluin. Queries revisited. Theor. Comput. Sci., 313(2):175-194, 2004.
[2] F. J. Balbach and T. Zeugmann. Teaching randomized learners. In COLT, pages 229-243. Springer, 2006.
[3] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
[4] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In ICML, 2012.
[5] L. D. Brown. Fundamentals of statistical exponential families: with applications in statistical decision theory. Institute of Mathematical Statistics, Hayworth, CA, USA, 1986.
[6] M. Cakmak and M. Lopes. Algorithmic and human teaching of sequential decision tasks. In AAAI Conference on Artificial Intelligence, 2012.
[7] N. Chater and M. Oaksford. The probabilistic mind: prospects for Bayesian cognitive science. Oxford University Press, 2008.
[8] M. C. Frank and N. D. Goodman. Predicting pragmatic reasoning in language games. Science, 336(6084):998, May 2012.
[9] G. Giguère and B. C. Love. Limits in decision making arise from limits in memory retrieval. Proceedings of the National Academy of Sciences, Apr. 2013.
[10] S. Goldman and M. Kearns. On the complexity of teaching. Journal of Computer and Systems Sciences, 50(1):20-31, 1995.
[11] S. Hanneke. Teaching dimension and the complexity of active learning. In COLT, pages 66-81, 2007.
[12] T. Hegedűs. Generalized teaching dimensions and the query complexity of learning. In COLT, pages 108-117, 1995.
[13] F. Khan, X. Zhu, and B. Mutlu. How do humans teach: On curriculum learning and teaching dimension. In Advances in Neural Information Processing Systems (NIPS) 25, 2011.
[14] H. Kobayashi and A. Shinohara. Complexity of teaching by a restricted number of examples. In COLT, pages 293-302, 2009.
[15] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In NIPS, 2010.
[16] Y. J. Lee and K. Grauman. Learning the easy things first: Self-paced visual category discovery. In CVPR, 2011.
[17] B. D. McCandliss, J. A. Fiez, A. Protopapas, M. Conway, and J. L. McClelland. Success and failure in teaching the [r]-[l] contrast to Japanese adults: Tests of a Hebbian model of plasticity and stabilization in spoken language perception. Cognitive, Affective, & Behavioral Neuroscience, 2(2):89-108, 2002.
[18] H. Pashler and M. C. Mozer. When does fading enhance perceptual category learning? Journal of Experimental Psychology: Learning, Memory, and Cognition, 2013. In press.
[19] A. N. Rafferty and T. L. Griffiths. Optimal language learning: The importance of starting representative. 32nd Annual Conference of the Cognitive Science Society, 2010.
[20] P. Shafto and N. Goodman. Teaching games: Statistical sampling assumptions for learning in pedagogical situations. In CogSci, pages 1632-1637, 2008.
[21] S. Singh, R. L. Lewis, A. G. Barto, and J. Sorg. Intrinsically motivated reinforcement learning: An evolutionary perspective. IEEE Trans. on Auton. Ment. Dev., 2(2):70-82, June 2010.
[22] J. B. Tenenbaum and T. L. Griffiths. The rational basis of representativeness. 23rd Annual Conference of the Cognitive Science Society, 2001.
[23] J. B. Tenenbaum, T. L. Griffiths, and C. Kemp. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7):309-318, 2006.
[24] F. Xu and J. B. Tenenbaum. Word learning as Bayesian inference. Psychological Review, 114(2), 2007.
[25] S. Zilles, S. Lange, R. Holte, and M. Zinkevich. Models of cooperative teaching and learning. Journal of Machine Learning Research, 12:349-384, 2011.
Matthew J. Johnson
EECS, MIT
[email protected]
James Saunderson
EECS, MIT
[email protected]
Alan S. Willsky
EECS, MIT
[email protected]
Abstract
Sampling inference methods are computationally difficult to scale for many models in part because global dependencies can reduce opportunities for parallel computation. Without strict conditional independence structure among variables, standard Gibbs sampling theory requires sample updates to be performed sequentially,
even if dependence between most variables is not strong. Empirical work has
shown that some models can be sampled effectively by going "Hogwild" and simply running Gibbs updates in parallel with only periodic global communication,
but the successes and limitations of such a strategy are not well understood.
As a step towards such an understanding, we study the Hogwild Gibbs sampling
strategy in the context of Gaussian distributions. We develop a framework which
provides convergence conditions and error bounds along with simple proofs and
connections to methods in numerical linear algebra. In particular, we show that if
the Gaussian precision matrix is generalized diagonally dominant, then any Hogwild Gibbs sampler, with any update schedule or allocation of variables to processors, yields a stable sampling process with the correct sample mean.
1 Introduction
Scaling probabilistic inference algorithms to large datasets and parallel computing architectures is a
challenge of great importance and considerable current research interest, and great strides have been
made in designing parallelizeable algorithms. Along with the powerful and sometimes complex
new algorithms, a very simple strategy has proven to be surprisingly successful in some situations:
running Gibbs sampling updates, derived only for the sequential setting, in parallel without globally
synchronizing the sampler state after each update. Concretely, the strategy is to apply an algorithm
like Algorithm 1. We refer to this strategy as ?Hogwild Gibbs sampling? in reference to recent
work [1] in which sequential computations for computing gradient steps were applied in parallel
(without global coordination) to great beneficial effect.
This Hogwild Gibbs sampling strategy has long been considered a useful hack, perhaps for preparing
decent initial states for a proper serial Gibbs sampler, but extensive empirical work on Approximate
Distributed Latent Dirichlet Allocation (AD-LDA) [2, 3, 4, 5, 6], which applies the strategy to
generate samples from a collapsed LDA model, has demonstrated its effectiveness in sampling LDA
models with the same predictive performance as those generated by standard serial Gibbs [2, Figure
3]. However, the results are largely empirical and so it is difficult to understand how model properties
and algorithm parameters might affect performance, or whether similar success can be expected
for any other models. There have been recent advances in understanding some of the particular
structure of AD-LDA [6], but a thorough theoretical explanation for the effectiveness and limitations
of Hogwild Gibbs sampling is far from complete.
Sampling inference algorithms for complex Bayesian models have notoriously resisted theoretical
analysis, so to begin an analysis of Hogwild Gibbs sampling we consider a restricted class of models that is especially tractable for analysis: Gaussians. Gaussian distributions and algorithms are
tractable because of their deep connection with linear algebra. Further, Gaussian sampling is of
Algorithm 1 Hogwild Gibbs Sampling
Require: Samplers G_i(x̄_{¬i}) which sample p(x_i | x_{¬i} = x̄_{¬i}), a partition {I_1, I_2, . . . , I_K} of
{1, 2, . . . , n}, and an inner iteration schedule q(k, ℓ) ≥ 0
1: Initialize x̄^(1)
2: for ℓ = 1, 2, . . . until convergence do                      ▷ global iterations/synchronizations
3:     for k = 1, 2, . . . , K in parallel do                    ▷ for each of K parallel processors
4:         ȳ_{I_k}^(1) ← x̄_{I_k}^(ℓ)
5:         for j = 1, 2, . . . , q(k, ℓ) do                      ▷ run local Gibbs steps with old
6:             for i ∈ I_k do                                    ▷ statistics from other processors
7:                 ȳ_i^(j) ← G_i(x̄_{I_1}^(ℓ), . . . , ȳ_{I_k \ {i}}^(j), . . . , x̄_{I_K}^(ℓ))
8:     x̄^(ℓ+1) ← (ȳ_{I_1}^(q(1,ℓ)), . . . , ȳ_{I_K}^(q(K,ℓ)))   ▷ globally synchronize statistics
great interest in its own right, and there is active research in developing powerful Gaussian samplers [7, 8, 9, 10]. Gaussian Hogwild Gibbs sampling could be used in conjunction with those
methods to allow greater parallelization and scalability, given an understanding of its applicability
and tradeoffs.
Toward the goal of understanding Gaussian Hogwild Gibbs sampling, the main contribution of this
paper is a linear algebraic framework for analyzing the stability and errors in Gaussian Hogwild
Gibbs sampling. Our framework yields several results, including a simple proof for a sufficient
condition for all Gaussian Hogwild Gibbs sampling processes to be stable and yield the correct
asymptotic mean no matter the allocation of variables to processors or number of sub-iterations
(Proposition 1, Theorem 1), as well as an analysis of errors introduced in the process variance.
Code to regenerate our plots is available at https://github.com/mattjj/gaussian-hogwild-gibbs.
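As a concrete illustration of Algorithm 1 in the Gaussian setting, here is a minimal sketch of ours (not the paper's released code): two variables on two "processors" with q = 1, where each global iteration resamples every coordinate from its Gibbs conditional using the stale synchronized values of the others. With a diagonally dominant precision matrix, consistent with the sufficient condition stated above, the process is stable and the empirical mean approaches the true mean:

```python
import math
import random

def hogwild_gibbs_gaussian(J, h, num_sweeps, seed=0):
    """Hogwild Gibbs for x ~ N(mu, J^{-1}) with mu = J^{-1} h, one variable per
    processor: every coordinate is resampled from its Gibbs conditional using
    the *stale* synchronized values of the other coordinates."""
    rng = random.Random(seed)
    dim = len(J)
    x = [0.0] * dim
    samples = []
    for _ in range(num_sweeps):
        new_x = []
        for i in range(dim):  # conceptually in parallel: all reads use the old x
            cond_mean = (h[i] - sum(J[i][j] * x[j] for j in range(dim) if j != i)) / J[i][i]
            new_x.append(cond_mean + rng.gauss(0.0, 1.0) / math.sqrt(J[i][i]))
        x = new_x             # global synchronization
        samples.append(x)
    return samples

J = [[2.0, 0.5], [0.5, 2.0]]   # diagonally dominant precision matrix
h = [1.0, 1.0]                 # true mean is J^{-1} h = (0.4, 0.4)
samples = hogwild_gibbs_gaussian(J, h, num_sweeps=20000)
est_mean = [sum(s[i] for s in samples) / len(samples) for i in range(2)]
```

With this partition and q = 1 the process is a stochastic Jacobi-like iteration; the empirical mean approaches (0.4, 0.4), though the stationary covariance generally differs from J⁻¹, which is the kind of variance error analyzed in this paper.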
2 Related Work
There has been significant work on constructing parallel Gibbs sampling algorithms, and the contributions are too numerous to list here. One recent body of work [11] provides exact parallel Gibbs
samplers which exploit graphical model structure for parallelism. The algorithms are supported by
the standard Gibbs sampling analysis, and the authors point out that while heuristic parallel samplers such as the AD-LDA sampler offer easier implementation and often greater parallelism, they
are currently not supported by much theoretical analysis.
The parallel sampling work that is most relevant to the proposed Hogwild Gibbs sampling analysis
is the thorough empirical demonstration of AD-LDA [2, 3, 4, 5, 6] and its extensions. The AD-LDA
sampling algorithm is an instance of the strategy we have named Hogwild Gibbs, and Bekkerman
et al. [5, Chapter 11] suggests applying the strategy to other latent variable models.
The work of Ihler et al. [6] provides some understanding of the effectiveness of a variant of AD-LDA
by bounding in terms of run-time quantities the one-step error probability induced by proceeding
with sampling steps in parallel, thereby allowing an AD-LDA user to inspect the computed error
bound after inference [6, Section 4.2]. In experiments, the authors empirically demonstrate very
small upper bounds on these one-step error probabilities, e.g. a value of their parameter ε = 10⁻⁴
meaning that at least 99.99% of samples are expected to be drawn just as if they were sampled
sequentially. However, this per-sample error does not necessarily provide a direct understanding
of the effectiveness of the overall algorithm because errors might accumulate over sampling steps;
indeed, understanding this potential error accumulation is of critical importance in iterative systems.
Furthermore, the bound is in terms of empirical run-time quantities, and thus it does not provide
guidance regarding on which other models the Hogwild strategy may be effective. Ihler et al. [6,
Section 4.3] also provides approximate scaling analysis by estimating the order of the one-step
bound in terms of a Gaussian approximation and some distributional assumptions.
Finally, Niu et al. [1] provides both a motivation for Hogwild Gibbs sampling as well as the Hogwild name. The authors present "a lock-free approach to parallelizing stochastic gradient descent" (SGD) by providing analysis that shows, for certain common problem structures, that the locking and synchronization needed to run a stochastic gradient descent algorithm "correctly" on a multicore architecture are unnecessary, and in fact the robustness of the SGD algorithm compensates for the uncertainty introduced by allowing processors to perform updates without locking their shared memory.
3 Background
In this section we fix notation for Gaussian distributions and describe known connections between
Gaussian sampling and a class of stationary iterative linear system solvers which are useful in analyzing the behavior of Hogwild Gibbs sampling.
The density of a Gaussian distribution on n variables with mean vector μ and positive definite¹ covariance matrix Σ ≻ 0 has the form

    p(x) ∝ exp{ −½ (x − μ)ᵀ Σ⁻¹ (x − μ) } ∝ exp{ −½ xᵀ J x + hᵀ x }        (1)

where we have written the information parameters J := Σ⁻¹ and h := Jμ. The matrix J is often called the precision matrix or information matrix, and it has a natural interpretation in the context of Gaussian graphical models: its entries are the coefficients on pairwise log potentials and its sparsity pattern is exactly the sparsity pattern of a graphical model. Similarly h, also called the potential vector, encodes node potentials and evidence.
In many problems [12] one has access to the pair (J, h) and must compute or estimate the moment parameters μ and Σ (or just the diagonal) or generate samples from N(μ, Σ). Sampling provides both a means for estimating the moment parameters and a subroutine for other algorithms. Computing μ from (J, h) is equivalent to solving the linear system Jμ = h for μ.

One way to generate samples is via Gibbs sampling, in which one iterates sampling each x_i conditioned on all other variables to construct a Markov chain for which the invariant distribution is the target N(μ, Σ). The conditional distributions for Gibbs sampling steps are p(x_i | x_{¬i} = x̄_{¬i}) ∝ exp{ −½ J_ii x_i² + (h_i − J_{i,¬i} x̄_{¬i}) x_i }. That is, we update each x_i via x_i ← (1/J_ii)(h_i − J_{i,¬i} x̄_{¬i}) + v_i where v_i ~iid N(0, 1/J_ii).
Since each variable update is a linear function of other variables with added Gaussian noise, we can collect one scan for i = 1, 2, . . . , n into a matrix equation relating the sampler state at t and t + 1:

    x^(t+1) = −D⁻¹ L x^(t+1) − D⁻¹ Lᵀ x^(t) + D⁻¹ h + D^(−1/2) v̂^(t),    v̂^(t) ~iid N(0, I),

where we have split J = L + D + Lᵀ into its strictly lower-triangular, diagonal, and strictly upper-triangular parts, respectively. Note that x^(t+1) appears on both sides of the equation, and that the sparsity patterns of L and Lᵀ ensure that each entry of x^(t+1) depends on the appropriate entries of x^(t) and x^(t+1). We can re-arrange the equation into an update expression:

    x^(t+1) = −(D + L)⁻¹ Lᵀ x^(t) + (D + L)⁻¹ h + (D + L)⁻¹ ṽ^(t),    ṽ^(t) ~iid N(0, D).
The expectation of this update is exactly the Gauss-Seidel iterative linear system solver update [13, Section 7.3] applied to Jμ = h, i.e. x^(t+1) = −(D + L)⁻¹ Lᵀ x^(t) + (D + L)⁻¹ h. Therefore a Gaussian Gibbs sampling process can be interpreted as Gauss-Seidel iterates on the system Jμ = h with appropriately-shaped noise injected at each iteration.
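This equivalence is easy to check numerically. The following sketch is our own illustration, not code from the paper: it runs Gibbs sweeps written in the matrix form above and verifies both that the noiseless update has μ = J⁻¹h as its fixed point (the Gauss-Seidel solver) and that the chain's long-run empirical mean approaches μ.

```python
# Illustrative sketch (ours, not the paper's code): a Gaussian Gibbs sweep
# written as the Gauss-Seidel update with injected noise,
#   x_{t+1} = -(D+L)^{-1} L^T x_t + (D+L)^{-1} h + (D+L)^{-1} v_t,  v_t ~ N(0, D).
import numpy as np

def gibbs_sweep(J, h, x, rng):
    """One scan of Gibbs sampling in matrix form."""
    d = np.diag(J)
    L = np.tril(J, -1)                        # strictly lower-triangular part of J
    v = np.sqrt(d) * rng.standard_normal(len(h))   # v ~ N(0, D)
    return np.linalg.solve(np.diag(d) + L, -L.T @ x + h + v)

rng = np.random.default_rng(0)
n = 5
Q = rng.standard_normal((n, n))
J = Q @ Q.T + n * np.eye(n)                   # a positive definite precision matrix
h = rng.standard_normal(n)
mu = np.linalg.solve(J, h)                    # target mean

# The noiseless update is exactly Gauss-Seidel, so mu is its fixed point ...
D, L = np.diag(np.diag(J)), np.tril(J, -1)
assert np.allclose(np.linalg.solve(D + L, -L.T @ mu + h), mu)

# ... and the Gibbs chain's empirical mean converges to mu.
x, total = np.zeros(n), np.zeros(n)
num_samples, burn_in = 20000, 1000
for t in range(num_samples):
    x = gibbs_sweep(J, h, x, rng)
    if t >= burn_in:
        total += x
mu_hat = total / (num_samples - burn_in)
print(np.max(np.abs(mu_hat - mu)))            # small; shrinks with more samples
```

The deterministic and stochastic parts can be checked independently, as above: the fixed-point assertion is exact linear algebra, while the Monte Carlo estimate carries sampling error.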
Gauss-Seidel is one instance of a stationary iterative linear solver based on a matrix splitting. In general, one can construct a stationary iterative linear solver for any splitting J = M − N where M is invertible, and similarly one can construct iterative Gaussian samplers via

    x^(t+1) = (M⁻¹N) x^(t) + M⁻¹ h + M⁻¹ v^(t),    v^(t) ~iid N(0, Mᵀ + N)        (2)
with the constraint that Mᵀ + N ≻ 0 (i.e. that the splitting is P-regular [14]). For an iterative process like (2) to be stable or convergent for any initialization we require the eigenvalues of its update map to lie in the interior of the complex unit disk, i.e. ρ(M⁻¹N) := max_i |λ_i(M⁻¹N)| < 1 [13, Lemma 7.3.6]. The Gauss-Seidel solver (and Gibbs sampling) correspond to choosing M to be the lower-triangular part of J and N to be the negative of the strict upper-triangle of J. J ≻ 0 is a sufficient condition for Gauss-Seidel to be convergent [13, Theorem 7.5.41] [15], and the connection to Gibbs sampling provides an independent proof.

¹ Assume models are non-degenerate: matrix parameters are of full rank and densities are finite everywhere.
For solving linear systems with splitting-based algorithms, the complexity of solving linear systems
in M directly affects the computational cost per iteration. For the Gauss-Seidel splitting (and hence
Gibbs sampling), M is chosen to be lower-triangular so that the corresponding linear system can
be solved efficiently via backsubstitution. In the sampling context, the per-iteration computational
complexity is also determined by the covariance of the injected noise process v (t) , because at each
iteration one must sample from a Gaussian distribution with covariance M T + N .
We highlight one other standard stationary iterative linear solver that is relevant to analyzing Gaussian Hogwild Gibbs sampling: Jacobi iterations, in which one splits J = D − A where D is the diagonal part of J and A is the negative of the off-diagonal part. Due to the choice of a diagonal M, each coordinate update depends only on the previous sweep's output, and thus the Jacobi update sweep can be performed in parallel. A sufficient condition for the convergence of Jacobi iterates is for J to be a generalized diagonally dominant matrix (i.e. an H-matrix) [13, Definition 5.13]. A simple proof,² due to Ruozzi et al. [16], is to consider Gauss-Seidel iterations on a lifted 2n × 2n system:
    ( D  −A )   G-S update   ( D⁻¹       0   ) ( 0  A )     ( 0   D⁻¹A    )
    ( −A  D )  ──────────→   ( D⁻¹AD⁻¹  D⁻¹ ) ( 0  0 )  =  ( 0  (D⁻¹A)²  )        (3)

Therefore one iteration of Gauss-Seidel on the lifted system is exactly two applications of the Jacobi update D⁻¹A to the second half of the state vector, so Jacobi iterations converge if Gauss-Seidel on the lifted system converges. Furthermore, a sufficient condition for Gauss-Seidel to converge on the lifted system is for it to be positive semi-definite, and by taking Schur complements we require D − AD⁻¹A ⪰ 0 or I − (D^(−1/2)AD^(−1/2))(D^(−1/2)AD^(−1/2)) ⪰ 0, which is equivalent to requiring generalized diagonal dominance [13, Theorem 5.14].
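This lifting argument can be checked numerically. The sketch below is our own illustration (not from the paper): it builds the lifted 2n × 2n system, forms its Gauss-Seidel update map M⁻¹N, and confirms that the map is zero on the first block column and applies the Jacobi update D⁻¹A twice on the second half of the state.

```python
# Sketch (ours): verify that Gauss-Seidel on the lifted system [[D, -A], [-A, D]]
# applies the Jacobi update D^{-1}A twice to the second half of the state vector.
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)                      # off-diagonal part of J = D - A
D = np.diag(np.abs(A).sum(axis=1) + 1.0)      # makes J generalized diagonally dominant

J_lift = np.block([[D, -A], [-A, D]])
M = np.tril(J_lift)                           # Gauss-Seidel M: lower triangle incl. diagonal
N = M - J_lift                                # so J_lift = M - N
T_lift = np.linalg.solve(M, N)                # Gauss-Seidel update map M^{-1} N

DA = np.linalg.solve(D, A)                    # Jacobi update map D^{-1} A
Z = np.zeros((n, n))
expected = np.block([[Z, DA], [Z, DA @ DA]])
print(np.allclose(T_lift, expected))          # True

# Diagonal dominance also gives rho(D^{-1}A) < 1, so Jacobi converges here.
print(np.max(np.abs(np.linalg.eigvals(DA))) < 1)   # True
```

The model here is built to be diagonally dominant by construction; for a non-dominant J the block structure of the lifted update is the same, but the spectral radius check can fail.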
4 Gaussian Hogwild Analysis
Given that Gibbs sampling iterations and Jacobi solver iterations, which can be computed in parallel,
can each be written as iterations of a stochastic linear dynamical system (LDS), it is not surprising
that Gaussian Hogwild Gibbs sampling can also be expressed as an LDS by appropriately composing
these ideas. In this section we describe the LDS corresponding to Gaussian Hogwild Gibbs sampling
and provide convergence and error analysis, along with a connection to a class of linear solvers.
For the majority of this section, we assume that the number of inner iterations performed on each
processor is constant across time and processor index; that is, we have a single number q = q(k, `)
of sub-iterations per processor for each outer iteration. We describe how to relax the assumption at
the end of this subsection.
Given a joint Gaussian distribution of dimension n represented by a pair (J, h) as in (1), we represent an allocation of the n scalar variables to local processors by a partition of {1, 2, . . . , n}, where we assume partition elements are contiguous without loss of generality. Consider a block-Jacobi splitting of J into its block diagonal and block off-diagonal components, J = Dblock − A, according to the partition. A includes the cross-processor potentials, and this block-Jacobi splitting will model the outer iterations in Algorithm 1. We further perform a Gauss-Seidel splitting on Dblock into (block-diagonal) lower-triangular and strictly upper-triangular parts, Dblock = B − C; these processor-local Gauss-Seidel splittings model the inner Gibbs sampling steps in Algorithm 1. We refer to this splitting J = B − C − A as the Hogwild splitting; see Figure 1a for an example.
For each outer iteration of the Hogwild Gibbs sampler we perform q processor-local Gibbs steps, effectively applying the block-diagonal update B⁻¹C repeatedly using Ax^(t) + h as a potential vector that includes out-of-date statistics from the other processors. The resulting update operator for one outer iteration of the Hogwild Gibbs sampling process can be written as

    x^(t+1) = (B⁻¹C)^q x^(t) + Σ_{j=0}^{q−1} (B⁻¹C)^j B⁻¹ ( Ax^(t) + h + v^(t,j) ),    v^(t,j) ~iid N(0, D),        (4)

where D is the diagonal of J. Note that we shape the noise diagonally because in Hogwild Gibbs sampling we simply apply standard Gibbs updates in the inner loop.

As mentioned previously, the update in (4) is written so that the number of sub-iterations is homogeneous, but the expression can easily be adapted to model any numbers of sub-iterations by writing a separate sum over j for each block row of the output and a separate matrix power for each block in the first (B⁻¹C)^q term. The proofs and arguments in the following subsections can also be extended with extra bookkeeping, so we focus on the homogeneous q case for convenience.

² When J is symmetric one can arrive at the same condition by applying a similarity transform as in Proposition 5. We use the lifting argument here because we extend the idea in our other proofs.
4.1 Convergence and Correctness of Means
Because the Gaussian Hogwild Gibbs sampling iterates form a Gaussian linear dynamical system, the process is stable (i.e. its iterates converge in distribution) if and only if [13, Lemma 7.3.6] the deterministic part of the update map (4) has spectral radius less than unity, i.e.

    T := (B⁻¹C)^q + Σ_{j=0}^{q−1} (B⁻¹C)^j B⁻¹A = (B⁻¹C)^q + (I − (B⁻¹C)^q)(B − C)⁻¹A        (5)

satisfies ρ(T) < 1. We can write T = T_ind^q + (I − T_ind^q) T_block, where T_ind is the purely Gauss-Seidel update when A = 0 and T_block is the block Jacobi update, which corresponds to solving the processor-local linear systems exactly at each outer iteration. The update (5) falls into the class of two-stage splitting methods [14, 17, 18], and the next proposition is equivalent to such two-stage solvers having the correct fixed point.
Proposition 1. If a Gaussian Hogwild Gibbs process is stable, the asymptotic mean is correct.

Proof. If the process is stable the mean process has a unique fixed point, and from (4) and (5) we can write the fixed-point equation for the process mean μ_hog as (I − T)μ_hog = (I − T_ind^q)(I − T_block)μ_hog = (I − T_ind^q)(B − C)⁻¹h, hence (I − (B − C)⁻¹A)μ_hog = (B − C)⁻¹h and μ_hog = (B − C − A)⁻¹h.
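Proposition 1 is straightforward to verify numerically. The sketch below is our own illustration (variable names and the test model are ours): it builds the Hogwild splitting J = B − C − A for a block partition, forms T as in (5), and checks that the fixed point of the mean process is the exact mean J⁻¹h.

```python
# Sketch (ours): the Hogwild splitting J = B - C - A, the update map T of (5),
# and a check of Proposition 1: the mean-process fixed point equals J^{-1} h.
import numpy as np

def hogwild_splitting(J, blocks):
    """A: negated off-block-diagonal of J; B - C: Gauss-Seidel splitting of the
    block diagonal (B block lower-triangular, C = negated strict block upper)."""
    Dblock = np.zeros_like(J)
    for idx in blocks:
        Dblock[np.ix_(idx, idx)] = J[np.ix_(idx, idx)]
    A = Dblock - J
    B = np.tril(Dblock)
    C = B - Dblock
    return B, C, A

rng = np.random.default_rng(2)
n, q = 9, 3
blocks = [list(range(0, 3)), list(range(3, 6)), list(range(6, 9))]

# A diagonally dominant J (so, by Theorem 1 below, the process is stable).
S0 = rng.standard_normal((n, n)); S0 = (S0 + S0.T) / 2; np.fill_diagonal(S0, 0.0)
J = -S0 + np.diag(np.abs(S0).sum(axis=1) + 1.0)
h = rng.standard_normal(n)

B, C, A = hogwild_splitting(J, blocks)
S = np.linalg.solve(B, C)                                   # B^{-1} C
Sq = np.linalg.matrix_power(S, q)
T = Sq + (np.eye(n) - Sq) @ np.linalg.solve(B - C, A)       # eq. (5)

assert np.max(np.abs(np.linalg.eigvals(T))) < 1             # stable here
# Mean process: mu <- T mu + (I - (B^{-1}C)^q)(B - C)^{-1} h; its fixed point:
mu_hog = np.linalg.solve(np.eye(n) - T,
                         (np.eye(n) - Sq) @ np.linalg.solve(B - C, h))
print(np.allclose(mu_hog, np.linalg.solve(J, h)))           # True
```

The constant term of the mean process uses the identity Σ_{j=0}^{q−1}(B⁻¹C)^j B⁻¹ = (I − (B⁻¹C)^q)(B − C)⁻¹, the same simplification used in (5).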
The behavior of the spectral radius of the update map can be very complicated, even generically over simple ensembles. In Figure 1b, we compare ρ(T) for q = 1 and q = ∞ (corresponding to T = T_block) with models sampled from a natural random ensemble; we see that there is no general relationship between stability at q = 1 and at q = ∞.
Despite the complexity of the update map's stability, in the next subsection we give a simple argument that identifies its convergence with the convergence of Gauss-Seidel iterates on a larger,
non-symmetric linear system. Given that relationship we then prove a condition on the entries of
J that ensures the convergence of the Gaussian Hogwild Gibbs sampling process for any choice of
partition or sub-iteration count.
4.1.1 A lifting argument and sufficient condition
First observe that we can write multiple steps of Gauss-Seidel as a single step of Gauss-Seidel on a larger system: given J = L − U where L is lower-triangular (including the diagonal, unlike the notation of Section 3) and U is strictly upper-triangular, consider applying Gauss-Seidel to a larger block k × k system:

    ( L              −U )                 ( 0  ⋯  0   L⁻¹U     )
    ( −U   L            )   G-S update    ( 0  ⋯  0   (L⁻¹U)²  )
    (      ⋱    ⋱      )  ──────────→    ( ⋮         ⋮        )        (6)
    (           −U   L  )                 ( 0  ⋯  0   (L⁻¹U)^k )

where the Gauss-Seidel M is the block lower-bidiagonal part (so M⁻¹ is block lower-triangular with (i, j) block (L⁻¹U)^{i−j} L⁻¹) and N is the single U block in the top-right corner. Therefore one step of Gauss-Seidel on the larger system corresponds to k applications of the Gauss-Seidel update L⁻¹U from the original system to the last block element of the lifted state vector.
Now we provide a lifting on which Gauss-Seidel corresponds to Gaussian Hogwild Gibbs iterations.
Figure 1: (a) visualization of the Hogwild splitting: the support pattern (in black) of J = B − C − A with n = 9 and the processor partition {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}; (b) Hogwild stability for generic models: ρ(T) for q = 1 versus ρ(T) for q = ∞; (c) and (d) typical plots of ||Π(Σ − Σ_hog)||_Fro against t for several values of ρ(B⁻¹C)^q (and for A = 0), where Π projects to the block diagonal in (c) and to the off-block-diagonal in (d). In (b) each point corresponds to a sampled model J = QQᵀ + nrI with Q_ij ~iid N(0, 1), r ~ Uniform[0.5, 1], n = 24 with an even partition of size 4. In (c) and (d), models are J = B − C − tA where B − C − A = QQᵀ, n = 150 with an even partition of size 3. The plots can be generated with python figures.py -seed=0.
Proposition 2. Two applications of the Hogwild update T of (5) are equivalent to the update to the last block element of the state vector in one Gauss-Seidel iteration on the (2qn) × (2qn) system

    ( E   −F )        ( h )
    (         ) x̂ =  ( ⋮ )        (7)
    ( −F   E  )       ( h )

with E the q × q block lower-bidiagonal matrix with B on the diagonal and −C on the first block subdiagonal, and F the q × q block matrix whose last block column is (A + C, A, . . . , A) (top to bottom) and which is zero elsewhere.

Proof. By comparing to the block update in (3), it suffices to consider E⁻¹F. Furthermore, since the claim concerns the last block entry, we need only consider the last block row of E⁻¹F. E is block lower-bidiagonal as the matrix that is inverted in (6), so E⁻¹ has the same lower-triangular form as in (6) and the product of the last block row of E⁻¹ with the last block column of F yields (B⁻¹C)^q + Σ_{j=0}^{q−1} (B⁻¹C)^j B⁻¹A = T.
Proposition 3. Gaussian Hogwild Gibbs sampling is convergent if Gauss-Seidel converges on (7).
Unfortunately the lifting is not symmetric and so we cannot impose positive semi-definiteness on
the lifted system; however, another sufficient condition for Gauss-Seidel stability can be applied:
Theorem 1. If J is generalized diagonally dominant (i.e. an H-matrix, see Berman et al. [13, Definition 5.13, Theorem 5.14]) then Hogwild Gibbs sampling is convergent for any variable partition
and any number of sub-iterations.
Proof. If J is generalized diagonally dominant then there exists a diagonal scaling matrix R such that J̃ := JR is row diagonally dominant, i.e. J̃_ii ≥ Σ_{j≠i} |J̃_ij|. Since each scalar row of the coefficient matrix in (7) contains only entries from one row of J and zeros, it is generalized diagonally dominant with a scaling matrix that consists of 2q copies of R along the diagonal. Finally, Gauss-Seidel iterations on generalized diagonally dominant systems are convergent [13, Theorem 5.14], so by Proposition 3 the corresponding Hogwild Gibbs iterations are convergent.
In terms of Gaussian graphical models, generalized diagonally dominant models include tree models
and latent tree models (since H-matrices are closed under Schur complements), in which the density of the distribution can be written as a tree-structured set of pairwise potentials over the model
variables and a set of latent variables. Latent tree models are useful in modeling data with hierarchical or multi-scaled relationships, and this connection to latent tree structure is evocative of many
hierarchical Bayesian models. More broadly, diagonally dominant systems are well-known for their
tractability and applicability in many other settings [19], and Gaussian Hogwild Gibbs provides
another example of their utility.
Because of the connection to linear system solvers based on two-stage multisplittings, this result
can be identified with [18, Theorem 2.3], which shows that if the coefficient matrix is an H-matrix
then the two-stage iterative solver is convergent. Indeed, by the connection between solvers and
samplers one can prove our Theorem 1 as a corollary to [18, Theorem 2.3] (or vice-versa), though
our proof technique is much simpler. The other results on two-stage multisplittings [18, 14], can
also be applied immediately for results on the convergence of Gaussian Hogwild Gibbs sampling.
The sufficient condition provided by Theorem 1 is coarse in that it provides convergence for any partition or update schedule. However, given the complexity of the processes, as exhibited in Figure 1b,
it is difficult to provide general conditions without taking into account some model structure.
4.1.2
Exact local block samples
Convergence analysis simplifies greatly in the case where exact block samples are drawn at each
processor because q is sufficiently large or because another exact sampler [9, 10] is used on each
processor. This regime of Hogwild Gibbs sampling is particularly interesting because it minimizes
communication between processors.
In (4), we see that as q → ∞ we have T → T_block; that is, the deterministic part of the update becomes the block Jacobi update map, which admits a natural sufficient condition for convergence:
Proposition 4. If ((B − C)^(−1/2) A (B − C)^(−1/2))² ≺ I, then block Hogwild Gibbs sampling converges.

Proof. Since similarity transformations preserve eigenvalues, with Ã := (B − C)^(−1/2) A (B − C)^(−1/2) we have ρ(T_block) = ρ((B − C)^(1/2) (B − C)⁻¹ A (B − C)^(−1/2)) = ρ(Ã), and since Ã is symmetric, Ã² ≺ I ⇒ ρ(Ã) < 1 ⇒ ρ(T_block) < 1.
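The similarity step in this proof can be checked directly. The sketch below is our own illustration (the test model is ours): in the block-exact regime B − C is the block diagonal of J, symmetric positive definite for a non-degenerate model, and ρ((B − C)⁻¹A) coincides with ρ(Ã) for Ã = (B − C)^(−1/2) A (B − C)^(−1/2).

```python
# Sketch (ours): verify rho(T_block) = rho(Atilde) via the similarity transform,
# with Atilde = (B - C)^{-1/2} A (B - C)^{-1/2} and B - C the block diagonal of J.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(4)
n = 8
S0 = rng.standard_normal((n, n)); S0 = (S0 + S0.T) / 2; np.fill_diagonal(S0, 0.0)
J = -S0 + np.diag(np.abs(S0).sum(axis=1) + 1.0)   # symmetric positive definite

blocks = [list(range(0, 4)), list(range(4, 8))]
Db = np.zeros_like(J)
for idx in blocks:
    Db[np.ix_(idx, idx)] = J[np.ix_(idx, idx)]    # B - C = Dblock
A = Db - J                                        # symmetric cross-processor part

R = np.real(sqrtm(np.linalg.inv(Db)))             # (B - C)^{-1/2}, symmetric
Atilde = R @ A @ R
rho_block = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(Db, A))))
rho_tilde = np.max(np.abs(np.linalg.eigvalsh(Atilde)))
print(np.isclose(rho_block, rho_tilde))           # True: similar matrices
print(rho_tilde < 1)                              # block Hogwild converges here
```

Because Ã is symmetric, Ã² ≺ I is just the statement that every eigenvalue of Ã lies strictly inside (−1, 1), i.e. ρ(Ã) < 1.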
4.2 Variances
Since we can analyze Gaussian Hogwild Gibbs sampling as a linear dynamical system, we can write an expression for the steady-state covariance Σ_hog of the process when it is stable. For a general stable LDS of the form x^(t+1) = T x^(t) + v^(t) with v^(t) ~ N(0, Σ_inj), the steady-state covariance is given by the series Σ_{t=0}^∞ Tᵗ Σ_inj Tᵗᵀ, which is the solution to the linear discrete-time Lyapunov equation Σ − TΣTᵀ = Σ_inj in Σ.

The injected noise for the outer loop of the Hogwild iterations is generated by the inner loop, which itself has injected noise with covariance D, the diagonal of J, so for Hogwild sampling we have Σ_inj = Σ_{j=0}^{q−1} (B⁻¹C)^j B⁻¹ D B⁻ᵀ (B⁻¹C)^(jᵀ). The target covariance is J⁻¹ = (B − C − A)⁻¹. Composing these expressions we see that the Hogwild covariance is complicated, but we can analyze some salient properties in at least two regimes: when A is small and when local processors draw exact block samples (e.g. when q → ∞).
4.2.1 First-order effects in A
Intuitively, the Hogwild strategy works best when cross-processor interactions are small, and so it
is natural to analyze the case when A is small and we can discard terms that include powers of A
beyond first order.
When A = 0, the model is independent across processors and both the exact covariance and the Hogwild steady-state covariance for any q is (B − C)⁻¹. For small nonzero A, we consider Σ_hog(A) to be a function of A and linearize around A = 0 to write Σ_hog(A) ≈ (B − C)⁻¹ + [D₀Σ_hog](A), where the derivative [D₀Σ_hog](A) is a matrix determined by the linear equation

    [D₀Σ_hog](A) − S [D₀Σ_hog](A) Sᵀ = Ã − S Ã Sᵀ − (I − S) Ã (I − S)ᵀ

where Ã := (B − C)⁻¹ A (B − C)⁻¹ and S := (B⁻¹C)^q. See the supplementary materials. We can compare this linear approximation to the linear approximation for the exact covariance:

    J⁻¹ = [ I + (B − C)⁻¹A + ((B − C)⁻¹A)² + · · · ](B − C)⁻¹ ≈ (B − C)⁻¹ + Ã.        (8)
Since Ã has zero block-diagonal and S is block-diagonal, we see that to first order A has no effect on the block-diagonal of either the exact covariance or the Hogwild covariance. As shown in Figure 1c, in numerical experiments higher-order terms improve the Hogwild covariance on the block diagonal relative to the A = 0 approximation, and the improvements increase with local mixing rates.
The off-block-diagonal first-order term in the Hogwild covariance is nonzero and it depends on the local mixing performed by S. In particular, if global synchronization happens infrequently relative to the speed of local sampler mixing (e.g. if q is large), S ≈ 0 and D₀Σ_hog ≈ 0, so Σ_hog ≈ (B − C)⁻¹ (to first order in A) and cross-processor interactions are ignored (though they are still used to compute the correct mean, as per Proposition 1). However, when there are directions in which S is slow to mix, D₀Σ_hog picks up some parts of the correct covariance's first-order term, Ã. Figure 1d shows the off-block-diagonal error increasing with faster local mixing for small A.
Intuitively, more local mixing, and hence relatively less frequent global synchronization, degrades
the Hogwild approximation of the cross-processor covariances. Such an effect may be undesirable
because increased local mixing reflects greater parallelism (or an application of more powerful local
samplers [9, 10]). In the next subsection we show that this case admits a special analysis and even an
inexpensive correction to recover asymptotically unbiased estimates for the full covariance matrix.
4.2.2 Exact local block samples
As local mixing increases, e.g. as q → ∞ or if we use an exact block local sampler between global synchronizations, we are effectively sampling in the lifted model of Eq. (3) and therefore we can use the lifting construction to analyze the error in variances:

Proposition 5. When local block samples are exact, the Hogwild sampled covariance Σ_Hog satisfies

    Σ = (I + (B − C)⁻¹A) Σ_Hog    and    ||Σ − Σ_Hog|| ≤ ||(B − C)⁻¹A|| ||Σ_Hog||,

where Σ = J⁻¹ is the exact target covariance and || · || is any submultiplicative matrix norm.

Proof. Using the lifting in (3), the Hogwild process steady-state covariance is the marginal covariance of half of the lifted state vector, so using Schur complements we can write Σ_Hog = ((B − C) − A(B − C)⁻¹A)⁻¹ = [I + ((B − C)⁻¹A)² + · · · ](B − C)⁻¹. We can compare this series to the exact expansion in (8) to see that Σ_Hog includes exactly the even powers (due to the block-bipartite lifting), so therefore Σ − Σ_Hog = [(B − C)⁻¹A + ((B − C)⁻¹A)³ + · · · ](B − C)⁻¹ = (B − C)⁻¹A Σ_Hog.
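Proposition 5 yields an inexpensive correction: multiply the Hogwild covariance estimate by (I + (B − C)⁻¹A). The sketch below (our own illustration, using the closed form Σ_Hog = ((B − C) − A(B − C)⁻¹A)⁻¹ from the proof) verifies both the identity and the norm bound on a small model.

```python
# Sketch (ours): in the block-exact regime, Sigma_Hog = ((B-C) - A(B-C)^{-1}A)^{-1},
# and Proposition 5's correction (I + (B-C)^{-1}A) Sigma_Hog recovers J^{-1} exactly.
import numpy as np

rng = np.random.default_rng(6)
n = 9
blocks = [list(range(0, 3)), list(range(3, 6)), list(range(6, 9))]
S0 = rng.standard_normal((n, n)); S0 = (S0 + S0.T) / 2; np.fill_diagonal(S0, 0.0)
J = -S0 + np.diag(np.abs(S0).sum(axis=1) + 1.0)   # diagonally dominant model

Db = np.zeros_like(J)                    # B - C: the block diagonal of J
for idx in blocks:
    Db[np.ix_(idx, idx)] = J[np.ix_(idx, idx)]
A = Db - J                               # so J = (B - C) - A

Sigma = np.linalg.inv(J)
Sigma_hog = np.linalg.inv(Db - A @ np.linalg.solve(Db, A))
corrected = (np.eye(n) + np.linalg.solve(Db, A)) @ Sigma_hog

print(np.allclose(corrected, Sigma))     # True: the correction is exact
# The error bound in the spectral norm (any submultiplicative norm works):
err = np.linalg.norm(Sigma - Sigma_hog, 2)
bound = np.linalg.norm(np.linalg.solve(Db, A), 2) * np.linalg.norm(Sigma_hog, 2)
print(err <= bound + 1e-12)              # True
```

Since the correction only requires one block-diagonal solve and a matrix multiply, it is cheap relative to sampling, and it leaves the (already correct) asymptotic mean untouched.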
5 Conclusion
We have introduced a framework for understanding Gaussian Hogwild Gibbs sampling and shown
some results on the stability and errors of the algorithm, including (1) quantitative descriptions for
when a Gaussian model is not "too dependent," so that Hogwild sampling remains stable (Proposition 2, Theorem 1, Proposition 4); (2) given stability, the asymptotic Hogwild mean is always correct
(Proposition 1); (3) in the linearized regime with small cross-processor interactions, there is a tradeoff between local mixing and error in Hogwild cross-processor covariances (Section 4.2.1); and (4)
when local samplers are run to convergence we can bound the error in the Hogwild variances and
even efficiently correct estimates of the full covariance (Proposition 5). We hope these ideas may be
extended to provide further insight into Hogwild Gibbs sampling, in the Gaussian case and beyond.
6 Acknowledgements
This research was supported in part under AFOSR Grant FA9550-12-1-0287.
References
[1] F. Niu, B. Recht, C. Ré, and S.J. Wright. "Hogwild!: A lock-free approach to parallelizing stochastic gradient descent". In: Advances in Neural Information Processing Systems (2011).
[2] D. Newman, A. Asuncion, P. Smyth, and M. Welling. "Distributed inference for latent dirichlet allocation". In: Advances in Neural Information Processing Systems 20.1081-1088 (2007), pp. 17-24.
[3] D. Newman, A. Asuncion, P. Smyth, and M. Welling. "Distributed algorithms for topic models". In: The Journal of Machine Learning Research 10 (2009), pp. 1801-1828.
[4] Z. Liu, Y. Zhang, E.Y. Chang, and M. Sun. "PLDA+: Parallel latent dirichlet allocation with data placement and pipeline processing". In: ACM Transactions on Intelligent Systems and Technology (TIST) 2.3 (2011), p. 26.
[5] R. Bekkerman, M. Bilenko, and J. Langford. Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2012.
[6] A. Ihler and D. Newman. "Understanding Errors in Approximate Distributed Latent Dirichlet Allocation". In: Knowledge and Data Engineering, IEEE Transactions on 24.5 (2012), pp. 952-960.
[7] Y. Liu, O. Kosut, and A. S. Willsky. "Sampling GMRFs by Subgraph Correction". In: NIPS 2012 Workshop: Perturbations, Optimization, and Statistics (2012).
[8] G. Papandreou and A. Yuille. "Gaussian sampling by local perturbations". In: Neural Information Processing Systems (NIPS). 2010.
[9] A. Parker and C. Fox. "Sampling Gaussian distributions in Krylov spaces with conjugate gradients". In: SIAM Journal on Scientific Computing 34.3 (2012), pp. 312-334.
[10] Colin Fox and Albert Parker. "Convergence in Variance of First-Order and Second-Order Chebyshev Accelerated Gibbs Samplers". 2013. URL: http://www.physics.otago.ac.nz/data/fox/publications/SIAM_CS_2012-11-30.pdf.
[11] J. Gonzalez, Y. Low, A. Gretton, and C. Guestrin. "Parallel Gibbs Sampling: From Colored Fields to Thin Junction Trees". In: Artificial Intelligence and Statistics (AISTATS). Ft. Lauderdale, FL, May 2011.
[12] M. J. Wainwright and M. I. Jordan. "Graphical models, exponential families, and variational inference". In: Foundations and Trends in Machine Learning 1.1-2 (2008), pp. 1-305.
[13] A. Berman and R.J. Plemmons. "Nonnegative Matrices in the Mathematical Sciences". In: Classics in Applied Mathematics, 9 (1979).
[14] M. J. Castel, V. Migallón, and J. Penadés. "On Parallel two-stage methods for Hermitian positive definite matrices with applications to preconditioning". In: Electronic Transactions on Numerical Analysis 12 (2001), pp. 88-112.
[15] D. Serre. Nov. 2011. URL: http://mathoverflow.net/questions/80793/is-gauss-seidel-guaranteed-to-converge-on-semi-positive-definite-matrices/80845#80845.
[16] Nicholas Ruozzi and Sekhar Tatikonda. "Message-Passing Algorithms for Quadratic Minimization". In: Journal of Machine Learning Research 14 (2013), pp. 2287-2314. URL: http://jmlr.org/papers/v14/ruozzi13a.html.
[17] A. Frommer and D.B. Szyld. "On asynchronous iterations". In: Journal of Computational and Applied Mathematics 123.1 (2000), pp. 201-216.
[18] A. Frommer and D.B. Szyld. "Asynchronous two-stage iterative methods". In: Numerische Mathematik 69.2 (1994), pp. 141-153.
[19] J. A. Kelner, L. Orecchia, A. Sidford, and Z. A. Zhu. A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time. 2013. arXiv: 1301.6628 [cs.DS].
Flexible sampling of discrete data correlations
without the marginal distributions
Ricardo Silva
Department of Statistical Science and CSML
University College London
[email protected]
Alfredo Kalaitzis
Department of Statistical Science and CSML
University College London
[email protected]
Abstract
Learning the joint dependence of discrete variables is a fundamental problem in
machine learning, with many applications including prediction, clustering and
dimensionality reduction. More recently, the framework of copula modeling
has gained popularity due to its modular parameterization of joint distributions.
Among other properties, copulas provide a recipe for combining flexible models
for univariate marginal distributions with parametric families suitable for potentially high dimensional dependence structures. More radically, the extended rank
likelihood approach of Hoff (2007) bypasses learning marginal models completely
when such information is ancillary to the learning task at hand as in, e.g., standard
dimensionality reduction problems or copula parameter estimation. The main idea
is to represent data by their observable rank statistics, ignoring any other information from the marginals. Inference is typically done in a Bayesian framework with
Gaussian copulas, and it is complicated by the fact this implies sampling within
a space where the number of constraints increases quadratically with the number
of data points. The result is slow mixing when using off-the-shelf Gibbs sampling. We present an efficient algorithm based on recent advances on constrained
Hamiltonian Markov chain Monte Carlo that is simple to implement and does not
require paying for a quadratic cost in sample size.
1
Contribution
There are many ways of constructing multivariate discrete distributions: from full contingency tables in the small dimensional case [1], to structured models given by sparsity constraints [11] and
(hierarchies of) latent variable models [6]. More recently, the idea of copula modeling [16] has
been combined with such standard building blocks. Our contribution is a novel algorithm for efficient Markov chain Monte Carlo (MCMC) for the copula framework introduced by [7], extending
algorithmic ideas introduced by [17].
A copula is a continuous cumulative distribution function (CDF) with uniformly distributed univariate marginals in the unit interval [0, 1]. It complements graphical models and other formalisms
that provide a modular parameterization of joint distributions. The core idea is simple and given
by the following observation: suppose we are given a (say) bivariate CDF F (y1 , y2 ) with marginals
F1(y1) and F2(y2). This CDF can then be rewritten as F(F1⁻¹(F1(y1)), F2⁻¹(F2(y2))). The function C(·, ·) given by F(F1⁻¹(·), F2⁻¹(·)) is a copula. For discrete distributions, this decomposition
is not unique but still well-defined [16]. Copulas have found numerous applications in statistics
and machine learning since they provide a way of constructing flexible multivariate distributions by
mix-and-matching different copulas with different univariate marginals. For instance, one can combine flexible univariate marginals Fi (?) with useful but more constrained high-dimensional copulas.
We will not further motivate the use of copula models, which has been discussed at length in recent
1
machine learning publications and conference workshops, and for which comprehensive textbooks
exist [e.g., 9]. For a recent discussion on the applications of copulas from a machine learning perspective, [4] provides an overview. [10] is an early reference in machine learning. The core idea
dates back at least to the 1950s [16].
In the discrete case, copulas can be difficult to apply: transforming a copula CDF into a probability
mass function (PMF) is computationally intractable in general. For the continuous case, a common
trick goes as follows: transform variables by defining ai ? F?i (yi ) for an estimate of Fi (?) and then
fit a copula density c(?, . . . , ?) to the resulting ai [e.g. 9]. It is not hard to check this breaks down
in the discrete case [7]. An alternative is to represent the CDF to PMF transformation for each data
point by a continuous integral on a bounded space. Sampling methods can then be used. This trick
has allowed many applications of the Gaussian copula to discrete domains. Readers familiar with
probit models will recognize the similarities to models where an underlying latent Gaussian field is
discretized into observable integers as in Gaussian process classifiers and ordinal regression [18].
Such models can be indirectly interpreted as special cases of the Gaussian copula.
In what follows, we describe in Section 2 the Gaussian copula and the general framework for constructing Bayesian estimators of Gaussian copulas by [7], the extended rank likelihood framework.
This framework entails computational issues which are discussed. A recent general approach for
MCMC in constrained Gaussian fields by [17] can in principle be directly applied to this problem
as a blackbox, but at a cost that scales quadratically in sample size and as such it is not practical
in general. Our key contribution is given in Section 4. An application experiment on the Bayesian
Gaussian copula factor model is performed in Section 5. Conclusions are discussed in the final
section.
2
Gaussian copulas and the extended rank likelihood
It is not hard to see that any multivariate Gaussian copula is fully defined by a correlation matrix C,
since marginal distributions have no free parameters. In practice, the following equivalent generative
model is used to define a sample U according to a Gaussian copula GC(C):
1. Sample Z from a zero mean Gaussian with covariance matrix C
2. For each Zj , set Uj = Φ(zj ), where Φ(·) is the CDF of the standard Gaussian
It is clear that each Uj follows a uniform distribution in [0, 1]. To obtain a model for variables
{y1 , y2 , . . . , yp } with marginal distributions Fj(·) and copula GC(C), one can add the deterministic
step yj = Fj⁻¹(uj). Now, given n samples of observed data Y ≡ {y1^(1) , . . . , yp^(1) , . . . , y1^(n) , . . . , yp^(n)},
one is interested in inferring C via a Bayesian approach and the posterior distribution

p(C, θF | Y) ∝ pGC(Y | C, θF) π(C, θF)

where π(·) is a prior distribution, θF are marginal parameters for each Fj(·), which in general might
need to be marginalized since they will be unknown, and pGC(·) is the PMF of a (here discrete)
distribution with a Gaussian copula and marginals given by θF .
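As a quick illustration of the two-step generative process above, the following NumPy sketch (ours, for illustration only; the ordinal marginal with cumulative probabilities 0.3 and 0.8 is a made-up choice of Fj, not one used in the paper) draws copula samples and pushes them through a discrete inverse CDF:

```python
import math
import numpy as np

def sample_gc_ordinal(C, cdf_levels, n, rng):
    """Draw n rows from a Gaussian copula GC(C), then push each U_j through
    a discrete inverse CDF given by cumulative probabilities `cdf_levels`."""
    p = C.shape[0]
    Z = rng.multivariate_normal(np.zeros(p), C, size=n)            # step 1
    U = 0.5 * (1.0 + np.vectorize(math.erf)(Z / math.sqrt(2.0)))   # step 2: Phi(z)
    Y = np.searchsorted(cdf_levels, U)                             # y_j = F_j^{-1}(u_j)
    return U, Y

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.7], [0.7, 1.0]])            # copula correlation matrix
U, Y = sample_gc_ordinal(C, np.array([0.3, 0.8]), 2000, rng)
```

Each column of U is uniform on [0, 1] while the dependence between columns is governed solely by C; Y then takes three ordinal levels with marginal probabilities (0.3, 0.5, 0.2).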
Let Z be the underlying latent Gaussians of the corresponding copula for dataset Y. Although Y is a
deterministic function of Z, this mapping is not invertible due to the discreteness of the distribution:
each marginal Fj(·) has jumps. Instead, the reverse mapping only enforces the constraints where
yj^(i1) < yj^(i2) implies zj^(i1) < zj^(i2). Based on this observation, [7] considers the event Z ∈ D(y),
where D(y) is the set of values of Z in R^{n×p} obeying those constraints, that is

D(y) ≡ { Z ∈ R^{n×p} : max{ zj^(k) s.t. yj^(k) < yj^(i) } < zj^(i) < min{ zj^(k) s.t. yj^(i) < yj^(k) } }.

Since {Y = y} implies Z(y) ∈ D(y), we have
pGC(Y | C, θF) = pGC(Z ∈ D(y), Y | C, θF)
             = pN(Z ∈ D(y) | C) × pGC(Y | Z ∈ D(y), C, θF),   (1)

the first factor of the last line being that of a zero-mean Gaussian density function marginalized
over D(y).
The extended rank likelihood is defined by the first factor of (1). With this likelihood, inference for
C is given simply by marginalizing
p(C, Z | Y) ∝ I(Z ∈ D(y)) pN(Z | C) π(C),   (2)
the first factor of the right-hand side being the usual binary indicator function.
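Concretely, membership in D(y) only compares order statistics within each column: for every pair of adjacent observed levels, the largest latent value of the lower level must stay below the smallest latent value of the upper level. A small sketch of that check (an illustrative helper of ours, not code from the paper):

```python
import numpy as np

def in_D(Z, Y):
    """True iff Z lies in D(y): within each column j, the ordering of Z[:, j]
    agrees with the ordering of the observed discrete levels in Y[:, j]."""
    for j in range(Y.shape[1]):
        levels = np.unique(Y[:, j])                 # sorted observed levels
        for lo, hi in zip(levels[:-1], levels[1:]):
            if Z[Y[:, j] == lo, j].max() >= Z[Y[:, j] == hi, j].min():
                return False
    return True

Y = np.array([[0], [0], [1], [2]])
Z_ok = np.array([[-1.0], [-0.5], [0.2], [1.3]])     # respects the observed ranks
Z_bad = np.array([[-1.0], [0.5], [0.2], [1.3]])     # a level-0 point above a level-1 point
```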
Strictly speaking, this is not a fully Bayesian method since partial information on the marginals is
ignored. Nevertheless, it is possible to show that under some mild conditions there is information in
the extended rank likelihood to consistently estimate C [13]. It has two important properties: first,
in many applications where marginal distributions are nuisance parameters, this sidesteps any major
assumptions about the shape of {Fi (?)} ? applications include learning the degree of dependence
among variables (e.g., to understand relationships between social indicators as in [7] and [13]) and
copula-based dimensionality reduction (a generalization of correlation-based principal component
analysis, e.g., [5]); second, MCMC inference in the extended rank likelihood is conceptually simpler
than with the joint likelihood, since dropping marginal models will remove complicated entanglements between C and θF . Therefore, even if θF is necessary (when, for instance, predicting missing
values of Y), an estimate of C can be computed separately and will not depend on the choice of
estimator for {Fi(·)}. The standard model with a full correlation matrix C can be further refined
to take into account structure implied by sparse inverse correlation matrices [2] or low rank decompositions via higher-order latent variable models [13], among others. We explore the latter case in
section 5.
An off-the-shelf algorithm for sampling from (2) is full Gibbs sampling: first, given Z, the (full or
structured) correlation matrix C can be sampled by standard methods. More to the point, sampling
Z is straightforward if for each variable j and data point i we sample Zj^(i) conditioned on all other
variables. The corresponding distribution is a univariate truncated Gaussian. This is the approach
used originally by Hoff. However, mixing can be severely compromised by the sampling of Z, and
that is where novel sampling methods can facilitate inference.
3
Exact HMC for truncated Gaussian distributions
Hoff's algorithm modifies the positions of all Zj^(i) associated with a particular discrete value of Yj ,
conditioned on the remaining points. As the number of data points increases, the spread of the hard
boundaries on Zj^(i), given by data points of Zj associated with other levels of Yj , increases. This
reduces the space in which variables Zj^(i) can move at a time.

To improve the mixing, we aim to sample from the joint Gaussian distribution of all latent variables
Zj^(i), i = 1 . . . n , conditioned on other columns of the data, such that the constraints between them
are satisfied and thus the ordering in the observation level is conserved. Standard Gibbs approaches
for sampling from truncated Gaussians reduce the problem to sampling from univariate truncated
Gaussians. Even though each step is computationally simple, mixing can be slow when strong
correlations are induced by very tight truncation bounds.
In the following, we briefly describe the methodology recently introduced by [17] that deals with
the problem of sampling from log p(x) ∝ −(1/2) xᵀMx + rᵀx , where x, r ∈ Rⁿ and M is positive
definite, with linear constraints of the form fjᵀ x ≥ gj , where fj ∈ Rⁿ , j = 1 . . . m, is the
normal vector to some linear boundary in the sample space.
Later in this section we shall describe how this framework can be applied to the Gaussian copula
extended rank likelihood model. More importantly, the observed rank statistics impose only linear
constraints of the form xi1 ≤ xi2 . We shall describe how this special structure can be exploited to
reduce the runtime complexity of the constrained sampler from O(n²) (in the number of observations) to O(n) in practice.
3.1
Hamiltonian Monte Carlo for the Gaussian distribution
Hamiltonian Monte Carlo (HMC) [15] is a MCMC method that extends the sampling space with
auxiliary variables so that (ideally) deterministic moves in the joint space bring the sampler to
potentially far places in the original variable space. Deterministic moves cannot in general be done,
but this is possible in the Gaussian case.
The form of the Hamiltonian for the general d-dimensional Gaussian case with mean μ and precision matrix M is:

H = (1/2) xᵀMx − rᵀx + (1/2) sᵀM⁻¹s ,   (3)

where M is also known in the present context as the mass matrix, r = Mμ and s is the
velocity. Both x and s are Gaussian distributed so this Hamiltonian can be seen (up to a constant)
as the negative log of the product of two independent Gaussian random variables. The physical
interpretation is that of a sum of potential and kinetic energy terms, where the total energy of the
system is conserved.
In a system where this Hamiltonian function is constant, we can exactly compute its evolution
through the pair of differential equations:
ẋ = ∇s H = M⁻¹s ,    ṡ = −∇x H = −Mx + r .   (4)

These are solved exactly by x(t) = μ + a sin(t) + b cos(t) , where a and b can be identified
at initial conditions (t = 0):

a = ẋ(0) = M⁻¹s ,    b = x(0) − μ .   (5)
Therefore, the exact HMC algorithm can be summarised as follows:
• Initialise the allowed travel time T and some initial position x0 .
• Repeat for HMC samples k = 1 . . . N
1. Sample sk ~ N(0, M)
2. Use sk and xk to update a and b and store the new position at the end of the
trajectory xk+1 = x(T ) as an HMC sample.
It can be easily shown that the Markov chain of sampled positions has the desired equilibrium distribution N(μ, M⁻¹) [17].
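For the unconstrained case, the whole sampler above fits in a few lines. The following NumPy sketch (ours, for illustration, not the authors' code) draws a velocity, identifies a and b from equations (5), and jumps directly to x(T):

```python
import numpy as np

def exact_hmc_gaussian(M, r, x0, T, n_samples, rng):
    """Exact HMC for N(mu, M^{-1}) with mu = M^{-1} r: the dynamics
    x(t) = mu + a sin(t) + b cos(t) are available in closed form, so each
    draw is a single deterministic move of duration T."""
    mu = np.linalg.solve(M, r)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        s = rng.multivariate_normal(np.zeros(len(x)), M)  # velocity ~ N(0, M)
        a = np.linalg.solve(M, s)                          # a = xdot(0) = M^{-1} s
        b = x - mu                                         # b = x(0) - mu
        x = mu + a * np.sin(T) + b * np.cos(T)
        samples.append(x)
    return np.array(samples)

rng = np.random.default_rng(1)
M = np.array([[2.0, 0.5], [0.5, 1.0]])    # precision (mass) matrix
r = np.array([1.0, -1.0])
X = exact_hmc_gaussian(M, r, np.zeros(2), T=np.pi / 2, n_samples=20000, rng=rng)
```

Note that with T = π/2 the cosine term vanishes, so successive samples are in fact independent draws from N(μ, M⁻¹).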
3.2
Sampling with linear constraints
Sampling from multivariate Gaussians does not require any method as sophisticated as HMC, but
the plot thickens when the target distribution is truncated by linear constraints of the form Fx ≥ g .
Here, F ∈ R^{m×n} is a constraint matrix whose every row is the normal vector to a linear boundary
in the sample space. This is equivalent to sampling from a Gaussian that is confined in the (not
necessarily bounded) convex polyhedron {x : Fx ≥ g}. In general, to remain within the boundaries
of each wall, once a new velocity has been sampled one must compute all possible collision times
with the walls. The smallest of all collision times signifies the wall that the particle should bounce
from at that collision time. Figure 1 illustrates the concept with two simple examples on 2 and 3
dimensions.
The collision times can be computed analytically and their equations can be found in the supplementary material. We also point the reader to [17] for a more detailed discussion of this implementation.
Once the wall to be hit has been found, then position and velocity at impact time are computed and
the velocity is reflected about the boundary normal1 . The constrained HMC sampler is summarized
follows:
• Initialise the allowed travel time T and some initial position x0 .
• Repeat for HMC samples k = 1 . . . N
1. Sample sk ~ N(0, M)
2. Use sk and xk to update a and b .
1
Also equivalent to transforming the velocity with a Householder reflection matrix about the bounding
hyperplane.
Figure 1: Left: Trajectories of the first 40 iterations of the exact HMC sampler on a 2D truncated
Gaussian. A reflection of the velocity can clearly be seen when the particle meets wall #2 . Here,
the constraint matrix F is a 4 ? 2 matrix. Center: The same example after 40000 samples. The
coloring of each sample indicates its density value. Right: The anatomy of a 3D Gaussian. The
walls are now planes and in this case F is a 2 ? 3 matrix. Figure best seen in color.
3. Reset remaining travel time Tleft ← T . Until no travel time is left or no walls can be
reached (no solutions exist), do:
(a) Compute impact times with all walls and pick the smallest one, th (if a solution
exists).
(b) Compute v(th ) and reflect it about the hyperplane fh . This is the updated
velocity after impact. The updated position is x(th ) .
(c) Tleft ← Tleft − th
4. Store the new position at the end of the trajectory xk+1 as an HMC sample.
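Step 3(a) reduces to elementary trigonometry: along x(t) = μ + a sin(t) + b cos(t), the scalar fᵀx(t) is again a sinusoid, so each wall-hit time has a closed form. A sketch (our illustrative helper; the paper's own equations are in its supplementary material):

```python
import numpy as np

def first_hit_time(a, b, mu, f, g):
    """Earliest t > 0 at which x(t) = mu + a*sin(t) + b*cos(t) reaches the
    wall f.x = g, or None if the trajectory never touches it.  We write
    f.x(t) - f.mu = R*cos(t - phi) with R = hypot(f.a, f.b), phi = atan2(f.a, f.b)."""
    u, v, w = f @ a, f @ b, g - f @ mu
    R = np.hypot(u, v)
    if R < abs(w):
        return None                      # wall out of reach of this trajectory
    phi, base = np.arctan2(u, v), np.arccos(w / R)
    cands = [phi + s * base + 2.0 * np.pi * k for s in (-1.0, 1.0) for k in (-1, 0, 1)]
    pos = [t for t in cands if t > 1e-12]
    return min(pos) if pos else None
```

For example, for x(t) = sin(t) in one dimension and the wall x = 0.5, the first hit is at t = π/6.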
In general, all walls are candidates for impact, so the runtime of the sampler is linear in m , the
number of constraints. This means that the computational load is concentrated in step 3(a). Another
consideration is that of the allocated travel time T . Depending on the shape of the bounding
polyhedron and the number of walls, a very large travel time can induce many more bounces thus
requiring more computations per sample. On the other hand, a very small travel time explores the
distribution more locally so the mixing of the chain can suffer. What constitutes a given travel time
?large? or ?small? is relative to the dimensionality, the number of constraints and the structure of the
constraints.
Due to the nature of our problem, the number of constraints, when explicitly expressed as linear
functions, is O(n²) . Clearly, this restricts any direct application of the HMC framework for Gaussian copula estimation to small-sample (n) datasets. More importantly, we show how to exploit the
structure of the constraints to reduce the number of candidate walls (prior to each bounce) to O(n) .
4
HMC for the Gaussian Copula extended rank likelihood model
Given some discrete data Y ? Rn?p , the task is to infer the correlation matrix of the underlying
Gaussian copula. Hoff?s sampling algorithm proceeds by alternating between sampling the continu(i)
(i)
ous latent representation Zj of each Yj , for i = 1 . . . n, j = 1 . . . p , and sampling a covariance
matrix from an inverse-Wishart distribution conditioned on the sampled matrix Z ? Rn?p , which
is then renormalized as a correlation matrix.
From here on, we use matrix notation for the samples, as opposed to the random variables: with
Zi,j replacing Zj^(i) , Z:,j being a column of Z, and Z:,\j being the submatrix of Z without the j-th
column.
In a similar vein to Hoff's sampling algorithm, we replace the successive sampling of each Zi,j conditioned on Zi,\j (a conditional univariate truncated Gaussian) with the simultaneous sampling of
Z:,j conditioned on Z:,\j . This is done through an HMC step from a conditional multivariate truncated Gaussian.
The added benefit of this HMC step over the standard Gibbs approach is that of a handle for regulating the trade-off between exploration and runtime via the allocated travel time T . Larger travel
times potentially allow for larger moves in the sample space, but it comes at a cost as explained in
the sequel.
4.1
The Hough envelope algorithm
The special structure of constraints. Recall that the number of constraints is quadratic in the
dimension of the distribution. This is because every Z sample must satisfy the conditions of
the event Z ∈ D(y) of the extended rank likelihood (see Section 2). In other words, for any
column Z:,j , all entries are organised into a partition L(j) of |L(j)| levels, the number of
unique values observed for the discrete or ordinal variable Y(j) . Thereby, for any two adjacent
levels lk , lk+1 ∈ L(j) and any pair i1 ∈ lk , i2 ∈ lk+1 , it must be true that Z_{i1,j} < Z_{i2,j} .
Equivalently, a constraint f exists where f_{i1} = 1, f_{i2} = −1 and g = 0 . It is easy to see that
O(n²) of such constraints are induced by the order statistics of the j-th variable. To deal with this
boundary explosion, we developed the Hough Envelope algorithm to search efficiently, within all
pairs in {Z:,j }, in practically linear time.
Recall in HMC (section 3.2) that the trajectory of the particle, x(t), is decomposed as
xi(t) = ai sin(t) + bi cos(t) + μi ,   (6)
and there are n such functions, grouped into a partition of levels as described above. The Hough
envelope2 is found for every pair of adjacent levels. We illustrate this with an example of 10 dimensions and two levels in Figure 2, without loss of generalization to any number of levels or
dimensions. Assume we represent trajectories for points in level lk with blue curves, and points in
lk+1 with red curves. Assuming we start with a valid state, at time t = 0 all red curves are above all
blue curves. The goal is to find the smallest t where a blue curve meets a red curve. This will be our
collision time where a bounce will be necessary.
Figure 2: The trajectories xj(t) of each component are sinusoid functions. The right-most green
dot signifies the wall and the time th of the earliest bounce, where the first inter-level pair (that
is, any two components respectively from the blue and red level) becomes equal, in this case the
constraint activated being x_blue2 = x_red2 .
1. First we find the largest component bluemax of the blue level at t = 0. This takes
O(n) time. Clearly, this will be the largest component until its sinusoid intersects that
of any other component.
2. To find the next largest component, compute the roots of x_bluemax(t) − xi(t) = 0 for
all components and pick the smallest (earliest) one (represented by a green dot). This also
takes O(n) time.
3. Repeat this procedure until a red sinusoid crosses the highest running blue sinusoid. When
this happens, the time of earliest bounce and its constraint are found.
In the worst-case scenario, n such repetitions have to be made, but in practice we can safely
assume a fixed upper bound h on the number of blue crossings before an inter-level crossing occurs.
In experiments, we found h << n, no more than 10 in simulations with hundreds of thousands of
curves. Thus, this search strategy takes O(n) time in practice to complete, mirroring the analysis
of other output-sensitive algorithms such as the gift wrapping algorithm for computing convex hulls
[8]. Our HMC sampling approach is summarized in Algorithm 1.
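The primitive behind steps 1-3 is a closed-form crossing time between two sinusoid trajectories; the envelope bookkeeping only decides which of the O(n²) pairs ever need it. The sketch below (our illustration, not the paper's implementation) exposes that primitive, together with a naive all-pairs reference that the envelope search must agree with:

```python
import numpy as np

def crossing_time(ai, bi, mi, aj, bj, mj):
    """Earliest t > 0 with x_i(t) = x_j(t) for x(t) = m + a*sin(t) + b*cos(t).
    The difference of two such trajectories is again a sinusoid, so the root
    is available in closed form (np.inf if the curves never meet)."""
    u, v, w = ai - aj, bi - bj, mi - mj          # solve u*sin(t) + v*cos(t) = -w
    R = np.hypot(u, v)
    if R < abs(w):
        return np.inf
    phi, base = np.arctan2(u, v), np.arccos(-w / R)
    cands = [phi + s * base + 2.0 * np.pi * k for s in (-1.0, 1.0) for k in (0, 1)]
    pos = [t for t in cands if t > 1e-12]
    return min(pos) if pos else np.inf

def earliest_interlevel_bounce(a, b, m, blue, red):
    """Naive O(|blue|*|red|) reference answer; the Hough envelope returns the
    same time while inspecting only O(n) candidate pairs in practice."""
    return min(crossing_time(a[i], b[i], m[i], a[j], b[j], m[j])
               for i in blue for j in red)
```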
2
The name is inspired from the fact that each xi (t) is the sinusoid representation, in angle-distance space,
of all lines that pass from the (ai , bi ) point in a ? b space. A representation known in image processing as the
Hough transform [3].
Algorithm 1 HMC for GCERL
# Notation: T MN(μ, C, F) is a truncated multivariate normal with location vector μ, scale
matrix C and constraints encoded by F and g = 0 .
# IW(df, V0 ) is an inverse-Wishart prior with degrees of freedom df and scale matrix V0 .
Input: Y ∈ R^{n×p} , allocated travel time T , a starting Z and variable covariance V ∈ R^{p×p} ,
df = p + 2, V0 = df Ip and chain size N .
Generate constraints F(j) from Y:,j , for j = 1 . . . p .
for samples k = 1 . . . N do
# Resample Z as follows:
for variables j = 1 . . . p do
Compute parameters: σj² = Vjj − Vj,\j V\j,\j⁻¹ V\j,j ,  μj = Z:,\j V\j,\j⁻¹ V\j,j .
Get one sample Z:,j ~ T MN(μj , σj² I, F(j)) efficiently by using the Hough Envelope
algorithm, see section 4.1.
end for
Resample V ~ IW(df + n, V0 + Zᵀ Z) .
Compute correlation matrix C, s.t. Ci,j = Vi,j / √(Vi,i Vj,j) , and store sample, C(k) ← C .
end for
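The per-column Gaussian conditioning inside the loop, and the final renormalization of a covariance sample to a correlation matrix, are standard linear algebra. A sketch (ours, matching the formulas in Algorithm 1 up to notation):

```python
import numpy as np

def conditional_params(V, Z, j):
    """mu_j and sigma_j^2 of column Z[:, j] given the others, for rows of Z
    drawn from N(0, V):  sigma_j^2 = V_jj - V_{j,-j} V_{-j,-j}^{-1} V_{-j,j},
    mu_j = Z_{:,-j} V_{-j,-j}^{-1} V_{-j,j}."""
    mask = np.arange(V.shape[0]) != j
    w = np.linalg.solve(V[np.ix_(mask, mask)], V[mask, j])
    sigma2 = V[j, j] - V[mask, j] @ w
    mu = Z[:, mask] @ w
    return mu, sigma2

def to_correlation(V):
    """Renormalize a covariance sample V to a correlation matrix C."""
    d = np.sqrt(np.diag(V))
    return V / np.outer(d, d)
```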
5
An application on the Bayesian Gaussian copula factor model
In this section we describe an experiment that highlights the benefits of our HMC treatment, compared to a state-of-the-art parameter expansion (PX) sampling scheme. During this experiment we
ask the important question:
"How do the two schemes compare when we exploit the full advantage of the HMC machinery to
jointly sample parameters and the augmented data Z, in a model of latent variables and structured
correlations?"
We argue that under such circumstances the superior convergence speed and mixing of HMC undeniably compensate for its computational overhead.
Experimental setup
In this section we provide results from an application on the Gaussian
copula latent factor model of [13] (Hoff's model [7] for low-rank structured correlation matrices).
We modify the parameter expansion (PX) algorithm used by [13] by replacing two of its Gibbs steps
with a single HMC step. We show a much faster convergence to the true mode with considerable
support on its vicinity. We show that unlike the HMC, the PX algorithm falls short of properly
exploring the posterior in any reasonable finite amount of time, even for small models, even for
small samples. Worse, PX fails in ways one cannot easily detect.
Namely, we sample each row of the factor loadings matrix Λ jointly with the corresponding column
of the augmented data matrix Z, conditioning on the higher-order latent factors. This step is analogous to Pakman and Paninski's [17, sec. 3.1] use of HMC in the context of a binary probit model
(the extension to many levels in the discrete marginal is straightforward with direct application of
the constraint matrix F and the Hough envelope algorithm). The sampling of the higher level latent
factors remains identical to [13]. Our scheme involves no parameter expansion. We do however
interweave the Gibbs step for the Z matrix similarly to Hoff. This has the added benefit of exploring
the Z sample space within their current boundaries, complementing the joint (Λ, Z) sampling which
moves the boundaries jointly. The value of such ?interweaving? schemes has been addressed in [19].
Results
We perform simulations of 10000 iterations, n = 1000 observations (rows of Y), travel
time π/2 for HMC with the setups listed in the following table, along with the elapsed times of each
sampling scheme. These experiments were run on Intel COREi7 desktops with 4 cores and 8GB of
RAM. Both methods were parallelized across the observed variables (p).
Figure | p (vars) | k (latent factors) | M (ordinal levels) | elapsed (mins): HMC, PX
3(a)   |    20    |         5          |          2         |   115, 8
3(b)   |    10    |         3          |          2         |    80, 6
3(c)   |    10    |         3          |          5         |   203, 16
Many functionals of the loadings matrix ? can be assessed. We focus on reconstructing the true
(low-rank) correlation matrix of the Gaussian copula. In particular, we summarize the algorithm's
outcome with the root mean squared error (RMSE) of the differences between entries of the
ground-truth correlation matrix and the implied correlation matrix at each iteration of an MCMC
scheme (so the following plots look like a time series of 10000 timepoints), see Figures 3(a), 3(b)
and 3(c) .
(a)
(b)
(c)
Figure 3: Reconstruction (RMSE per iteration) of the low-rank structured correlation matrix of the
Gaussian copula and its histogram (along the left side).
(a) Simulation setup: 20 variables, 5 factors, 5 levels. HMC (blue) reaches a better mode faster
(in iterations/CPU-time) than PX (red). Even more importantly the RMSE posterior samples of PX
are concentrated in a much smaller region compared to HMC, even after 10000 iterations. This
illustrates that PX poorly explores the true distribution.
(b) Simulation setup: 10 vars, 3 factors, 2 levels. We observe behaviors similar to Figure 3(a). Note
that the histogram counts RMSEs after the burn-in period of PX (iteration #500).
(c) Simulation setup: 10 vars, 3 factors, 5 levels. We observe behaviors similar to Figures 3(a) and
3(b) but with a thinner tail for HMC. Note that the histogram counts RMSEs after the burn-in period
of PX (iteration #2000).
Main message
HMC reaches a better mode faster (in iterations/CPU time). Even more importantly
the RMSE posterior samples of PX are concentrated in a much smaller region compared to HMC,
even after 10000 iterations. This illustrates that PX poorly explores the true distribution. As an
analogous situation we refer to the top and bottom panels of Figure 14 of Radford Neal's slice sampler paper [14]. If there were no comparison against HMC, there would be no evidence from the PX
plot alone that the algorithm is performing poorly. This mirrors Radford Neal's statement opening
Section 8 of his paper: "a wrong answer is obtained without any obvious indication that something
is amiss". The concentration on the posterior mode of PX in these simulations is misleading of
the truth. PX might seem a bit simpler to implement, but it seems one cannot avoid using complex
algorithms for complex models. We urge practitioners to revisit their past work with this model to
find out by how much credible intervals of functionals of interest have been overconfident. Whether
trivially or severely, our algorithm offers the first principled approach for checking this out.
6
Conclusion
Sampling large random vectors simultaneously in order to improve mixing is in general a very hard
problem, and this is why clever methods such as HMC or elliptical slice sampling [12] are necessary.
We expect that the method here developed is useful not only for those with data analysis problems
within the large family of Gaussian copula extended rank likelihood models, but the method itself
and its behaviour might provide some new insights on MCMC sampling in constrained spaces in
general. Another direction of future work consists of exploring methods for elliptical copulas, and
related possible extensions of general HMC for non-Gaussian copula models.
Acknowledgements
The quality of this work has benefited largely from comments by our anonymous reviewers and useful discussions with Simon Byrne and Vassilios Stathopoulos. Research was supported by EPSRC
grant EP/J013293/1.
References
[1] Y. Bishop, S. Fienberg, and P. Holland. Discrete Multivariate Analysis: Theory and Practice. MIT Press, 1975.
[2] A. Dobra and A. Lenkoski. Copula Gaussian graphical models and their application to modeling functional disability data. Annals of Applied Statistics, 5:969-993, 2011.
[3] R. O. Duda and P. E. Hart. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1):11-15, 1972.
[4] G. Elidan. Copulas and machine learning. Proceedings of the Copulae in Mathematical and Quantitative Finance workshop, to appear, 2013.
[5] F. Han and H. Liu. Semiparametric principal component analysis. Advances in Neural Information Processing Systems, 25:171-179, 2012.
[6] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[7] P. Hoff. Extending the rank likelihood for semiparametric copula estimation. Annals of Applied Statistics, 1:265-283, 2007.
[8] R. Jarvis. On the identification of the convex hull of a finite set of points in the plane. Information Processing Letters, 2(1):18-21, 1973.
[9] H. Joe. Multivariate Models and Dependence Concepts. Chapman-Hall, 1997.
[10] S. Kirshner. Learning with tree-averaged densities and distributions. Neural Information Processing Systems, 2007.
[11] S. Lauritzen. Graphical Models. Oxford University Press, 1996.
[12] I. Murray, R. Adams, and D. MacKay. Elliptical slice sampling. JMLR Workshop and Conference Proceedings: AISTATS 2010, 9:541-548, 2010.
[13] J. Murray, D. Dunson, L. Carin, and J. Lucas. Bayesian Gaussian copula factor models for mixed data. Journal of the American Statistical Association, to appear, 2013.
[14] R. Neal. Slice sampling. The Annals of Statistics, 31:705-767, 2003.
[15] R. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, pages 113-162, 2010.
[16] R. Nelsen. An Introduction to Copulas. Springer-Verlag, 2007.
[17] A. Pakman and L. Paninski. Exact Hamiltonian Monte Carlo for truncated multivariate Gaussians. arXiv:1208.4118, 2012.
[18] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[19] Y. Yu and X. L. Meng. To center or not to center: That is not the question. An ancillarity-sufficiency interweaving strategy (ASIS) for boosting MCMC efficiency. Journal of Computational and Graphical Statistics, 20(3):531-570, 2011.
4,470 | 5,045 | Auxiliary-variable Exact Hamiltonian Monte
Carlo Samplers for Binary Distributions
Ari Pakman and Liam Paninski
Department of Statistics
Center for Theoretical Neuroscience
Grossman Center for the Statistics of Mind
Columbia University
New York, NY, 10027
Abstract
We present a new approach to sample from generic binary distributions, based
on an exact Hamiltonian Monte Carlo algorithm applied to a piecewise continuous augmentation of the binary distribution of interest. An extension of this idea to
distributions over mixtures of binary and possibly-truncated Gaussian or exponential variables allows us to sample from posteriors of linear and probit regression
models with spike-and-slab priors and truncated parameters. We illustrate the advantages of these algorithms in several examples in which they outperform the
Metropolis or Gibbs samplers.
1
Introduction
Mapping a problem involving discrete variables into continuous variables often results in a more
tractable formulation. For the case of probabilistic inference, in this paper we present a new approach to sample from distributions over binary variables, based on mapping the original discrete
distribution into a continuous one with a piecewise quadratic log-likelihood, from which we can
sample efficiently using exact Hamiltonian Monte Carlo (HMC).
The HMC method is a Markov Chain Monte Carlo algorithm that usually has better performance
over Metropolis or Gibbs samplers, because it manages to propose transitions in the Markov chain
which lie far apart in the sampling space, while maintaining a reasonable acceptance rate for these
proposals. But the implementations of HMC algorithms generally involve the non-trivial tuning of
numerical integration parameters to obtain such a reasonable acceptance rate (see [1] for a review).
The algorithms we present in this work are special because the Hamiltonian equations of motion
can be integrated exactly, so there is no need for tuning a step-size parameter and the Markov chain
always accepts the proposed moves. Similar ideas have been used recently to sample from truncated Gaussian multivariate distributions [2], allowing much faster sampling than other methods.
It should be emphasized that despite the apparent complexity of deriving the new algorithms, their
implementation is very simple.
Since the method we present transforms a binary sampling problem into a continuous one, it is natural to extend it to distributions defined over mixtures of binary and Gaussian or exponential variables,
transforming them into purely continuous distributions. Such a mixed binary-continuous problem
arises in Bayesian model selection with a spike-and-slab prior and we illustrate our technique by
focusing on this case. In particular, we show how to sample from the posterior of linear and probit regression models with spike-and-slab priors, while also imposing truncations in the parameter
space (e.g., positivity).
The method we use to map binary to continuous variables consists in simply identifying a binary
variable with the sign of a continuous one. An alternative relaxation of binary to continuous variables, known in statistical physics as the "Gaussian integral trick" [3], has been used recently to
apply HMC methods to binary distributions [4], but the details of that method are different from
ours. In particular, the HMC in that work is not "exact" in the sense used above and the algorithm
only works for Markov random fields with Gaussian potentials.
2
Binary distributions
We are interested in sampling from a probability distribution p(s) defined over d-dimensional binary
vectors s ∈ {−1, +1}^d, and given in terms of a function f(s) as

p(s) = (1/Z) f(s) .    (1)

Here Z is a normalization factor, whose value will not be needed. Let us augment the distribution p(s) with continuous variables y ∈ R^d as

p(s, y) = p(s) p(y|s)    (2)
where p(y|s) is non-zero only in the orthant defined by

s_i = sign(y_i) ,   i = 1, . . . , d.    (3)

The essence of the proposed method is that we can sample from p(s) by sampling y from

p(y) = Σ_{s′} p(s′) p(y|s′)    (4)
     = p(s) p(y|s) ,           (5)
and reading out the values of s from (3). In the second line we have made explicit that for each y,
only one term in the sum in (4) is non-zero, so that p(y) is piecewise defined in each orthant.
In order to sample from p(y) using the exact HMC method of [2], we require log p(y|s) to be a
quadratic function of y on its support. The idea is to define a potential energy function
U(y) = − log p(y|s) − log f(s) ,    (6)

introduce momentum variables q_i, and consider the piecewise continuous Hamiltonian

H(y, q) = U(y) + q·q/2 ,    (7)
whose value is identified with the energy of a particle moving in a d-dimensional space. Suppose the
particle has initial coordinates y(0). In each iteration of the sampling algorithm, we sample initial
values q(0) for the momenta from a standard Gaussian distribution and let the particle move during
a time T according to the equations of motion
ẏ(t) = ∂H/∂q(t) ,    q̇(t) = −∂H/∂y(t) .    (8)
The final coordinates, y(T ), belong to a Markov chain with invariant distribution p(y), and are used
as the initial coordinates of the next iteration. The detailed balance condition follows directly from
the conservation of energy and (y, q)-volume along the trajectory dictated by (8), see [1, 2] for
details.
The restriction to quadratic functions of y in log p(y|s) allows us to solve the differential equations (8) exactly in each orthant. As the particle moves, the potential energy U (y) and the kinetic
energy q·q/2 change in tandem, keeping the value of the Hamiltonian (7) constant. But this smooth
interchange gets interrupted when any coordinate reaches zero. Suppose this first happens at time t_j for coordinate y_j, and assume that y_j < 0 for t < t_j. Conservation of energy now imposes a jump on the momentum q_j as a result of the discontinuity in U(y). Let us call q_j(t_j⁻) and q_j(t_j⁺) the value of the momentum q_j just before and after the coordinate hits y_j = 0. In order to enforce conservation of energy, we equate the Hamiltonian at both sides of the y_j = 0 wall, giving
q_j²(t_j⁺)/2 = Δ_j + q_j²(t_j⁻)/2    (9)

with

Δ_j = U(y_j = 0, s_j = −1) − U(y_j = 0, s_j = +1)    (10)
If eq. (9) gives a positive value for q_j²(t_j⁺), the coordinate y_j crosses the boundary and continues its trajectory in the new orthant. On the other hand, if eq. (9) gives a negative value for q_j²(t_j⁺), the particle is reflected from the y_j = 0 wall and continues its trajectory with q_j(t_j⁺) = −q_j(t_j⁻). When
Δ_j < 0, the situation can be understood as the limit of a scenario in which the particle faces an upward hill in the potential energy, causing it to diminish its velocity until it either reaches the top of the hill with a lower velocity or stops and then reverses. In the limit in which the hill has finite height but infinite slope, the velocity change occurs discontinuously at one instant. Note that we used in (9) that the momenta q_{i≠j} are continuous, since this sudden infinite-slope hill is only seen by the y_j coordinate.
Regardless of whether the particle bounces or crosses the yj = 0 wall, the other coordinates move
unperturbed until the next boundary hit, where a similar crossing or reflection occurs, and so on,
until the final position y(T ).
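The crossing-or-reflection rule of eqs. (9)-(10) is only a few lines of code. A minimal sketch (the function name and signature are ours, not from the paper; `delta` is the potential drop Δ_j in the direction of travel, i.e. log f of the flipped state minus log f of the current one):

```python
import math

def wall_update(q_minus, delta):
    """Apply energy conservation (9) at a y_j = 0 wall.

    q_minus: momentum q_j just before the hit.
    delta:   Delta_j of eq. (10) for the attempted crossing.
    Returns (q_plus, crossed).
    """
    q2_plus = q_minus ** 2 + 2.0 * delta
    if q2_plus > 0.0:
        # enough kinetic energy: cross into the new orthant,
        # keeping the direction of travel
        return math.copysign(math.sqrt(q2_plus), q_minus), True
    # otherwise the particle is reflected off the wall
    return -q_minus, False
```

Note that a reflection simply negates the momentum, so no gradient or step-size tuning is involved at any point.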
The framework we presented above is very general and in order to implement a particular sampler
we need to select the distributions p(y|s). Below we consider in some detail two particularly simple
choices that illustrate the diversity of options here.
2.1
Gaussian augmentation
Let us consider first for p(y|s) the truncated Gaussians

p(y|s) = (2/π)^{d/2} e^{−y·y/2}   for sign(y_i) = s_i , i = 1, . . . , d ,
       = 0                        otherwise.    (11)

The equations of motion (8) lead to ÿ(t) = −y(t), q̈(t) = −q(t), and have a solution

y_i(t) = y_i(0) cos(t) + q_i(0) sin(t) ,    (12)
       = u_i sin(φ_i + t) ,                 (13)
q_i(t) = −y_i(0) sin(t) + q_i(0) cos(t) ,   (14)
       = u_i cos(φ_i + t) .                 (15)
This setting is similar to the case studied in [2] and from φ_i = tan⁻¹(y_i(0)/q_i(0)) the boundary hit times t_i are easily obtained. When a boundary is reached, say y_j = 0, the coordinate y_j changes its trajectory for t > t_j as

y_j(t) = q_j(t_j⁺) sin(t − t_j) ,    (16)

with the value of q_j(t_j⁺) obtained as described above.
Choosing an appropriate value for the travel time T is crucial when using HMC algorithms [5]. As
is clear from (13), if we let the particle travel during a time T > π, each coordinate reaches zero at least once, and the hitting times can be ordered as

0 < t_{j_1} ≤ t_{j_2} ≤ · · · ≤ t_{j_d} < π .    (17)
Moreover, regardless of whether a coordinate crosses zero or gets reflected, it follows from (16) that
the successive hits occur at
t_i + nπ ,   n = 1, 2, . . .    (18)
and therefore the hitting times only need to be computed once for each coordinate in every iteration.
If we let the particle move during a time T = nπ, each coordinate reaches zero n times, in the cyclical order (17), with a computational cost of O(nd) from wall hits. But choosing precisely T = nπ is not recommended for the following reason. As we just showed, between y_j(0) and y_j(π) the coordinate touches the boundary y_j = 0 once, and if y_j gets reflected off the boundary, it is easy to check that we have y_j(π) = y_j(0). If we take T = nπ and the particle gets reflected all the n times it hits the boundary, we get y_j(T) = y_j(0) and the coordinate y_j does not move at all. To avoid these singular situations, a good choice is T = (n + ½)π, which generalizes the recommended travel time T = π/2 for truncated Gaussians in [2]. The value of n should be chosen for each distribution, but we expect optimal values for n to grow with d.
With T = (n + ½)π, the total cost of each sample is O((n + 1/2)d) on average from wall hits, plus O(d) from the sampling of q(0) and from the d inverse trigonometric functions to obtain the hit times t_i. But in complex distributions, the computational cost is dominated by the evaluation of Δ_i in (10) at each wall hit.

Interestingly, the rate at which wall y_i = 0 is crossed coincides with the acceptance rate in a Metropolis algorithm that samples uniformly a value for i and makes a proposal of flipping the binary variable s_i. See the Appendix for details. Of course, this does not mean that the HMC algorithm is the same as Metropolis, because in HMC the order in which the walls are hit is fixed given the initial velocity, and the values of q_i² at successive hits of y_i = 0 within the same iteration are not independent.
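Putting the pieces of this section together, the Gaussian-augmentation sampler fits in a few dozen lines. The following is an unoptimized sketch under our own naming conventions, not the authors' reference implementation: `log_f` evaluates log f(s) for a ±1 vector, and the event loop processes wall hits in chronological order using eqs. (12)-(16) together with the jump condition (9)-(10).

```python
import numpy as np

def binary_hmc_gaussian(log_f, s0, n_samples, T=2.5 * np.pi, rng=None):
    """Exact HMC for p(s) proportional to f(s), s in {-1,+1}^d, using
    the Gaussian augmentation (11). Sketch: O(d) work per wall hit."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.asarray(s0, dtype=float)
    d = s.size
    y = np.abs(rng.standard_normal(d)) * s      # start inside the orthant of s0
    samples = []
    for _ in range(n_samples):
        q = rng.standard_normal(d)              # fresh momenta
        phi = np.arctan2(y, q)                  # y_i(t) = u_i sin(phi_i + t), eq. (13)
        u = np.hypot(y, q)
        t_hit = np.pi - np.mod(phi, np.pi)      # first time sin(phi_i + t) = 0
        while True:
            j = int(np.argmin(t_hit))           # next wall hit
            t = t_hit[j]
            if t > T:
                break
            qm = u[j] * np.cos(phi[j] + t)      # momentum just before the hit, eq. (15)
            s_flip = s.copy()
            s_flip[j] = -s[j]
            delta = log_f(s_flip) - log_f(s)    # Delta_j of eq. (10)
            q2 = qm * qm + 2.0 * delta          # eq. (9)
            if q2 > 0.0:                        # cross into the new orthant
                s = s_flip
                qp = np.sign(qm) * np.sqrt(q2)
            else:                               # reflect off the wall
                qp = -qm
            # restart coordinate j from y_j = 0 with momentum qp, as in eq. (16)
            u[j] = abs(qp)
            phi[j] = (0.0 if qp >= 0 else np.pi) - t
            t_hit[j] = t + np.pi                # next hit of this wall, eq. (18)
        y = u * np.sin(phi + T)                 # advance all coordinates to time T
        samples.append(s.copy())
    return np.array(samples)
```

For structured distributions (e.g. the Ising models of Section 4) the two `log_f` calls per hit should of course be replaced by the local difference Δ_j, which is much cheaper to evaluate.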
2.2
Exponential and other augmentations
Another distribution that allows an exact solution of the equations of motion (8) is

p(y|s) = e^{−Σ_{i=1}^d |y_i|}   for sign(y_i) = s_i , i = 1, . . . , d ,
       = 0                      otherwise ,    (19)
which leads to the equations of motion ÿ_i(t) = −s_i, with solutions of the form

y_i(t) = y_i(0) + q_i(0) t − s_i t²/2 .    (20)

In this case, the initial hit time for every coordinate is the solution of the quadratic equation y_i(t) = 0. But, unlike the case of the Gaussian augmentation, the order of successive hits is not fixed. Indeed, if coordinate y_j hits zero at time t_j, it continues its trajectory as

y_j(t > t_j) = q_j(t_j⁺)(t − t_j) − (s_j/2)(t − t_j)² ,    (21)

so the next wall hit y_j = 0 will occur at a time t′_j given by

(t′_j − t_j) = 2 |q_j(t_j⁺)| ,    (22)

where we used s_j = sign(q_j(t_j⁺)).
So we see that the time between successive hits of the same
coordinate depends only on its momentum after the last hit. Moreover, since the value of |q_j(t⁺)| is smaller than |q_j(t⁻)| if the coordinate crosses to an orthant of lower probability, equation (22) implies that the particle moves away faster from areas of lower probability. This is unlike the Gaussian augmentation, where a coordinate "waits in line" until all the other coordinates touch their walls before hitting its wall again.
The two augmentations we considered above have only scratched the surface of interesting possibilities. One could also define p(y|s) as a uniform distribution in a box, so that the computation of the times for wall hits becomes purely linear and we get a classical "billiards" dynamics. More generally, one could consider different augmentations in different orthants and potentially tailor the
algorithm to mix faster in complex and multimodal distributions.
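For the exponential augmentation, the first hit time of each coordinate is the positive root of the quadratic y_i(t) = 0 from eq. (20), which has a closed form. A small sketch (the function name is ours):

```python
import math

def exp_hit_time(y0, q0):
    """First t > 0 with y(t) = y0 + q0*t - s*t*t/2 = 0, s = sign(y0),
    for the exponential augmentation (19)-(20). The discriminant
    q0**2 + 2|y0| is always positive, so the wall is always reached."""
    s = 1.0 if y0 >= 0 else -1.0
    return s * q0 + math.sqrt(q0 * q0 + 2.0 * abs(y0))
```

After the first hit, no further quadratic needs solving for that coordinate: by eq. (22) the subsequent hits are spaced 2|q_j(t⁺)| apart.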
3
Spike-and-slab regression with truncated parameters
The subject of Bayesian sparse regression has seen a lot of work during the last decade. Along with
priors such as the Bayesian Lasso [6] and the Horseshoe [7], the classic spike-and-slab prior [8, 9]
still remains very competitive, as shown by its superior performance in the recent works [10, 11, 12].
But despite its successes, posterior inference remains a computational challenge for the spike-and-slab prior. In this section we will show how the HMC binary sampler can be extended to sample
from the posterior of these models. The latter is a distribution over a set of binary and continuous
variables, with the binary variables determining whether each coefficient should be included in the
model or not. The idea is to map these indicator binary variables into continuous variables as we did
above, obtaining a distribution from which we can sample again using exact HMC methods. Below
we consider a regression model with Gaussian noise but the extension to exponential noise (or other
scale-mixtures of Gaussians) is immediate.
3.1
Linear regression
Consider a regression problem with a log-likelihood that depends quadratically on its coefficients,
such as
1
log p(D|w) = ? w0 Mw + r ? w + const.
(23)
2
where D represents the observed data. In a linear regression model z = Xw+?, with ? ? N (0, ? 2 ),
we have M = X 0 X/? 2 and r = z0 X/? 2 . We are interested in a spike-and-slab prior for the
coefficients w of the form
p(w, s|a, ? 2 ) =
d
Y
p(wi |si , ? 2 )p(si |a) .
(24)
i=1
(1?si )
Each binary variable s_i = ±1 has a Bernoulli prior p(s_i|a) = a^{(1+s_i)/2} (1 − a)^{(1−s_i)/2} and determines whether the coefficient w_i is included in the model. The prior for w_i, conditioned on s_i, is

p(w_i|s_i, σ²) = (1/√(2πσ²)) e^{−w_i²/(2σ²)}   for s_i = +1,
               = δ(w_i)                         for s_i = −1.    (25)
p(w, s|D, a, ? 2 ) ?
p(D|w)p(w, s|a, ? 2 )
e
?
(26)
0
w+ ? ?2
? 12 w0 Mw+r?w ? 21 w+
e
(2?? 2 )|s+ |/2
0
?|
?(w? )a|s | (1 ? a)|s
+
?2
e? 2 w+ (M+ +? )w+ +r+ ?w+
+
?
?(w? )a|s | (1 ? a)|s |
(2?? 2 )|s+ |/2
1
?
(27)
(28)
where s+ are the variables with si = +1 and s? those with si = ?1. The notation r+ , M+ and
w+ indicates a restriction to the s+ subspace and w? indicates a restriction to the s? space. In the
passage from (27) to (28) we see that the spike-and-slab prior shrinks the dimension of the Gaussian
likelihood from d to |s+ |. In principle we could integrate out the weights w and obtain a collapsed
distribution for s, but we are interested in cases in which the space of w is truncated and therefore
the integration is not feasible. An example would be when a non-negativity constraint wi ? 0 is
imposed.
In these cases, one possible approach is to sample from (28) with a block Gibbs sampler over the
pairs {wi , si }, as proposed in [10]. Here we will present an alternative method, extending the ideas
of the previous section. For this, we consider a new distribution, obtained in two steps. Firstly, we
replace the delta functions in (28) by a factor similar to the slab (25)
?(wi ) ? ?
2
wi
1
2?? 2
e? 2? 2
si = ?1
(29)
The introduction of a non-singular distribution for those wi ?s that are excluded from the model
in (28) creates a Reversible Jump sampler [13]: the Markov chain can now keep track of all the
coefficients, whether they belong or not to the model in a given state of the chain, thus allowing
them to join or leave the model along the chain in a reversible way.
Secondly, we augment the distribution with y variables as in (2)-(5) and sum over s. Using the
Gaussian augmentation (11), this gives a distribution
p(w, y|D, a, σ²) ∝ e^{−(1/2) w′₊(M₊ + σ⁻²)w₊ + r₊·w₊} e^{−w′₋w₋/(2σ²)} e^{−y·y/2} a^{|s₊|} (1 − a)^{|s₋|}    (30)
where the values of s in the rhs are obtained from the signs of y. This is a piecewise Gaussian, different in each orthant of y, and possibly truncated in the w space. Note that the changes in p(w, y|D, a, σ²) across orthants of y come both from the factors a^{|s₊|} (1 − a)^{|s₋|} and from the functional dependence on the w variables. Sampling from (30) gives us samples from the original distribution (28) using a simple rule: each pair (w_i, y_i) becomes (w_i, s_i = +1) if y_i ≥ 0 and (w_i = 0, s_i = −1) if y_i < 0. This undoes the steps we took to transform (28) into (30): the identification s_i = sign(y_i) takes us from p(w, y|D, a, σ²) to p(w, s|D, a, σ²), and setting w_i = 0 when s_i = −1 undoes the replacement in (29).
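This readout rule is a one-liner in code; a minimal sketch (names ours):

```python
import numpy as np

def read_out(w, y):
    """Map an HMC sample (w, y) of (30) to a sample (w, s) of (28):
    s_i = sign(y_i), and w_i is zeroed wherever s_i = -1."""
    s = np.where(y >= 0, 1, -1)
    return np.where(s == 1, w, 0.0), s
```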
Since (30) is a piecewise Gaussian distribution, we can sample from it again using the methods
of [2]. As in that work, the possible truncations for w are given as g_n(w) ≥ 0 for n = 1, . . . , N,
with gn (w) any product of linear and quadratic functions of w. The details are a simple extension
of the purely binary case and are not very illuminating, so we leave them for the Appendix.
3.2
Probit regression
Consider a probit regression model in which binary variables b_i = ±1 are observed with probability

p(b_i|w, x_i) = (1/√(2π)) ∫_{z_i b_i ≥ 0} dz_i e^{−(1/2)(z_i + x_i·w)²}    (31)
Given a set of N pairs (b_i, x_i), we are interested in the posterior distribution of the weights w using the spike-and-slab prior (24). This posterior is the marginal over the z_i's of the distribution

p(z, w, s|x, a, σ²) ∝ ∏_{i=1}^N e^{−(1/2)(z_i + x_i·w)²} p(w, s|a, σ²) ,   z_i b_i ≥ 0 ,    (32)
and we can use the same approach as above to transform this distribution into a truncated piecewise
Gaussian, defined over the (N + 2d)-dimensional space of the vector (z, w, y). Each zi is truncated
according to the sign of bi and we can also truncate the w space if we so desire. We omit the details
of the HMC algorithm, since it is very similar to the linear regression case.
4
Examples
We present here three examples that illustrate the advantages of the proposed HMC algorithms over
Metropolis or Gibbs samplers.
4.1
1D Ising model
We consider a 1D periodic Ising model, with p(s) ? e??E[s] , where the energy is E[s] =
Pd
? i=1 si si+1 , with sd+1 = s1 and ? is the inverse temperature. Figure 1 shows the first 1000
iterations of both the Gaussian HMC and the Metropolis1 sampler on a model with d = 400 and
? = 0.42, initialized with all spins si = 1. In HMC we took a travel time T = 12.5? and, for
the sake of comparable computational costs, for the Metropolis sampler we recorded the value of
s every d ? 12.5 flip proposals. The plot shows clearly that the Markov chain mixes much faster
with HMC than with Metropolis. A useful variable that summarizes the behavior of the Markov
Pd
chain is the magnetization m = d1 i=1 si , whose expected value is hmi = 0. The oscillations
of both samplers around this value illustrate the superiority of the HMC sampler. In the Appendix
we present a more detailed comparison of the HMC Gaussian and exponential and the Metropolis
samplers, showing that the Gaussian HMC sampler is the most efficient among the three.
4.2
2D Ising model
We consider now a 2D Ising model on a square lattice of size L ? L with periodic boundary conditions below the critical temperature. Starting from a completely disordered state, we compare the
time it takes for the sampler to reach one of the two low energy states with magnetization m ' ?1.
Figure 2 show the results of 20 simulations of such a model with L = 100 and inverse temperature ? = 0.5. We used a Gaussian HMC with T = 2.5? and a Metropolis sampler recording values
of s every 2.5L2 flip proposals. In general we see that the HMC sampler reaches higher likelihood
regions faster.
¹ As is well known (see e.g. [14]), for binary distributions, the Metropolis sampler that chooses a random spin and proposes flipping its value is more efficient than the Gibbs sampler.
[Figure 1 plots: magnetization and energy traces versus iteration for HMC and Metropolis, plus the spin states s at every iteration for each sampler.]
Figure 1: 1D Ising model. First 1000 iterations of Gaussian HMC and Metropolis samplers on a model with d = 400 and β = 0.42, initialized with all spins s_i = 1 (black dots). For HMC the travel time was T = 12.5π and in the Metropolis sampler we recorded the state of the Markov chain once every d × 12.5 flip proposals. The lower two panels show the state of s at every iteration for each sampler. The plots show clearly that the HMC chain mixes faster than Metropolis in this model.
[Figure 2 plots: log likelihood (×10⁴) and absolute magnetization versus iteration for HMC and Metropolis.]
Figure 2: 2D Ising model. First samples from 20 simulations in a 2D Ising model on a square lattice of side length L = 100 with periodic boundary conditions and inverse temperature β = 0.5. The initial state is totally disordered. We do not show the first 4 samples in order to ease the visualization. For the Gaussian HMC we used T = 2.5π and for Metropolis we recorded the state of the chain every 2.5L² flip proposals. The plots illustrate that in general HMC reaches equilibrium faster than Metropolis in this model.
Note that these results of the 1D and 2D Ising models illustrate the advantage of the HMC method with respect to two different time constants relevant for Markov chains [15]. Figure 1 shows that the HMC sampler explores the sampled space faster once the chain has reached its equilibrium distribution, while Figure 2 shows that the HMC sampler is faster in reaching the equilibrium distribution.
[Figure 3 plot panels: log-likelihood per iteration, samples of the first coefficient per iteration, and ACF of the first coefficient (vs. lag) for HMC and Gibbs.]
Figure 3: Spike-and-slab linear regression with constraints. Comparison of the proposed HMC method with the Gibbs sampler of [10] for the posterior of a linear regression model with a spike-and-slab prior, with a positivity constraint on the coefficients. See the text for details of the synthetic data used. Above: log-likelihood as a function of the iteration. Middle: samples of the first coefficient. Below: ACF of the first coefficient. The plots show clearly that HMC mixes much faster than Gibbs and is more consistent in exploring areas of high probability.
4.3 Spike-and-slab linear regression with positive coefficients
We consider a linear regression model z = Xw + ε with the following synthetic data. X has N = 700 rows, each sampled from a d = 150-dimensional Gaussian whose covariance matrix has 3 on the diagonal and 0.3 in the off-diagonal entries. The noise is ε ∼ N(0, σ² = 100). The data z is generated with a coefficient vector w with 10 non-zero entries with values between 1 and 10. The spike-and-slab hyperparameters are set to a = 0.1 and τ = 10. Figure 3 compares the results of the proposed HMC method versus the Gibbs sampler used in [10]. In both cases we impose a positivity constraint on the coefficients. For the HMC sampler we use a travel time T = π/2. This results in a number of wall hits (both for the w and y variables) of ≈ 150, which makes the computational cost of every HMC and Gibbs sample similar. The advantage of the HMC method is clear, both in exploring regions of higher probability and in the mixing speed of the sampled coefficients. This impressive difference in the efficiency of HMC versus Gibbs is similar to the case of truncated multivariate Gaussians studied in [2].
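The synthetic data described above can be generated along the following lines. This is a sketch under the stated assumptions; variable names are illustrative, and the noise standard deviation is taken as σ = 10 so that σ² = 100.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 700, 150, 10             # rows, dimensions, non-zero coefficients
Sigma = np.full((d, d), 0.3)       # covariance: 3 on the diagonal, 0.3 off-diagonal
np.fill_diagonal(Sigma, 3.0)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=N)
w = np.zeros(d)
w[rng.choice(d, size=k, replace=False)] = rng.uniform(1.0, 10.0, size=k)
z = X @ w + rng.normal(0.0, 10.0, size=N)  # noise std sigma = 10
```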
5 Conclusions and outlook
We have presented a novel approach that uses exact HMC methods to sample from generic binary distributions and certain distributions over mixed binary and continuous variables. Even though the HMC algorithm is better than Metropolis or Gibbs in the examples we presented, this will clearly not be the case in many complex binary distributions for which specialized sampling algorithms have been developed, such as the Wolff or Swendsen-Wang algorithms for 2D Ising models near the critical temperature [14]. But in particularly difficult distributions, these HMC algorithms could be embedded as inner loops inside more powerful algorithms of Wang-Landau type [16]. We leave the exploration of these newly-opened realms for future projects.
Acknowledgments
This work was supported by an NSF CAREER award and by the US Army Research Laboratory
and the US Army Research Office under contract number W911NF-12-1-0594.
References
[1] R. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, pages 113–162, 2011.
[2] Ari Pakman and Liam Paninski. Exact Hamiltonian Monte Carlo for truncated multivariate Gaussians. Journal of Computational and Graphical Statistics, 2013. arXiv:1208.4118.
[3] John A. Hertz, Anders S. Krogh, and Richard G. Palmer. Introduction to the Theory of Neural Computation, volume 1. Westview Press, 1991.
[4] Yichuan Zhang, Charles Sutton, Amos Storkey, and Zoubin Ghahramani. Continuous relaxations for discrete Hamiltonian Monte Carlo. In Advances in Neural Information Processing Systems 25, pages 3203–3211, 2012.
[5] M.D. Hoffman and A. Gelman. The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. arXiv preprint arXiv:1111.4246, 2011.
[6] T. Park and G. Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681–686, 2008.
[7] C.M. Carvalho, N.G. Polson, and J.G. Scott. The horseshoe estimator for sparse signals. Biometrika, 97(2):465–480, 2010.
[8] T.J. Mitchell and J.J. Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[9] E.I. George and R.E. McCulloch. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881–889, 1993.
[10] S. Mohamed, K. Heller, and Z. Ghahramani. Bayesian and L1 approaches to sparse unsupervised learning. arXiv preprint arXiv:1106.1157, 2011.
[11] I.J. Goodfellow, A. Courville, and Y. Bengio. Spike-and-slab sparse coding for unsupervised feature discovery. arXiv preprint arXiv:1201.3382, 2012.
[12] Yutian Chen and Max Welling. Bayesian structure learning for Markov random fields with a spike and slab prior. arXiv preprint arXiv:1206.1088, 2012.
[13] Peter J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.
[14] Mark E.J. Newman and Gerard T. Barkema. Monte Carlo Methods in Statistical Physics. Oxford: Clarendon Press, 1999.
[15] Alan D. Sokal. Monte Carlo methods in statistical mechanics: foundations and new algorithms, 1989.
[16] Fugao Wang and David P. Landau. Efficient, multiple-range random walk algorithm to calculate the density of states. Physical Review Letters, 86(10):2050–2053, 2001.
Raif M. Rustamov & Leonidas Guibas
Computer Science Department, Stanford University
{rustamov,guibas}@stanford.edu
Abstract
An increasing number of applications require processing of signals defined on
weighted graphs. While wavelets provide a flexible tool for signal processing in
the classical setting of regular domains, the existing graph wavelet constructions
are less flexible ? they are guided solely by the structure of the underlying graph
and do not take directly into consideration the particular class of signals to be
processed. This paper introduces a machine learning framework for constructing
graph wavelets that can sparsely represent a given class of signals. Our construction
uses the lifting scheme, and is based on the observation that the recurrent nature
of the lifting scheme gives rise to a structure resembling a deep auto-encoder
network. Particular properties that the resulting wavelets must satisfy determine the
training objective and the structure of the involved neural networks. The training is
unsupervised, and is conducted similarly to the greedy pre-training of a stack of
auto-encoders. After training is completed, we obtain a linear wavelet transform
that can be applied to any graph signal in time and memory linear in the size of the
graph. Improved sparsity of our wavelet transform for the test signals is confirmed
via experiments both on synthetic and real data.
1 Introduction
Processing of signals on graphs is emerging as a fundamental problem in an increasing number of
applications [22]. Indeed, in addition to providing a direct representation of a variety of networks
arising in practice, graphs serve as an overarching abstraction for many other types of data. High-dimensional data clouds such as a collection of handwritten digit images, volumetric and connectivity data in medical imaging, laser scanner acquired point clouds and triangle meshes in computer graphics - all can be abstracted using weighted graphs. Given this generality, it is desirable to extend the
flexibility of classical tools such as wavelets to the processing of signals defined on weighted graphs.
A number of approaches for constructing wavelets on graphs have been proposed, including, but
not limited to the CKWT [7], Haar-like wavelets [24, 10], diffusion wavelets [6], spectral wavelets
[12], tree-based wavelets [19], average-interpolating wavelets [21], and separable filterbank wavelets
[17]. However, all of these constructions are guided solely by the structure of the underlying graph,
and do not take directly into consideration the particular class of signals to be processed. While
this information can be incorporated indirectly when building the underlying graph (e.g. [19, 17]),
such an approach does not fully exploit the degrees of freedom inherent in wavelet design. In
contrast, a variety of signal class specific and adaptive wavelet constructions exist on images and
multidimensional regular domains, see [9] and references therein. Bridging this gap is challenging
because obtaining graph wavelets, let alone adaptive ones, is complicated by the irregularity of the
underlying space. In addition, theoretical guidance for such adaptive constructions is lacking as it
remains largely unknown how the properties of the graph wavelet transforms, such as sparsity, relate
to the structural properties of graph signals and their underlying graphs [22].
The goal of our work is to provide a machine learning framework for constructing wavelets on
weighted graphs that can sparsely represent a given class of signals. Our construction uses the lifting
scheme as applied to the Haar wavelets, and is based on the observation that the update and predict
steps of the lifting scheme are similar to the encode and decode steps of an auto-encoder. From this
point of view, the recurrent nature of the lifting scheme gives rise to a structure resembling a deep
auto-encoder network.
Particular properties that the resulting wavelets must satisfy, such as sparse representation of signals,
local support, and vanishing moments, determine the training objective and the structure of the
involved neural networks. The goal of achieving sparsity translates into minimizing a sparsity
surrogate of the auto-encoder reconstruction error. Vanishing moments and locality can be satisfied
by tying the weights of the auto-encoder in a special way and by restricting receptive fields of neurons
in a manner that incorporates the structure of the underlying graph. The training is unsupervised, and
is conducted similarly to the greedy (pre-)training [13, 14, 2, 20] of a stack of auto-encoders.
The advantages of our construction are three-fold. First, when no training functions are specified
by the application, we can impose a smoothness prior and obtain a novel general-purpose wavelet
construction on graphs. Second, our wavelets are adaptive to a class of signals and after training
we obtain a linear transform; this is in contrast to adapting to the input signal (e.g. by modifying
the underlying graph [19, 17]) which effectively renders those transforms non-linear. Third, our
construction provides efficient and exact analysis and synthesis operators and results in a critically
sampled basis that respects the multiscale structure imposed on the underlying graph.
The paper is organized as follows: in §2 we briefly overview the lifting scheme. Next, in §3 we provide a general overview of our approach, and fill in the details in §4. Finally, we present a number of experiments in §5.
2 Lifting scheme
The goal of wavelet design is to obtain a multiresolution [16] of $L^2(G)$, the set of all functions/signals on graph $G$. Namely, a nested sequence of approximation spaces from coarse to fine of the form $V_1 \subset V_2 \subset \cdots \subset V_{\ell_{\max}} = L^2(G)$ is constructed. Projecting a signal onto the spaces $V_\ell$ provides better and better approximations with increasing level $\ell$. Associated wavelet/detail spaces $W_\ell$ satisfying $V_{\ell+1} = V_\ell \oplus W_\ell$ are also obtained.
Scaling functions $\{\phi_{\ell,k}\}$ provide a basis for the approximation space $V_\ell$, and similarly wavelet functions $\{\psi_{\ell,k}\}$ for $W_\ell$. As a result, for any signal $f \in L^2(G)$ on the graph and any level $\ell_0 < \ell_{\max}$, we have the wavelet decomposition
$$f = \sum_k a_{\ell_0,k}\,\phi_{\ell_0,k} + \sum_{\ell=\ell_0}^{\ell_{\max}-1} \sum_k d_{\ell,k}\,\psi_{\ell,k}. \qquad (1)$$
The coefficients $a_{\ell,k}$ and $d_{\ell,k}$ appearing in this decomposition are called approximation (also, scaling) and detail (also, wavelet) coefficients respectively. For simplicity, we use $a_\ell$ and $d_\ell$ to denote the vectors of all approximation and detail coefficients at level $\ell$.
Our construction of wavelets is based on the lifting scheme [23]. Starting with a given wavelet transform, which in our case is the Haar transform (HT), one can obtain lifted wavelets by applying the process illustrated in Figure 1 (left) starting with $\ell = \ell_{\max} - 1$, $a_{\ell_{\max}} = f$ and iterating down until $\ell = 1$. At every level the lifted coefficients $a_\ell$ and $d_\ell$ are computed by augmenting the Haar
until ` = 1. At every level the lifted coefficients a` and d` are computed by augmenting the Haar
Figure 1: Lifting scheme: one step of forward (left) and backward (right) transform. Here, a` and d`
denote the vectors of all approximation and detail coefficients of the lifted transform at level `. U and
P are linear update and predict operators. HT and IHT are the Haar transform and its inverse.
coefficients $\bar a_\ell$ and $\bar d_\ell$ (of the lifted approximation coefficients $a_{\ell+1}$) as follows:
$$a_\ell \leftarrow \bar a_\ell + U \bar d_\ell, \qquad d_\ell \leftarrow \bar d_\ell - P a_\ell,$$
where update ($U$) and predict ($P$) are linear operators (matrices). Note that in adaptive wavelet designs the update and predict operators will vary from level to level, but for simplicity of notation we do not indicate this explicitly.
This process is always invertible: the backward transform is depicted in Figure 1 (right), with IHT being the inverse Haar transform, and allows perfect reconstruction of the original signal. While the wavelets and scaling functions are not explicitly computed during either the forward or backward transform, it is possible to recover them using the expansion of Eq. (1). For example, to obtain a specific scaling function $\phi_{\ell,k}$, one simply sets all of the approximation and detail coefficients to zero, except for $a_{\ell,k} = 1$, and runs the backward transform.
3 Approach
For a given class of signals, our objective is to design wavelets that yield approximately sparse expansions in Eq. (1), i.e. the detail coefficients are mostly small with a tiny fraction of large coefficients. Therefore, we learn the update and predict operators that minimize some sparsity surrogate of the detail (wavelet) coefficients of the given training functions $\{f^n\}_{n=1}^{n_{\max}}$.
For a fixed multiresolution level $\ell$ and a training function $f^n$, let $\bar a^n_\ell$ and $\bar d^n_\ell$ be the Haar approximation and detail coefficient vectors of $f^n$ received at level $\ell$ (i.e. applied to $a^n_{\ell+1}$ as in Figure 1 (left)). Consider the minimization problem
$$\{U, P\} = \arg\min_{U,P} \sum_n s(d^n_\ell) = \arg\min_{U,P} \sum_n s\big(\bar d^n_\ell - P(\bar a^n_\ell + U \bar d^n_\ell)\big), \qquad (2)$$
where $s$ is some sparse penalty function. This can be seen as optimizing a linear auto-encoder with encoding step given by $\bar a^n_\ell + U \bar d^n_\ell$, and decoding step given by multiplication with the matrix $P$.
Since we would like to obtain a linear wavelet transform, the linearity of the encode and decode steps is of crucial importance. In addition to linearity and the special form of bias terms, our auto-encoders differ from commonly used ones in that we enforce sparsity on the reconstruction error, rather than the hidden representation; in our setting, the reconstruction errors correspond to detail coefficients.
The optimization problem of Eq. (2) suffers from a trivial solution: by choosing an update matrix with large norm (e.g. a large coefficient times the identity matrix), and a predict operator equal to the inverse of the update, one can practically cancel the contribution of the bias terms, obtaining almost perfect reconstruction. Trivial solutions are a well-known problem in the context of auto-encoders, and an effective solution is to tie the weights of the encode and decode steps by setting $U = P^t$. This also has the benefit of decreasing the number of parameters to learn. We also follow a similar strategy and tie the weights of the update and predict steps, but the specific form of tying is dictated by the wavelet properties and will be discussed in §4.2.
The training is conducted in a manner similar to the greedy pre-training of a stack of auto-encoders [13, 14, 2, 20]. Namely, we first train the update and predict operators at the finest level: here the inputs to the lifting step are the original training functions; this corresponds to $\ell = \ell_{\max} - 1$ and $\forall n,\ a^n_{\ell+1} = f^n$ in Figure 1 (left). After training of this finest level is completed, we obtain new approximation coefficients $a^n_\ell$ which are passed to the next level as the training functions, and this process is repeated until one reaches the coarsest level.
The use of tied auto-encoders is motivated by their success in deep learning revealing their capability
to learn useful features from the data under a variety of circumstances. The choice of the lifting
scheme as the backbone of our construction is motivated by several observations. First, every
invertible 1D discrete wavelet transform can be factored into lifting steps [8], which makes lifting
a universal tool for constructing multiresolutions. Second, lifting scheme is always invertible, and
provides exact reconstruction of signals. Third, it affords fast (linear time) and memory efficient
(in-place) implementation after the update and predict operators are specified. We choose to apply
lifting to Haar wavelets specifically because Haar wavelets are easy to define on any underlying space
provided that it can be hierarchically partitioned [24, 10]. Our use of the update-first scheme mirrors its common use for adaptive wavelet constructions in the image processing literature, which is motivated by
its stability; see [4] for a thorough discussion.
4 Construction details
We consider a simple connected weighted graph $G$ with vertex set $V$ of size $N$. A signal on the graph is represented by a vector $f \in \mathbb{R}^N$. Let $W$ be the $N \times N$ edge weight matrix (since there are no self-loops, $W_{ii} = 0$), and let $S$ be the diagonal $N \times N$ matrix of vertex weights; if no vertex weights are given, we set $S_{ii} = \sum_j W_{ij}$. For a graph signal $f$, we define its integral over the graph as a weighted sum, $\int_G f = \sum_i S_{ii} f(i)$. We define the volume of a subset $R$ of vertices of the graph by $Vol(R) = \int_R 1 = \sum_{i \in R} S_{ii}$.
We assume that a hierarchical partitioning (not necessarily dyadic) of the underlying graph into connected regions is provided. We denote the regions at level $\ell = 1, ..., \ell_{\max}$ by $R_{\ell,k}$; see the inset where the three coarsest partition levels of a dataset are shown. For each region at levels $\ell = 1, ..., \ell_{\max} - 1$, we designate arbitrarily all except one of its children (i.e. regions at level $\ell+1$) as active regions. As will become clear, our wavelet construction yields one approximation coefficient $a_{\ell,k}$ for each region $R_{\ell,k}$, and one detail coefficient $d_{\ell,k}$ for each active region $R_{\ell+1,k}$ at level $\ell + 1$. Note that if the partition is not dyadic, at a given level $\ell$ the number of scaling coefficients (equal to the number of regions at level $\ell$) will not be the same as the number of detail coefficients (equal to the number of active regions at level $\ell + 1$). We collect all of the coefficients at the same level into vectors denoted by $a_\ell$ and $d_\ell$; to keep our notation lightweight, we refrain from using boldface for vectors.
4.1 Haar wavelets
Usually, the (unnormalized) Haar approximation and detail coefficients of a signal $f$ are computed as follows. The coefficient $\bar a_{\ell,k}$ corresponding to region $R_{\ell,k}$ equals the average of the function $f$ on that region: $\bar a_{\ell,k} = Vol(R_{\ell,k})^{-1} \int_{R_{\ell,k}} f$. The detail coefficient $\bar d_{\ell,k}$ corresponding to an active region $R_{\ell+1,k}$ is the difference between the averages at the region $R_{\ell+1,k}$ and its parent region $R_{\ell,par(k)}$, namely $\bar d_{\ell,k} = \bar a_{\ell+1,k} - \bar a_{\ell,par(k)}$. For perfect reconstruction there is no need to keep detail coefficients for inactive regions, because these can be recovered from the scaling coefficient of the parent region and the detail coefficients of the sibling regions.
In our setting, Haar wavelets are a part of the lifting scheme, and so the coefficient vectors $\bar a_\ell$ and $\bar d_\ell$ at level $\ell$ need to be computed from the augmented coefficient vector $a_{\ell+1}$ at level $\ell+1$ (c.f. Figure 1 (left)). This is equivalent to computing a function's average at a given region from its averages at the children regions. As a result, we obtain the following formula:
$$\bar a_{\ell,k} = Vol(R_{\ell,k})^{-1} \sum_{j,\,par(j)=k} a_{\ell+1,j}\, Vol(R_{\ell+1,j}),$$
where the summation is over all the children regions of $R_{\ell,k}$. As before, the detail coefficient corresponding to an active region $R_{\ell+1,k}$ is given by $\bar d_{\ell,k} = a_{\ell+1,k} - \bar a_{\ell,par(k)}$. The resulting Haar wavelets are not normalized; when sorting wavelet/scaling coefficients we will multiply coefficients coming from level $\ell$ by $2^{-\ell/2}$.
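The volume-weighted parent averages and the details of the active children can be computed as in the following sketch. Here `haar_step`, `parent_of` (mapping each child region to its parent index), and `active` (a boolean mask marking the active children) are hypothetical names introduced for illustration.

```python
import numpy as np

def haar_step(a_child, vol_child, parent_of, active):
    # Parent averages: volume-weighted means of the children averages.
    n_par = parent_of.max() + 1
    vol_par = np.zeros(n_par)
    num = np.zeros(n_par)
    np.add.at(vol_par, parent_of, vol_child)
    np.add.at(num, parent_of, a_child * vol_child)
    a_par = num / vol_par
    # Details: active child average minus its parent's average.
    d = a_child[active] - a_par[parent_of[active]]
    return a_par, d
```

For example, two parents each with two equal-volume children and child averages (1, 3) and (2, 6) give parent averages (2, 4) and details (1, 2) for the second (active) child of each parent.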
4.2 Auto-encoder setup
The choice of the update and predict operators and their tying scheme is guided by a number of
properties that wavelets need to satisfy. We discuss these requirements under separate headings.
Vanishing moments: The wavelets should have vanishing dual and primal moments, two independent conditions due to the biorthogonality of our wavelets. In terms of the approximation and detail coefficients these can be expressed as follows: a) all of the detail coefficients of a constant function should be zero, and b) the integral of the approximation at any level of multiresolution should be the same as the integral of the original function.
Since these conditions are already satisfied by the Haar wavelets, we need to ensure that the update and predict operators preserve them. To be more precise, if $a_{\ell+1}$ is a constant vector, then we have for the Haar coefficients that $\bar a_\ell = c\vec{1}$ and $\bar d_\ell = \vec{0}$; here $c$ is some constant and $\vec{1}$ is a column-vector of all ones. To satisfy a) after lifting, we need to ensure that $d_\ell = \bar d_\ell - P(\bar a_\ell + U \bar d_\ell) = -P \bar a_\ell = -cP\vec{1} = \vec{0}$. Therefore, the rows of the predict operator should sum to zero: $P\vec{1} = \vec{0}$.
To satisfy b), we need to preserve the first order moment at every level $\ell$ by requiring $\sum_k a_{\ell+1,k} Vol(R_{\ell+1,k}) = \sum_k \bar a_{\ell,k} Vol(R_{\ell,k}) = \sum_k a_{\ell,k} Vol(R_{\ell,k})$. The first equality is already satisfied (due to the use of Haar wavelets), so we need to constrain our update operator. Introducing the diagonal matrix $A_c$ of the region volumes at level $\ell$, we can write $0 = \sum_k a_{\ell,k} Vol(R_{\ell,k}) - \sum_k \bar a_{\ell,k} Vol(R_{\ell,k}) = \sum_k (U \bar d_\ell)_k Vol(R_{\ell,k}) = \vec{1}^t A_c U \bar d_\ell$. Since this should be satisfied for all $\bar d_\ell$, we must have $\vec{1}^t A_c U = \vec{0}^t$.
Taking these two requirements into consideration, we impose the following constraints on the predict and update weights:
$$P\vec{1} = \vec{0} \quad \text{and} \quad U = A_c^{-1} P^t A_f,$$
where $A_f$ is the diagonal matrix of the active region volumes at level $\ell + 1$. It is easy to check that $\vec{1}^t A_c U = \vec{1}^t A_c A_c^{-1} P^t A_f = \vec{1}^t P^t A_f = (P\vec{1})^t A_f = \vec{0}^t A_f = \vec{0}^t$ as required. We have introduced the volume matrix $A_f$ of regions at the finer level to make the update/predict matrices dimensionless (i.e. insensitive to whether the volume is measured in any particular units).
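These two constraints are easy to verify numerically, as in the following sketch (an illustration with randomly generated $P$ and region volumes; centering each row of $P$ is just one convenient way to enforce $P\vec{1} = \vec{0}$):

```python
import numpy as np

rng = np.random.default_rng(1)
n_act, n_par = 5, 4
P = rng.normal(size=(n_act, n_par))
P -= P.mean(axis=1, keepdims=True)               # rows of P sum to zero
Ac = np.diag(rng.uniform(1.0, 2.0, size=n_par))  # parent region volumes
Af = np.diag(rng.uniform(1.0, 2.0, size=n_act))  # active child region volumes
U = np.linalg.inv(Ac) @ P.T @ Af                 # tied update operator
assert np.allclose(P @ np.ones(n_par), 0.0)      # vanishing dual moment
assert np.allclose(np.ones(n_par) @ Ac @ U, 0.0) # first moment preserved
```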
Locality: To make our wavelets and scaling functions localized on the graph, we need to constrain the update and predict operators in a way that disallows distant regions from updating or predicting the approximation/detail coefficients of each other.
Since the update is tied to the predict operator, we can limit ourselves to the latter. For a detail coefficient $d_{\ell,k}$ corresponding to the active region $R_{\ell+1,k}$, we only allow predictions that come from the parent region $R_{\ell,par(k)}$ and the immediate neighbors of this parent region. Two regions of a graph are considered neighboring if their union is a connected graph. This can be seen as enforcing a sparsity structure on the matrix $P$ or as limiting the interconnections between the layers of neurons. As a result of this choice, it is not difficult to see that the resulting scaling functions $\phi_{\ell,k}$ and wavelets $\psi_{\ell,k}$ will be supported in the vicinity of the region $R_{\ell,k}$. Larger supports can be obtained by allowing the use of second and higher order neighbors of the parent for prediction.
4.3 Optimization
A variety of ways for optimizing auto-encoders are available; we refer the reader to the recent paper [15] and references therein. In our setting, due to the relatively small size of the training set and the sparse inter-connectivity between the layers, an off-the-shelf L-BFGS¹ unconstrained smooth optimization package works very well. In order to make our problem unconstrained, we avoid imposing the equation $P\vec{1} = \vec{0}$ as a hard constraint; instead, in each row of $P$ (which corresponds to some active region), the weight corresponding to the parent is eliminated. To obtain a smooth objective, we use the $L_1$ norm with the soft absolute value $s(x) = \sqrt{\epsilon + x^2} \approx |x|$, where we set $\epsilon = 10^{-4}$. The initialization is done by setting all of the weights equal to zero. This is meaningful, because it corresponds to no lifting at all, and would reproduce the original Haar wavelets.
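The per-level objective being minimized can be sketched as follows. This is illustrative only: `soft_abs` and `objective` are hypothetical names, the tied update $U = A_c^{-1}P^tA_f$ is formed explicitly, and the actual L-BFGS loop over the sparse free entries of $P$ is omitted.

```python
import numpy as np

def soft_abs(x, eps=1e-4):
    # smooth surrogate for |x|
    return np.sqrt(eps + x * x)

def objective(P, a_haar, d_haar, Ac, Af):
    # Sparsity of the lifted detail coefficients under the tied update.
    U = np.linalg.inv(Ac) @ P.T @ Af
    a = a_haar + U @ d_haar   # update step
    d = d_haar - P @ a        # predict step
    return soft_abs(d).sum()
```

With all weights initialized to zero the details reduce to the Haar details, matching the remark that zero initialization reproduces the original Haar wavelets.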
4.4 Training functions
When training functions are available we directly use them. However, our construction can be applied even if training functions are not specified. In this case we choose smoothness as our prior, and train the wavelets with a set of smooth functions on the graph; namely, we use scaled eigenvectors of the graph Laplacian corresponding to the smallest eigenvalues. More precisely, let $D$ be the diagonal matrix with entries $D_{ii} = \sum_j W_{ij}$. The graph Laplacian $L$ is defined as $L = S^{-1}(D - W)$. We solve the symmetric generalized eigenvalue problem $(D - W)\phi = \lambda S \phi$ to compute the smallest eigen-pairs $\{\lambda_n, \phi_n\}_{n=0}^{n_{\max}}$. We discard the 0-th eigen-pair, which corresponds to the constant eigenvector, and use the functions $\{\phi_n/\lambda_n\}_{n=1}^{n_{\max}}$ as our training set. The inverse scaling by the eigenvalue is included because eigenvectors corresponding to larger eigenvalues are less smooth (cf. [1]), and so should be assigned smaller weights to achieve a smooth prior.
1 Mark Schmidt, http://www.di.ens.fr/~mschmidt/Software/minFunc.html
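The construction of this smooth training set can be sketched as follows. This is a dense-matrix illustration; `smooth_training_functions` is a hypothetical name, and symmetrizing with $S^{-1/2}$ is one standard way to reduce the generalized problem to an ordinary symmetric one.

```python
import numpy as np

def smooth_training_functions(W, S, n_max):
    # Solve (D - W) phi = lambda * S phi and return the eigenvectors
    # scaled by 1/lambda, skipping the constant 0-th eigenvector.
    D = np.diag(W.sum(axis=1))
    s_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(S)))
    M = s_inv_sqrt @ (D - W) @ s_inv_sqrt  # symmetrized problem
    lam, V = np.linalg.eigh(M)             # eigenvalues in ascending order
    phi = s_inv_sqrt @ V                   # back to generalized eigenvectors
    return phi[:, 1:n_max + 1] / lam[1:n_max + 1]
```

The returned functions are automatically orthogonal to the constant function in the $S$-weighted inner product, i.e. they integrate to zero over the graph.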
4.5 Partitioning
Since our construction is based on improving upon the Haar wavelets, their quality will have
an effect on the final wavelets. As proved in [10], the quality of Haar wavelets depends on
the quality (balance) of the graph partitioning. From a practical standpoint, it is hard to achieve
high-quality partitions on all types of graphs using a single algorithm. However, for the datasets
presented in this paper we find that the following approach, based on the spectral clustering algorithm of [18], works well. Namely, we first embed the graph vertices into R^{n_max} as follows:
i ↦ (φ_1(i)/λ_1, φ_2(i)/λ_2, ..., φ_{n_max}(i)/λ_{n_max}), ∀i ∈ V, where {(λ_n, φ_n)}_{n=0}^{n_max} are the eigen-pairs
of the Laplacian as in §4.4, and φ_n(i) is the value of the eigenvector at the i-th vertex of the graph.
To obtain a hierarchical tree of partitions, we start with the graph itself as the root. At every step, a
given region (a subset of the vertex set) of graph G is split into two children partitions by running
the 2-means clustering algorithm (k-means with k = 2) on the above embedding restricted to the
vertices of the given partition [24]. This process is continued in recursion at every obtained region.
This results in a dyadic partitioning except at the finest level ℓ_max.
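The recursive 2-means partitioning above can be sketched as follows. The hand-rolled 2-means, its deterministic first/last initialization, and the 1-D embedding standing in for the spectral embedding are our own toy simplifications:

```python
import numpy as np

def two_means(X, iters=20):
    """Tiny 2-means: split the rows of X into two index sets (illustrative only)."""
    c = X[[0, -1]].astype(float)                  # deterministic init: first/last point
    for _ in range(iters):
        d = ((X[:, None, :] - c[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for k in (0, 1):
            if (lab == k).any():
                c[k] = X[lab == k].mean(0)
    return np.where(lab == 0)[0], np.where(lab == 1)[0]

def partition_tree(X, ids, max_depth):
    """Recursively bisect a region of vertices into a dyadic hierarchy."""
    if max_depth == 0 or len(ids) < 2:
        return {"ids": ids}
    a, b = two_means(X[ids])
    if len(a) == 0 or len(b) == 0:                # degenerate split: stop early
        return {"ids": ids}
    return {"ids": ids,
            "left": partition_tree(X, ids[a], max_depth - 1),
            "right": partition_tree(X, ids[b], max_depth - 1)}

# Embedding of 8 vertices on a line (stand-in for the spectral embedding).
X = np.arange(8, dtype=float)[:, None]
tree = partition_tree(X, np.arange(8), max_depth=2)
print(sorted(tree["left"]["ids"].tolist()), sorted(tree["right"]["ids"].tolist()))
```

On this toy embedding the root splits into the two halves of the line, and each half is bisected once more, giving the dyadic hierarchy described in the text.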
4.6 Graph construction for point clouds
Our problem setup started with a weighted graph and arrived at the Laplacian matrix L in §4.4. It is
also possible to reverse this process, whereby one starts with the Laplacian matrix L and infers from it
the weighted graph. This is a natural way of dealing with point clouds sampled from low-dimensional
manifolds, a setting common in manifold learning. There are a number of ways of computing
Laplacians on point clouds, see [5]; almost all of them fit into the above form L = S⁻¹(D − W),
and so they can be used to infer a weighted graph that can be plugged into our construction.
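One simple instance of such a construction is sketched below: a Gaussian-weighted k-nearest-neighbor graph with the choice S = D (a random-walk normalization). The bandwidth heuristic is our own assumption, not the specific construction of [5]:

```python
import numpy as np

def knn_graph_laplacian(points, k=5):
    """Build a symmetric Gaussian-weighted k-NN graph and the Laplacian
    L = S^{-1}(D - W), here with the (assumed) choice S = D."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2[d2 > 0])            # heuristic bandwidth (our assumption)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]     # skip the vertex itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / sigma2)
    W = np.maximum(W, W.T)                    # symmetrize
    D = np.diag(W.sum(axis=1))
    L = np.diag(1.0 / W.sum(axis=1)) @ (D - W)
    return W, L

pts = np.random.RandomState(0).rand(30, 2)
W, L = knn_graph_laplacian(pts)
print(np.allclose(L @ np.ones(30), 0.0))      # constants lie in the null space of L
```

Any Laplacian of the form L = S⁻¹(D − W) annihilates constant functions, which the final check verifies.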
5 Experiments

Our goal is to experimentally investigate the constructed wavelets for multiscale behavior, meaningful adaptation to training signals, and sparse representation that generalizes to testing signals.
For the first two objectives we visualize the scaling functions at different levels ℓ, because they provide insight
about the signal approximation spaces V_ℓ. The generalization performance can be deduced from comparison to
Haar wavelets, because during training we modify Haar wavelets so as to achieve a sparser representation of training signals.

We start with the case of a periodic interval, which is discretized as a cycle graph; 32 scaled eigenvectors (sines
and cosines) are used for training. Figure 2 shows the resulting scaling and wavelet functions at level
ℓ = 4. Up to discretization errors, the wavelets and scaling functions at the same level are shifts of
each other, showing that our construction is able to learn shift invariance from training functions.

Figure 2: Scaling (left) and wavelet (right) functions on the periodic interval.
Figure 3(a) depicts a graph representing the road network of Minnesota, with edges showing the
major roads and vertices being their intersections. In our construction we employ unit weights on
edges and use 32 scaled eigenvectors of graph Laplacian as training functions. The resulting scaling
functions for regions containing the red vertex in Figure 3(a) are shown at different levels in Figure
3(b,c,d,e,f). The function values at graph vertices are color coded from smallest (dark blue) to largest
(dark red). Note that the scaling functions are continuous and show multiscale spatial behavior.
Figure 3: Our construction trained with the smooth prior on the network (a) yields the scaling functions (b,c,d,e,f) at levels ℓ = 2, 4, 6, 8, 10. A sample continuous function (g) out of 100 total test functions. Better average reconstruction results (h) for our wavelets (Wav-smooth) indicate good generalization performance.

To test whether the learned wavelets provide a sparse representation of smooth signals, we synthetically generated 100 continuous functions using the xy-coordinates (the coordinates have not been
seen by the algorithm so far) of the vertices; Figure 3(g) shows one such function. Figure 3(h)
shows the average error of reconstruction from the expansion in Eq. (1) with ℓ_0 = 1, keeping a specified
fraction of the largest detail coefficients. The improvement over the Haar wavelets shows that our model
generalizes well to unseen signals.
Next, we apply our approach to real-world graph signals. We use a dataset of average daily temperature measurements2 from meteorological stations located on the mainland US. The longitudes and
latitudes of stations are treated as coordinates of a point cloud, from which a weighted Laplacian is
constructed using [5] with 5-nearest neighbors; the resulting graph is shown in Figure 4(a).
The daily temperature data for the year 2012 gives us 366 signals on the graph; Figure 4(b) depicts
one such signal. We use the signals from the first half of the year to train the wavelets, and test
for sparse reconstruction quality on the second half of the year (and vice versa). Figure 4(c,d,e,f)
depicts some of the scaling functions at a number of levels; note that the depicted scaling function at
level ℓ = 2 captures the rough temperature distribution pattern of the US. The average reconstruction
error from a specified fraction of the largest detail coefficients is shown in Figure 4(g).
Figure 4: Our construction on the station network (a), trained with daily temperature data (e.g. (b)),
yields the scaling functions (c,d,e,f). Reconstruction results (g) using our wavelets trained on data
(Wav-data) and with the smooth prior (Wav-smooth). Results of semi-supervised learning (h).

Figure 5: The scaling functions (a) resulting from training on a face images dataset. These wavelets
(Wav-data) provide better sparse reconstruction quality than the CDF 9/7 wavelet filterbanks (b,c).

2 National Climatic Data Center, ftp://ftp.ncdc.noaa.gov/pub/data/gsod/2012/

As an application, we employ our wavelets for semi-supervised learning of the temperature distribution
for a day from the temperatures at a subset of labeled graph vertices. The sought temperature
distribution is expanded as in Eq. (1) with ℓ_0 = 1, and the coefficients are found by solving a least-squares
problem using temperature values at labeled vertices. Since we expect the detail coefficients
to be sparse, we impose a lasso penalty on them; to make the problem smaller, all detail coefficients
for levels ℓ ≥ 7 are set to zero. We compare to the Laplacian regularized least squares [1] and the
harmonic interpolation approach [26]. A hold-out set of 25 random vertices is used to assign all the
regularization parameters. The experiment is repeated for each of the days (not used to learn the
wavelets) with the number of labeled vertices ranging from 10 to 200. Figure 4(h) shows the errors
averaged over all days; our approach achieves lower error rates than the competitors.
Our final example serves two purposes: showing the benefits of our construction in a standard image
processing application, and better demonstrating the nature of the learned scaling functions. Images
can be seen as signals on a graph: pixels are the vertices and each pixel is connected to its 8
nearest neighbors. We consider all of the Extended Yale Face Database B [11] images (cropped and
down-sampled to 32 × 32) as a collection of signals on a single underlying graph. We randomly
split the collection into halves, training our wavelets on one half and testing their reconstruction quality on the
remaining half. Figure 5(a) depicts a number of the obtained scaling functions at different levels (the
rows correspond to levels ℓ = 4, 5, 6, 7, 8) in various locations (columns). The scaling functions have
a face-like appearance at coarser levels, and capture more detailed facial features at finer levels. Note
that the scaling functions show controllable multiscale spatial behavior.

The quality of reconstruction from a sparse set of detail coefficients is plotted in Figure 5(b,c).
Here again we consider the expansion of Eq. (1) with ℓ_0 = 1, and reconstruct using a specified
proportion of the largest detail coefficients. We also make a comparison to reconstruction using the
standard separable CDF 9/7 wavelet filterbanks from the bottom-most level; for both quality metrics,
our wavelets trained on data perform better than CDF 9/7. The smoothly trained wavelets do not
improve over the Haar wavelets, because the smoothness assumption does not hold for face images.
6 Conclusion
We have introduced an approach to constructing wavelets that takes into consideration structural
properties of both graph signals and their underlying graphs. An interesting direction for future
research would be to randomize the graph partitioning process or to use bagging over training
functions in order to obtain a family of wavelet constructions on the same graph, leading to overcomplete dictionaries like in [25]. One can also introduce multiple lifting steps at each level or
even add non-linearities, as is common with neural networks. Our wavelets are obtained by training a
structure similar to a deep neural network; interestingly, the recent work of Mallat and collaborators
(e.g. [3]) goes in the other direction and provides a wavelet interpretation of deep neural networks.
Therefore, we believe that there are ample opportunities for future work at the interface between
wavelets and deep neural networks.
Acknowledgments: We thank Jonathan Huang for discussions and especially for his advice regarding the experimental section. The authors acknowledge the support of NSF grants FODAVA 808515
and DMS 1228304, AFOSR grant FA9550-12-1-0372, ONR grant N00014-13-1-0341, a Google
research award, and the Max Planck Center for Visual Computing and Communications.
References
[1] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56(1-3):209-239, 2004.
[2] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 153-160. MIT Press, Cambridge, MA, 2007.
[3] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872-1886, 2013.
[4] R. L. Claypoole, G. Davis, W. Sweldens, and R. G. Baraniuk. Nonlinear wavelet transforms for image coding via lifting. IEEE Transactions on Image Processing, 12(12):1449-1459, Dec. 2003.
[5] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5-30, July 2006.
[6] R. R. Coifman and M. Maggioni. Diffusion wavelets. Appl. Comput. Harmon. Anal., 21(1):53-94, 2006.
[7] M. Crovella and E. D. Kolaczyk. Graph wavelets for spatial traffic analysis. In INFOCOM, 2003.
[8] I. Daubechies and W. Sweldens. Factoring wavelet transforms into lifting steps. J. Fourier Anal. Appl., 4(3):245-267, 1998.
[9] M. N. Do and Y. M. Lu. Multidimensional filter banks and multiscale geometric representations. Foundations and Trends in Signal Processing, 5(3):157-264, 2012.
[10] M. Gavish, B. Nadler, and R. R. Coifman. Multiscale wavelets on trees, graphs and high dimensional data: Theory and applications to semi supervised learning. In ICML, pages 367-374, 2010.
[11] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intelligence, 23(6):643-660, 2001.
[12] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal., 30(2):129-150, 2011.
[13] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527-1554, 2006.
[14] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313:504-507, July 2006.
[15] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Y. Ng. On optimization methods for deep learning. In ICML, pages 265-272, 2011.
[16] S. Mallat. A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way. Academic Press, 3rd edition, 2008.
[17] S. K. Narang and A. Ortega. Multi-dimensional separable critically sampled wavelet filterbanks on arbitrary graphs. In ICASSP, pages 3501-3504, 2012.
[18] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849-856, 2001.
[19] I. Ram, M. Elad, and I. Cohen. Generalized tree-based wavelet transform. IEEE Transactions on Signal Processing, 59(9):4199-4209, 2011.
[20] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1137-1144. MIT Press, Cambridge, MA, 2007.
[21] R. M. Rustamov. Average interpolating wavelets on point clouds and graphs. CoRR, abs/1110.2227, 2011.
[22] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag., 30(3):83-98, 2013.
[23] W. Sweldens. The lifting scheme: A construction of second generation wavelets. SIAM Journal on Mathematical Analysis, 29(2):511-546, 1998.
[24] A. D. Szlam, M. Maggioni, R. R. Coifman, and J. C. Bremer. Diffusion-driven multiscale analysis on manifolds and graphs: top-down and bottom-up constructions. In SPIE, volume 5914, 2005.
[25] X. Zhang, X. Dong, and P. Frossard. Learning of structured graph dictionaries. In ICASSP, pages 3373-3376, 2012.
[26] X. Zhu, Z. Ghahramani, and J. D. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912-919, 2003.
Stochastic blockmodel approximation of a graphon:
Theory and consistent estimation
Edoardo M. Airoldi
Dept. Statistics
Harvard University
Thiago B. Costa
SEAS, and Dept. Statistics
Harvard University
Stanley H. Chan
SEAS, and Dept. Statistics
Harvard University
Abstract
Non-parametric approaches for analyzing network data based on exchangeable
graph models (ExGM) have recently gained interest. The key object that defines
an ExGM is often referred to as a graphon. This non-parametric perspective on
network modeling poses challenging questions on how to make inference on the
graphon underlying observed network data. In this paper, we propose a computationally efficient procedure to estimate a graphon from a set of observed networks
generated from it. This procedure is based on a stochastic blockmodel approximation (SBA) of the graphon. We show that, by approximating the graphon with
a stochastic block model, the graphon can be consistently estimated, that is, the
estimation error vanishes as the size of the graph approaches infinity.
1 Introduction
Revealing hidden structures of a graph is the heart of many data analysis problems. From the well-known small-world network to the recent large-scale data collected from online service providers
such as Wikipedia, Twitter and Facebook, there is always a momentum in seeking better and more
informative representations of the graphs [1, 14, 29, 3, 26, 12]. In this paper, we develop a new computational tool to study one type of non-parametric representation which has recently drawn significant
attention from the community [4, 19, 5, 30, 23].
The root of the non-parametric model discussed in this paper is in the theory of exchangeable random arrays [2, 15, 16], and it is presented in [11] as a link connecting de Finetti's work on partial
exchangeability and graph limits [20, 6]. In a nutshell, the theory predicts that every convergent
sequence of graphs (G_n) has a limit object that preserves many local and global properties of the
graphs in the sequence. This limit object, which is called a graphon, can be represented by measurable functions w : [0, 1]² → [0, 1], in a way that any w′ obtained from measure-preserving
transformations of w describes the same graphon.
Graphons are usually seen as kernel functions for random network models [18]. To construct an
n-vertex random graph G(n, w) for a given w, we first assign a random label u_i ∼ Uniform[0, 1] to
each vertex i ∈ {1, . . . , n}, and connect any two vertices i and j with probability w(u_i, u_j), i.e.,

Pr(G[i, j] = 1 | u_i, u_j) = w(u_i, u_j),   i, j = 1, . . . , n,    (1)

where G[i, j] denotes the (i, j)-th entry of the adjacency matrix representing a particular realization
of G(n, w) (see Figure 1). As an example, we note that the stochastic block-model is the case where
w(x, y) is a piecewise constant function.
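The sampling scheme of Eq. (1) is straightforward to simulate; a minimal sketch, where the 2-block graphon, the graph size, and the number of graphs are our own toy choices:

```python
import numpy as np

def sample_graphs(w, n, num_graphs, seed=0):
    """Draw labels u_i ~ Uniform[0,1] once, then independent directed
    adjacency matrices with Pr(G[i,j] = 1 | u_i, u_j) = w(u_i, u_j)."""
    rng = np.random.RandomState(seed)
    u = rng.rand(n)
    P = w(u[:, None], u[None, :])                 # edge-probability matrix
    return u, [(rng.rand(n, n) < P).astype(int) for _ in range(num_graphs)]

# Piecewise-constant graphon = a 2-block stochastic blockmodel.
w = lambda x, y: np.where((x < 0.5) == (y < 0.5), 0.8, 0.1)
u, graphs = sample_graphs(w, n=200, num_graphs=4)
print(len(graphs), graphs[0].shape)
```

Note that the labels u_i are drawn once and shared by all 2T graphs, which is exactly the setting the estimation procedure below relies on.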
The problem of interest is defined as follows: given a sequence of 2T observed directed graphs
G_1, . . . , G_{2T}, can we make an estimate ŵ of w such that ŵ → w with high probability as n → ∞?
This question has been loosely attempted in the literature, but none of the attempts offers a complete solution.
Figure 1: [Left] Given a graphon w : [0, 1]² → [0, 1], we draw i.i.d. samples u_i, u_j from
Uniform[0,1] and assign G_t[i, j] = 1 with probability w(u_i, u_j), for t = 1, . . . , 2T. [Middle]
Heat map of a graphon w. [Right] A random graph generated by the graphon shown in the middle.
Rows and columns of the graph are ordered by increasing u_i, instead of i, for better visualization.

For example, Lloyd et al. [19] proposed a Bayesian estimator without a consistency proof; Choi and
Wolfe [9] studied the consistency properties, but did not provide algorithms to estimate the graphon.
To the best of our knowledge, the only method that estimates graphons consistently, besides ours, is
USVT [8]. However, our algorithm has better complexity and outperforms USVT in our simulations.
More recently, other groups have begun exploring approaches related to ours [28, 24].
The proposed approximation procedure requires w to be piecewise Lipschitz. The basic idea is to
approximate w by a two-dimensional step function ŵ with diminishing intervals as n increases. The
proposed method is called the stochastic blockmodel approximation (SBA) algorithm, as the idea of
using a two-dimensional step function for approximation is equivalent to using stochastic block
models [10, 22, 13, 7, 25]. The SBA algorithm is defined up to permutations of the nodes, so the
estimated graphon is not canonical. However, this does not affect the consistency properties of the
SBA algorithm, as the consistency is measured w.r.t. the graphon that generates the graphs.
2 Stochastic blockmodel approximation: Procedure
In this section we present the proposed SBA algorithm and discuss its basic properties.
2.1 Assumptions on graphons
We assume that w is piecewise Lipschitz, i.e., there exists a sequence of non-overlapping intervals
I_k = [α_{k−1}, α_k] defined by 0 = α_0 < . . . < α_K = 1, and a constant L > 0 such that, for any
(x_1, y_1) and (x_2, y_2) ∈ I_{ij} = I_i × I_j,

|w(x_1, y_1) − w(x_2, y_2)| ≤ L (|x_1 − x_2| + |y_1 − y_2|).

For generality we assume w to be asymmetric, i.e., w(u, v) ≠ w(v, u) in general, so that symmetric graphons
can be considered as a special case. Consequently, a random graph G(n, w) generated by w is
directed, i.e., G[i, j] ≠ G[j, i] in general.
2.2 Similarity of graphon slices
The intuition of the proposed SBA algorithm is that if the graphon is smooth, neighboring cross-sections of the graphon should be similar. In other words, if two labels u_i and u_j are close, i.e.,
|u_i − u_j| ≈ 0, then the difference between the row slices |w(u_i, ·) − w(u_j, ·)| and the column slices
|w(·, u_i) − w(·, u_j)| should also be small. To measure the similarity between two labels using the
graphon slices, we define the following distance

d_ij = (1/2) [ ∫_0^1 [w(x, u_i) − w(x, u_j)]² dx + ∫_0^1 [w(u_i, y) − w(u_j, y)]² dy ].    (2)

Thus, d_ij is small only if both row and column slices of the graphon are similar.
The usage of d_ij for graphon estimation will be discussed in the next subsection. But before
we proceed, it should be noted that in practice d_ij has to be estimated from the observed graphs
G_1, . . . , G_{2T}. To derive an estimator d̂_ij of d_ij, it is helpful to express d_ij in a way that the estimators can be easily obtained. To this end, we let

c_ij = ∫_0^1 w(x, u_i) w(x, u_j) dx    and    r_ij = ∫_0^1 w(u_i, y) w(u_j, y) dy,

and express d_ij as d_ij = (1/2)[(c_ii − c_ij − c_ji + c_jj) + (r_ii − r_ij − r_ji + r_jj)]. Inspecting this expression,
we consider the following estimators for c_ij and r_ij:
ĉ^k_ij = (1/T²) ( Σ_{1 ≤ t1 ≤ T} G_{t1}[k, i] ) ( Σ_{T < t2 ≤ 2T} G_{t2}[k, j] ),    (3)

r̂^k_ij = (1/T²) ( Σ_{1 ≤ t1 ≤ T} G_{t1}[i, k] ) ( Σ_{T < t2 ≤ 2T} G_{t2}[j, k] ).    (4)
Here, the superscript k can be interpreted as the dummy variables x and y in defining c_ij and r_ij,
respectively. Summing over all possible k's yields an estimator d̂_ij that looks similar to d_ij:

d̂_ij = (1 / (2|S|)) Σ_{k ∈ S} [ ( r̂^k_ii − r̂^k_ij − r̂^k_ji + r̂^k_jj ) + ( ĉ^k_ii − ĉ^k_ij − ĉ^k_ji + ĉ^k_jj ) ],    (5)

where S = {1, . . . , n} \ {i, j} is the set of summation indices.
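To make Eq. (5) concrete, the following toy sketch evaluates it on graphs drawn from a 2-block graphon with deterministic block membership; all sizes and probabilities are our own choices. The factored per-k terms below equal the four-term combinations in Eq. (5), and for this graphon the true distance is 0 for a same-block pair and 0.49 for a cross-block pair:

```python
import numpy as np

rng = np.random.RandomState(1)
n, T = 80, 8
same_block = np.arange(n) < n // 2                     # two fixed blocks of size n/2
P = np.where(same_block[:, None] == same_block[None, :], 0.8, 0.1)
graphs = [(rng.rand(n, n) < P).astype(float) for _ in range(2 * T)]

A = sum(graphs[:T]) / T    # averages over t1 = 1..T (first sums in (3)-(4))
B = sum(graphs[T:]) / T    # averages over t2 = T+1..2T (second sums)

def dhat(i, j):
    """Estimate d_ij by averaging the per-k terms of Eq. (5) over S = {k != i, j}."""
    S = [k for k in range(n) if k not in (i, j)]
    tot = 0.0
    for k in S:
        c = (A[k, i] - A[k, j]) * (B[k, i] - B[k, j])  # = c_ii - c_ij - c_ji + c_jj terms
        r = (A[i, k] - A[j, k]) * (B[i, k] - B[j, k])  # = r_ii - r_ij - r_ji + r_jj terms
        tot += c + r
    return tot / (2 * len(S))

print(round(dhat(0, 1), 3), round(dhat(0, n - 1), 3))  # same-block vs. cross-block pair
```

Splitting the 2T graphs into two halves keeps the two bracketed sums in (3)-(4) independent, which is what makes the product an unbiased estimate.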
The motivation for defining the estimators in (3) and (4) is that a row of the adjacency matrix G[i, ·]
is fully characterized by the corresponding row of the graphon w(u_i, ·). Thus the expected value of
(1/T) Σ_{1 ≤ t1 ≤ T} G_{t1}[i, ·] is w(u_i, ·), and hence (1/|S|) Σ_{k ∈ S} r̂^k_ij is an estimator for r_ij. To theoretically
justify this intuition, we will show in Section 3 that d̂_ij is indeed a good estimator: it is not only
unbiased, but is also concentrated around d_ij for large n. Furthermore, we will show that it is possible
to use a random subset of S instead of {1, . . . , n} \ {i, j} to achieve the same asymptotic behavior.
As a result, the estimation of d_ij can be performed locally in a neighborhood of i and j, instead of
over all n vertices.
2.3 Blocking the vertices
The similarity metric d̂_ij discussed above suggests one simple method to approximate w by a piecewise constant function ŵ (i.e., a stochastic block-model). Given G_1, . . . , G_{2T}, we can cluster the
(unknown) labels {u_1, . . . , u_n} into K blocks B̂_1, . . . , B̂_K using a procedure described below. Once
B̂_1, . . . , B̂_K are defined, we can then determine ŵ(u_i, u_j) by computing the empirical
frequency of edges that are present across blocks B̂_i and B̂_j:

ŵ(u_i, u_j) = (1 / (|B̂_i| |B̂_j|)) Σ_{i_x ∈ B̂_i} Σ_{j_y ∈ B̂_j} (1/2T) ( G_1[i_x, j_y] + G_2[i_x, j_y] + . . . + G_{2T}[i_x, j_y] ),    (6)

where B̂_i is the block containing u_i, so that summing G_t[x, y] over x ∈ B̂_i and y ∈ B̂_j yields an
estimate of the expected number of edges linking block B̂_i and B̂_j.
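The block-averaging step of Eq. (6) amounts to an empirical mean over each block pair; a sketch, with a 2-block partition and graph sizes assumed for illustration:

```python
import numpy as np

def block_average(graphs, blocks):
    """Empirical edge frequency between each pair of blocks, as in Eq. (6)."""
    Gbar = sum(graphs) / len(graphs)        # average over all 2T observed graphs
    K = len(blocks)
    What = np.zeros((K, K))
    for a, Ba in enumerate(blocks):
        for b, Bb in enumerate(blocks):
            What[a, b] = Gbar[np.ix_(Ba, Bb)].mean()
    return What

rng = np.random.RandomState(0)
n = 40
half = np.arange(n) < n // 2
P = np.where(half[:, None] == half[None, :], 0.8, 0.1)
graphs = [(rng.rand(n, n) < P).astype(float) for _ in range(10)]

What = block_average(graphs, [list(range(20)), list(range(20, 40))])
print(What.round(2))   # close to [[0.8, 0.1], [0.1, 0.8]]
```

With the correct blocks recovered, the fitted step function concentrates around the block probabilities of the generating graphon.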
To cluster the unknown labels {u_1, . . . , u_n} we propose a greedy approach, as shown in Algorithm
1. Starting with Ω = {u_1, . . . , u_n}, we randomly pick a node i_p and call it the pivot. Then for all
other vertices i_v ∈ Ω \ {i_p}, we compute the distance d̂_{i_p, i_v} and check whether d̂_{i_p, i_v} < ε² for some
precision parameter ε > 0. If d̂_{i_p, i_v} < ε², then we assign i_v to the same block as i_p. Therefore,
after scanning through Ω once, a block B̂_1 = {i_p, i_v1, i_v2, . . .} will be defined. By updating Ω as
Ω ← Ω \ B̂_1, the process repeats until Ω = ∅.
The proposed greedy algorithm is only a local solution in the sense that it does not return the globally
optimal clusters. However, as will be shown in Section 3, although the clustering algorithm is not
globally optimal, the estimated graphon ŵ is still guaranteed to be a consistent estimate of the true
graphon w as n → ∞. Since the greedy algorithm is numerically efficient, it serves as a practical
computational tool to estimate w.
2.4 Main algorithm
Algorithm 1 Stochastic blockmodel approximation
Input: A set of observed graphs $G_1, \ldots, G_{2T}$ and the precision parameter $\Delta$.
Output: Estimated stochastic blocks $\widehat{B}_1, \ldots, \widehat{B}_K$.
Initialize: $\Omega = \{1, \ldots, n\}$, and $k = 1$.
while $\Omega \neq \emptyset$ do
    Randomly choose a vertex $i_p$ from $\Omega$ and assign it as the pivot for $\widehat{B}_k$: $\widehat{B}_k \leftarrow i_p$.
    for every other vertex $i_v \in \Omega \setminus \{i_p\}$ do
        Compute the distance estimate $\widehat{d}_{i_p, i_v}$.
        If $\widehat{d}_{i_p, i_v} \leq \Delta^2$, then assign $i_v$ as a member of $\widehat{B}_k$: $\widehat{B}_k \leftarrow i_v$.
    end for
    Update $\Omega$: $\Omega \leftarrow \Omega \setminus \widehat{B}_k$.
    Update counter: $k \leftarrow k + 1$.
end while
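The loop of Algorithm 1 can be sketched in Python as follows. This is a hedged illustration, not the authors' implementation: the distance estimate $\widehat{d}$ is passed in as a callable, since its construction (Section 2.2) is not reproduced here, and all names are ours:

```python
import random

def sba_blocks(vertices, dist, Delta):
    """Greedy pivot clustering as in Algorithm 1 (sketch).

    vertices : list of vertex labels
    dist     : callable dist(p, v) returning the estimated distance d_hat(p, v)
    Delta    : precision parameter (> 0)
    Returns a list of blocks (lists of vertices).
    """
    omega = list(vertices)
    blocks = []
    while omega:
        pivot = random.choice(omega)                  # random pivot for the new block
        block = [pivot]
        for v in omega:
            if v != pivot and dist(pivot, v) <= Delta ** 2:
                block.append(v)                       # close to the pivot: same block
        blocks.append(block)
        omega = [v for v in omega if v not in block]  # Omega <- Omega \ B_k
    return blocks
```

With a distance that cleanly separates two groups, the recovered partition is the same regardless of which pivots are drawn.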
Algorithm 1 illustrates the pseudo-code for the proposed stochastic block-model approximation. The complexity of this algorithm is $O(TSKn)$, where $T$ is half the number of observations, $S$ is the size of the neighborhood, $K$ is the number of blocks, and $n$ is the number of vertices of the graph.
3 Stochastic blockmodel approximation: Theory of estimation
In this section we present the theoretical aspects of the proposed SBA algorithm. We will first discuss the properties of the estimator $\widehat{d}_{ij}$, and then show the consistency of the estimated graphon $\widehat{w}$. Details of the proofs can be found in the supplementary material.
3.1 Concentration analysis of $\widehat{d}_{ij}$
Our first theorem below shows that the proposed estimator $\widehat{d}_{ij}$ is unbiased and is concentrated around its expected value $d_{ij}$.

Theorem 1. The estimator $\widehat{d}_{ij}$ of $d_{ij}$ is unbiased, i.e., $\mathbb{E}[\widehat{d}_{ij}] = d_{ij}$. Further, for any $\epsilon > 0$,

$$\Pr\left[\left|\widehat{d}_{ij} - d_{ij}\right| > \epsilon\right] \leq 8\, e^{-\frac{S\epsilon^2}{32/T + 8\epsilon/3}}, \qquad (7)$$

where $S$ is the size of the neighborhood $\mathcal{S}$, and $2T$ is the number of observations.
Proof. Here we only highlight the important steps to present the intuition. The basic idea of the proof is to zoom in on a microscopic term $\widehat{r}^{\,k}_{ij}$ and show that it is unbiased. To this end, we use the fact that $G_{t_1}[i,k]$ and $G_{t_2}[j,k]$ are conditionally independent given $u_k$ to show

$$\mathbb{E}\left[G_{t_1}[i,k]\, G_{t_2}[j,k] \mid u_k\right] = \Pr\left[G_{t_1}[i,k] = 1,\, G_{t_2}[j,k] = 1 \mid u_k\right] \stackrel{(a)}{=} \Pr\left[G_{t_1}[i,k] = 1 \mid u_k\right] \Pr\left[G_{t_2}[j,k] = 1 \mid u_k\right] = w(u_i, u_k)\, w(u_j, u_k),$$

which then implies $\mathbb{E}[\widehat{r}^{\,k}_{ij} \mid u_k] = w(u_i, u_k)\, w(u_j, u_k)$, and by iterated expectation we have $\mathbb{E}[\widehat{r}^{\,k}_{ij}] = \mathbb{E}[\mathbb{E}[\widehat{r}^{\,k}_{ij} \mid u_k]] = r_{ij}$. The concentration inequality follows from a similar idea to bound the variance of $\widehat{r}^{\,k}_{ij}$ and apply Bernstein's inequality.
That $G_{t_1}[i,k]$ and $G_{t_2}[j,k]$ are conditionally independent given $u_k$ is a critical fact for the success of the proposed algorithm. It also explains why at least 2 independently observed graphs are necessary, for otherwise we could not separate the probability in the second equality above, marked (a).
3.2 Choosing the number of blocks
The performance of Algorithm 1 is sensitive to the number of blocks it defines. On the one hand, it is desirable to have more blocks so that the graphon can be finely approximated. On the other hand, if the number of blocks is too large, then each block will contain only a few vertices, which is bad because estimating the value on each block requires a sufficient number of vertices in it. The trade-off between these two cases is controlled by the precision parameter $\Delta$: a large $\Delta$ generates few large clusters, while a small $\Delta$ generates many small clusters. A precise relationship between $\Delta$ and $K$, the number of blocks generated by the algorithm, is given in Theorem 2.
Theorem 2. Let $\Delta$ be the precision parameter and $K$ be the number of blocks estimated by Algorithm 1. Then

$$\Pr\left[K > \frac{2\sqrt{QL}}{\Delta}\right] \leq 8 n^2\, e^{-\frac{S\Delta^4}{128/T + 16\Delta^2/3}}, \qquad (8)$$

where $L$ is the Lipschitz constant and $Q$ is the number of Lipschitz blocks in $w$.
In practice, we estimate $\Delta$ using a cross-validation scheme to find the optimal 2D histogram bin width [27]. The idea is to test a sequence of potential values of $\Delta$ and seek the one that minimizes the cross-validation risk, defined as

$$\widehat{J}(\Delta) = \frac{2}{h(n-1)} - \frac{n+1}{h(n-1)} \sum_{j=1}^{K} \widehat{p}_j^{\,2}, \qquad (9)$$

where $\widehat{p}_j = |\widehat{B}_j|/n$ and $h = 1/K$. Algorithm 2 details the proposed cross-validation scheme.
Algorithm 2 Cross Validation
Input: Graphs $G_1, \ldots, G_{2T}$.
Output: Blocks $\widehat{B}_1, \ldots, \widehat{B}_K$, and the optimal $\Delta$.
for a sequence of $\Delta$'s do
    Estimate blocks $\widehat{B}_1, \ldots, \widehat{B}_K$ from $G_1, \ldots, G_{2T}$. [Algorithm 1]
    Compute $\widehat{p}_j = |\widehat{B}_j|/n$, for $j = 1, \ldots, K$.
    Compute $\widehat{J}(\Delta) = \frac{2}{h(n-1)} - \frac{n+1}{h(n-1)} \sum_{j=1}^{K} \widehat{p}_j^{\,2}$, with $h = 1/K$.
end for
Pick the $\Delta$ with minimum $\widehat{J}(\Delta)$, and the corresponding $\widehat{B}_1, \ldots, \widehat{B}_K$.
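A sketch of Algorithm 2, assuming a routine `run_sba` that maps a candidate $\Delta$ to a set of blocks (e.g. via Algorithm 1); the function names are ours, and the risk is exactly Eq. (9):

```python
def cv_risk(block_sizes, n):
    """Cross-validation risk J(Delta) of Eq. (9) for a given blocking.

    block_sizes : sizes |B_1|, ..., |B_K| of the estimated blocks
    n           : number of vertices
    """
    K = len(block_sizes)
    h = 1.0 / K
    phat_sq = sum((b / n) ** 2 for b in block_sizes)
    return 2.0 / (h * (n - 1)) - (n + 1) / (h * (n - 1)) * phat_sq

def select_delta(deltas, run_sba):
    """Pick the Delta minimising the CV risk (Algorithm 2 sketch).

    run_sba : callable mapping Delta to a list of blocks (e.g. Algorithm 1)
    """
    best = None
    for Delta in deltas:
        blocks = run_sba(Delta)
        n = sum(len(b) for b in blocks)
        risk = cv_risk([len(b) for b in blocks], n)
        if best is None or risk < best[0]:
            best = (risk, Delta, blocks)
    return best[1], best[2]
```

A coarser blocking with larger, more even blocks yields a lower risk than a degenerate all-singletons blocking, so the selection prefers the former.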
3.3 Consistency of $\widehat{w}$

The goal of our next theorem is to show that $\widehat{w}$ is a consistent estimate of $w$, i.e., $\widehat{w} \to w$ as $n \to \infty$. To begin with, let us first recall two commonly used metrics:
Definition 1. The mean squared error (MSE) and mean absolute error (MAE) are defined as

$$\mathrm{MSE}(\widehat{w}) = \frac{1}{n^2} \sum_{i_v=1}^{n} \sum_{j_v=1}^{n} \left(w(u_{i_v}, u_{j_v}) - \widehat{w}(u_{i_v}, u_{j_v})\right)^2,$$

$$\mathrm{MAE}(\widehat{w}) = \frac{1}{n^2} \sum_{i_v=1}^{n} \sum_{j_v=1}^{n} \left|w(u_{i_v}, u_{j_v}) - \widehat{w}(u_{i_v}, u_{j_v})\right|.$$
Theorem 3. If $S \in \Omega(n)$ and $\Delta \in \Omega\left(\left(\frac{\log n}{n}\right)^{1/4}\right) \cap o(1)$, then

$$\lim_{n \to \infty} \mathbb{E}[\mathrm{MAE}(\widehat{w})] = 0 \quad \text{and} \quad \lim_{n \to \infty} \mathbb{E}[\mathrm{MSE}(\widehat{w})] = 0.$$
Proof. The details of the proof can be found in the supplementary material. Here we only outline the key steps to present the intuition of the theorem. The goal of Theorem 3 is to show convergence of $|\widehat{w}(u_i, u_j) - w(u_i, u_j)|$. The idea is to consider the following two quantities:

$$\overline{w}(u_i, u_j) = \frac{1}{|\widehat{B}_i|\,|\widehat{B}_j|} \sum_{i_x \in \widehat{B}_i} \sum_{j_x \in \widehat{B}_j} w(u_{i_x}, u_{j_x}),$$

$$\widehat{w}(u_i, u_j) = \frac{1}{|\widehat{B}_i|\,|\widehat{B}_j|} \sum_{i_x \in \widehat{B}_i} \sum_{j_y \in \widehat{B}_j} \frac{1}{2T}\left(G_1[i_x, j_y] + G_2[i_x, j_y] + \ldots + G_{2T}[i_x, j_y]\right),$$

so that if we can bound $|\overline{w}(u_i, u_j) - w(u_i, u_j)|$ and $|\overline{w}(u_i, u_j) - \widehat{w}(u_i, u_j)|$, then consequently $|\widehat{w}(u_i, u_j) - w(u_i, u_j)|$ can also be bounded.

The bound for the first term $|\overline{w}(u_i, u_j) - w(u_i, u_j)|$ is shown in Lemma 1: by Algorithm 1, any vertex $i_v \in \widehat{B}_i$ is guaranteed to be within a distance $\Delta$ from the pivot of $\widehat{B}_i$. Since $\overline{w}(u_i, u_j)$ is an average over $\widehat{B}_i$ and $\widehat{B}_j$, by Theorem 1 a probability bound involving $\Delta$ can be obtained.

The bound for the second term $|\overline{w}(u_i, u_j) - \widehat{w}(u_i, u_j)|$ is shown in Lemma 2. Different from Lemma 1, here we need to consider two possible situations: either the intermediate estimate $\overline{w}(u_i, u_j)$ is close to the ground truth $w(u_i, u_j)$, or $\overline{w}(u_i, u_j)$ is far from the ground truth $w(u_i, u_j)$. This accounts for the sum in Lemma 2. Individual bounds are derived based on Lemma 1 and Theorem 1. Combining Lemma 1 and Lemma 2, we can then bound the error and show convergence.
Lemma 1. For any $i_v \in \widehat{B}_i$ and $j_v \in \widehat{B}_j$,

$$\Pr\left[\left|\overline{w}(u_i, u_j) - w(u_{i_v}, u_{j_v})\right| > 8\Delta^{1/2} L^{1/4}\right] \leq 32\,|\widehat{B}_i|\,|\widehat{B}_j|\, e^{-\frac{S\Delta^4}{32/T + 8\Delta^2/3}}. \qquad (10)$$

Lemma 2. For any $i_v \in \widehat{B}_i$ and $j_v \in \widehat{B}_j$,

$$\Pr\left[\left|\widehat{w}_{ij} - \overline{w}_{ij}\right| > 8\Delta^{1/2} L^{1/4}\right] \leq 2\, e^{-256\, T |\widehat{B}_i|\,|\widehat{B}_j| \sqrt{L\Delta}} + 32\,|\widehat{B}_i|^2 |\widehat{B}_j|^2\, e^{-\frac{S\Delta^4}{32/T + 8\Delta^2/3}}. \qquad (11)$$
The condition $S \in \Omega(n)$ is necessary for Theorem 3 to hold, because if $S$ is independent of $n$, the right-hand sides of (10) and (11) cannot approach 0 even as $n \to \infty$. The condition on $\Delta$ is also important, as it forces the numerators and denominators in the exponentials of (10) and (11) to be well behaved.
4 Experiments
In this section we evaluate the proposed SBA algorithm by showing some empirical results. For the
purpose of comparison, we consider (i) the universal singular value thresholding (USVT) [8]; (ii)
the largest-gap algorithm (LG) [7]; (iii) matrix completion from few entries (OptSpace) [17].
4.1 Estimating stochastic blockmodels
Accuracy as a function of growing graph size. Our first experiment is to evaluate the proposed
SBA algorithm for estimating stochastic blockmodels. For this purpose, we generate (arbitrarily) a graphon

$$w = \begin{bmatrix} 0.8 & 0.9 & 0.4 & 0.5 \\ 0.1 & 0.6 & 0.3 & 0.2 \\ 0.3 & 0.2 & 0.8 & 0.3 \\ 0.4 & 0.1 & 0.2 & 0.9 \end{bmatrix}, \qquad (12)$$

which represents a piecewise constant function with $4 \times 4$ equi-spaced blocks.
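To reproduce this kind of setup, one can sample a graph from a piecewise-constant graphon by drawing latent labels $u_i \sim \mathrm{Uniform}[0,1]$ and Bernoulli edges. The sketch below (our own code, not the authors') samples a directed graph with no self-edges, since the block matrix in (12) is not symmetric; whether the original experiment symmetrises is not specified here:

```python
import numpy as np

def sample_graph(W_blocks, n, rng):
    """Sample one graph from a piecewise-constant graphon.

    W_blocks : K-by-K matrix of block edge probabilities, e.g. Eq. (12)
    n        : number of vertices
    rng      : numpy random Generator
    """
    K = W_blocks.shape[0]
    u = rng.uniform(size=n)                          # latent labels u_i ~ Uniform[0,1]
    idx = np.minimum((u * K).astype(int), K - 1)     # which equi-spaced block u_i lands in
    P = W_blocks[np.ix_(idx, idx)]                   # edge probability for each vertex pair
    G = (rng.uniform(size=(n, n)) < P).astype(int)   # Bernoulli edges
    np.fill_diagonal(G, 0)                           # no self-edges
    return G
```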
Since USVT and LG use only one observed graph whereas the proposed SBA requires at least 2 observations, to make the comparison fair we use half of the nodes for SBA by generating two independent $\frac{n}{2} \times \frac{n}{2}$ observed graphs. For USVT and LG, we use one $n \times n$ observed graph.
Figure 2(a) shows the asymptotic behavior of the algorithms as $n$ grows. Figure 2(b) shows the estimation error of the SBA algorithm as $T$ grows for graphs of 200 vertices.
[Figure 2: plots of $\log_{10}(\mathrm{MAE})$ versus (a) growing graph size $n$ (curves: Proposed, Largest Gap, OptSpace, USVT) and (b) growing number of observations $2T$ (Proposed).]
Figure 2: (a) MAE decreases as graph size grows. For fairness in the amount of data that can be used, we use $\frac{n}{2} \times \frac{n}{2} \times 2$ observations for SBA, and $n \times n \times 1$ observation for USVT [8] and LG [7]. (b) MAE of the proposed SBA algorithm decreases as more observations $T$ become available. Both plots are averaged over 100 independent trials.
Accuracy as a function of growing number of blocks. Our second experiment evaluates the performance of the algorithms as $K$, the number of blocks, increases. To this end, we consider a sequence of $K$, and for each $K$ we generate a graphon $w$ of $K \times K$ blocks. Each entry of the block is a random number generated from $\mathrm{Uniform}[0, 1]$. As in the previous experiment, we fix $n = 200$ and $T = 1$. The experiment is repeated over 100 trials so that in every trial a different graphon is generated. The result shown in Figure 3(a) indicates that while estimation error increases as $K$ grows, the proposed SBA algorithm still attains the lowest MAE for all $K$.
[Figure 3: plots of $\log_{10}(\mathrm{MAE})$ versus (a) growing number of blocks $K$ (curves: Proposed, Largest Gap, OptSpace, USVT) and (b) percentage of missing links (curves: Proposed, Largest Gap, USVT).]
Figure 3: (a) As $K$ increases, the MAE of all algorithms increases, but SBA still attains the lowest MAE. Here, we use $\frac{n}{2} \times \frac{n}{2} \times 2$ observations for SBA, and $n \times n \times 1$ observation for USVT [8] and LG [7]. (b) Estimation of the graphon in the presence of missing links: as the amount of missing links increases, estimation error also increases.
4.2 Estimation with missing edges

Our next experiment evaluates the performance of the proposed SBA algorithm when there are missing edges in the observed graph. To model missing edges, we construct an $n \times n$ binary matrix $M$ with $\Pr[M[i, j] = 0] = \xi$, where $0 \leq \xi \leq 1$ defines the percentage of missing edges. Given $\xi$, $2T$ matrices are generated with missing edges, and the observed graphs are defined as $M_1 \circ G_1, \ldots, M_{2T} \circ G_{2T}$, where $\circ$ denotes element-wise multiplication. The goal is to study how well SBA can reconstruct the graphon in the presence of missing links.
The modification of the proposed SBA algorithm for the case of missing links is minimal: when computing (6), instead of averaging over all $i_x \in \widehat{B}_i$ and $j_y \in \widehat{B}_j$, we only average over the $i_x \in \widehat{B}_i$ and $j_y \in \widehat{B}_j$ that are not masked out by the $M$'s. Figure 3(b) shows the result averaged over 100 independent trials. Here, we consider the graphon given in (12), with $n = 200$ and $T = 1$. It is evident that SBA outperforms its counterparts at a lower rate of missing links.
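The masked variant of (6) can be sketched as follows: each block pair is averaged only over the entries that are observed ($M_t = 1$) in each graph. This is our reading of the modification, not the authors' code:

```python
import numpy as np

def masked_block_average(graphs, masks, block_i, block_j):
    """Eq. (6) restricted to unmasked entries.

    graphs, masks : lists (same length) of n-by-n 0/1 arrays;
                    masks[t][x, y] = 0 marks a missing entry in graphs[t]
    """
    num = 0.0
    den = 0.0
    for G, M in zip(graphs, masks):
        sub_m = M[np.ix_(block_i, block_j)]
        num += (G[np.ix_(block_i, block_j)] * sub_m).sum()  # observed edges only
        den += sub_m.sum()                                  # number of observed slots
    return num / den if den > 0 else 0.0
```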
4.3 Estimating continuous graphons
Our final experiment evaluates the proposed SBA algorithm in estimating continuous graphons. Here, we consider two of the graphons reported in [8]:

$$w_1(u, v) = \frac{1}{1 + \exp\{-50(u^2 + v^2)\}}, \quad \text{and} \quad w_2(u, v) = uv,$$

where $u, v \in [0, 1]$. Here, $w_2$ can be considered a special case of the Eigenmodel [13] or the latent feature relational model [21].
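The two test graphons are simple to write down directly; a sketch:

```python
import math

def w1(u, v):
    """Logistic graphon from the experiment (high-rank)."""
    return 1.0 / (1.0 + math.exp(-50.0 * (u ** 2 + v ** 2)))

def w2(u, v):
    """Rank-one graphon (a special case of the Eigenmodel)."""
    return u * v
```

Note that $w_1(0, 0) = 1/2$, and $w_1$ rises quickly toward 1 away from the origin because of the factor of 50 in the exponent.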
The results in Figure 4 show that while both algorithms improve as $n$ grows, the relative performance depends on which of $w_1$ and $w_2$ we are studying. This suggests that in practice the choice of algorithm should depend on the expected structure of the graphon to be estimated: if the graph generated by the graphon demonstrates some low-rank properties, then USVT is likely to be the better option. For more structured or complex graphons, the proposed procedure is recommended.
[Figure 4: plots of $\log_{10}(\mathrm{MAE})$ versus $n$ for (a) graphon $w_1$ and (b) graphon $w_2$, comparing Proposed and USVT.]
Figure 4: Comparison between SBA and USVT in estimating two continuous graphons w1 and w2 .
Evidently, SBA performs better for w1 (high-rank) and worse for w2 (low-rank).
5 Concluding remarks
We presented a new computational tool for estimating graphons. The proposed algorithm approximates the continuous graphon by a stochastic block-model, in which the first step is to cluster
the unknown vertex labels into blocks by using an empirical estimate of the distance between two
graphon slices, and the second step is to build an empirical histogram to estimate the graphon. Complete consistency analysis of the algorithm is derived. The algorithm was evaluated experimentally,
and we found that the algorithm is effective in estimating block structured graphons.
Implementation of the SBA algorithm is available online at https://github.com/airoldilab/SBA.
Acknowledgments. EMA is partially supported by NSF CAREER award IIS-1149662, ARO MURI
award W911NF-11-1-0036, and an Alfred P. Sloan Research Fellowship. SHC is partially supported
by a Croucher Foundation Post-Doctoral Research Fellowship.
References
[1] E.M. Airoldi, D.M. Blei, S.E. Fienberg, and E.P. Xing. Mixed-membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981–2014, 2008.
[2] D.J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11:581–598, 1981.
[3] H. Azari and E. M. Airoldi. Graphlet decomposition of a weighted network. Journal of Machine Learning Research, W&CP, 22:54–63, 2012.
[4] P.J. Bickel and A. Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proc. Natl. Acad. Sci. USA, 106:21068–21073, 2009.
[5] P.J. Bickel, A. Chen, and E. Levina. The method of moments and degree distributions for network models. Annals of Statistics, 39(5):2280–2301, 2011.
[6] C. Borgs, J. Chayes, L. Lovász, V. T. Sós, B. Szegedy, and K. Vesztergombi. Graph limits and parameter testing. In Proc. ACM Symposium on Theory of Computing, pages 261–270, 2006.
[7] A. Channarond, J. Daudin, and S. Robin. Classification and estimation in the Stochastic Blockmodel based on the empirical degrees. Electronic Journal of Statistics, 6:2574–2601, 2012.
[8] S. Chatterjee. Matrix estimation by universal singular value thresholding. ArXiv:1212.1247, 2012.
[9] D.S. Choi and P.J. Wolfe. Co-clustering separately exchangeable network data. ArXiv:1212.4093, 2012.
[10] D.S. Choi, P.J. Wolfe, and E.M. Airoldi. Stochastic blockmodels with a growing number of classes. Biometrika, 99:273–284, 2012.
[11] P. Diaconis and S. Janson. Graph limits and exchangeable random graphs. Rendiconti di Matematica e delle sue Applicazioni, Series VII, pages 33–61, 2008.
[12] A. Goldenberg, A.X. Zheng, S.E. Fienberg, and E.M. Airoldi. A survey of statistical network models. Foundations and Trends in Machine Learning, 2:129–233, 2009.
[13] P.D. Hoff. Modeling homophily and stochastic equivalence in symmetric relational data. In Neural Information Processing Systems (NIPS), volume 20, pages 657–664, 2008.
[14] P.D. Hoff, A.E. Raftery, and M.S. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[15] D.N. Hoover. Relations on probability spaces and arrays of random variables. Preprint, Institute for Advanced Study, Princeton, NJ, 1979.
[16] O. Kallenberg. On the representation theorem for exchangeable arrays. Journal of Multivariate Analysis, 30(1):137–154, 1989.
[17] R.H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Information Theory, 56:2980–2998, Jun. 2010.
[18] N.D. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, 2005.
[19] J.R. Lloyd, P. Orbanz, Z. Ghahramani, and D.M. Roy. Random function priors for exchangeable arrays with applications to graphs and relational data. In Neural Information Processing Systems (NIPS), 2012.
[20] L. Lovász and B. Szegedy. Limits of dense graph sequences. Journal of Combinatorial Theory, Series B, 96:933–957, 2006.
[21] K.T. Miller, T.L. Griffiths, and M.I. Jordan. Nonparametric latent feature models for link prediction. In Neural Information Processing Systems (NIPS), 2009.
[22] K. Nowicki and T.A. Snijders. Estimation and prediction of stochastic block structures. Journal of the American Statistical Association, 96:1077–1087, 2001.
[23] P. Orbanz and D.M. Roy. Bayesian models of graphs, arrays and other exchangeable random structures, 2013. Unpublished manuscript.
[24] P. Latouche and S. Robin. Bayesian model averaging of stochastic block models to estimate the graphon function and motif frequencies in a w-graph model. ArXiv:1310.6150, October 2013. Unpublished manuscript.
[25] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. Annals of Statistics, 39(4):1878–1915, 2011.
[26] M. Tang, D.L. Sussman, and C.E. Priebe. Universally consistent vertex classification for latent positions graphs. Annals of Statistics, 2013. In press.
[27] L. Wasserman. All of Nonparametric Statistics. Springer, 2005.
[28] P.J. Wolfe and S.C. Olhede. Nonparametric graphon estimation. ArXiv:1309.5936, September 2013. Unpublished manuscript.
[29] Z. Xu, F. Yan, and Y. Qi. Infinite Tucker decomposition: nonparametric Bayesian models for multiway data analysis. In Proc. Intl. Conf. Machine Learning (ICML), 2012.
[30] Y. Zhao, E. Levina, and J. Zhu. Community extraction for social networks. Proc. Natl. Acad. Sci. USA, 108:7321–7326, 2011.
Bayesian Hierarchical Community Discovery
Charles Blundell*
DeepMind Technologies
[email protected]

Yee Whye Teh
Department of Statistics, University of Oxford
[email protected]
Abstract
We propose an efficient Bayesian nonparametric model for discovering hierarchical community structure in social networks. Our model is a tree-structured
mixture of potentially exponentially many stochastic blockmodels. We describe a
family of greedy agglomerative model selection algorithms that take just one pass
through the data to learn a fully probabilistic, hierarchical community model. In
the worst case, our algorithms scale quadratically in the number of vertices of the network, but independently of the number of nested communities. In practice, the run times of our algorithms are two orders of magnitude faster than the Infinite Relational Model, achieving comparable or better accuracy.
1 Introduction
People often organise themselves into groups or communities. For example, friends may form
cliques, scientists may have recurring collaborations, and politicians may form factions. Consequently the structure found in social networks is often studied by inferring these groups. Using
community membership one may then make predictions about the presence or absence of unobserved connectivity in the social network. Sometimes these communities possess hierarchical structure. For example, within science, the community of physicists may be split into those working on
various branches of physics, and each branch refined repeatedly until finally reaching the particular
specialisation of an individual physicist.
Much previous work on social networks has focused on discovering flat community structure. The stochastic blockmodel [1] places each individual in a community according to the block structure of the social network's adjacency matrix, whilst the mixed membership stochastic blockmodel [2] extends the stochastic blockmodel to allow individuals to belong to several flat communities simultaneously. Both models require the number of flat communities to be known and are parametric methods.
Greedy hierarchical clustering has previously been applied directly to discovering hierarchical community structure [3]. These methods do not require the community structure to be flat or the number
of communities to be known. Such schemes are often computationally efficient, scaling quadratically in the number of individuals for a dense network, or linearly in the number of edges for a
sparse network [4]. These methods do not yield a full probabilistic account of the data, in terms of
parameters and the discovered structure.
Several authors have also proposed Bayesian approaches to inferring community structure. The Infinite Relational Model (IRM; [5, 6, 7]) infers a flat community structure. The IRM has been extended to infer hierarchies [8] by augmenting it with a tree, but this comes at considerable computational cost. [9] and [10] propose methods limited to hierarchies of depth two, whilst [11] proposes methods limited to hierarchies of known depth. The Mondrian process [12] provides a flexible prior on trees and a likelihood model for relational data. Current Bayesian nonparametric methods do not scale well to larger networks because the inference algorithms used make many small changes to the model.
* Part of the work was done whilst at the Gatsby Unit, University College London.
Such schemes can take a large number of iterations to converge on an adequate solution whilst each
iteration often scales unfavourably in the number of communities or vertices.
We shall describe a greedy Bayesian hierarchical clustering method for discovering community
structure in social networks. Our work builds upon Bayesian approaches to greedy hierarchical
clustering [13, 14] extending these approaches to relational data. We call our method Bayesian
Hierarchical Community Discovery (BHCD). BHCD produces good results two orders of magnitude
faster than a single iteration of the IRM, and its worst case run-time is quadratic in the number of
vertices of the graph and independent of the number of communities.
The remainder of the paper is organised as follows. Section 2 describes the stochastic blockmodel. In
Section 3 we introduce our model as a hierarchical mixture of stochastic blockmodels. In Section 4
we describe an efficient scheme for inferring hierarchical community structure with our model.
Section 5 demonstrates BHCD on several data sets. We conclude with a brief discussion in Section 6.
2 Stochastic Blockmodels
A stochastic blockmodel [1] consists of a partition, $\pi$, of vertices $V$ and, for each pair of clusters $p$ and $q$ in $\pi$, a parameter, $\theta_{pq}$, giving the probability of the presence or absence of an edge between nodes of the clusters. Suppose $V = \{a, b, c, d\}$; then one way to partition the vertices would be to form clusters $ab$, $c$ and $d$, which we shall write as $\pi = ab|c|d$, where $|$ denotes a split between clusters. The probability of an adjacency matrix, $D$, given a stochastic blockmodel, is as follows:

$$P(D \mid \pi, \{\theta_{pq}\}_{p,q \in \pi}) = \prod_{p,q \in \pi} \theta_{pq}^{n^1_{pq}} (1 - \theta_{pq})^{n^0_{pq}} \qquad (1)$$

where $n^1_{pq}$ is the total number of edges in $D$ between the vertices in clusters $p$ and $q$, and $n^0_{pq}$ is the total number of observed absent edges in $D$ between the vertices in clusters $p$ and $q$.
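Equation (1) can be evaluated directly, most conveniently on the log scale for numerical stability; a minimal sketch with our own data layout (not the authors' code):

```python
import math

def blockmodel_loglik(D, labels, theta):
    """Log of Eq. (1): log P(D | pi, {theta_pq}).

    D      : dict {(i, j): 0 or 1} of observed edges / observed absent edges
    labels : dict mapping each vertex to its cluster
    theta  : dict {(p, q): edge probability between clusters p and q}
    """
    ll = 0.0
    for (i, j), x in D.items():
        t = theta[(labels[i], labels[j])]
        # each observed entry contributes log(theta) or log(1 - theta)
        ll += math.log(t) if x == 1 else math.log(1.0 - t)
    return ll
```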
When modelling communities, we expect the edge appearance probabilities within a cluster to be different from those between different clusters. Hence we place a different prior on each of these cases. Similar approaches have been taken to adapt the IRM to community detection [7], where non-conjugate priors were used at increased computational cost. In the interest of computational efficiency, our model will instead use conjugate priors but with differing hyperparameters. $\theta_{pp}$ will have a $\mathrm{Beta}(\alpha, \beta)$ prior and $\theta_{pq}$, $p \neq q$, will have a $\mathrm{Beta}(\delta, \lambda)$ prior. The hyperparameters are picked such that $\alpha > \beta$ and $\delta < \lambda$, which corresponds to a prior belief of a higher density of edges within a community than across communities. Integrating out the edge appearance parameters, we obtain the following likelihood of a particular partition $\pi$:

$$P(D \mid \pi) = \prod_{p \in \pi} \frac{B(\alpha + n^1_{pp}, \beta + n^0_{pp})}{B(\alpha, \beta)} \prod_{\substack{p,q \in \pi \\ p \neq q}} \frac{B(\delta + n^1_{pq}, \lambda + n^0_{pq})}{B(\delta, \lambda)} \qquad (2)$$
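With the Beta priors integrated out, each block contributes a ratio of Beta functions as in (2); a sketch using log-gamma for stability. The hyperparameter names ($\alpha, \beta$ for within-community blocks, $\delta, \lambda$ for across-community blocks) follow our reconstruction of the text's convention of separate within- and between-community priors:

```python
from math import lgamma, exp

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def marginal_loglik(counts_on, counts_off, alpha, beta, delta, lam):
    """Log of Eq. (2) with the edge probabilities theta integrated out.

    counts_on  : list of (n1, n0) pairs, one per on-diagonal block p
    counts_off : list of (n1, n0) pairs, one per off-diagonal block (p, q)
    """
    ll = 0.0
    for n1, n0 in counts_on:
        ll += log_beta(alpha + n1, beta + n0) - log_beta(alpha, beta)
    for n1, n0 in counts_off:
        ll += log_beta(delta + n1, lam + n0) - log_beta(delta, lam)
    return ll
```

As a quick check, a single observed edge in an on-diagonal block under a uniform Beta(1, 1) prior has marginal probability 1/2.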
where $B(\cdot, \cdot)$ is the Beta function. We may generalise this to use any exponential family:

$$p(D \mid \pi) = \prod_{p \in \pi} f(\eta_{pp}) \prod_{\substack{p,q \in \pi \\ p \neq q}} g(\eta_{pq}) \qquad (3)$$

where $f(\cdot)$ is the marginal likelihood of the on-diagonal blocks, and $g(\cdot)$ is the marginal likelihood of the off-diagonal blocks. We use $\eta_{pq}$ to denote the sufficient statistics from a $(p, q)$-block of the adjacency matrix: all of the elements whose row indices are in cluster $p$ and whose column indices are in cluster $q$. For the remainder of the paper, we shall focus on the beta-Bernoulli case given in (2) for concreteness, i.e., $\eta_{pq} = (n^1_{pq}, n^0_{pq})$, with $f(x, y) = \frac{B(\alpha + x, \beta + y)}{B(\alpha, \beta)}$ and $g(x, y) = \frac{B(\delta + x, \lambda + y)}{B(\delta, \lambda)}$.
For clarity of exposition, we shall focus on modelling undirected or symmetric networks with no self-edges, so $\eta_{pq} = \eta_{qp}$ and $\eta_{\{x\}\{x\}} = 0$ for each vertex $x$, but in general this restriction is not necessary.

One approach to inferring $\pi$ is to fix the number of communities $K$ and then use maximum likelihood estimation or Bayesian inference to assign vertices to each of the communities [1, 15]. Another approach is to use variational Bayes, combined with an upper bound on the number of communities, to determine the number of communities and community assignments [16].
Figure 1: Hierarchical decomposition of the adjacency matrix into tree-consistent partitions. Black squares indicate edge presence, white squares indicate edge absence, grey squares are unobserved.
3 Bayesian Hierarchical Communities
In this section we shall develop a Bayesian nonparametric approach to community discovery. Our
model organises the communities into a nested hierarchy T , with all vertices in one community at
the root and singleton vertices at the leaves. Each vertex belongs to all communities along the path
from the root to the leaf containing it. We describe the probabilistic model relating the hierarchy
of communities to the observed network connectivity data here, whilst in the next section we will
develop a greedy model selection procedure for learning the hierarchy T from data.
We begin with the marginal probability of the adjacency matrix D under a stochastic blockmodel:

$$p(D) = \sum_{\pi} p(\pi)\, p(D \mid \pi) \qquad (4)$$

If the Chinese restaurant process (CRP) is used as the prior on partitions p(π), then (4) corresponds to the marginal likelihood of the IRM. Computing (4) typically requires an approximation: the space of partitions π is large and so the number of partitions in the above sum grows at least exponentially in the number of vertices.
We shall take a different approach: we use a tree to define a prior on partitions, where only partitions
that are consistent with the tree are included in the sum. This allows us to evaluate (4) exactly. The
tree will represent the hierarchical community structure discovered in the network. Each internal
node of the tree corresponds to a community and the leaves of the tree are the vertices of the adjacency matrix, D. Figure 1 shows how a tree defines a collection of partitions for inclusion in the
sum in (4). The adjacency matrix on the left is explained by our model, conditioned upon the tree
on the upper left, by its five tree-consistent partitions. Various blocks within the adjacency matrix
are explained either by the on-diagonal model f or the off-diagonal model g, according to each partition. Note that the block structure of the off-diagonal model depends on the structure of the tree T ,
not just on the partition ?. The model always includes the trivial partition of all vertices in a single
community and also the singleton partition of all vertices in separate communities.
More precisely, we shall denote trees as a nested collection of sets of vertices. For example, the tree in Figure 1 is T = {{a, b}, {c, d, e}, f}. The set of partitions consistent with a tree T may be expressed formally as in [14]:

$$\Pi(T) = \{\text{leaves}(T)\} \cup \{\pi_1 | \ldots | \pi_{n_T} : \pi_i \in \Pi(T_i),\, T_i \in \text{ch}(T)\} \qquad (5)$$

where leaves(T) are the leaves of the tree T, ch(T) are its children, and so T_i is the ith subtree of tree T. The marginal likelihood of the tree T can be written as:

$$p(D \mid T) = p(D_{TT} \mid T) = \sum_{\pi \in \Pi(T)} p(\pi \mid T)\, p(D_{TT} \mid \pi, T) \qquad (6)$$

where the notation D_TT is short for D_{leaves(T), leaves(T)}, the block of D whose rows and columns correspond to the leaves of T.
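The recursion in (5) can be realised in a few lines. The sketch below (an illustrative helper, with trees represented as nested tuples) enumerates Π(T) for the tree of Figure 1 and recovers its five tree-consistent partitions:

```python
def leaf_set(tree):
    # flatten a nested-tuple tree into the list of its leaf vertices
    if not isinstance(tree, tuple):
        return [tree]
    return [leaf for child in tree for leaf in leaf_set(child)]

def partitions(tree):
    """Enumerate the tree-consistent partitions Pi(tree) of equation (5)."""
    if not isinstance(tree, tuple):
        return [[frozenset([tree])]]        # a single vertex: one trivial partition
    trivial = [frozenset(leaf_set(tree))]   # the partition {leaves(T)}
    combos = [[]]                           # cartesian product over the children's partitions
    for child in tree:
        combos = [p + q for p in combos for q in partitions(child)]
    return [trivial] + combos

T = (("a", "b"), ("c", "d", "e"), "f")      # the tree of Figure 1
P = partitions(T)
print(len(P))  # → 5
```

The count matches Figure 1: the trivial one-cluster partition plus the 2 × 2 × 1 combinations of the children's partitions.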
Our prior on partitions p(π | T) is motivated by the following generative process: Begin at the root of the tree, S = T. With probability ρ_S, stop and generate D_SS according to the on-diagonal model f. Otherwise, with probability 1 − ρ_S, generate all inter-cluster edges between the children of the current node according to g, and recurse on each child of the current tree S. The resulting prior on tree-consistent partitions p(π | T) factorises as:

$$p(\pi \mid T) = \prod_{S \in \text{ancestor}_T(\pi)} (1 - \rho_S) \prod_{S \in \text{subtree}_T(\pi)} \rho_S \qquad (7)$$

where subtree_T(π) are the subtrees in T corresponding to the clusters in partition π and ancestor_T(π) are the ancestors of trees in subtree_T(π). The prior probability of partitions not consistent with T is zero. Following [14], we define ρ_S = 1 − (1 − γ)^{|ch(S)|}, where γ is a parameter of the model. This choice of ρ_S gives higher likelihood to non-binary trees over cascading binary trees when the data has no hierarchical structure [14]. Similarly, the likelihood of each partition p(D | π, T) factorises as:

$$p(D_{TT} \mid \pi, T) = \prod_{S \in \text{ancestor}_T(\pi)} g(\eta^{\text{ch}}_{SS}) \prod_{S \in \text{subtree}_T(\pi)} f(\eta_{SS}) \qquad (8)$$

where η_SS are the sufficient statistics of the adjacency matrix D among the leaves of tree S, and η^ch_SS are the sufficient statistics of the edges between different children of S:

$$\eta^{\text{ch}}_{SS} = \eta_{SS} - \sum_{C \in \text{ch}(S)} \eta_{CC} \qquad (9)$$
The set of tree-consistent partitions given in (5) has at most O(2^n) partitions, for n vertices. However, due to the structure of the prior on partitions (7) and the block model (8), the marginal likelihood (6) may be calculated by dynamic programming, in O(n + m) time where n is the number of vertices and m is the number of edges. Combining (7) and (8) and expanding (6) by breadth-first traversal of the tree yields the following recursion for the marginal likelihood of the generative process given at the beginning of the section:

$$p(D_{TT} \mid T) = \rho_T\, f(\eta_{TT}) + (1 - \rho_T)\, g(\eta^{\text{ch}}_{TT}) \prod_{C \in \text{ch}(T)} p(D_{CC} \mid C) \qquad (10)$$
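As a concrete illustration of the recursion (10), the following sketch re-implements it directly (illustrative code, not the authors' implementation; trees are nested tuples and the network is a set of unordered vertex pairs). The final check confirms that (10) defines a proper distribution: the marginal likelihoods of all adjacency configurations of a small tree sum to one.

```python
from math import exp, lgamma

GAMMA, ON, OFF = 0.4, (1.0, 1.0), (0.2, 0.2)  # illustrative hyperparameters

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def block_ll(n1, n0, a, b):
    # log marginal likelihood of a beta-Bernoulli block: n1 present / n0 absent pairs
    return log_beta(a + n1, b + n0) - log_beta(a, b)

def leaf_set(t):
    return {t} if not isinstance(t, tuple) else set().union(*(leaf_set(c) for c in t))

def counts(xs, ys, edges):
    # (present, absent) counts over unordered pairs between xs and ys
    pairs = {frozenset((x, y)) for x in xs for y in ys if x != y}
    n1 = sum(1 for p in pairs if p in edges)
    return n1, len(pairs) - n1

def marginal(t, edges):
    """p(D_TT | T) via the recursion (10)."""
    if not isinstance(t, tuple):
        return 1.0                                   # a single vertex explains no pairs
    rho = 1.0 - (1.0 - GAMMA) ** len(t)              # rho_S = 1 - (1 - gamma)^{|ch(S)|}
    f = exp(block_ll(*counts(leaf_set(t), leaf_set(t), edges), *ON))
    kids = [leaf_set(c) for c in t]                  # eta^ch: pairs between different children
    n1 = n0 = 0
    for i in range(len(kids)):
        for j in range(i + 1, len(kids)):
            a, b = counts(kids[i], kids[j], edges)
            n1, n0 = n1 + a, n0 + b
    g = exp(block_ll(n1, n0, *OFF))
    prod = 1.0
    for c in t:
        prod *= marginal(c, edges)
    return rho * f + (1.0 - rho) * g * prod

# Summing over all 2^3 adjacency configurations of a 3-vertex tree gives 1:
pairs = [frozenset(("a", "b")), frozenset(("a", "c")), frozenset(("b", "c"))]
total = sum(marginal((("a", "b"), "c"), {p for k, p in enumerate(pairs) if i >> k & 1})
            for i in range(8))
print(round(total, 6))  # → 1.0
```

Each mixture component of (10) is a product of exchangeable blocks over a disjoint cover of the vertex pairs, which is why the total mass is exactly one.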
4 Agglomerative Model Selection
In this section we describe how to learn the hierarchy of communities T . The problem is treated as
one of greedy model selection: each tree T is a different model, and we wish to find the model that
best explains the data. The tree is built in a bottom-up greedy agglomerative fashion, starting from
a forest consisting of n trivial trees, each corresponding to exactly one vertex. Each iteration then
merges two of the trees in the forest. At each iteration, each vertex in the network is a leaf of exactly
one tree in the forest. The algorithm finishes when just one tree remains. We define the likelihood
of the forest F as the probability of data described by each tree in the forest times that for the data
corresponding to edges between different trees:
$$p(D \mid F) = g(\eta^{\text{ch}}_{FF}) \prod_{T \in F} p(D_{TT} \mid T) \qquad (11)$$

where η^ch_FF are the sufficient statistics of the edges between different trees in the forest.
The initial forest, F^(0), consists of a singleton tree for each vertex of the network. At each iteration a pair of trees in the forest F is chosen to be merged, resulting in forest F′. Which pair of trees to merge, and how to merge these trees, is determined by considering which pair and type of merger
yields the largest Bayes factor improvement over the current model. If the trees I and J are merged
to form the tree M, then the Bayes factor score is:

$$\text{SCORE}(M; I, J) = \frac{p(D_{MM} \mid F')}{p(D_{MM} \mid F)} = \frac{p(D_{MM} \mid M)}{p(D_{II} \mid I)\, p(D_{JJ} \mid J)\, g(\eta_{IJ})} \qquad (12)$$

where p(D_MM | M), p(D_II | I) and p(D_JJ | J) are given by (10) and η_IJ are the sufficient statistics of the edges connecting leaves(I) and leaves(J). Note that the Bayes factor score is based on data local to the merge, i.e., it considers the probability of the connectivity data only among the leaves of the newly merged tree. This permits efficient local computations and makes the assumption that local community structure should depend only on the local connectivity structure.
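In an implementation the score (12) is naturally computed in log space from the cached tree likelihoods and pair statistics. A hypothetical sketch (the function names are illustrative):

```python
from math import lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_g(n1, n0, delta=0.2, lam=0.2):
    # log marginal likelihood of the off-diagonal block with statistics (n1, n0)
    return log_beta(delta + n1, lam + n0) - log_beta(delta, lam)

def log_score(logp_M, logp_I, logp_J, eta_IJ):
    # log SCORE(M; I, J) = log p(D_MM|M) - log p(D_II|I) - log p(D_JJ|J) - log g(eta_IJ)
    return logp_M - logp_I - logp_J - log_g(*eta_IJ)
```

A positive log score means the merged tree M explains the local data better than leaving I and J as separate communities in the forest.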
Algorithm 1: Bayesian hierarchical community discovery.
 1: Initialise F, {p_I}_{I∈F}, {η^ch_II}_{I∈F}, {η_IJ}_{I,J∈F}.
 2: for each unique pair I, J ∈ F do
 3:   Let M := MERGE(I; J), compute p_M and SCORE(M; I, J), and add M to the heap.
 4: end for
 5: while heap is not empty do
 6:   Pop I = MERGE(X; Y) off the top of the heap.
 7:   if X ∈ F and Y ∈ F then
 8:     F ← (F \ {X, Y}) ∪ {I}.
 9:     for each tree J ∈ F \ {I} do
10:       Compute η_IJ, η_MM, and η^ch_MM using (13).
11:       Let M := MERGE(I; J), compute p_M and SCORE(M; I, J), and add M to the heap.
12:     end for
13:   end if
14: end while
15: return the only tree in F

Figure 2: Different merge operations: join, giving M = {I, J}, and absorb, giving M = {J} ∪ ch(I) or M = {I} ∪ ch(J), illustrated on the subtrees Ta, Tb, Tc of I and Td, Te of J.

We consider three possible mergers of two trees I and J into M. See Figure 2, where for concreteness we take I = {Ta, Tb, Tc} and J = {Td, Te}, where Ta, Tb, Tc, Td, Te are subtrees. M may be
formed by joining I and J together using a new node, giving M = {I, J}. Alternatively M may be
formed by absorbing J as a child of I, yielding M = {J} ? ch(I), or vice versa, M = {I} ? ch(J).
The algorithm for finding T is described in Algorithm 1. The algorithm maintains a forest F of trees, the likelihood p_I = p(D_II | I) of each tree I ∈ F according to (10), and the sufficient statistics {η^ch_II}_{I∈F}, {η_IJ}_{I,J∈F} needed for efficient computation. It also maintains a heap of potential merges ordered by the SCORE (12), used for determining the ordering of merges. At each iteration, the best potential merge, say of trees X and Y resulting in tree I, is picked off the heap. If either X or Y is not in F, this means that the tree has been used in a previous merge, so the potential merge is discarded and the next potential merge is considered. After a successful merge, the sufficient statistics associated with the new tree I are computed using the previously computed ones:

$$\begin{aligned} \eta_{IJ} &= \eta_{XJ} + \eta_{YJ} \quad \text{for } J \in F,\, J \neq I,\\ \eta_{MM} &= \eta_{II} + \eta_{JJ} + \eta_{IJ},\\ \eta^{\text{ch}}_{MM} &= \begin{cases} \eta_{IJ} & \text{if } M \text{ is formed by joining } I \text{ and } J,\\ \eta^{\text{ch}}_{II} + \eta_{IJ} & \text{if } M \text{ is formed by } J \text{ absorbed into } I,\\ \eta^{\text{ch}}_{JJ} + \eta_{IJ} & \text{if } M \text{ is formed by } I \text{ absorbed into } J. \end{cases} \end{aligned} \qquad (13)$$

These sufficient statistics are computed based on previously cached values, allowing each inner loop of the algorithm to take O(1) time. Finally, potential mergers of I with other trees J in the forest are considered and added onto the heap. In the algorithm, MERGE(I; J) denotes the best of the three possible merges of I and J.
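The updates in (13) are plain component-wise additions on the (n¹, n⁰) sufficient-statistic pairs, which is what makes the O(1) inner loop possible. A hypothetical sketch of the bookkeeping:

```python
def add(s, t):
    # component-wise addition of sufficient-statistic pairs (n1, n0)
    return (s[0] + t[0], s[1] + t[1])

def join_stats(eta_II, eta_JJ, eta_IJ):
    # statistics for M = {I, J} formed by joining I and J, following (13)
    eta_MM = add(add(eta_II, eta_JJ), eta_IJ)
    eta_ch_MM = eta_IJ  # edges between the children of M are exactly those between I and J
    return eta_MM, eta_ch_MM

def absorb_stats(eta_II, eta_JJ, eta_IJ, eta_ch_II):
    # statistics for M = {J} ∪ ch(I), i.e. J absorbed into I
    eta_MM = add(add(eta_II, eta_JJ), eta_IJ)
    eta_ch_MM = add(eta_ch_II, eta_IJ)
    return eta_MM, eta_ch_MM

# e.g. I has 3 present / 1 absent internal pairs, J has 1 / 0, with 2 / 4 pairs between them:
print(join_stats((3, 1), (1, 0), (2, 4)))  # → ((6, 5), (2, 4))
```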
Algorithm 1 is structurally the same as that in [14], and so has time complexity in O(n² log n).
The difference is that additional care is needed to cache the sufficient statistics allowing for O(1)
computation per inner loop of the algorithm. We shall refer to Algorithm 1 as BHCD.
4.1 Variations
BHCD will consider merging trees that have no edges between them if the merge score (12) is
high enough. This does not seem to be a reasonable behaviour as communities that are completely
disconnected should not be merged. We can alter BHCD by simply prohibiting such merges between
trees that have no edges between them. The resulting algorithm we call BHCD sparse, as it only
needs to perform computations on the parts of the network with edges present. Empirically, we have
found that it produces better results than BHCD and runs faster for sparse networks, although in the worst case it has the same time complexity O(n² log n) as BHCD.
As BHCD runs, several merges can have the same score. In particular, at the first iteration all pairs of
vertices connected by an edge have the same score. In such situations, we break the ties at random.
Different tie breaks can yield different results and so different runs on the same data may yield
different trees. Where we want a single tree, we use R (R = 50 in experiments) restarts and pick
the tree with the highest likelihood according to (10). Where we wish to make predictions, we will
construct predictive probabilities (see next section) by averaging all R trees.
4.2 Predictions
For link prediction, we wish to obtain the predictive distribution of a previously unobserved element of the adjacency matrix. This is easily achieved by traversing one path of the tree from the root towards the leaves, hence the computational complexity is linear in the depth of the tree. In particular, suppose we wish to predict the edge between x and y, D_xy, given the observed edges D; then the predictive distribution can be computed recursively, starting with S = T:

$$p(D_{xy} \mid D_{SS}, S) = r_S\, f(D_{xy} \mid \eta_{SS}) + (1 - r_S) \begin{cases} p(D_{xy} \mid D_{CC}, C) & \text{if } \exists C \in \text{ch}(S) : x, y \in \text{leaves}(C),\\ g(D_{xy} \mid \eta^{\text{ch}}_{SS}) & \text{otherwise,} \end{cases} \qquad r_S = \frac{\rho_S\, f(\eta_{SS})}{p(D_{SS} \mid S)} \qquad (14)$$

where r_S is the probability that the cluster consisting of leaves(S) is present if the cluster corresponding to its parent is not present, given the data D and the tree T. The predictive distribution is a mixture of a number of on-diagonal posterior f terms (weighted by the responsibility r_T), and finally an off-diagonal posterior g term. Hence the computational complexity is at worst O(n).
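For the beta-Bernoulli case the posterior predictive of a single edge under f or g is just the posterior mean of the block's edge probability, so (14) becomes a short recursion. The sketch below builds on the same illustrative representation as the likelihood recursion (nested tuples and a set of unordered pairs; not the authors' code), and for simplicity treats every non-edge as an observed absence rather than holding out the queried pair:

```python
from math import exp, lgamma

GAMMA, ON, OFF = 0.4, (1.0, 1.0), (0.2, 0.2)  # illustrative hyperparameters

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def block_ll(n1, n0, a, b):
    # log marginal likelihood of a beta-Bernoulli block
    return log_beta(a + n1, b + n0) - log_beta(a, b)

def leaf_set(t):
    return {t} if not isinstance(t, tuple) else set().union(*(leaf_set(c) for c in t))

def counts(xs, ys, edges):
    # (present, absent) counts over unordered pairs between xs and ys
    pairs = {frozenset((x, y)) for x in xs for y in ys if x != y}
    n1 = sum(1 for p in pairs if p in edges)
    return n1, len(pairs) - n1

def cross_counts(t, edges):
    # sufficient statistics eta^ch: edges between different children of t
    kids = [leaf_set(c) for c in t]
    n1 = n0 = 0
    for i in range(len(kids)):
        for j in range(i + 1, len(kids)):
            a, b = counts(kids[i], kids[j], edges)
            n1, n0 = n1 + a, n0 + b
    return n1, n0

def marginal(t, edges):
    # p(D_TT | T), the recursion (10)
    if not isinstance(t, tuple):
        return 1.0
    rho = 1.0 - (1.0 - GAMMA) ** len(t)
    f = exp(block_ll(*counts(leaf_set(t), leaf_set(t), edges), *ON))
    g = exp(block_ll(*cross_counts(t, edges), *OFF))
    prod = 1.0
    for c in t:
        prod *= marginal(c, edges)
    return rho * f + (1.0 - rho) * g * prod

def predict(t, edges, x, y):
    # p(D_xy = 1 | D, t) following (14); t must be an internal node containing x and y
    n1, n0 = counts(leaf_set(t), leaf_set(t), edges)
    rho = 1.0 - (1.0 - GAMMA) ** len(t)
    r = rho * exp(block_ll(n1, n0, *ON)) / marginal(t, edges)
    on_mean = (ON[0] + n1) / (ON[0] + ON[1] + n1 + n0)  # posterior mean under f
    inner = [c for c in t if isinstance(c, tuple) and {x, y} <= leaf_set(c)]
    if inner:
        rest = predict(inner[0], edges, x, y)           # recurse into the child holding both
    else:
        m1, m0 = cross_counts(t, edges)
        rest = (OFF[0] + m1) / (OFF[0] + OFF[1] + m1 + m0)  # posterior mean under g
    return r * on_mean + (1.0 - r) * rest

tree = (("a", "b"), ("c", "d"))
obs = {frozenset(("a", "b")), frozenset(("c", "d"))}
within = predict(tree, obs, "a", "b")
across = predict(tree, obs, "a", "c")
```

As expected, the predicted probability of a within-community edge (a, b) exceeds that of a cross-community edge (a, c).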
5 Experiments
We now demonstrate BHCD on three data sets. Firstly we examine qualitative performance on Sampson's monastery network. Then we demonstrate the speed and accuracy of our method on a subset of the NIPS 1–17 co-authorship network, compared to the IRM, one of the fastest Bayesian nonparametric models for these data. Finally we show hierarchical structure found by examining the full NIPS 1–17 co-authorship network. In our experiments we set the model hyperparameters α = β = 1.0, δ = λ = 0.2, and γ = 0.4, which we found to work well. In the first two experiments we shall compare four variations of BHCD: BHCD, BHCD sparse, BHCD restricted to binary trees, and BHCD sparse restricted to binary trees. Binary-only variations of BHCD only consider joins, not absorptions, and so may run faster. They also tend to produce better predictive results as they average over a larger number of partitions. But, as we shall see below, the hierarchies found can be more difficult to interpret than the non-binary hierarchies.
Sampson's Monastery Network. Figure 3 shows the result of running six variants of BHCD on time four of Sampson's monastery network [17]. Sampson observed the monastery at five times; time four is the most interesting time, as it was before four of the monks were expelled. We treated positive affiliations as edges, negative affiliations as observed absent edges, and unknown affiliations as missing data. [17], using a variety of methods, found four flat groups, shown at the top of Figure 3: Young Turks (Albert, Boniface, Gregory, Hugh, John Bosco, Mark, Winfrid), Loyal Opposition (Ambrose, Berthold, Bonaventure, Louis, Peter), Outcasts (Basil, Elias, Simplicius), and the Interstitial group (Amand, Ramuald, Victor).

As can be seen in Figure 3, most BHCD variants find clear block-diagonal structure in the adjacency matrix. The binary versions find similar structure to the non-binary versions, up to permutations of the children of the non-binary trees. BHCD global is led astray by out-of-date scores on its heap and so finds less coherent structure. The log likelihoods of the trees in Figure 3 are −6.62 (BHCD) and −22.80 (BHCD sparse), whilst the log likelihoods of the binary trees in Figure 3 are −8.32 (BHCD binary) and −24.68 (BHCD sparse binary). BHCD finds the most likely tree, and rose trees typically better explain the data than binary trees.

BHCD finds the Young Turks and Loyal Opposition groups and chooses to merge some members of the Interstitial group into the Loyal Opposition and Amand into the Outcasts. Mark, however, is placed in a separate community: although Mark has a positive affiliation with Gregory, Mark also has a negative affiliation with John Bosco, and so BHCD elects to create a new community to account for this discrepancy.
NIPS-234. Next we applied BHCD to a subset of the NIPS co-authorship dataset [19]. We compared its predictive performance to both the IRM using MCMC and also inference in the IRM using greedy
Table 1: Time complexities of different methods. n = # vertices, m = # edges, K = # communities, F = # latent factors, I = # iterations per restart, R = # restarts.

Method         Time complexity
IRM (naïve)    O(n²K²IR)
IRM (sparse)   O(mK²IR)
LFRM [18]      O(n²F²IR)
IMMM [9]       O(n²K²IR)
ILA [10]       O(n²(F + K²)IR)
[8]            O(n²K²IR)
BHCD           O(n² log(n) R)
Figure 3: Sampson's monastery network. White indicates a positive affiliation, black negative, whilst grey indicates unknown. From top to bottom: Sampson's clustering, BHCD, BHCD-sparse, BHCD with binary trees, BHCD-sparse-binary.

Figure 4: NIPS-234 comparison using log predictive, accuracy and AUC, averaged across 10 cross-validation folds.
Figure 5: Clusters of authors found in NIPS 1–17. Top 10 most collaborating authors shown for all clusters with more than 15 vertices.
search, using the publicly available C implementation [20]. Our implementation of BHCD is also in C. As can be seen from Table 1, BHCD has significantly lower computational complexity than other Bayesian nonparametric methods, even those inferring flat hierarchies. This is because it is a simpler model and uses a simpler inference method; thus we do not expect it to yield better predictive results, but instead to get good results quickly. Unlike the other listed methods, BHCD's worst-case complexity does not depend upon the number of communities, and BHCD always terminates after a fixed number of steps so has no I factor. This latter factor, I, corresponds to the number of MCMC steps or the number of greedy search steps, which may be large and may need to scale as the number of vertices increases.
Following [18, 10] we restricted the network to the 234 most connected individuals. Figure 4 shows the average log predictive probability of held-out data, accuracy, and area under the receiver operating curve (AUC) over time for both BHCD and the IRM. For the IRM, each point represents a single Gibbs step (for MCMC) or a search step (for greedy search). For BHCD, each point represents a complete run of the inference algorithm. BHCD is able to make reasonable predictions before the IRM has completed a single Gibbs scan. We used the same 10 cross-validation folds as used in [10], and so our results are quantitatively comparable to their results for the Latent Factor Relational Model (LFRM [18]) and their model, the Infinite Latent Attributes model (ILA). BHCD performs similarly to LFRM, worse than ILA, and better than the IRM. After about 10 seconds, the sparse variants of BHCD make as good predictions on NIPS-234 as the IRM after about 1000 seconds. Notably the sparse variants are faster than the non-sparse variants of BHCD, as the NIPS co-authorship network is sparse.
Full NIPS. The full NIPS dataset has 2864 vertices and 9466 edges. Figure 5 shows part of the hierarchy discovered by BHCD. The full inferred hierarchy is large, having 646 internal nodes. We cut the tree and retained the top portion of the hierarchy, shown above the clusters. We merged all the leaves of a subtree T into a flat cluster when $r_T \prod_{A \in \text{ancestor}_T} (1 - r_A) > 0.5$, where r_T is given in (14). This quantity corresponds to the probability of picking that particular subtree in the predictive distribution. Amongst these clusters we included only those with at least 15 members in Figure 5. We include hierarchies with a lower cut-off in the supplementary material.
6 Discussion and Future Work
We proposed an efficient Bayesian procedure for discovering hierarchical communities in social networks. Experimentally our procedure discovers reasonable hierarchies and is able to make predictions about two orders of magnitude faster than one of the fastest existing Bayesian nonparametric schemes, whilst attaining comparable performance. Our inference procedure scales as O(n² log n) through a novel caching scheme, where n is the number of vertices, making the procedure suitable for large dense networks. However, our likelihood can be computed in O(n + m) time, where m is the number of edges. This disparity between inference and likelihood suggests that in future it may be possible to improve the scalability of the model on sparse networks, where m ≪ n². Another way to scale up the model would be to investigate parameterising the network using the sufficient statistics of triangles, instead of edges, as in [21]. Others [7] have found that non-conjugate likelihoods can yield improved predictions; thus adapting our scheme to work with non-conjugate likelihoods and doing hyperparameter inference could also be fruitful next steps.
Acknowledgements We thank the Gatsby Charitable Foundation for generous funding.
References
[1] P. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: Some first steps. Social Networks, 5:109–137, 1983.
[2] Edoardo M. Airoldi, David M. Blei, Stephen E. Fienberg, and Eric P. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981–2014, 2008.
[3] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. PNAS, 99:7821–7826, 2002.
[4] A. Clauset, M. E. J. Newman, and C. Moore. Finding community structure in very large networks. Physical Review E, 70, 2004.
[5] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. AAAI, 2006.
[6] Zhao Xu, Volker Tresp, Kai Yu, and Hans-Peter Kriegel. Infinite hidden relational models. Uncertainty in Artificial Intelligence (UAI), 2006.
[7] Morten Mørup and Mikkel N. Schmidt. Bayesian community detection. Neural Computation, 24:2434–2456, 2012.
[8] T. Herlau, M. Mørup, M. N. Schmidt, and L. K. Hansen. Detecting hierarchical structure in networks. In Cognitive Information Processing, 2012.
[9] Phaedon-Stelios Koutsourelakis and Tina Eliassi-Rad. Finding mixed-memberships in social networks. 2008 AAAI Spring Symposium on Social Information Processing (AAAI-SS'08), 2008.
[10] Konstantina Palla, David A. Knowles, and Zoubin Ghahramani. An infinite latent attribute model for network data. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, July 2012.
[11] Qirong Ho, Ankur P. Parikh, Le Song, and Eric P. Xing. Multiscale community blockmodel for network exploration. Proceedings of the Fourteenth International Workshop on Artificial Intelligence and Statistics (AISTATS), 2011.
[12] D. M. Roy and Y. W. Teh. The Mondrian process. In Advances in Neural Information Processing Systems, volume 21, 2009.
[13] K. A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In Proceedings of the International Conference on Machine Learning, volume 22, 2005.
[14] C. Blundell, Y. Teh, and K. A. Heller. Bayesian rose trees. UAI, 2010.
[15] T. Snijders and K. Nowicki. Estimation and prediction for stochastic blockmodels for graphs with latent block structure. Journal of Classification, 14:75–100, 1997.
[16] Jake M. Hofman and Chris H. Wiggins. Bayesian approach to network modularity. Physical Review Letters, 100(25):258701, 2008.
[17] S. F. Sampson. A novitiate in a period of change: An experimental and case study of social relationships. 1968.
[18] Kurt T. Miller, Thomas L. Griffiths, and Michael I. Jordan. Nonparametric latent feature models for link prediction. Neural Information Processing Systems (NIPS), 2009.
[19] A. Globerson, G. Chechik, F. Pereira, and N. Tishby. Euclidean embedding of co-occurrence data. Journal of Machine Learning Research, 8:2265–2295, 2007.
[20] Charles Kemp. Infinite relational model implementation. http://www.psy.cmu.edu/~ckemp/code/irm.html. Accessed: 2013-04-08.
[21] Q. Ho, J. Yin, and E. P. Xing. On triangular versus edge representations – towards scalable modeling of networks. Neural Information Processing Systems (NIPS), 2012.
4,474 | 5,049 | Nonparametric Multi-group Membership Model
for Dynamic Networks
Jure Leskovec
Stanford University
Stanford, CA 94305
[email protected]
Myunghwan Kim
Stanford University
Stanford, CA 94305
[email protected]
Abstract

Relational data, such as graphs, networks, and matrices, is often dynamic: the relational structure evolves over time. A fundamental problem in the analysis of time-varying network data is to
extract a summary of the common structure and the dynamics of the underlying relations between
the entities. Here we build on the intuition that changes in the network structure are driven by dynamics at the level of groups of nodes. We propose a nonparametric multi-group membership model
for dynamic networks. Our model contains three main components: We model the birth and death of
individual groups with respect to the dynamics of the network structure via a distance dependent Indian Buffet Process. We capture the evolution of individual node group memberships via a Factorial
Hidden Markov model. And we explain the dynamics of the network structure by explicitly modeling the connectivity structure of groups. We demonstrate our model's capability of identifying the
dynamics of latent groups in a number of different types of network data. Experimental results show
that our model provides improved predictive performance over existing dynamic network models on
future network forecasting and missing link prediction.
1 Introduction
Statistical analysis of social networks and other relational data is becoming an increasingly important problem as the scope and availability of network data increases. Network data?such as the
friendships in a social network?is often dynamic in a sense that relations between entities rise and
decay over time. A fundamental problem in the analysis of such dynamic network data is to extract
a summary of the common structure and the dynamics of the underlying relations between entities.
Accurate models of structure and dynamics of network data have many applications. They allow us
to predict missing relationships [20, 21, 23], recommend potential new relations [2], identify clusters
and groups of nodes [1, 29], forecast future links [4, 9, 11, 24], and even predict group growth and
longevity [15].
Here we present a new approach to modeling network dynamics by considering time-evolving interactions between groups of nodes as well as the arrival and departure dynamics of individual nodes
to these groups. We develop a dynamic network model, Dynamic Multi-group Membership Graph
Model, that identifies the birth and death of individual groups as well as the dynamics of node joining and leaving groups in order to explain changes in the underlying network linking structure. Our
nonparametric model considers an infinite number of latent groups, where each node can belong to
multiple groups simultaneously. We capture the evolution of individual node group memberships
via a Factorial Hidden Markov model. However, in contrast to recent works on dynamic network
modeling [4, 5, 11, 12, 14], we explicitly model the birth and death dynamics of individual groups
by using a distance-dependent Indian Buffet Process [7]. Under our model only active/alive groups
influence relationships in a network at a given time. Further innovation of our approach is that we
not only model relations between the members of the same group but also account for links between
members and non-members. By explicitly modeling group lifespan and group connectivity structure
we achieve greater modeling flexibility, which leads to improved performance on link prediction and
network forecasting tasks as well as to increased interpretability of obtained results.
The rest of the paper is organized as follows: Section 2 provides the background and Section 3
presents our generative model and motivates its parametrization. We discuss related work in Section 4 and present model inference procedure in Section 5. Last, in Section 6 we provide experimental results as well as analysis of the social network from the movie, The Lord of the Rings.
2 Models of Dynamic Networks
First, we describe general components of modern dynamic network models [4, 5, 11, 14]. In the
next section we will then describe our own model and point out the differences to the previous work.
Dynamic networks are generally conceptualized as discrete time series of graphs on a fixed set of
nodes N. Dynamic network Y is represented as a time series of adjacency matrices Y^(t) for each
time t = 1, 2, ..., T. In this work, we limit our focus to unweighted directed as well as undirected
networks. So, each Y^(t) is an N × N binary matrix where Y_ij^(t) = 1 if a link from node i to j exists
at time t and Y_ij^(t) = 0 otherwise.
Each node i of the network is associated with a number of latent binary features that govern the
interaction dynamics with other nodes of the network. We denote the binary value of feature k of
node i at time t by z_ik^(t) ∈ {0, 1}. Such latent features can be viewed as assigning nodes to multiple
overlapping, latent clusters or groups [1, 21]. In our work, we interpret these latent features as
memberships to latent groups such as social communities of people with the same interests or hobbies. We allow each node to belong to multiple groups simultaneously. We model each node-group
membership using a separate Bernoulli random variable [17, 22, 29]. This is in contrast to mixed-membership
models where the distribution over an individual node's group memberships is modeled
using a multinomial distribution [1, 5, 12]. The advantage of our multiple-membership approach
is as follows. Mixed-membership models (i.e., multinomial distribution over group memberships)
essentially assume that by increasing the amount of a node's membership to some group k, the same
node's membership to some other group k' has to decrease (due to the condition that the probabilities
normalize to 1). On the other hand, multiple-membership models do not suffer from this assumption
and allow nodes to truly belong to multiple groups. Furthermore, we consider a nonparametric
model of groups which does not restrict the number of latent groups ahead of time. Hence, our
model adaptively learns the appropriate number of latent groups for a given network at a given time.
In dynamic network models, one also specifies a process by which nodes dynamically join and leave
groups. We assume that each node i can join or leave a given group k according to a Markov model.
However, since each node can join multiple groups independently, we naturally consider factorial
hidden Markov models (FHMM) [8], where latent group membership of each node independently
evolves over time. To be concrete, each membership z_ik^(t) evolves through a 2-by-2 Markov transition
probability matrix Q_k, where each entry Q_k[r, s] corresponds to P(z_ik^(t) = s | z_ik^(t-1) = r), where
r, s ∈ {0 = non-member, 1 = member}.
Now, given node group memberships z_ik^(t) at time t, one also needs to specify the process of link
generation. Links of the network realize according to a link function f(·). A link from node i to
node j at time t occurs with probability determined by the link function f(z_i·^(t), z_j·^(t)). In our model,
we develop a link function that not only accounts for links between group members but also models
links between the members and non-members of a given group.
3 Dynamic Multi-group Membership Graph Model
Next we shall describe our Dynamic Multi-group Membership Graph Model (DMMG) and point out
the differences with the previous work. In our model, we pay close attention to the three processes
governing network dynamics: (1) birth and death dynamics of individual groups, (2) evolution of
memberships of nodes to groups, and (3) the structure of network interactions between group members as well as non-members. We now proceed by describing each of them in turn.
Model of active groups. Links of the network are influenced not only by nodes changing memberships to groups but also by the birth and death of groups themselves. New groups can be born and
old ones can die. However, without explicitly modeling group birth and death there exists ambiguity
between group membership change and the birth/death of groups. For example, consider two disjoint groups k and l such that their lifetimes and members do not overlap. In other words, group l is
born after group k dies out. However, if group birth and death dynamics is not explicitly modeled,
then the model could interpret that the two groups correspond to a single latent group where all the
members of k leave the group before the members of l join the group. To resolve this ambiguity we
devise an explicit model of birth/death dynamics of groups by introducing a notion of active groups.
Under our model, a group can be in one of two states: it can be either active (alive) or inactive (not
yet born or dead). However, once a group becomes inactive, it can never be active again. That is,
once a group dies, it can never be alive again. To ensure coherence of a group's state over time, we
build on the idea of distance-dependent Indian Buffet Processes (dd-IBP) [7]. The IBP is named
after a metaphorical process that gives rise to a probability distribution, where customers enter an
Indian Buffet restaurant and sample some subset of an infinitely long sequence of dishes. In the
context of networks, nodes usually correspond to ?customers? and latent features/groups correspond
context of networks, nodes usually correspond to "customers" and latent features/groups correspond
to "dishes". However, we apply the dd-IBP in a different way. We regard each time step t as a "customer"
that samples a set of active groups 𝒦_t. So, at the first time step t = 1, we have a Poisson(λ) number
of groups that are initially active, i.e., |𝒦_1| ~ Poisson(λ). To account for the death of groups, we
then consider that each active group at time t - 1 can become inactive at the next time step t with
probability γ. On the other hand, Poisson(λγ) new groups are also born at time t. Thus, at each
time step currently active groups can die, while new ones can also be born. The hyperparameter γ
controls how often new groups are born and how often old ones die. For instance, there will be
almost no newborn or dead groups if γ ≈ 0, while there would be no temporal group coherence and
practically all the groups would die between consecutive time steps if γ = 1.
Figure 1(a) gives an example of the above process. Black circles indicate active groups and white
circles denote inactive (not yet born or dead) groups. Groups 1 and 3 exist at t = 1 and Group 2
is born at t = 2. At t = 3, Group 3 dies but Group 4 is born. Without our group activity model,
Group 3 could have been reused with a completely new set of members and Group 4 would have
never been born. Our model can distinguish these two disjoint groups.
Formally, we denote the number of active groups at time t by K_t = |𝒦_t|. We also denote the state
(active/inactive) of group k at time t by W_k^(t) = 1{k ∈ 𝒦_t}. For convenience, we also define the set
of newly active groups at time t as 𝒦_t^+ = {k | W_k^(t) = 1, W_k^(t') = 0 ∀ t' < t} and K_t^+ = |𝒦_t^+|.
Putting it all together, we can now fully describe the process of group birth/death as follows:

    K_t^+ ~ Poisson(λ)     for t = 1
    K_t^+ ~ Poisson(λγ)    for t > 1

    W_k^(t) ~ Bernoulli(1 - γ)   if W_k^(t-1) = 1
    W_k^(t) = 1                  if Σ_{t'=1}^{t-1} K_{t'}^+ < k ≤ Σ_{t'=1}^{t} K_{t'}^+        (1)
    W_k^(t) = 0                  otherwise.
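To make the generative story concrete, the birth/death process above can be simulated directly. This is a minimal sketch under our reading of Eq. (1); all names are ours, and Knuth's inversion method stands in for a library Poisson sampler:

```python
import math
import random

def simulate_groups(T, lam, gamma, seed=0):
    """Simulate active-group birth/death over T steps (sketch of Eq. (1)).

    lam (λ) sets the expected number of groups; gamma (γ) is both the
    per-step death probability and the birth-rate multiplier Poisson(λγ).
    """
    rng = random.Random(seed)

    def poisson(mean):
        # Knuth's inversion method; adequate for the small means used here.
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    active = set(range(poisson(lam)))   # |K_1| ~ Poisson(λ)
    next_id = len(active)
    history = [set(active)]
    for _ in range(1, T):
        # every active group dies with probability γ ...
        active = {g for g in active if rng.random() > gamma}
        # ... and Poisson(λγ) brand-new groups are born, with fresh ids
        births = poisson(lam * gamma)
        active |= set(range(next_id, next_id + births))
        next_id += births
        history.append(set(active))
    return history
```

Because new groups always receive fresh ids, a dead group can never become active again, exactly the property the model uses to disambiguate group rebirth from membership turnover.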
Note that under this model an infinite number of active groups can exist. This means our model automatically determines the right number of active groups and each node can belong to many groups
simultaneously. We now proceed by describing the model of node group membership dynamics.
Dynamics of node group memberships. We capture the dynamics of nodes joining and leaving
groups by assuming that latent node group memberships form a Markov chain. In this framework,
node memberships to active groups evolve through time according to Markov dynamics:
    P(z_ik^(t) | z_ik^(t-1)) = Q_k = [ 1 - a_k     a_k   ]
                                     [   b_k     1 - b_k ]

where matrix Q_k[r, s] denotes a Markov transition from state r to state s, which can be a fixed
parameter, group specific, or otherwise domain dependent as long as it defines a Markov transition
matrix. Thus, the transition of node i's membership to active group k can be defined as follows:

    a_k, b_k ~ Beta(α, β),    z_ik^(t) | W_k^(t) ~ Bernoulli( a_k^(1 - z_ik^(t-1)) (1 - b_k)^(z_ik^(t-1)) ).        (2)

Typically, α > β, which ensures that group memberships are not too volatile over time.
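The membership dynamics above amount to a simple two-state Markov step per node and group. A minimal sketch (function names ours), with inactive groups forcing membership to 0:

```python
import random

def step_membership(z_prev, a_k, b_k, w_t, rng):
    """One Markov step of a node's membership to group k (sketch of Eq. (2)).

    a_k     : probability of joining the group  (transition 0 -> 1)
    1 - b_k : probability of staying a member   (transition 1 -> 1)
    w_t     : group state W_k^(t); membership to an inactive group is forced to 0
    """
    if w_t == 0:
        return 0
    p_member = (1.0 - b_k) if z_prev == 1 else a_k
    return 1 if rng.random() < p_member else 0

def sample_trajectory(T, a_k, b_k, w, rng, z0=0):
    """Sample a membership trajectory z^(1..T) given group states w[0..T-1]."""
    z, z_prev = [], z0
    for t in range(T):
        z_prev = step_membership(z_prev, a_k, b_k, w[t], rng)
        z.append(z_prev)
    return z
```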
(a) Group activity model    (b) Link function model

Figure 1: (a) Birth and death of groups: black circles represent active and white circles represent inactive
(unborn or dead) groups. A dead group can never become active again. (b) Link function: z_i^(t) denotes
binary node group memberships. Entries of link affinity matrix Θ_k denote linking parameters between all 4
combinations of members (z_i^(t) = 1) and non-members (z_i^(t) = 0). To obtain link probability p_ij^(t), individual
affinities Θ_k[z_ik^(t), z_jk^(t)] are combined using a logistic function g(·).
Relationship between node group memberships and links of the network. Last, we describe the
part of the model that establishes the connection between nodes' memberships to groups and the
links of the network. We achieve this by defining a link function f(i, j), which for a given pair of
nodes i, j determines their interaction probability p_ij^(t) based on their group memberships.
We build on the Multiplicative Attribute Graph model [16, 18], where each group k is associated
with a link affinity matrix Θ_k ∈ R^{2×2}. Each of the four entries of the link affinity matrix captures
the tendency of linking between the group's members, between members and non-members, as well as between
non-members themselves. While traditionally link affinities were considered to be probabilities, we
relax this assumption by allowing affinities to be arbitrary real numbers and then combine them
through a logistic function to obtain a final link probability.
The model is illustrated in Figure 1(b). Given group memberships z_ik^(t) and z_jk^(t) of nodes i and j at
time t, the binary indicators "select" an entry Θ_k[z_ik^(t), z_jk^(t)] of matrix Θ_k. This way the linking
tendency from node i to node j is reflected based on their membership to group k. We then determine the
overall link probability p_ij^(t) by combining the link affinities via a logistic function g(·)¹. Thus,

    p_ij^(t) = f(z_i·^(t), z_j·^(t)) = g( ε_t + Σ_{k=1}^∞ Θ_k[z_ik^(t), z_jk^(t)] ),    Y_ij^(t) ~ Bernoulli(p_ij^(t))        (3)
where ε_t is a density parameter that reflects the varying link density of the network over time.
Note that due to the potentially infinite number of groups, the sum of an infinite number of link affinities
may not be tractable. To resolve this, we notice that for a given Θ_k, subtracting Θ_k[0, 0] from all its
entries and then adding this value to ε_t does not change the overall linking probability p_ij^(t). Thus, we
can set Θ_k[0, 0] = 0, and then only a finite number of affinities selected by z_ik^(t) have to be considered.
For all other entries of Θ_k we use N(0, σ²) as a prior distribution.
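The link function of Eq. (3), and the shift-invariance argument just given, can be checked with a few lines of code (a sketch; the function name is ours, and only the finitely many active groups are passed in):

```python
import math

def link_prob(z_i, z_j, thetas, eps_t):
    """Link probability p_ij^(t) from Eq. (3).

    z_i, z_j : 0/1 membership vectors of nodes i and j over the active groups
    thetas   : list of 2x2 affinity matrices Theta_k (nested lists)
    eps_t    : density parameter epsilon_t for time t
    """
    s = eps_t + sum(th[zi][zj] for th, zi, zj in zip(thetas, z_i, z_j))
    return 1.0 / (1.0 + math.exp(-s))  # logistic g(x) = e^x / (1 + e^x)
```

Subtracting Theta_k[0][0] from every entry of a matrix while adding the same amount to eps_t leaves every link probability unchanged, which is exactly why the model may fix Theta_k[0, 0] = 0.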
To sum up, Figure 2 illustrates the three components of the DMMG in plate notation. A group's
state W_k^(t) is determined by the dd-IBP process, and each node-group membership z_ik^(t) is defined by
the FHMM over active groups. Then, the link between nodes i and j is determined based on the
groups they belong to and the corresponding group link affinity matrices Θ.
4 Related Work
Classically, non-Bayesian approaches such as exponential random graph models [10, 27] have been
used to study dynamic networks. On the other hand, in the Bayesian approaches to dynamic network
analysis, latent variable models have been most widely used. These approaches differ by the structure
of the latent space that they assume. For example, Euclidean space models [13, 24] place nodes

¹ g(x) = exp(x)/(1 + exp(x))
Figure 2: Dynamic Multi-group Membership Graph Model. Network Y depends on each node's group memberships Z and active groups W. Links of Y appear via link affinities Θ.
in a low dimensional Euclidean space and the network evolution is then modeled as a regression
problem of a node's future latent location. In contrast, our model uses HMMs, where latent variables
stochastically depend on the state at the previous time step. Related to our work are dynamic
mixed-membership models where a node is probabilistically allocated to a set of latent features. Examples
of this model include the dynamic mixed-membership block model [5, 12] and the dynamic
infinite relational model [14]. However, the critical difference here is that our model uses multi-memberships,
where a node's membership to one group does not limit its membership to other groups.
Probably most related to our work here are DRIFT [4] and LFP [11] models. Both of these models
consider Markov switching of latent multi-group memberships over time. DRIFT uses the infinite
factorial HMM [6], while LFP adds "social propagation" to the Markov processes so that network
links of each node at a given time directly influence group memberships of the corresponding node
at the next time. Compared to these models, we uniquely incorporate the model of group birth and
death and present a novel and powerful linking function.
5 Model Inference via MCMC
We develop a Markov chain Monte Carlo (MCMC) procedure to approximate samples from the
posterior distribution of the latent variables in our model. More specifically, there are five types
of variables that we need to sample: node group memberships Z = {z_ik^(t)}, group states W =
{W_k^(t)}, group membership transitions Q = {Q_k}, link affinities Θ = {Θ_k}, and density parameters
ε = {ε_t}. By sampling each type of variable while fixing all the others, we end up with many
samples representing the posterior distribution P(Z, W, Q, Θ, ε | Y, λ, γ, α, β). We shall now explain
a sampling strategy for each variable type.
Sampling node group memberships Z. To sample node group membership z_ik^(t), we use the
forward-backward recursion algorithm [26]. The algorithm first defines a deterministic forward
pass which runs down the chain starting at time one, and at each time point t collects information
from the data and parameters up to time t in a dynamic programming cache. A stochastic backward
pass starts at time T and samples each z_ik^(t) in backwards order using the information collected during
the forward pass. In our case, we only need to sample z_ik^(T_k^B : T_k^D), where T_k^B and T_k^D indicate the
birth time and the death time of group k. Due to space constraints, we discuss further details in the
extended version of the paper [19].
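The forward-backward (forward filtering, backward sampling) scheme for one binary chain can be sketched as follows. This is our own minimal illustration, assuming a uniform prior over the initial state and abstracting the model's link likelihoods into per-time state likelihoods:

```python
import random

def ffbs(trans, lik, rng):
    """Forward filtering, backward sampling for a 2-state chain (sketch).

    trans[r][s] : P(z_t = s | z_{t-1} = r)
    lik[t][s]   : likelihood of the data at time t given z_t = s
    Returns one sample of the full state sequence from its posterior.
    """
    T = len(lik)
    # forward pass: alpha[t][s] proportional to P(z_t = s | data_1..t)
    alpha = []
    prev = [0.5 * lik[0][0], 0.5 * lik[0][1]]
    norm = sum(prev)
    alpha.append([a / norm for a in prev])
    for t in range(1, T):
        cur = [lik[t][s] * (alpha[t - 1][0] * trans[0][s] + alpha[t - 1][1] * trans[1][s])
               for s in (0, 1)]
        norm = sum(cur)
        alpha.append([a / norm for a in cur])
    # backward pass: sample z_T, then each z_t given z_{t+1}
    z = [0] * T
    z[T - 1] = 1 if rng.random() < alpha[T - 1][1] else 0
    for t in range(T - 2, -1, -1):
        w0 = alpha[t][0] * trans[0][z[t + 1]]
        w1 = alpha[t][1] * trans[1][z[t + 1]]
        z[t] = 1 if rng.random() < w1 / (w0 + w1) else 0
    return z
```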
Sampling group states W. To update active groups, we use the Metropolis-Hastings algorithm
with the following proposal distribution P(W → W'): we add a new group, remove an existing
group, or update the lifetime of an active group, each with probability 1/3. When adding a new
group k' we select the birth and death time of the group at random such that 1 ≤ T_{k'}^B ≤ T_{k'}^D ≤ T.
For removing groups, we randomly pick one of the existing groups k'' and remove it by setting W_{k''}^(t) = 0
for all t. Finally, to update the birth and death time of an existing group, we select an existing group
and propose a new birth and death time of the group at random. Once a new state vector W' is proposed,
we accept it with probability

    min( 1,  P(Y | W') P(W' | λ, γ) P(W' → W)  /  P(Y | W) P(W | λ, γ) P(W → W') ).        (4)

We compute P(W | λ, γ) and P(W' → W) in closed form, while we approximate the posterior
P(Y | W) by sampling L Gibbs samples while keeping W fixed.
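The acceptance test in Eq. (4) reduces to one generic Metropolis-Hastings step; a sketch in log space (names ours) follows:

```python
import math
import random

def mh_accept(log_lik_new, log_prior_new, log_lik_old, log_prior_old,
              log_q_rev, log_q_fwd, rng):
    """Metropolis-Hastings acceptance for a proposed state W' (Eq. (4) sketch).

    Accepts with probability min(1, ratio), where the ratio combines the
    likelihood P(Y|W), the prior P(W | lambda, gamma), and the proposal q.
    Working in log space avoids numerical underflow of tiny likelihoods.
    """
    log_ratio = (log_lik_new + log_prior_new + log_q_rev) \
              - (log_lik_old + log_prior_old + log_q_fwd)
    return log_ratio >= 0.0 or rng.random() < math.exp(log_ratio)
```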
Sampling group membership transition matrix Q. The Beta distribution is a conjugate prior of
the Bernoulli distribution, and thus we can sample each a_k and b_k in Q_k directly from the posterior
distribution: a_k ~ Beta(α + N_{01,k}, β + N_{00,k}) and b_k ~ Beta(α + N_{10,k}, β + N_{11,k}), where N_{rs,k}
is the number of nodes that transition from state r to s in group k (r, s ∈ {0 = non-member, 1 = member}).
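The conjugate update just described is a two-line computation once the transitions are counted. A minimal sketch (function name ours):

```python
import random

def sample_transition_params(transitions, alpha, beta, rng):
    """Draw a_k, b_k from their Beta posteriors given observed membership transitions.

    transitions : iterable of (r, s) pairs, previous state r -> next state s
    Posteriors (Beta-Bernoulli conjugacy):
        a_k ~ Beta(alpha + N_01, beta + N_00)
        b_k ~ Beta(alpha + N_10, beta + N_11)
    """
    counts = {(r, s): 0 for r in (0, 1) for s in (0, 1)}
    for r, s in transitions:
        counts[(r, s)] += 1
    a_k = rng.betavariate(alpha + counts[(0, 1)], beta + counts[(0, 0)])
    b_k = rng.betavariate(alpha + counts[(1, 0)], beta + counts[(1, 1)])
    return a_k, b_k
```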
Sampling link affinities Θ. Once node group memberships Z are determined, we update the entries
of link affinity matrices Θ_k. Direct sampling of Θ is intractable because of the non-conjugacy of the
logistic link function. An appropriate method in such a case would be Metropolis-Hastings, which
accepts or rejects the proposal based on the likelihood ratio. However, to avoid low acceptance
rates and quickly move toward the mode of the posterior distribution, we develop a method based
on Hybrid Monte Carlo (HMC) sampling [3]. We guide the sampling using the gradient of the
log-likelihood function with respect to each Θ_k. Because links Y_ij^(t) are generated independently given
group memberships Z, the gradient with respect to Θ_k[x, y] can be computed by

    -(1/σ²) Θ_k[x, y] + Σ_{i,j,t} ( Y_ij^(t) - p_ij^(t) ) 1{ z_ik^(t) = x, z_jk^(t) = y }.        (5)
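As a sanity check on this gradient form, here is a one-entry toy version with a finite-difference comparison. The restriction to member-member pairs, the N(0, σ²) prior term, and all names are ours:

```python
import math

def log_posterior(theta11, links, eps_t, sigma2):
    """Log posterior of one affinity entry Theta_k[1,1], restricted (for this
    sketch) to member-member pairs, with the N(0, sigma^2) prior."""
    lp = -theta11 * theta11 / (2.0 * sigma2)
    p = 1.0 / (1.0 + math.exp(-(eps_t + theta11)))  # logistic link
    for y in links:  # observed links y in {0, 1}
        lp += math.log(p) if y else math.log(1.0 - p)
    return lp

def grad_log_posterior(theta11, links, eps_t, sigma2):
    """Analytic gradient matching the form of Eq. (5):
    -Theta/sigma^2 plus the sum over pairs of (Y - p)."""
    p = 1.0 / (1.0 + math.exp(-(eps_t + theta11)))
    return -theta11 / sigma2 + sum(y - p for y in links)
```

The analytic gradient agreeing with the numerical derivative of the log posterior is exactly the property HMC relies on when simulating its leapfrog dynamics.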
Updating density parameter ε. Parameter vector ε is defined over a finite dimension T. Therefore,
we can update ε by maximizing the log-likelihood given all the other variables. We compute the
gradient update for each ε_t and directly update ε_t via a gradient step.

Updating hyperparameters. The number of groups over all time periods is given by a Poisson
distribution with parameter λ(1 + γ(T - 1)). Hence, given γ we sample λ by using a Gamma
conjugate prior. Similarly, we can use the Beta conjugate prior for the group death process (i.e.,
Bernoulli distribution) to sample γ. However, hyperparameters α and β do not have a conjugate
prior, so we update them by using a gradient method based on the sampled values of a_k and b_k.
Time complexity of model parameter estimation. Last, we briefly comment on the time complexity
of our model parameter estimation procedure. Each sample z_ik^(t) requires computation of
the link probability p_ij^(t) for all j ≠ i. Since the expected number of active groups at each time is λ,
this requires O(λN²T) computations of p_ij^(t). By caching the sum of link affinities between every
pair of nodes, sampling Z as well as W requires O(λN²T) time. Sampling Θ and ε also requires
O(λN²T) because the gradient of each p_ij^(t) needs to be computed. Overall, our approach takes
O(λN²T) to obtain a single sample, while models that are based on the interaction matrix between
all groups [4, 5, 11] require O(K²N²T), where K is the expected number of groups. Furthermore,
it has been shown that O(log N) groups are enough to represent networks [16, 18]. Thus, in practice
K (i.e., λ) is of order log N and the running time for each sample is O(N²T log N).
6 Experiments
We evaluate our model on three different tasks. For quantitative evaluation, we perform missing link
prediction as well as future network forecasting and show our model gives favorable performance
when compared to current dynamic and static network models. We also analyze the dynamics of
groups in a dynamic social network of characters in the movie "The Lord of the Rings: The Two
Towers."
Experimental setup. For the two prediction experiments, we use the following three datasets. First,
the NIPS co-authorship network connects two people if they appear on the same publication in
the NIPS conference in a given year. The network spans T = 17 years (1987 to 2003). Following [11],
we focus on a subset of the 110 most connected people over all time periods. Second, the DBLP
co-authorship network is obtained from 21 Computer Science conferences from 2000 to 2009 (T = 10) [28].
We focus on 209 people by taking the 7-core of the aggregated network for the entire time.
Third, the INFOCOM dataset represents the physical proximity interactions between 78 students at
the 2006 INFOCOM conference, recorded by wireless detector remotes given to each attendee [25].
As in [11] we use the processed data that removes inactive time slices to have T =50.
To evaluate the predictive performance of our model, we compare it to three baseline models. For
a naive baseline model, we regard the relationship between each pair of nodes as the instance of
                 NIPS                      DBLP                      INFOCOM
    Model    TestLL   AUC     F1      TestLL   AUC     F1      TestLL   AUC     F1
    Naive    -2030    0.808   0.177   -12051   0.814   0.300   -17821   0.677   0.252
    LFRM     -880     0.777   0.195   -3783    0.784   0.146   -8689    0.946   0.703
    DRIFT    -758     0.866   0.296   -3108    0.916   0.421   -6654    0.973   0.757
    DMMG     -624     0.916   0.434   -2684    0.939   0.492   -6422    0.976   0.764

Table 1: Missing link prediction. We bold the performance of the best scoring method. Our DMMG performs
the best in all cases. All improvements are statistically significant at the 0.01 significance level.
independent Bernoulli distribution with Beta(1, 1) prior. Thus, for a given pair of nodes, the link
probability at each time equals the expected probability from the posterior distribution given the network data. The second baseline is LFRM [21], a model of static networks. For missing link prediction,
we independently fit LFRM to each snapshot of dynamic networks. For network forecasting task,
we fit LFRM to the most recent snapshot of a network. Even though LFRM does not capture time
dynamics, we consider this to be a strong baseline model. Finally, for the comparison with dynamic
network models, we consider two recent state of the art models. The DRIFT model [4] is based
on an infinite factorial HMM, and the authors kindly shared their implementation. We also consider the
LFP model [11] for which we were not able to obtain the implementation, but since we use the same
datasets, we compare performance numbers directly with those reported in [11].
To evaluate predictive performance, we use various standard evaluation metrics. First, to assess
goodness of inferred probability distributions, we report the log-likelihood of held-out edges. Second, to verify the predictive performance, we compute the area under the ROC curve (AUC). Last,
we also report the maximum F1-score (F1) by scanning over all possible precision/recall thresholds.
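Both threshold-free metrics can be computed directly from predicted link probabilities. A self-contained sketch (our own implementation, assuming no tied scores for the rank-based AUC):

```python
def auc_and_max_f1(labels, scores):
    """ROC AUC (Mann-Whitney rank form) and maximum F1 over all thresholds.

    labels : 0/1 ground-truth links; scores : predicted link probabilities.
    """
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # AUC: probability a random positive outranks a random negative
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank_sum = sum(r + 1 for r, i in enumerate(order) if labels[i] == 1)
    auc = (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)
    # max F1: sweep thresholds from the highest score down
    tp = fp = 0
    best_f1 = 0.0
    for s, y in sorted(zip(scores, labels), reverse=True):
        tp += y
        fp += 1 - y
        precision = tp / (tp + fp)
        recall = tp / n_pos
        if tp:
            best_f1 = max(best_f1, 2 * precision * recall / (precision + recall))
    return auc, best_f1
```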
Task 1: Predicting missing links. To generate the datasets for the task of missing link prediction,
we randomly hold out 20% of node pairs (i.e., either link or non-link) throughout the entire time
period. We then run each model to obtain 400 samples after 800 burn-in samples for each of 10
MCMC chains. Each sample gives a link probability for a given missing entry, so the final link
probability of a missing entry is computed by averaging the corresponding link probability over all
the samples. This final link probability provides the evaluation metric for a given missing data entry.
Table 1 shows average evaluation metrics for each model and dataset over 10 runs. We also compute
the p-value on the difference between two best results for each dataset and metric. Overall, our
DMMG model significantly outperforms the other models in every metric and dataset. Particularly
in terms of F1-score we gain up to 46.6% improvement over the other models.
By comparing the naive model and LFRM, we observe that LFRM performs especially poorly
compared to the naive model in two networks with few edges (NIPS and DBLP). Intuitively this
makes sense because due to the network sparsity we can obtain more information from the temporal
trajectory of each link than from each snapshot of network. However, both DRIFT and DMMG
successfully combine the temporal and the network information which results in better predictive
performance. Furthermore, we note that DMMG outperforms the other models by a larger margin
as networks get sparser. DMMG makes better use of temporal information because it can explicitly
model temporally local links through active groups.
Last, we also compare our model to the LFP model. The LFP paper reports AUC ROC score of
≈0.85 for NIPS and ≈0.95 for INFOCOM on the same task of missing link prediction with 20%
held-out missing data [11]. Performance of our DMMG on these same networks under the same
conditions is 0.916 for NIPS and 0.976 for INFOCOM, which is a strong improvement over LFP.
Task 2: Future network forecasting. Here we are given a dynamic network up to time T_obs and
the goal is to predict the network at the next time T_obs + 1. We follow the experimental protocol
described in [4, 11]: we train the models on the first T_obs networks, fix the parameters, and then for
each model we run MCMC sampling one time step into the future. For each model and network,
we obtain 400 samples with 10 different MCMC chains, resulting in 400K network samples. These
network samples provide a probability distribution over links at time T_obs + 1.
Table 2 shows performance averaged over different T_obs values ranging from 3 to T - 1. Overall,
DMMG generally exhibits the best performance, but performance results seem to depend on the
dataset. DMMG performs the best at 0.001 significance level in terms of AUC and F1 for the NIPS
dataset, and at 0.05 level for the INFOCOM dataset. While DMMG improves performance on AUC
                 NIPS                      DBLP                      INFOCOM
    Model    TestLL   AUC     F1      TestLL   AUC     F1      TestLL   AUC     F1
    Naive    -547     0.524   0.130   -3248    0.668   0.243   -774     0.673   0.270
    LFRM     -356     0.398   0.011   -1680    0.492   0.024   -760     0.640   0.248
    DRIFT    -148     0.672   0.084   -1324    0.650   0.122   -661     0.782   0.381
    DMMG     -170     0.732   0.196   -1347    0.652   0.245   -625     0.804   0.392

Table 2: Future network forecasting. DMMG performs best on NIPS and INFOCOM while results on DBLP
are mixed.
[Figure 3 panels: (a) Group 1, (b) Group 2, (c) Group 3: membership of the 21 characters over time t = 1, ..., 5]

Figure 3: Group arrival and departure dynamics of different characters in the Lord of the Rings. Dark areas in
the plots correspond to a given node's (y-axis) membership to each group over time (x-axis).
(9%) and F1 (133%), DRIFT achieves the best log-likelihood on the NIPS dataset. In light of our
previous observations, we conjecture that this is due to change in network edge density between
different snapshots. On the DBLP dataset, DRIFT gives the best log-likelihood, the naive model
performs best in terms of AUC, and DMMG is the best on F1 score. However, in all cases of DBLP
dataset, the differences are not statistically significant. Overall, DMMG performs the best on NIPS
and INFOCOM and provides comparable performance on DBLP.
Task 3: Case study of "The Lord of the Rings: The Two Towers" social network. Last, we also
investigate groups identified by our model on a dynamic social network of characters in a movie,
The Lord of the Rings: The Two Towers. Based on the transcript of the movie we created a dynamic
social network on 21 characters and T =5 time epochs, where we connect a pair of characters if they
co-appear inside some time window.
We fit our model to this network and examine the results in Figure 3. Our model identified three
dynamic groups, which all nicely correspond to the Lord of the Rings storyline. For example,
the core of Group 1 corresponds to Aragorn, elf Legolas, dwarf Gimli, and people in Rohan who
in the end all fight against the Orcs. Similarly, Group 2 corresponds to hobbits Sam, Frodo and
Gollum on their mission to destroy the ring in Mordor, and are later joined by Faramir and ranger
Madril. Interestingly, Group 3 evolving around Merry and Pippin only forms at t=2 when they start
their journey with Treebeard and later fight against wizard Saruman. While the fight occurs in two
separate places we find that some scenes are not distinguishable, so it looks as if Merry and Pippin
fought together with Rohan's army against Saruman's army.
Acknowledgments
We thank Creighton Heaukulani and Zoubin Ghahramani for sharing data and code. This research
has been supported in part by NSF IIS-1016909, CNS-1010921, IIS-1149837, IIS-1159679, IARPA
AFRL FA8650-10-C-7058, Okawa Foundation, Docomo, Boeing, Allyes, Volkswagen, Intel, Alfred
P. Sloan Fellowship and the Microsoft Faculty Fellowship.
Markov Random Fields Can Bridge Levels of Abstraction
Paul R. Cooper
Institute for the Learning Sciences
Northwestern University
Evanston, IL
[email protected]
Peter N. Prokopowicz
Institute for the Learning Sciences
Northwestern University
Evanston, IL
[email protected]
Abstract
Network vision systems must make inferences from evidential information across levels of representational abstraction, from low level invariants,
through intermediate scene segments, to high level behaviorally relevant
object descriptions. This paper shows that such networks can be realized
as Markov Random Fields (MRFs). We show first how to construct an
MRF functionally equivalent to a Hough transform parameter network,
thus establishing a principled probabilistic basis for visual networks. Second, we show that these MRF parameter networks are more capable and
flexible than traditional methods. In particular, they have a well-defined
probabilistic interpretation, intrinsically incorporate feedback, and offer
richer representations and decision capabilities.
1
INTRODUCTION
The nature of the vision problem dictates that neural networks for vision must make
inferences from evidential information across levels of representational abstraction.
For example, local image evidence about edges might be used to determine the
occluding boundary of an object in a scene. This paper demonstrates that parameter
networks [Ballard, 1984], which use voting to bridge levels of abstraction, can be
realized with Markov Random Fields (MRFs).
We show two main results. First, an MRF is constructed with functionality formally
equivalent to that of a parameter net based on the Hough transform. Establishing
this equivalence provides a sound probabilistic foundation for neural networks for
vision. This is particularly important given the fundamentally evidential nature of
the vision problem.
Second, we show that parameter networks constructed from MRFs offer a more
flexible and capable framework for intermediate vision than traditional feedforward
parameter networks with threshold decision making. In particular, MRF parameter nets offer a richer representational framework, the potential for more complex
decision surfaces, an integral treatment of feedback, and probabilistically justified
decision and training procedures. Implementation experiments demonstrate these
features.
Together, these results establish a basis for the construction of integrated network
vision systems with a single well-defined representation and control structure that
intrinsically incorporates feedback.
2 BACKGROUND
2.1
HOUGH TRANSFORM AND PARAMETER NETS
One approach to bridging levels of abstraction in vision is to combine local, highly
variable evidence into segments which can be described compactly by their parameters. The Hough transform offers one method for obtaining these high-level
parameters. Parameter networks implement the Hough transform in a parallel
feedforward network. The central idea is voting: local low-level evidence cast votes
via the network for compatible higher-level parameterized hypotheses. The classic Hough example finds lines from edges. Here local evidence about the direction
and magnitude of image contrast is combined to extract the parameters of lines
(e.g. slope-intercept), which are more useful scene segments. The Hough transform
is widely used in computer vision (e.g. [Bolle et al., 1988]) to bridge levels of
abstraction.
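The voting idea can be made concrete with a small sketch. The following is an illustrative Hough accumulator for lines in (theta, rho) form, not code from any of the cited systems; the function name, grid resolution, and weighting scheme are my own choices:

```python
import numpy as np

def hough_vote(edges, n_theta=36, n_rho=40, rho_max=20.0):
    """Accumulate weighted votes from local edge evidence into (theta, rho)
    line space: each edge casts a vote for every line through its location."""
    acc = np.zeros((n_theta, n_rho))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y, w in edges:
        for ti, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            ri = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
            if 0 <= ri < n_rho:
                acc[ti, ri] += w  # vote weighted by the edge's confidence
    return acc, thetas

# Ten unit-confidence edges on the vertical line x = 5 all vote for the
# same (theta = 0, rho = 5) cell, producing a clear peak there.
edges = [(5.0, float(y), 1.0) for y in range(10)]
acc, thetas = hough_vote(edges)
ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
print(thetas[ti], acc[ti, ri])
```

Thresholding the accumulator then yields detected segments; the MRF construction of Section 3 replaces this explicit threshold with clique potentials.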
2.2
MARKOV RANDOM FIELDS
Markov Random Fields offer a formal foundation for networks [Geman and Geman,
1984] similar to that of the Boltzmann machine. MRFs define a prior joint probability distribution over a set X of discrete random variables. The possible values
for the variables can be interpreted as possible local features or hypotheses. Each
variable is associated with a node s in an undirected graph (or network), and can
be written as X_s. An assignment of values to all the variables in the field is called
a configuration, and is denoted ω; an assignment of a single variable is denoted ω_s.
Each fully-connected neighborhood c in a configuration of the field has a weight,
or clique potential, V_c.
We are interested in the probability distributions P over the random field X.
Markov Random Fields have a locality property:
P(X_s = ω_s | X_r = ω_r, r ∈ S, r ≠ s) = P(X_s = ω_s | X_r = ω_r, r ∈ N_s)    (1)
that says roughly that the state of a site depends only upon the states of its
neighbors (N_s). MRFs can also be characterized in terms of an energy function U
with a Gibbs distribution:

P(ω) = e^{−U(ω)/T} / Z    (2)
where T is the temperature, and Z is a normalizing constant.
If we are interested only in the prior distribution P(ω), the energy function U is
defined as:

U(ω) = Σ_{c ∈ C} V_c(ω)    (3)

where C is the set of cliques defined by the neighborhood graph, and the V_c are
the clique potentials. Specifying the clique potentials thus provides a convenient
way to specify the global joint prior probability distribution P, i.e. to encode prior
domain knowledge about plausible structures.
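As a concrete illustration of equations (2) and (3), the sketch below computes U(ω) as a sum of clique potentials and the resulting Gibbs probabilities for a tiny two-site field by brute-force normalization. The encoding and names are my own, not from the paper:

```python
import math
from itertools import product

# Toy MRF: cliques given as (sites, potential_fn). V_c assigns an energy
# to the labels of its sites; lower total energy U(w) means higher prior
# probability under the Gibbs form P(w) = exp(-U(w)/T) / Z.

def energy(config, cliques):
    """U(w) = sum over cliques of V_c(w)  (equation 3)."""
    return sum(V(tuple(config[s] for s in sites)) for sites, V in cliques)

def gibbs_prob(config, cliques, labels, T=1.0):
    """P(w) = exp(-U(w)/T) / Z  (equation 2), with Z by enumeration."""
    sites = sorted({s for ss, _ in cliques for s in ss})
    Z = sum(math.exp(-energy(dict(zip(sites, ls)), cliques) / T)
            for ls in product(labels, repeat=len(sites)))
    return math.exp(-energy(config, cliques) / T) / Z

# One pairwise clique that favors equal labels on neighboring sites 0 and 1.
cliques = [((0, 1), lambda pair: 0.0 if pair[0] == pair[1] else 1.0)]
p_same = gibbs_prob({0: 'a', 1: 'a'}, cliques, labels=('a', 'b'))
p_diff = gibbs_prob({0: 'a', 1: 'b'}, cliques, labels=('a', 'b'))
print(p_same > p_diff)  # lower energy -> higher prior probability
```

Choosing the clique potentials (here, penalizing disagreement) is exactly how prior knowledge about plausible structures is encoded.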
Suppose we are instead interested in the distribution P(ω | O) on the field after an
observation O, where an observation constitutes a combination of spatially distinct
observations at each local site. The evidence from an observation at a site is denoted
P(o_s | ω_s) and is called a likelihood. Assuming likelihoods are local and spatially
distinct, it is reasonable to assume that they are conditionally independent. Then,
with Bayes' Rule we can derive the posterior energy:

U(ω | O) = Σ_{c ∈ C} V_c(ω) − Σ_s log P(o_s | ω_s)    (4)
The MRF definition, together with evidence from the current problem, leaves a
probability distribution over all possible configurations. An algorithm is then
used to find a solution, normally the configuration of maximal probability, or
equivalently, minimal energy as expressed in equation 4. The problem of minimizing non-convex energy functions, especially those with many local minima,
has been the subject of intense scrutiny recently (e.g. [Kirkpatrick et al., 1983;
Hopfield and Tank, 1985]). In this paper we focus on developing MRF representations wherein the minimum energy configuration defines a desirable goal, not on
methods of finding the minimum. In our experiments we have used the deterministic Highest Confidence First (HCF) algorithm [Chou and Brown, 1990].
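As a simpler stand-in for such deterministic minimization, the sketch below runs an ICM-style greedy sweep. This is not the HCF algorithm, which additionally orders its updates by confidence; the function names and the toy energy are my own:

```python
def greedy_minimize(labels_per_site, energy, init, sweeps=10):
    """ICM-style minimization: repeatedly set each site to the label that
    lowers the total energy, holding the others fixed, until no change."""
    config = dict(init)
    for _ in range(sweeps):
        changed = False
        for s, labels in labels_per_site.items():
            best = min(labels, key=lambda l: energy({**config, s: l}))
            if best != config[s]:
                config[s] = best
                changed = True
        if not changed:
            break
    return config

# A smoothness-favoring chain of three binary sites, with strong evidence
# pulling site 1 toward label 1: the minimum-energy configuration is all ones.
def U(c):
    smooth = sum(c[i] != c[i + 1] for i in range(2))  # pairwise cliques
    evidence = 0 if c[1] == 1 else 3                  # strong pull on site 1
    return smooth + evidence

result = greedy_minimize({0: (0, 1), 1: (0, 1), 2: (0, 1)}, U,
                         init={0: 0, 1: 0, 2: 0})
print(result)  # {0: 1, 1: 1, 2: 1}
```

Like all greedy local schemes, this can stall in local minima when the evidence is weak relative to the clique potentials, which is why the text restricts attention to representations whose minimum encodes the desired answer.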
MRFs have been widely used in computer vision applications, including image
restoration, segmentation, and depth reconstruction [Geman and Geman, 1984;
Marroquin, 1985; Chellapa and Jain, 1991]. All these applications involve flat representations at a single level of abstraction. A novel aspect of our work is the
hierarchical framework which explicitly represents visual entities at different levels
of abstraction, so that these higher-order entities can serve as an interpretation of
the data as well as play a role in further constraint satisfaction at even higher levels.
3
CONSTRUCTING MRFS EQUIVALENT TO
PARAMETER NETWORKS
Here we define a Markov Random Field that computes a Hough transform; i.e.
it detects higher-order features by tallying weighted votes from low-level image
components and thresholding the sum. The MRF has one discrete variable for
[Figure 1 appears here. Its right panel tabulates the clique potentials of the
equivalent MRF: the four labellings E e, E ¬e, ¬E e, and ¬E ¬e of each binary
clique carry potentials −w_i f_max, K_2, K_1, and 0 respectively, and the unary
potential of the label ¬E is −θ.]

Figure 1: Left: Hough-transform parameter net. Input determines confidence f_i in
each low-level feature; these confidences are weighted (w_i), summed, and thresholded.
Right: Equivalent MRF. Circles show variables with possible labels and
non-zero unary clique potentials; lines show neighborhoods; potentials are for the
four labellings of the binary cliques.
the higher-order feature, whose possible values are exists and doesn't exist, and one
discrete variable for each voting element, with the same two possible values. Such
a field could be replicated in space to compute many features simultaneously.
The construction follows from two ideas: first, the clique potentials of the network
are defined such that only two of the many configurations need be considered, the
other configurations being penalized by high clique potentials (i.e. low a priori
probability). One configuration encodes the decision that the higher-order feature
exists, the other that it doesn't exist. The second point is that the energy of the
"doesn't exist" configuration is independent of the observation, while the energy of
the "exists" configurations improves with the strength of the evidence.
Consider a parameter net for the Hough transform that represents only a single
parameterized image segment (e.g. a line segment) and a set of low-level features,
(e.g. edges) which vote for it (Figure 1, left). The variables, labels, and neighborhoods of the equivalent MRF are defined in the right side of Figure 1. The clique
potentials, which depend on the Hough parameters, are shown in the right side of
the figure for a single neighborhood of the graph (there are four ways to label this
clique). Unspecified unary potentials are zero. Evidence applies only to the labels
e_i; it is the likelihood of making a local observation o_i:

P(o_i | e_i) = e^{w_i (f_i − f_max)}    (5)

In Lemma 1, we show that the configuration ω_E = E e_1 e_2 ... e_n has an energy
equal to the negated weighted sum of the feature inputs, and configuration
ω_{¬E} = ¬E ¬e_1 ¬e_2 ... ¬e_n has a constant energy equal to the negated Hough
threshold θ. Then, in Lemma 2, we show that the clique potentials restrict the possible
configurations to only these two, so that the network must have its minimum energy
in a configuration whose high-level feature has the correct label.
Lemma 1:

U(ω_E | O) = −Σ_{i=1}^n w_i f_i
U(ω_{¬E} | O) = −θ

Proof: The energy contributed by the clique potentials in ω_E is Σ_{i=1}^n −w_i f_max.
Defining W = Σ_{i=1}^n w_i, this simplifies to −W f_max.

The evidence also contributes to the energy of ω_E, in the form −Σ_{i=1}^n log P(o_i | e_i).
Substituting from (5) into (4) and simplifying gives the total posterior energy of ω_E:

U(ω_E | O) = −W f_max + W f_max − Σ_{i=1}^n w_i f_i = −Σ_{i=1}^n w_i f_i    (6)

The energy of the configuration ω_{¬E} does not depend on evidence derived from the
Hough features. It has only one clique with a non-zero potential, the unary clique
of label ¬E. Hence U(ω_{¬E} | O) = −θ. □
Lemma 2:

(∀ω)(ω = E ... ¬e_k ...) ⇒ U(ω | O) > U(ω_E | O)
(∀ω)(ω = ¬E ... e_k ...) ⇒ U(ω | O) > U(ω_{¬E} | O)

Proof: For a mixed configuration ω = E ... ¬e_k ..., changing label ¬e_k to e_k adds
energy because of the evidence associated with e_k. This is at most w_k f_max. It
also removes energy because of the potential of the clique E e_k, which is −w_k f_max.
Because the clique potential K_2 from E ¬e_k is also removed, if K_2 > 0, then changing
this label always reduces the energy.

For a mixed configuration ω = ¬E ... e_k ..., changing the low-level label e_k to
¬e_k cannot add to the energy contributed by evidence, since ¬e_k has no evidence
associated with it. There is no binary clique potential for ¬E ¬e, but the potential
K_1 for the clique ¬E e_k is removed. Therefore, again, choosing any K_1 > 0 reduces
energy and ensures that compatible labels are preferred. □
From Lemma 2, there are two configurations that could possibly have minimal posterior energy. From Lemma 1, the configuration which represents the existence of
the higher-order feature is preferred if and only if the weighted sum of the evidence
exceeds threshold, as in the Hough transform.
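The lemmas can be checked numerically. The sketch below enumerates all configurations of a small instance of the constructed MRF and confirms that the minimum-energy configuration carries the exists label exactly when the weighted vote exceeds the threshold. The encoding follows Section 3, but the function names and the particular weights are my own:

```python
from itertools import product

def mrf_energy(E, e, w, f, f_max, theta, k1=0.5, k2=0.5):
    """Posterior energy of the Hough-equivalent MRF (Section 3).

    E: high-level label (True = 'exists'); e: tuple of low-level labels.
    Binary clique potentials: (E, e_i) -> -w_i*f_max; (E, not e_i) -> k2;
    (not E, e_i) -> k1; (not E, not e_i) -> 0; unary(not E) = -theta.
    Evidence term from (5): -log P(o_i | e_i) = w_i*(f_max - f_i).
    """
    U = 0.0 if E else -theta
    for wi, fi, ei in zip(w, f, e):
        if E and ei:
            U += -wi * f_max
        elif E and not ei:
            U += k2
        elif (not E) and ei:
            U += k1
        if ei:                      # evidence applies only to labels e_i
            U += wi * (f_max - fi)
    return U

def mrf_decision(w, f, f_max, theta):
    """Return the 'exists' label of the minimum-energy configuration."""
    best = min(product([False, True], repeat=len(w) + 1),
               key=lambda c: mrf_energy(c[0], c[1:], w, f, f_max, theta))
    return best[0]

w, f, f_max = [1.0, 2.0, 1.0], [0.9, 0.5, 0.2], 1.0
votes = sum(wi * fi for wi, fi in zip(w, f))          # weighted sum = 2.1
print(mrf_decision(w, f, f_max, theta=2.0))           # votes > theta
print(mrf_decision(w, f, f_max, theta=2.5))           # votes < theta
```

With K_1, K_2 > 0 the brute-force minimum always lands on ω_E or ω_{¬E}, and the comparison of their energies reduces to the Hough threshold test.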
Often it is desirable to find the mode in a high-level parameter space rather than
those elements which surpass a fixed threshold. Finding a single mode is easy to
do in a Hough-like MRF: add lateral connections between the exists labels of the
high-level features to form a winner-take-all network. If the potentials for these
cliques are large enough, it is not possible for more than one variable corresponding
to a high-level feature to be labeled exists.
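A brute-force sketch of this winner-take-all behavior (an illustration with made-up energies, not the paper's network): a large potential on every pair of simultaneously "on" exists labels leaves at most one high-level feature on in the minimum-energy configuration.

```python
from itertools import product

def wta_energy(labels, unary, penalty=100.0):
    """Energy with lateral inhibition between 'exists' labels.

    labels: tuple of booleans (exists / doesn't exist) per high-level feature.
    unary[i]: energy of feature i being 'on' (more negative = more support).
    A large potential on each pair of 'on' labels implements winner-take-all.
    """
    U = sum(u for on, u in zip(labels, unary) if on)
    on_count = sum(labels)
    U += penalty * on_count * (on_count - 1) / 2  # every 'on' pair penalized
    return U

unary = [-3.0, -5.0, -1.0]  # the middle feature has the strongest support
best = min(product([False, True], repeat=3), key=lambda c: wta_energy(c, unary))
print(best)  # only the best-supported feature survives
```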
4
BEYOND HOUGH TRANSFORMS: MRF
PARAMETER NETS
The essentials of a parameter network are a set of variables representing low-order
features, a set of variables representing high-order features, and the appropriate
Figure 2: Noisy image data
Figure 3: Three parameter-net MRF experiments: white dots in the lower images
indicate the decision that a horizontal or vertical local edge is present. Upper images
show the horizontal and vertical lines found. The left net is a feedforward Hough
transform; the middle net uses positive feedback from lines to edges; the right net
uses negative feedback, from non-existing lines to non-existing edges
weighted connections between them. This section explores the characteristics of
more "natural" MRF parameter networks, still based on the same variables and
connections, but not limited to binary label sets and sum/threshold decision procedures.
4.1
EXPERIMENTS WITH FEEDBACK
The Hough transform and its parameter net instantiation are inherently feedforward. In contrast, all MRFs intrinsically incorporate feedback. We experimented
with a network designed to find lines from edges. Horizontal and vertical edge inputs
are represented at the low level, and horizontal and vertical lines which span the
image at the high level. The input data look like Figure 2. Probabilistic evidence
for the low-level edges is generated from pixel data using a model of edge-image formation [Sher, 1987]. The edges vote for compatible lines. In Figure 3, the decision
of the feed-forward, Hough transform MRF is shown at the left: edges exist where
the local evidence is sufficient; lines exist where enough votes are received.
Keeping the same topology, inputs, and representations in the MRF, we added top-down feedback by changing binary clique potentials so that the existence of a line at
the high level is more strongly compatible with the existence of its edges. Missing
edges are filled in (middle). By making non-existent lines strongly incompatible with
the existence of edges, noisy edges are substantially removed (right). Other MRFs
for segmentation [Chou and Brown, 1990; Marroquin, 1985] find collinear edges,
but cannot reason about lines and therefore cannot exploit top-down feedback.
4.2
REPRESENTATION AND DECISION MAKING
Both parameter nets and MRFs represent confidence in local hypotheses, but here
the MRF framework has intrinsic advantages. MRFs can simultaneously represent
independent beliefs for and against the same hypotheses. In an active vision system, which must reason about gathering as well as interpreting evidence, one could
extend this to include the label don't know, allowing explicit reasoning about the
condition in which the local evidence insufficiently supports any decision. MRFs can
also express higher-order constraints as more than a set of pairs. The exploitation
of appropriate 3-cliques, for example, has been shown to be very useful [Cooper,
1990].
Since the potentials in an MRF are related to local conditional probabilities, there
is a principled way to obtain them. Observations can be used to estimate local joint
probabilities, which can be converted to the clique potentials defining the prior
distribution on the field [Pearl, 1988; Swain, 1990].
Most evidence integration schemes require, in addition to the network topology and
parameters, the definition of a decision making process (e.g. thresholding) and a
theory of parameter acquisition for that process, which is often ad hoc. To estimate
the maximum posterior probability of a MRF, on the other hand, is intrinsically
to make a decision among the possibilities embedded in the chosen variables and
labels.
The space of possible decisions (interpretations of problem input) is also much
richer for MRFs than for parameter networks. For both nets, the nodes for which
evidence is available define an n-dimensional problem input space. The weights
divide this space into regions defined by the one best interpretation (configuration)
for all problems in that region. With parameter nets, these regions are separated
by planes, since only the sum of the inputs matters. In MRFs, the energy depends
on the log-product of the evidence and the sum of the potentials, allowing more
general decision surfaces. Non-linear decisions such as AND or XOR are easy to
encode, whereas they are impossible for the linear Hough transform.
5
CONCLUSION
This paper has shown that parameter networks can be constructed with Markov
Random Fields. MRFs can thus bridge representational levels of abstraction in
network vision systems. Furthermore, it has been demonstrated that MRFs offer
the potential for a significantly more powerful implementation of parameter nets,
even if their topological architecture is identical to traditional Hough networks. In
short, at least one method is now available for constructing intermediate vision
solutions with Markov Random Fields.
It may thus be possible to build entire integrated vision systems with a single well-justified formal framework - Markov Random Fields. Such systems would have a
unified representational scheme, constraints and evidence with well-defined semantics, and a single control structure. Furthermore, feedback and feedforward flow of
information, crucial in any complete vision system, is intrinsic to MRFs.
Of course, the task still remains to build a functioning vision system for some
domain. In this paper we have said nothing about the definition of specific "features" and the constraints between them that would constitute a useful system.
But providing essential tools implemented in a well-defined formal framework is an
important step toward building robust, functioning systems.
Acknowledgements
Support for this research was provided by NSF grant #IRI-9110492 and by Andersen
Consulting, through their founding grant to the Institute for the Learning Sciences.
Patrick Yuen wrote the MRF simulator that was used in the experiments.
References
[Ballard, 1984] D.H. Ballard, "Parameter Networks," Artificial Intelligence, 22(3):235-267, 1984.
[Bolle et al., 1988] Ruud M. Bolle, Andrea Califano, Rick Kjeldsen, and R.W. Taylor, "Visual Recognition Using Concurrent and Layered Parameter Networks,"
Technical Report RC-14249, IBM Research Division, T.J. Watson Research Center, Dec 1988.
[Chellapa and Jain, 1991] Rama Chellapa and Anil Jain, editors, Markov Random
Fields: Theory and Application, Academic Press, 1991.
[Chou and Brown, 1990] Paul B. Chou and Christopher M. Brown, "The Theory
and Practice of Bayesian Image Labeling," International Journal of Computer
Vision, 4:185-210, 1990.
[Cooper, 1990] Paul R. Cooper, "Parallel Structure Recognition with Uncertainty:
Coupled Segmentation and Matching," In Proceedings of the Third International
Conference on Computer Vision ICCV '90, Osaka, Japan, December 1990.
[Geman and Geman, 1984] Stuart Geman and Donald Geman, "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," PAMI,
6(6):721-741, November 1984.
[Hopfield and Tank, 1985] J. J. Hopfield and D. W. Tank, ""Neural" Computation
of Decisions in Optimization Problems," Biological Cybernetics, 52:141-152, 1985.
[Kirkpatrick et al., 1983] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi, "Optimization by Simulated Annealing," Science, 220:671-680, 1983.
[Marroquin, 1985] Jose Luis Marroquin, "Probabilistic Solution of Inverse Problems," Technical report, MIT Artificial Intelligence Laboratory, September, 1985.
[Pearl, 1988] Judea Pearl, Probabilistic Reasoning in Intelligent Systems, Morgan
Kaufman, 1988.
[Sher, 1987] David B. Sher, "A Probabilistic Approach to Low-Level Vision," Technical Report 232, Department of Computer Science, University of Rochester,
October 1987.
[Swain, 1990] Michael J. Swain, "Parameter Learning for Markov Random Fields
with Highest Confidence First Estimation," Technical Report 350, Dept. of Computer Science, University of Rochester, August 1990.
Universal models for binary spike patterns using
centered Dirichlet processes
Il Memming Park^{1,2,3}, Evan Archer^{2,4}, Kenneth Latimer^{1,2}, Jonathan W. Pillow^{1,2,3,4}
1. Institute for Neuroscience, 2. Center for Perceptual Systems, 3. Department of Psychology,
4. Division of Statistics & Scientific Computation
The University of Texas at Austin
{memming@austin., earcher@, latimerk@, pillow@mail.} utexas.edu
Abstract
Probabilistic models for binary spike patterns provide a powerful tool for understanding the statistical dependencies in large-scale neural recordings. Maximum entropy (or 'maxent') models, which seek to explain dependencies in terms
of low-order interactions between neurons, have enjoyed remarkable success in
modeling such patterns, particularly for small groups of neurons. However, these
models are computationally intractable for large populations, and low-order maxent models have been shown to be inadequate for some datasets. To overcome
these limitations, we propose a family of 'universal' models for binary spike patterns, where universality refers to the ability to model arbitrary distributions over all 2^m binary patterns. We construct universal models using a Dirichlet process
centered on a well-behaved parametric base measure, which naturally combines
the flexibility of a histogram and the parsimony of a parametric model. We derive
computationally efficient inference methods using Bernoulli and cascaded logistic base measures, which scale tractably to large populations. We also establish a
condition for equivalence between the cascaded logistic and the 2nd-order maxent
or 'Ising' model, making cascaded logistic a reasonable choice for base measure
in a universal model. We illustrate the performance of these models using neural
data.
1 Introduction
Probability distributions over spike words form the fundamental building blocks of the neural code.
Accurate estimates of these distributions are difficult to obtain in the context of modern experimental techniques, which make it possible to record the simultaneous spiking activity of hundreds of
neurons. These difficulties, both computational and statistical, arise fundamentally from the exponential scaling (in population size) of the number of possible words a given population is capable
of expressing. One strategy for combating this combinatorial explosion is to introduce a parametric
model which seeks to make trade-offs between flexibility, computational expense [1, 2], or mathematical completeness [3] in order to be applicable to large-scale neural recordings. A variety of
parametric models have been proposed in the literature, including the 2nd-order maxent or Ising
model [4, 5], the reliable interaction model [3], restricted Boltzmann machine [6], deep learning [7],
mixture of Bernoulli model [8], and the dichotomized Gaussian model [9]. However, while the number of parameters in a model chosen from a given parametric family may increase with the number
of neurons, it cannot increase exponentially with the number of words. Thus, as the size of a population increases, a parametric model rapidly loses flexibility in describing the full spike distribution. In
contrast, nonparametric models allow flexibility to grow with the amount of data [10, 11, 12, 13, 14].
A naive nonparametric model, such as the histogram of spike words, theoretically preserves representational power and computational simplicity. Yet in practice, the empirical histogram may be
extremely slow to converge, especially for the high dimensional data we are primarily interested
[Figure 1 appears here; see caption below.]
Figure 1: (A) Binary representation of neural population activity. A single spike word x is indicated in red. (B) Hierarchical Dirichlet process prior for the universal binary model (UBM) over spike words. Each word is drawn with probability π_j. The π's are drawn from a Dirichlet with parameters given by α and a base distribution over spike words with parameter θ. (C, D) Graphical models of two base measures over spike words: the independent Bernoulli model and the cascaded logistic model. The base measure is also a distribution over each spike word x = (x_1, . . . , x_m).
in. In most cases, we expect never to have enough data for the empirical histogram to converge.
Perhaps even more concerning is that a naive histogram model fails to smooth over the space of words:
unobserved words are not accounted for in the model.
We propose a framework which combines the parsimony of parametric models with the flexibility
of nonparametric models. We model the spike word distribution as a Dirichlet process centered on a
parametric base measure. An appropriately chosen base measure smooths the observations, while the
Dirichlet process allows for data that depart systematically from the base measure. These models
are universal in the sense that they can converge to any distribution supported on the (2^m − 1)-dimensional simplex. The influence of any base measure diminishes with increasing sample size,
and the model ultimately converges to the empirical distribution function.
The choice of base measure influences the small-sample behavior and computational tractability of
universal models, both of which are crucial for neural applications. We consider two base measures
that exploit a priori knowledge about neural data while remaining computationally tractable for large
populations: the independent Bernoulli spiking model, and the cascaded logistic model [15, 16].
Both the Bernoulli and cascaded logistic models show better performance when used as a base
measure for a universal model than when used alone. We apply these models to several simulated
and neural data examples.
2 Universal binary model
Consider a (random) binary spike word of length m, x ∈ {0, 1}^m, where m denotes the number of distinct neurons (and/or time bins; Fig. 1A). There are K = 2^m possible words, which we index by k ∈ {1, . . . , K}. The universal binary model is a hierarchical probabilistic model where, on the bottom level (Fig. 1B), x is drawn from a multinomial (categorical) distribution with the probability of observing each word given by the vector π (the spike word distribution). On the top level, we model π as a Dirichlet process [11] with a discrete base measure G_θ; hence,
x ∼ Cat(π),    π ∼ DP(αG_θ),    θ ∼ p(θ|λ),    (1)
where α is the concentration parameter, G_θ is the base measure (a discrete probability distribution over spike words, parameterized by θ), and p(θ|λ) is the hyper-prior. We choose a discrete probability measure for G_θ such that it has positive measure only over {1, . . . , K}, and denote g_k = G_θ(k). Thus, the Dirichlet process has probability mass only on the K spike words, and is described by a (finite dimensional) Dirichlet distribution,

π ∼ Dir(αg_1, . . . , αg_K).    (2)
In the absence of data, the parametric base measure controls the mean of this nonparametric model,

E[π | α] = G_θ,    (3)

regardless of α. Therefore, we loosely say that π is 'centered' around G_θ.¹ We can start with good
parametric models of neural populations, and extend them into a nonparametric model by using
them as the base measure [17]. Under this scheme, the base measure quickly learns much of the
basic structure of the data while the Dirichlet extension takes into account any deviations in the data
which are not predicted by the parametric component. We call such an extension a universal binary
model (UBM) with base measure G_θ.
The marginal distribution of a collection of words X = {x_i}_{i=1}^N under the UBM is obtained by integrating over π, and has the form of a Polya (a.k.a. Dirichlet-multinomial) distribution:

P(X | α, G_θ) = [Γ(α) / Γ(N + α)] ∏_{k=1}^{K} [Γ(n_k + αg_k) / Γ(αg_k)],    (4)
where nk is the number of observations of the word k. This leads to a simple formula for sampling
from the predictive distribution over words:
Pr(x_{N+1} = k | X_N, α, G_θ) = (n_k + αg_k) / (N + α).    (5)
Thus, sampling proceeds exactly as in the Chinese restaurant process (CRP): we set the (N + 1)-th word to be k with probability proportional to n_k + αg_k, and with probability proportional to α we draw a new word from G_θ (which in turn increases the probability of getting word k on the next draw). Note that as α → 0, the predictive distribution converges to the histogram estimate n_k / N, and as α → ∞, it converges to the base measure itself. We use the Jensen-Shannon divergence to the predictive distribution to quantify the performance in our experiments.
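The predictive rule in Eq. (5) can be run directly as a CRP-style sampler. The sketch below is our own illustration (not the authors' code), using a uniform base measure over K = 2^3 words as a stand-in for G_θ:

```python
import numpy as np

def sample_predictive(counts, alpha, g, rng):
    """Draw the next word index from the UBM predictive distribution (Eq. 5).

    counts : array of word counts n_k observed so far
    alpha  : Dirichlet-process concentration parameter
    g      : base-measure probabilities g_k (must sum to 1)
    """
    n = counts.sum()
    # Pr(x_{N+1} = k) = (n_k + alpha * g_k) / (N + alpha)
    p = (counts + alpha * g) / (n + alpha)
    return rng.choice(len(g), p=p)

rng = np.random.default_rng(0)
g = np.full(8, 1 / 8)            # uniform base measure over K = 2^3 words
counts = np.zeros(8)
for _ in range(1000):
    k = sample_predictive(counts, alpha=5.0, g=g, rng=rng)
    counts[k] += 1               # rich-get-richer: observed words become likelier
```

Small alpha makes the sampler cling to the empirical counts; large alpha makes it keep drawing fresh words from the base measure, mirroring the two limits described above.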
2.1 Model fitting
Given data, we fit the UBM via maximum a posteriori (MAP) inference for θ and α, using coordinate ascent. The marginal log-likelihood from (4) is given by

L = log P(X_N | α, θ) = Σ_k log Γ(n_k + αg_k) − Σ_k log Γ(αg_k) + log Γ(α) − log Γ(N + α).    (6)
Derivatives with respect to θ and α are

∂L/∂θ = α Σ_k [ψ(n_k + αg_k) − ψ(αg_k)] ∂g_k/∂θ,    (7)

∂L/∂α = Σ_k g_k [ψ(n_k + αg_k) − ψ(αg_k)] + ψ(α) − ψ(N + α),    (8)
where ψ denotes the digamma function. Note that the summation terms vanish when we have no observations (n_k = 0), so we only need to consider the words observed in the dataset.

Note also that in the limit α → ∞, ∂L/∂θ converges to Σ_k (n_k / g_k) ∂g_k/∂θ, the derivative of the logarithm of the base measure with respect to θ. On the other hand, in the limit α → 0, the derivative goes to Σ_k (1 / g_k) ∂g_k/∂θ, reflecting the fact that the number of observations n_k is ignored: the likelihood effectively reflects only a single draw from the base distribution with probability g_k.
Even when the likelihood defined by the base measure is convex or log-convex in θ, the UBM likelihood is not guaranteed to be convex. Hence, we optimize by a coordinate ascent procedure that alternates between optimizing α and θ.
2.2 Hyper-prior
When modeling large populations of neurons, the number of parameters θ of the base measure grows and over-fitting becomes a concern. Since the UBM relies on the base measure to provide smoothing over words, it is critical to properly regularize our estimate of θ.
¹Technically, the mode of π is G_θ only for α ≥ 1; for α < 1, the distribution is symmetric around G_θ, but the probability mass is concentrated on the corners of the simplex.
We place a hyper-prior p(θ|λ) on θ for regularization. We consider both l2 and l1 regularization, which correspond to Gaussian and double-exponential priors, respectively. With regularization, the loss function for optimization is L − λ‖θ‖_p^p, where p = 1, 2. In a typical multi-neuron recording, the connectivity is known to be sparse and lower order [1, 3], and so we assume the connectivity is sparse. The l1 prior in particular promotes sparsity.
3 Base measures
The scalability of UBM hinges on the scalability of its base measure. We describe two computationally efficient base measures.
3.1 Independent Bernoulli model
We consider the independent Bernoulli model which assumes (statistically) independent spiking
neurons. It is often used as a baseline model for its simplicity [4, 3]. The Bernoulli base measure
takes the form,
G_θ(k) = p(x_1, . . . , x_m | θ) = ∏_{i=1}^{m} p_i^{x_i} (1 − p_i)^{1 − x_i},    (9)

where p_i ≥ 0 and θ = (p_1, . . . , p_m). The distribution has full support on the K spike words as long as all the p_i are non-zero. Although the Bernoulli model cannot capture the higher-order correlation structure of the spike word distribution with only m parameters, inference is fast and memory-efficient.
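For small m, Eq. (9) can be enumerated over all K = 2^m words to obtain the full base-measure vector (g_1, ..., g_K). This is an illustrative sketch of ours (function name not from the paper):

```python
import numpy as np
from itertools import product

def bernoulli_base_measure(p):
    """Probability g_k of every length-m binary word under independent
    Bernoulli spiking (Eq. 9), enumerated in lexicographic order."""
    p = np.asarray(p, dtype=float)
    words = np.array(list(product([0, 1], repeat=p.size)), dtype=float)
    # per-word product of p_i^{x_i} (1 - p_i)^{1 - x_i}
    return np.prod(words * p + (1 - words) * (1 - p), axis=1)

g = bernoulli_base_measure([0.1, 0.5, 0.2])   # K = 2^3 = 8 word probabilities
```

The first entry corresponds to the all-silent word (0, 0, 0), whose probability is (1 − 0.1)(1 − 0.5)(1 − 0.2).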
3.2 Cascaded logistic model
To introduce a rich dependence structure among the neurons, we assume the joint firing probability
of each neuron factors with a cascaded structure (see Fig. 1D):
p(x_1, x_2, . . . , x_m) = p(x_1) p(x_2 | x_1) p(x_3 | x_1, x_2) · · · p(x_m | x_1, x_2, . . . , x_{m−1}).    (10)

Along with a parametric form of the conditional distribution p(x_i | x_1, . . . , x_{i−1}), it provides a probabilistic model of spike words.
A natural choice for the conditional is the logistic-Bernoulli linear model, a widely used model for binary observations [2]:

p(x_i = 1 | x_{1:i−1}, θ) = logistic(h_i + Σ_{j<i} w_{ij} x_j),    (11)
where θ = (h_i, w_{ij})_{i, j<i} are the parameters. The combination of the factorization and the likelihoods gives rise to the cascaded logistic (Bernoulli) model², which can be written as

G_θ(k) = p(x_1, . . . , x_m | θ) = ∏_{i=1}^{m} p(x_i | x_{1:i−1})    (12)

= ∏_{i=1}^{m} [1 + exp(−(2x_i − 1)(h_i + Σ_{j=1}^{i−1} w_{ij} x_j))]^{−1}.    (13)
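Equation (13) gives each word's probability as a product of per-neuron logistic terms, so no partition function is needed. A minimal sketch (ours, with randomly drawn h and w) evaluates it and checks by enumeration that the probabilities sum to one:

```python
import numpy as np
from itertools import product

def cascaded_logistic_prob(x, h, w):
    """Probability of one binary word x under the cascaded logistic model
    (Eqs. 12-13). h[i] is the bias of neuron i; w[i, j] (j < i) couples
    neuron i to the earlier neurons in the cascade."""
    x = np.asarray(x, dtype=float)
    prob = 1.0
    for i in range(x.size):
        z = h[i] + w[i, :i] @ x[:i]
        # p(x_i | x_{1:i-1}) = [1 + exp(-(2 x_i - 1) z)]^{-1}
        prob *= 1.0 / (1.0 + np.exp(-(2 * x[i] - 1) * z))
    return prob

rng = np.random.default_rng(1)
m = 4
h = rng.normal(size=m)
w = np.tril(rng.normal(size=(m, m)), k=-1)   # strictly lower triangular couplings
total = sum(cascaded_logistic_prob(x, h, w)
            for x in product([0, 1], repeat=m))
```

The exact normalization (total = 1 by construction) is precisely the computational advantage over the Ising model discussed next.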
The cascaded logistic model and the Ising model (second-order maxent model) have the same number of parameters, m(m+1)/2, but a different parametric form. The Ising model can be written as³

p(x_1, . . . , x_m | θ) = (1 / Z(J)) exp(Σ_{i ≤ j} J_{ij} x_i x_j),    (14)
where θ = J is an upper triangular matrix of parameters, and Z(J) is the normalizer. However, unlike the cascaded logistic model, it is difficult to evaluate the likelihood of the Ising model, since it does not have a computationally tractable normalizer (partition function). Hence, fitting an Ising model is typically challenging. Since each conditional can be independently fit with a logistic regression (a
[Figure 2 appears here: (A) example cascaded logistic model, (B) equivalent Ising model, (C-F) scatter plots and JS-divergence histograms for sparse and dense Ising models; see caption below.]
Figure 2: Tight relation between cascaded logistic model and the Ising model. (A) A cascaded
logistic model depicted as a graphical model with at most two conditioning (incoming arrow) per
node (see Theorem 2). The hi parameters are given in the nodes and the interaction terms, wij
are shown on the arrows between nodes. (B) Parameter matrix J of an Ising model equivalent to
(A). (C) A scatter plot of three simulated Ising models fit with cascaded logistic (blue tone) and
independent Bernoulli (red tone) models. Each point is a word in the m = 15 spike word space. The
x-axis gives probability of the word under the actual Ising model and the y-axis shows the estimated
probability from the fitted model. The Ising model parameters were sparsely connected and generated randomly. The diagonal terms (J_ii) were drawn from a standard normal. 80% of the off-diagonal terms (J_ij, i ≠ j) were set to 0 and the rest drawn from a normal with mean 0 and standard deviation 3. Both models were fit by maximum likelihood using 10^7 samples. (D) A histogram of the Jensen-Shannon (JS) divergence between 100 random pairs of sparse Ising models and the fitted models. (E,F)
Same as (C,D) for Ising models generated with dense connectivity. The diagonal terms in the Ising
model parameters were constant -2. The off-diagonal terms were drawn from a standard normal
distribution.
convex optimization), the cascaded logistic model's estimation is tractable for a large number of neurons [2].
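To see why the Ising likelihood is hard, note that Z(J) in Eq. (14) sums over all 2^m words. The brute-force sketch below (ours, not the authors' code) is exact only for small m; its cost grows exponentially with the population size:

```python
import numpy as np
from itertools import product

def ising_probs(J):
    """Exact probabilities of all 2^m words under the Ising model (Eq. 14),
    by brute-force enumeration of the partition function Z(J)."""
    m = J.shape[0]
    words = np.array(list(product([0, 1], repeat=m)), dtype=float)
    # energy of word x: sum_{i <= j} J_ij x_i x_j (upper triangle of J;
    # the diagonal holds the per-neuron bias terms)
    energies = np.einsum('ki,ij,kj->k', words, np.triu(J), words)
    unnorm = np.exp(energies)
    return unnorm / unnorm.sum()   # division by Z(J) is the expensive step

J = np.triu(np.random.default_rng(2).normal(size=(3, 3)))
p = ising_probs(J)
```

For m = 100 neurons this enumeration would require 2^100 terms, which is the intractability the cascaded logistic model sidesteps.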
Despite these differences, remarkably, the Ising model and the cascaded logistic models overlap
substantially. Up to m = 3 neurons, Ising model and cascaded logistic model are equivalent. For
larger populations, the following theorem describes the intersection of the two models.
Theorem 1 (Pentadiagonal Ising model is a cascaded logistic model). An Ising model with J_{ij} = 0 for j < i − 2 or j > i + 2 is also a cascaded logistic model. Moreover, the parameter transformation is bijective.
The mapping between the models' parameters is given by

J_{m,m} = h_m    (15)

J_{m−1,m} = w_{m,m−1}    (16)

J_{m−1,m−1} = h_{m−1} + log[(1 + exp(h_m)) / (1 + exp(h_m + w_{m,m−1}))]    (17)

J_{i,i} = h_i + log[(1 + exp(h_{i+1})) / (1 + exp(h_{i+1} + w_{i+1,i}))] + log[(1 + exp(h_{i+2})) / (1 + exp(h_{i+2} + w_{i+2,i}))]    (18)

J_{i,i+1} = w_{i+1,i} + log[((1 + exp(h_{i+2} + w_{i+2,i}))(1 + exp(h_{i+2} + w_{i+2,i+1}))) / ((1 + exp(h_{i+2}))(1 + exp(h_{i+2} + w_{i+2,i+1} + w_{i+2,i})))]    (19)

J_{i,i+2} = w_{i+2,i}    (20)

for 1 ≤ i ≤ m − 2, for a symmetric J. Proof can be found in the supplemental material.

²Also known as the logistic autoregressive network. See [15], chapter 3.2.

³Note that for x_i ∈ {0, 1}, the mean parameters h_i can be incorporated as the diagonal of J.
[Figure 3 appears here; see caption below.]
Figure 3: 3rd order maxent distribution experiment. (A) Convergence in Jensen-Shannon (JS)
divergence between the fit model and the true model. Error bar represents SEM over 10 repeats.
(B) Histogram of the number of spikes per word. (C) Scatter plots of the log-likelihood ratio
log(P_emp(k)) − log(P_model(k)) for each model (column), and two sample sizes of N = 1000 and
N = 100000 (rows). Note the scale difference on the y-axes. Error line represents twice the standard
deviation over 10 repeats. Shaded area represents frequentist 95% confidence interval for histogram
estimator assuming the same amount of data. The number on the bottom right is the JS divergence.
Unlike the Ising model, the order of the neurons plays a role in the formulation of the cascaded
logistic model. Since a permutation of a pentadiagonal matrix is not necessarily pentadiagonal,
this poses a potential challenge to the application of this equivalency. However, the Cuthill-McKee
algorithm can be used as a heuristic to find a permutation of J with the lowest bandwidth (i.e.,
closest to pentadiagonal) [18].
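SciPy exposes the (reverse) Cuthill-McKee heuristic as `scipy.sparse.csgraph.reverse_cuthill_mckee`. The toy example below (ours) permutes a sparse symmetric coupling matrix J toward pentadiagonal form by reducing its bandwidth:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(J):
    """Largest |i - j| over the nonzero entries of J."""
    i, j = np.nonzero(J)
    return int(np.abs(i - j).max()) if i.size else 0

# A sparse symmetric J whose natural ordering has a wide band.
J = np.zeros((6, 6))
for a, b in [(0, 5), (1, 4), (2, 3)]:
    J[a, b] = J[b, a] = 1.0
np.fill_diagonal(J, 1.0)

perm = reverse_cuthill_mckee(csr_matrix(J), symmetric_mode=True)
J_perm = J[np.ix_(perm, perm)]   # reordered couplings, much narrower band
```

If the permuted matrix is pentadiagonal, Theorem 1 applies directly after the reordering.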
This theorem can be generalized to sparse, structured cascaded logistic models.
Theorem 2 (Intersection between cascaded logistic model and Ising model). A cascaded logistic
model with at most two interactions with other neurons is also an Ising model.
For example, a cascaded logistic model with the sparse cascade p(x_1) p(x_2|x_1) p(x_3|x_1) p(x_4|x_1, x_3) p(x_5|x_2, x_4) is an Ising model (Fig. 2A)⁴. We remark that although the cascaded logistic model can be written
as an exponential family form, the cascaded logistic does not correspond to a simple family of
maximum entropy models in general.
The theorems show that only a subset of Ising models are equivalent to cascaded logistic models.
However, cascaded logistic models generally provide good approximations to the Ising model. We
demonstrate this by drawing random Ising models (both with sparse and dense pairwise coupling J),
and then fitting with a cascaded logistic model (Fig. 2C-F). Since Ising models are widely accepted
as effective models of neural populations, the cascaded logistic model presents a computationally
tractable alternative.
4 Simulations
We compare two parametric models (independent Bernoulli and cascaded logistic model) with three
nonparametric models (two universal binary models centered on the parametric models, and a naive
histogram estimator) on simulated data with 15 neurons. We find the MAP solution as the parameter
estimate for each model. We use an l1 regularization to fit the cascaded logistic model and the corresponding UBM. The l1 regularizer was selected by scanning on a grid until the cross-validation
likelihood started decreasing on 10% of the training data.
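The Jensen-Shannon divergence used throughout to score the fitted models can be computed as follows (an illustrative helper of ours, not the authors' code; natural log, so 0 ≤ JS ≤ log 2):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two spike-word distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        # KL(a || b), skipping zero-probability words (0 log 0 = 0)
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

d_same = js_divergence([0.5, 0.5], [0.5, 0.5])   # identical distributions
d_diff = js_divergence([1.0, 0.0], [0.0, 1.0])   # disjoint supports
```

Identical distributions give 0, and fully disjoint ones give the maximum value log 2, which makes the convergence curves in Figs. 3-6 directly comparable.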
In Fig. 3, we simulate a maximum entropy (maxent) distribution with a third order interaction. As
the number of samples increases, Jensen-Shannon (JS) divergence between the estimated model and
true maxent model decreases exponentially for the nonparametric models. The JS-divergence of the
⁴We provide MATLAB code to convert back and forth between a subset of Ising models and the corresponding subset of cascaded logistic models (see online supplemental material).
[Figure 4 appears here; see caption below.]
Figure 4: Synchrony histogram model. Each word with the same number of total spikes regardless
of neuron identity has the same probability. Both Bernoulli and cascaded logistic models do not
provide a good approximation in this case and saturate, in terms of JS divergence. Same format as
Fig. 3.
[Figure 5 appears here; see caption below.]
Figure 5: Ising model with 1-D nearest neighbor interaction. Same format as Fig. 3. Note that
cascaded logistic and UBM with cascaded logistic base measure perform almost identically, and
their convergence does not saturate (as expected by Theorem 1).
parametric models saturates since the actual distribution does not lie within the same parametric
family. The cascaded logistic model and the UBM centered on it show the best performance for the
small sample regime, but eventually other nonparametric models catch up with the cascaded logistic
model.
The scatter plot (Fig. 3C) displays the log-likelihood ratio log(P_true) − log(P_model) to quantify the
accuracy of the predictive distribution. Where significant deviations from the base measure model
can be observed in Fig. 3C, the corresponding UBM adapts to account for those deviations.
In Fig. 4, we draw samples from a distribution with higher-order dependencies: each word with the
same number of total spikes are assigned the same probability. For example, words with exactly
10 neurons spiking (and 5 not spiking, out of 15 neurons) occur with high probability as can be
seen from the histogram of the total spikes (Fig. 4B). Neither the Bernoulli model nor the cascaded
logistic model can capture this structure accurately, indicated by a plateau in the convergence plots
(Fig. 4A,C). In this case, all three nonparametric models behave similarly: both UBMs converge
with the histogram.
In addition, we see that if the data comes from the model class assumed by the base measure, then
UBM is just as good as the base measure alone (Fig. 5). Together, these results suggest that UBM
[Figure 6 appears here: panels A-D; see caption below.]
Figure 6: Various models fit to a population of ten retinal ganglion neurons' responses to a naturalistic movie [3]. Words consisted of 20 ms binarized responses. 1 × 10^5 samples were reserved for
testing. (A) JS divergence between the estimated model, and histogram constructed from the test
data. Ising model is included, and its trace is closely followed by the cascaded logistic model. (B)
Histogram of number of spikes per word. (C) Log-likelihood ratio scatter plot for the models trained
with 10^5 randomized observations. (D) The concentration parameter α as a function of sample size.
supplements the base measure to model flexibly the observed firing patterns, and performs at least
as well as the histogram in the worst case.
5 Neural data
We apply UBMs to a simultaneously recorded population of 10 retinal ganglion cells, and compare
to the Ising model. In Fig. 6A we evaluate the convergence of each model. Three models (cascaded logistic, its corresponding UBM, and the Ising model) initially perform similarly; however, as more
data is provided, UBM predicts the probabilities better. In panel C, we confirm that the cascaded
logistic UBM gives the best fit. The decrease in the corresponding α, shown in panel D, indicates
that the cascaded logistic UBM is becoming less confident that the data is from an actual cascaded
logistic model as we obtain more data.
6 Conclusion
We proposed universal binary models (UBMs), a nonparametric framework that extends parametric
models of neural recordings. UBMs flexibly trade off between smoothing from the base measure and
'histogram-like' behavior. The Dirichlet process can incorporate deviations from the base measure
when supported by the data, even as the base measure buttresses the nonparametric approach with
desirable properties of parametric models, such as fast convergence and interpretability. Unlike the
reliable interaction model [3], which aims to provide the same features in a heuristic manner, the
UBM is a well-defined probabilistic model.
Since the main source of smoothing is the base measure, UBM?s ability to extrapolate is limited
to repeatedly observed words. However, UBM is capable of adjusting the probabilities of the most
frequent words to focus on fitting the regularities of small probability events.
We proposed the cascaded logistic model for use as a powerful, but still computationally tractable,
base measure. We showed, both theoretically and empirically, that the cascaded logistic model is
an effective, scalable alternative to the Ising model, which is usually limited to smaller populations.
The UBM model class has the potential to reveal complex structure in large-scale recordings without
the limitations of a priori parametric assumptions.
Acknowledgments
We thank R. Segev and E. Ganmor for the retinal data. This work was supported by a Sloan Research Fellowship, McKnight Scholar's Award, and NSF CAREER Award IIS-1150186 (JP).
References
[1] I. E. Ohiorhenuan, F. Mechler, K. P. Purpura, A. M. Schmid, Q. Hu, and J. D. Victor. Sparse coding and high-order correlations in fine-scale cortical networks. Nature, 466(7306):617-621, July 2010.
[2] P. Ravikumar, M. Wainwright, and J. Lafferty. High-dimensional Ising model selection using L1-regularized logistic regression. The Annals of Statistics, 38(3):1287-1319, 2010.
[3] E. Ganmor, R. Segev, and E. Schneidman. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences, 108(23):9679-9684, 2011.
[4] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007-1012, April 2006.
[5] J. Shlens, G. Field, J. Gauthier, M. Grivich, D. Petrusca, A. Sher, A. M. Litke, and E. J. Chichilnisky. The structure of multi-neuron firing patterns in primate retina. Journal of Neuroscience, 26:8254-8266, 2006.
[6] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pages 194-281. MIT Press, Cambridge, MA, USA, 1986.
[7] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[8] G. J. McLachlan and D. Peel. Finite Mixture Models. Wiley, 2000.
[9] M. Bethge and P. Berens. Near-maximum entropy models for binary neural representations of natural images. Advances in Neural Information Processing Systems, 20:97-104, 2008.
[10] P. Müller and F. A. Quintana. Nonparametric Bayesian data analysis. Statistical Science, 19(1):95-110, 2004.
[11] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[12] W. Truccolo and J. P. Donoghue. Nonparametric modeling of neural point processes via stochastic gradient boosting regression. Neural Computation, 19(3):672-705, 2007.
[13] R. P. Adams, I. Murray, and D. J. C. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, New York, NY, USA, 2009.
[14] A. Kottas, S. Behseta, D. E. Moorman, V. Poynor, and C. R. Olson. Bayesian nonparametric analysis of neuronal intensity rates. Journal of Neuroscience Methods, 203(1):241-253, January 2012.
[15] B. J. Frey. Graphical Models for Machine Learning and Digital Communication. MIT Press, 1998.
[16] M. Pachitariu, B. Petreska, and M. Sahani. Recurrent linear models of simultaneously-recorded neural populations. Advances in Neural Information Processing Systems (NIPS), 2013.
[17] E. Archer, I. M. Park, and J. W. Pillow. Bayesian entropy estimation for binary spike train data using parametric prior knowledge. In Advances in Neural Information Processing Systems (NIPS), 2013.
[18] E. Cuthill and J. McKee. Reducing the bandwidth of sparse symmetric matrices. In Proceedings of the 1969 24th National Conference, ACM '69, pages 157-172, New York, NY, USA, 1969. ACM.
wainwright:1 power:1 critical:1 overlap:1 difficulty:1 natural:2 event:1 cascaded:50 scheme:1 movie:1 imply:1 axis:2 started:1 categorical:1 hm:4 naive:3 catch:1 schmid:1 sher:1 sahani:1 prior:7 understanding:1 literature:1 l2:1 berry:1 loss:1 expect:1 permutation:2 limitation:2 proportional:2 remarkable:1 validation:1 foundation:1 digital:1 systematically:1 pi:5 austin:2 row:1 accounted:1 supported:3 repeat:2 allow:1 neighbor:1 combating:1 sparse:11 distributed:1 overcome:1 xn:3 cortical:1 pillow:2 rich:1 autoregressive:1 collection:1 confirm:1 incoming:1 assumed:1 xi:9 purpura:1 nature:2 career:1 sem:1 necessarily:1 complex:1 berens:1 apr:1 dense:4 main:1 neurosci:1 arrow:2 arise:1 x1:17 neuronal:1 fig:15 slow:1 wiley:1 ny:2 fails:1 exponential:3 lie:1 perceptual:1 vanish:1 third:1 learns:1 formula:1 theorem:8 saturate:2 jensen:3 learnable:1 concern:1 dl:1 intractable:1 effectively:1 supplement:1 nk:10 entropy:5 depicted:1 intersection:2 ganglion:1 ptrue:1 loses:1 relies:1 acm:3 ma:1 conditional:3 identity:1 absence:1 included:1 typical:1 reducing:2 total:3 accepted:1 experimental:1 shannon:3 support:1 jonathan:1 incorporate:1 evaluate:2 correlated:2 |
A Determinantal Point Process Latent Variable
Model for Inhibition in Neural Spiking Data
Jasper Snoek*
Harvard University
[email protected]
Ryan P. Adams
Harvard University
[email protected]
Richard S. Zemel
University of Toronto
[email protected]
Abstract
Point processes are popular models of neural spiking behavior as they provide a
statistical distribution over temporal sequences of spikes and help to reveal the
complexities underlying a series of recorded action potentials. However, the most
common neural point process models, the Poisson process and the gamma renewal
process, do not capture interactions and correlations that are critical to modeling
populations of neurons. We develop a novel model based on a determinantal point
process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model
is a natural extension of the popular generalized linear model to sets of interacting
neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied
to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic
modulation by the theta rhythm known to be present in the data.
1
Introduction
Statistical models of neural spike recordings have greatly facilitated the study of both intra-neuron
spiking behavior and the interaction between populations of neurons. Although these models are
often not mechanistic by design, the analysis of their parameters fit to physiological data can help
elucidate the underlying biological structure and causes behind neural activity. Point processes in
particular are popular for modeling neural spiking behavior as they provide statistical distributions
over temporal sequences of spikes and help to reveal the complexities underlying a series of noisy
measured action potentials (see, e.g., Brown (2005)). Significant effort has been focused on addressing the inadequacies of the standard homogeneous Poisson process to model the highly non-stationary
stimulus-dependent spiking behavior of neurons. The generalized linear model (GLM) is a widely
accepted extension for which the instantaneous spiking probability can be conditioned on spiking
history or some external covariate. These models in general, however, do not incorporate the known
complex instantaneous interactions between pairs or sets of neurons. Pillow et al. (2008) demonstrated how the incorporation of simple pairwise connections into the GLM can capture correlated
spiking activity and result in a superior model of physiological data. Indeed, Schneidman et al.
(2006) observe that even weak pairwise correlations are sufficient to explain much of the collective
behavior of neural populations. In this paper, we develop a point process over spikes from collections of neurons that explicitly models anti-correlation to capture the inhibitive and competitive
relationships known to exist between neurons throughout the brain.
* Research was performed while at the University of Toronto.
Although the incorporation of pairwise inhibition in statistical models is challenging, we demonstrate how complex nonlinear pairwise inhibition between neurons can be modeled explicitly and
tractably using a determinantal point process (DPP). As a starting point, we show how a collection
of independent Poisson processes, which is easily extended to a collection of GLMs, can be jointly
modeled in the context of a DPP. This is naturally extended to include dependencies between the individual processes and the resulting model is particularly well suited to capturing anti-correlation or
inhibition. The Poisson spike rate of each neuron is used to model individual spiking behavior, while
pairwise inhibition is introduced to model competition between neurons. The reader familiar with
Markov random fields can consider the output of each generalized linear model in our approach to
be analogous to a unary potential while the DPP captures pairwise interaction. Although inhibitory,
negative pairwise potentials render the use of Markov random fields intractable in general; in contrast, the DPP provides a more tractable and elegant model of pairwise inhibition. Given neural
spiking data from a collection of neurons and corresponding stimuli, we learn a latent embedding
of neurons such that nearby neurons in the latent space inhibit one another as enforced by a DPP
over the kernel between latent embeddings. Not only does this overcome a modeling shortcoming of
standard point processes applied to spiking data but it provides an interpretable model for studying
the inhibitive and competitive properties of sets of neurons. We demonstrate how divisive normalization is easily incorporated into our model and a learned periodic modulation of individual neuron
spiking is added to model the influence on individual neurons of periodic phenomena such as theta
or gamma rhythms.
The model is empirically validated in Section 4, first on three simulated examples to show the influence of its various components and then using spike recordings from a collection of neurons in
the hippocampus of an awake behaving rat. We show that the model learns a latent embedding of
neurons that is consistent with the previously observed inhibitory relationship between interneurons
and pyramidal cells. The inferred periodic component of approximately 4 Hz is precisely the frequency of the theta rhythm observed in these data and its learned influence on individual neurons is
again consistent with the dichotomy of neurons.
2
Background
2.1
Generalized Linear Models for Neuron Spiking
A standard starting point for modeling single neuron spiking data is the homogeneous Poisson process, for which the instantaneous probability of spiking is determined by a scalar rate or intensity
parameter. The generalized linear model (Brillinger, 1988; Chornoboy et al., 1988; Paninski, 2004;
Truccolo et al., 2005) is a framework that extends this to allow inhomogeneity by conditioning the
spike rate on a time varying external input or stimulus. Specifically, in the GLM the rate parameter
results from applying a nonlinear warping (such as the exponential function) to a linear weighting
of the inputs. Paninski (2004) showed that one can analyze recorded spike data by finding the maximum likelihood estimate of the parameters of the GLM, and thereby study the dependence of the
spiking on external input. Truccolo et al. (2005) extended this to analyze the dependence of a neuron's spiking behavior on its past spiking history, ensemble activity and stimuli. Pillow et al. (2008)
demonstrated that the model of individual neuron spiking activity was significantly improved by
including coupling filters from other neurons with correlated spiking activity in the GLM. Although
it is prevalent in the literature, there are fundamental limitations to the GLM's ability to model real
neural spiking patterns. The GLM can not model the joint probability of multiple neurons spiking
simultaneously and thus lacks a direct dependence between the spiking of multiple neurons. Instead,
the coupled GLM relies on an assumption that pairs of neurons are conditionally independent given
the previous time step. However, empirical evidence, from for example neural recordings from the
rat hippocampus (Harris et al., 2003), suggests that one can better predict the spiking of an individual neuron by taking into account the simultaneous spiking of other neurons. In the following, we
show how to express multiple GLMs as a determinantal point process, enabling complex inhibitory
interactions between neurons. This new model enables a rich set of interactions between neurons
and enables them to be embedded in an easily-visualized latent space.
2.2
Determinantal Point Processes
The determinantal point process is an elegant distribution over configurations of points in space that
tractably models repulsive interactions. Many natural phenomena are DPP distributed including
fermions in quantum mechanics and the eigenvalues of random matrices. For an in-depth survey,
see Hough et al. (2006); see Kulesza and Taskar (2012) for an overview of their development within
machine learning. A point process provides a distribution over subsets of a space S. A determinantal point process models the probability density (or mass function, as appropriate) for a subset
of points, S ⊆ S, as being proportional to the determinant of a corresponding positive semi-definite
gram matrix K_S, i.e., p(S) ∝ |K_S|. In the L-ensemble construction that we limit ourselves to here,
this gram matrix arises from the application of a positive semi-definite kernel function to the set S.
Kernel functions typically capture a notion of similarity and so the determinant is maximized when
the similarity between points, represented as the entries in KS is minimized. As the joint probability
is higher when the points in S are distant from one another, this encourages repulsion or inhibition
between points. Intuitively, if one point i is observed, then another point j with high similarity, as
captured by a large entry [KS ]ij of KS , will become less likely to be observed under the model. It
is important to clarify here that KS can be any positive semi-definite matrix over some set of inputs corresponding to the points in the set, but it is not the empirical covariance between the points
themselves. Conversely, KS encodes a measure of anti-correlation between points in the process.
Therefore, we refer hereafter to KS as the kernel or gram matrix.
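As a concrete illustration of the L-ensemble construction above (the latent points, kernel choice, and helper names below are invented for this sketch, not taken from the paper), the following enumerates every subset of a three-point ground set and checks that the probabilities p(S) = |K_S| / |K + I| form a valid distribution in which similar points rarely co-occur:

```python
import itertools
import numpy as np

def rbf_kernel(Y, lengthscale=1.0):
    """Squared-exponential kernel over latent points (rows of Y)."""
    sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def l_ensemble_prob(L, S):
    """p(S) = |L_S| / |L + I| for an L-ensemble DPP."""
    norm = np.linalg.det(L + np.eye(L.shape[0]))
    sub_det = np.linalg.det(L[np.ix_(S, S)]) if len(S) else 1.0
    return sub_det / norm

# Latent points 0 and 1 are close (strong mutual inhibition); point 2 is far away.
Y = np.array([[0.0], [0.1], [5.0]])
L = rbf_kernel(Y)

subsets = [list(s) for r in range(4) for s in itertools.combinations(range(3), r)]
probs = {tuple(s): l_ensemble_prob(L, s) for s in subsets}

assert abs(sum(probs.values()) - 1.0) < 1e-10   # probabilities sum to one
assert probs[(0, 1)] < probs[(0, 2)]            # the close pair rarely co-occurs
```

The normalizer |L + I| works because the subset determinants of any positive semi-definite L sum to exactly det(L + I), which is what makes the L-ensemble tractable.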
3
Methods
3.1
Modeling inter-Neuron Inhibition with Determinantal Point Processes
We are interested in modelling the spikes on N neurons during an interval of time T . We will
assume that time has been discretized into T bins of duration Δ. In our formulation here, we assume
that all interaction across time occurs due to the GLM and that the determinantal point process
only modulates the inter-neuron inhibition within a single time slice. This corresponds to a Poisson
assumption for the marginal of each neuron taken by itself.
In our formulation, we associate each neuron, n, with a D-dimensional latent vector y_n ∈ R^D and
take our space to be the set of these vectors, i.e., S = {y_1, y_2, ..., y_N}. At a high level, we use an
L-ensemble determinantal point process to model which neurons spike in time t via a subset S_t ⊆ S:

    Pr(S_t | {y_n}_{n=1}^N) = |K_{S_t}| / |K_S + I_N|.    (1)
Here the entries of the matrix K_S arise from a kernel function k_θ(·, ·) applied to the values {y_n}_{n=1}^N,
so that [K_S]_{n,n'} = k_θ(y_n, y_{n'}). The kernel function, governed by hyperparameters θ, measures the
degree of dependence between two neurons as a function of their latent vectors. In our empirical
analysis we choose a kernel function that measures this dependence based on the Euclidean distance
between latent vectors such that neurons that are closer in the latent space will inhibit each other
more. In the remainder of this section, we will expand this to add stimulus dependence.
As the determinant of a diagonal matrix is simply the product of the diagonal entries, when KS
is diagonal the DPP has the property that it is simply the joint probability of N independent (discretized) Poisson processes. Thus in the case of independent neurons with Poisson spiking we can
write K_S as a diagonal matrix where the diagonal entries are the individual Poisson intensity parameters, K_S = diag(λ_1, λ_2, ..., λ_N). Through conditioning the diagonal elements on some external
input, this elegant property allows us to express the joint probability of N independent GLMs in
the context of the DPP. This is the starting point of our model, which we will combine with a full
covariance matrix over the latent variables to include interaction between neurons.
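To see that independence property concretely (the rates below are arbitrary placeholders), a diagonal K_S makes the L-ensemble factor into independent per-neuron spiking probabilities λ_n / (1 + λ_n):

```python
import itertools
import numpy as np

lam = np.array([0.5, 2.0, 0.1])        # per-neuron discretized Poisson intensities
L = np.diag(lam)
norm = np.linalg.det(L + np.eye(3))    # equals prod_n (1 + lam_n)

for r in range(4):
    for S in itertools.combinations(range(3), r):
        p_dpp = (np.linalg.det(L[np.ix_(S, S)]) if S else 1.0) / norm
        # Independent factorization: neuron n spikes w.p. lam_n / (1 + lam_n).
        p_indep = np.prod([lam[n] / (1 + lam[n]) if n in S else 1 / (1 + lam[n])
                           for n in range(3)])
        assert abs(p_dpp - p_indep) < 1e-12
```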
Following Zou and Adams (2012), we express the marginal preference for a neuron firing over
others, thus including the neuron in the subset S, with a "prior kernel" that modulates the covariance.
Assuming that k_θ(y, y) = 1, this kernel has the form

    [K_S]_{n,n'} = k_θ(y_n, y_{n'}) √(λ_n λ_{n'}),    (2)

where n, n' ∈ S and λ_n is the intensity measure of the Poisson process for the individual spiking
behavior of neuron n. We can use these intensities to modulate the DPP with a GLM by allowing
the λ_n to depend on a weighted time-varying stimulus. We denote the stimulus at time t by a
vector x_t ∈ R^K and neuron-specific weights as w_n ∈ R^K, leading to instantaneous rates:

    λ_n^(t) = exp{x_t^T w_n}.    (3)
This leads to a stimulus dependent kernel for the DPP L-ensemble:

    [K_S^(t)]_{n,n'} = k_θ(y_n, y_{n'}) · exp{ (1/2) x_t^T (w_n + w_{n'}) }.    (4)
It is convenient to denote the diagonal matrix Λ^(t) = diag(√λ_1^(t), √λ_2^(t), ..., √λ_N^(t)), as well as
the S_t-restricted submatrix Λ_{S_t}^(t), where S_t indexes the rows of Λ^(t) corresponding to the subset of
neurons that spiked at time t. We can now write the joint probability of the spike history as
    Pr({S_t}_{t=1}^T | {w_n, y_n}_{n=1}^N, {x_t}_{t=1}^T, θ) = ∏_{t=1}^T |Δ Λ_{S_t}^(t) K_{S_t} Λ_{S_t}^(t)| / |Δ Λ_S^(t) K_S Λ_S^(t) + I_N|.    (5)
The generalized linear model now modulates the marginal rates, while the determinantal point process induces inhibition. This is similar to unary versus pairwise potentials in a Markov random field.
Note also that as the influence of the DPP goes to zero, K_S tends toward the identity matrix and
the probability of neuron n firing becomes (for Δ ≪ 1) Δλ_n^(t), which recovers the basic GLM. The
latent embeddings y_n and weights w_n can now be learned so that the appropriate balance is found
between stimulus dependence and inhibition due to, e.g., overlapping receptive fields.
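Putting Equations 3-5 together, the log likelihood of a spike history can be sketched as follows (all data, weights, and dimensions below are random placeholders, and the bin duration Δ is folded into the rates):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K_dim, T, D = 4, 3, 6, 2
delta = 0.02                                   # bin duration
Y = rng.normal(size=(N, D))                    # latent embeddings y_n
W = rng.normal(size=(N, K_dim))                # stimulus weights w_n
X = rng.normal(size=(T, K_dim))                # stimuli x_t
spikes = rng.random((T, N)) < 0.5              # placeholder binary spike matrix

sq = ((Y[:, None] - Y[None, :]) ** 2).sum(-1)
K_lat = np.exp(-0.5 * sq)                      # k_theta(y_n, y_n'), so k(y, y) = 1

def log_likelihood():
    ll = 0.0
    for t in range(T):
        lam = np.exp(X[t] @ W.T)               # Eq. (3)
        s = np.sqrt(delta * lam)
        L = K_lat * np.outer(s, s)             # Eq. (4), with Delta folded in
        S = np.flatnonzero(spikes[t])
        num = np.linalg.slogdet(L[np.ix_(S, S)])[1] if len(S) else 0.0
        den = np.linalg.slogdet(L + np.eye(N))[1]
        ll += num - den                        # one factor of Eq. (5) per bin
    return ll

ll = log_likelihood()
assert np.isfinite(ll) and ll < 0.0            # a valid log probability
```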
3.2
Learning
We learn the model parameters {w_n, y_n}_{n=1}^N from data by maximizing the likelihood in Equation 5.
This optimization is performed using stochastic gradient descent on mini-batches of time slices.
The computational complexity of learning the model is asymptotically dominated by the cost of
computing the determinants in the likelihood, which are O(N^3) in this model. This was not a
limiting factor in this work, as we model a population of 31 neurons. Fitting this model for 31
neurons in Section 4.3 with approximately eighty thousand time bins requires approximately three
hours using a single core of a typical desktop computer. The cubic scaling of determinants in this
model will not be a realistic limiting factor until it is possible to simultaneously record from tens of
thousands of neurons simultaneously. Nevertheless, at these extremes there are promising methods
for scaling the DPP using low rank approximations of KS (Affandi et al., 2013) or expressing them
in the dual representation when using a linear covariance (Kulesza and Taskar, 2011).
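The learning step can be illustrated with a toy version (three neurons, unit intensities, and a hill-climbing ascent with finite-difference gradients standing in for the analytic stochastic gradients used in the paper; the data generator is invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 3, 40
# Toy data: exactly one of neurons {0, 1} spikes per bin (they never co-fire),
# and neuron 2 spikes independently half the time.
spike_sets = []
for t in range(T):
    S = [int(rng.integers(2))]
    if rng.random() < 0.5:
        S.append(2)
    spike_sets.append(S)

def log_lik(Y):
    sq = (Y[:, None] - Y[None, :]) ** 2
    L = np.exp(-0.5 * sq)                       # unit length scale, unit intensities
    den = np.linalg.slogdet(L + np.eye(N))[1]
    return sum(np.linalg.slogdet(L[np.ix_(S, S)])[1] - den for S in spike_sets)

Y = rng.normal(size=N)                          # 1-D latent embeddings
ll0 = ll = log_lik(Y)
lr, eps = 0.1, 1e-6
for step in range(100):                         # hill-climbing gradient ascent
    g = np.array([(log_lik(Y + eps * np.eye(N)[n]) - ll) / eps for n in range(N)])
    cand = Y + lr * g
    cand_ll = log_lik(cand)
    if cand_ll > ll:
        Y, ll = cand, cand_ll
    else:
        lr *= 0.5                               # backtrack on overshoot
assert ll > ll0                                 # likelihood improved
```

Maximizing this objective pulls the never-co-firing pair together in the latent space, which is exactly the inhibition structure the model is designed to recover.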
3.3
Gain and Contrast Normalization
There is increasing evidence that neural responses are normalized or scaled by a common factor such
as the summed activations across a pool of neurons (Carandini and Heeger, 2012). Many computational models of neural activity include divisive normalization as an important component (Wainwright et al., 2002). Such normalization can be captured in our model through scaling the individual
neuron spiking rates by a stimulus-dependent multiplicative constant γ_t > 0:

    Pr(S_t | {w_n, y_n}_{n=1}^N, x_t, θ, γ_t) = |γ_t Δ Λ_{S_t}^(t) K_{S_t} Λ_{S_t}^(t)| / |γ_t Δ Λ_S^(t) K_S Λ_S^(t) + I_N|,    (6)

where γ_t = exp{x_t^T w_γ}. We learn these parameters w_γ jointly with the other model parameters.
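The effect of the gain term can be checked directly: scaling the kernel by γ_t > 1 raises every neuron's marginal spiking probability, which for an L-ensemble is the diagonal of L(L + I)^{-1} (the kernel below is a random placeholder, not fitted to data):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
A = rng.normal(size=(N, N))
L = A @ A.T + 0.1 * np.eye(N)          # placeholder positive definite kernel

def marginal_rates(L):
    """P(n in S) = diag(L (L + I)^{-1}) for an L-ensemble DPP."""
    return np.diag(L @ np.linalg.inv(L + np.eye(len(L))))

low = marginal_rates(1.0 * L)          # gamma_t = 1
high = marginal_rates(4.0 * L)         # gamma_t = 4: stronger common drive
assert np.all(high > low)              # every neuron fires more often
assert np.all(low > 0) and np.all(high < 1)   # rates stay valid probabilities
```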
3.4
Modeling the Influence of Periodic Phenomena
Neuronal spiking is known to be heavily influenced by periodic phenomena. For example, in our
empirical analysis in Section 4.3 we apply the model to the spiking of neurons in the hippocampus
of behaving rats. Csicsvari et al. (1999) observe that the theta rhythm plays a significant role in
determining the spiking behavior of the neurons in these data, with neurons spiking in phase with
the 4 Hz periodic signal. Thus, the firing patterns of neurons that fire in phase can be expected to
be highly correlated while those which fire out of phase will be strongly anti-correlated. In order to
incorporate the dependence on a periodic signal into our model, we add to λ_n^(t) a periodic term that
modulates the individual neuron spiking rates with a frequency f, a phase φ, and a neuron-specific
amplitude or scaling factor α_n,

    λ_n^(t) = exp{ x_t^T w_n + α_n sin(f t + φ) },    (7)
where t is the time at which the spikes occurred. Note that if desired one can easily manipulate
Equation 7 to have each of the neurons modulated by an individual frequency, ai , and offset bi .
Alternatively, we can create a mixture of J periodic components, modeling for example the influence
of the theta and gamma rhythms, by adding a sum over components,
    λ_n^(t) = exp{ x_t^T w_n + Σ_{j=1}^J α_{jn} sin(f_j t + φ_j) }.    (8)
[Figure 1: panels (a) Sliding Bar, (b) Random Spiking, (c) Gain Control; latent value or gain weight plotted against order in the 1D retina.]
Figure 1: Results of the simulated moving bar experiment (1a) compared to independent spiking behavior (1b).
Note that in 1a the model puts neighboring neurons within the unit length scale while it puts others at least one
length scale apart. 1c demonstrates the weights, w_γ, of the gain component learned if up to 5x random gain is
added to the stimulus at retina locations 6-12.
4
Experiments
In this section we present an empirical analysis of the model developed in this paper. We first
evaluate the model on a set of simulated experiments to examine its ability to capture inhibition in
the latent variables while learning the stimulus weights and gain normalization. We then train the
model on recorded rat hippocampal data and evaluate its ability to capture the properties of groups of
interacting neurons. In all experiments we compute K_S with the Matérn 5/2 kernel (see Rasmussen
and Williams (2006) for an overview) with a fixed unit length scale (which determines the overall
scaling of the latent space).
4.1
Simulated Moving Bar
We first consider an example simulated problem where twelve neurons are configured in order along
a one dimensional retinotopic map and evaluate the ability of the DPP to learn latent representations
that reflect their inhibitive properties. Each neuron has a receptive field of a single pixel and the
neurons are stimulated by a three pixel wide moving bar. The bar is slid one pixel at each time step
from the first to last neuron, and this is repeated twenty times. Of the three neighboring neurons
exposed to the bar, all receive high spike intensity but due to neural inhibition, only the middle one
spikes. A small amount of random background stimulus is added as well, causing some neurons to
spike without being stimulated by the moving bar. We train the DPP specified above on the resulting
spike trains, using the stimulus of each neuron as the Poisson intensity measure and visualize the
one-dimensional latent representation, y, for each neuron. This is compared to the case where all
neurons receive random stimulus and spike randomly and independently when the stimulus is above
a threshold. The resulting learned latent values for the neurons are displayed in Figure 1. We see
in Figure 1a that the DPP prefers neighboring neurons to be close in the latent space, because they
compete when the moving bar stimulates them. To demonstrate the effect of the gain and contrast
normalization we now add random gain of up to 5x to the stimulus only at retina locations 6-12 and
retrain the model while learning the gain component. In Figure 1c we see that the model learns to
use the gain component to normalize these inputs.
4.2
Digits Data
Now we use a second simulated experiment to examine the ability of the model to capture structure
encoding inhibitory interactions in the latent representation while learning the stimulus dependent
probability of spiking from data. This experiment includes thirty simulated neurons, each with a
two dimensional latent representation, i.e., N = 30, y_n ∈ R^2. The stimuli are 16×16 images of
handwritten digits from the MNIST data set, presented sequentially, one per "time slice". In the
data, each of the thirty neurons is specialized to one digit class, with three neurons per digit. When
a digit is presented, two neurons fire among the three: one that fires with probability one, and one
of the remaining two fires with uniform probability. Thus, we expect three neurons to have strong
probability of firing when the stimulus contains their preferred digit; however, one of the neurons
does not spike due to competition with another neuron. We expect the model to learn this inhibition
by moving the neurons close together in the latent space. Examining the learned stimulus weights
and latent embeddings, shown in Figures 2a and 2b respectively, we see that this is indeed the
case. This scenario highlights a major shortcoming of the coupled GLM. For each of the inhibitory
[Figure 2: panels (a) Stimulus Weights, (b) 2D Latent Embedding. Figure 3: panels (a) Kernel Matrix K_S and (b) Stimulus Weights w_n, plotted over neuron index, stimulus index, orientations, and location grid.]
Figure 2: Results of the digits experiment. A visualization of the neuron specific weights wn (2a) and latent
embedding (2b) learned by the DPP. In (2b) each blue number indicates the position of the neuron that always
fires for that specific digit, and the red and green numbers indicate the neurons that respond to that digit but
inhibit each other. We observe in (2b) that inhibitory pairs of neurons, the red and green pairs, are placed
extremely close to each other in the DPP's learned latent space while neurons that spike simultaneously (the blue
and either red or green) are distant. This scenario emphasizes the benefit of having an inhibitory dependence
between neurons. The coupled GLM can not model this scenario well because both neurons of the inhibitory
pair receive strong stimulus but there is no indication from past spiking behavior which neuron will spike.
[Figure 3, continued: panels (c) w_γ, (d) w_{n=3}.]
Figure 3: Visualizations of the parameters learned by the DPP on the Hippocampal data. Figure 3a shows a
visualization of the kernel matrix KS . Dark colored entries of KS indicate a strong pairwise inhibition while
lighter ones indicate no inhibition. The low frequency neurons, pyramidal cells, are strongly anti-correlated
which is consistent with the notion that they are inhibited by a common source such as an interneuron. Figure 3b
shows the (normalized) weights, wn learned from the stimulus feature vectors, which consist of concatenated
location and orientation bins, to each neuron's Poisson spike rate λ_n^(t). An interesting observation is that the
two highest frequency neurons, interneurons, have little dependence on any particular stimulus and are strongly
anti-correlated with a large group of low frequency pyramidal cells. 3c shows the weights, w_γ, to the gain
control, γ, and 3d shows a visualization of the stimulus weights for a single neuron n = 3 organized by
location and orientation bins. In 3a and 3b the neurons are ordered by their firing rates. In 3d we see that the
neuron is stimulated heavily by a specific location and orientation.
pairs of neurons, both will simultaneously receive strong stimulus but the conditional independence
assumption will not hold; past spiking behavior can not indicate that only one can spike.
4.3
Hippocampus Data
As a final experiment, we empirically evaluate the proposed model on multichannel recordings from
layer CA1 of the right dorsal hippocampus of awake behaving rats (Mizuseki et al., 2009; Csicsvari
et al., 1999). The data consist of spikes recorded from 31 neurons across four shanks during open
field tasks as well as the synchronized positions of two LEDs on the rat's head. The extracted positions
and orientations of the rat's head are binned into twenty-five discrete location and twelve orientation
bins which are input to the model as the stimuli. Approximately twenty seven minutes of spike
recording data was divided into time slices of 20ms. The data are hypothesized to consist of spiking
[Figure 4: panels (a) Latent embedding of neurons, (b) Latent embedding of neurons (zoomed); points labeled by shank and depth, colored by spike rate (Hz).]
Figure 4: A visualization of the two dimensional latent embeddings, yn , learned for each neuron. Figure 4b
shows 4a zoomed in on the middle of the figure. Each dot indicates the latent value of a neuron. The color
of the dots represents the empirical spiking rate of the neuron, the number indicates the depth of the neuron
according to its position along the shank - from 0 (shallow) to 7 (deep) - and the letter denotes which of four
distinct shanks the neurons spiking was read from. We observe that the higher frequency interneurons are
placed distant from each other but in a configuration such that they inhibit the low frequency pyramidal cells.
[Figure 5: panels (a) Single periodic component, (b) Two component mixture (spike rate in Hz vs. 4 Hz phase for low spike rate (Pyr) and high spike rate (Int) neurons), (c) reproduction from Csicsvari et al. (1999).]
Figure 5: A visualization of the periodic component learned by our model. In 5a, the neurons share a single
learned periodic frequency and offset but each learns an individual scaling factor α_n, and 5b shows the average
influence of the two component mixture on the high and low spike rate neurons. In 5c we provide a reproduction
from (Csicsvari et al., 1999) for comparison. In 5a the neurons are colored by firing rate from light (high) to
dark (low). Note that the model learns a frequency that is consistent with the approximately 4 Hz theta rhythm
and there is a dichotomy in the learned amplitudes, α_n, that is consistent with the influence of the theta rhythm
on pyramidal cells and interneurons.
originating from two classes of neurons, pyramidal cells and interneurons (Csicsvari et al., 1999),
which are largely separable by their firing rates. Csicsvari et al. (1999) found that interneurons fire
at a rate of 14 ± 1.43 Hz and pyramidal cells at 1.4 ± 0.01 Hz. Interneurons are known to inhibit
pyramidal cells, so we expect interesting inhibitory interactions and anti-correlated spiking between
the pyramidal cells. In our qualitative analysis we visualize the data by the firing rates of the
neurons to see if the model learns this dichotomy.
Figures 3, 4 and 5a show visualizations of the parameters learned by the model with a single periodic
component according to Equation 7. Figure 3 shows the kernel matrix KS corresponding to the
latent embeddings in Figure 4 and the stimulus and gain control weights learned by the model. In
Figure 4 we see the two dimensional embeddings, yn , learned for each neuron by the same model.
In Figure 5 we see the periodic components learned for individual neurons on the hippocampal
data according to Equation 7 when the frequency term f and offset φ are shared across neurons.
However, the scaling terms γn are learned for each neuron, so the neurons can each determine the
influence of the periodic component on their spiking behavior. Although the parameters are all
randomly initialized at the start of learning, the single frequency signal learned is of approximately
4 Hz which is consistent with the theta rhythm that Mizuseki et al. (2009) empirically observed in
these data. In Figures 5a and 5b we see that each neuron's amplitude component depends strongly
Model                                    Valid Log Likelihood   Train Log Likelihood
Only Latent                                     -3.79                  -3.68
Only Stimulus                                   -3.17                  -3.29
Stimulus + Periodic + Latent                    -3.07                  -2.91
Stimulus + Gain + Periodic                      -3.04                  -2.92
Stimulus + Gain                                 -2.95                  -2.84
Stimulus + Periodic + Gain + Latent             -2.74                  -2.63
Stimulus + 2×Periodic + Gain + Latent           -2.07                  -1.96
Table 1: Model log likelihood on the held out validation set and training set for various combinations of
components. We found the algorithm to be extremely stable. Each model configuration was run 5 times with
different random initializations and the variance of the results was within 10^-8.
on the neuron's firing rate. This is also consistent with the observations of Csicsvari et al. (1999)
that interneurons and pyramidal cells are modulated by the theta rhythm at different amplitudes. We
find a strong similarity between the periodic influence learned by our two component model (5b) to
that in the reproduced figure (5c) from Csicsvari et al. (1999).
In Table 1 we present the log likelihood of the training data and withheld validation data under
variants of our model after learning the model parameters. The validation data consists of the last
full minute of recording, which is 3,000 consecutive 20ms time slices. We see that the likelihood of
the validation data under our model increases as each additional component is added. Interestingly,
adding a second component to the periodic mixture greatly increases the model log likelihood.
Finally, we conduct a leave-one-neuron out prediction experiment on the validation data to compare
the proposed model to the coupled GLM. A spike is predicted if it increases the likelihood under
the model and the accuracy is averaged over all neurons and time slices in the validation set. We
compare GLMs with the periodic component, gain, stimulus and coupling filters to our DPP with the
latent component. The models did not differ significantly in the correct prediction of when neurons
would not spike - i.e. both were 99% correct. However, the DPP predicted 21% of spikes correctly
while the GLM predicted only 5.5% correctly. This may be counterintuitive, as one may not expect a
model for inhibitory interactions to improve prediction of when spikes do occur. However, the GLM
predicts almost no spikes (483 spikes of a possible 92,969), possibly due to its inability to capture
higher order inhibitory structure. As an example scenario, in a one-of-N neuron firing case the GLM
may prefer to predict that nothing fires (rather than incorrectly predict multiple spikes) whereas the
DPP can actually condition on the behavior of the other neurons to determine which neuron fired.
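The prediction rule above can be written generically. The sketch below is illustrative scaffolding, not the authors' code: it assumes some model supplies a `loglik` callable that scores a binary spike vector for one time slice, and it predicts a spike for the held-out neuron whenever inserting the spike raises that score.

```python
import numpy as np

def leave_one_neuron_out_accuracy(spike_matrix, loglik):
    """Leave-one-neuron-out spike prediction, as described in the text.

    spike_matrix: (n_slices, n_neurons) binary array of observed spikes.
    loglik: function mapping a binary spike vector for one time slice to
    the model log likelihood of that configuration.
    Returns (spike_accuracy, no_spike_accuracy), averaged over all
    neuron/time-slice pairs with and without an observed spike.
    """
    hits = {True: 0, False: 0}
    totals = {True: 0, False: 0}
    for t in range(spike_matrix.shape[0]):
        observed = spike_matrix[t]
        for n in range(observed.size):
            with_spike = observed.copy(); with_spike[n] = 1
            without = observed.copy(); without[n] = 0
            # Predict a spike iff adding it increases the likelihood.
            predicted = loglik(with_spike) > loglik(without)
            actual = bool(observed[n])
            totals[actual] += 1
            hits[actual] += int(predicted == actual)
    return (hits[True] / max(totals[True], 1),
            hits[False] / max(totals[False], 1))
```

The same routine can score either the DPP or the GLM by swapping in the corresponding likelihood function.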
5 Conclusion
In this paper we presented a novel model for neural spiking data from populations of neurons that is
designed to capture the inhibitory interactions between neurons. The model is empirically validated
on simulated experiments and rat hippocampal neural spike recordings. In analysis of the model
parameters fit to the hippocampus data, we see that it indeed learns known structure and interactions between neurons. The model is able to accurately capture the known interaction between a
dichotomy of neurons and the learned frequency component reflects the true modulation of these
neurons by the theta rhythm.
There are numerous possible extensions that would be interesting to explore. A defining feature of
the DPP is an ability to model inhibitory relationships in a neural population; excitatory connections
between neurons are modeled as through the lack of inhibition. Excitatory relationships could be
modeled by incorporating an additional process, such as a Gaussian process, but integrating the
two processes would require some care. Also, a limitation of the current approach is that time
slices are modeled independently. Thus, neurons are not influenced by their own or others' spiking
history. The DPP could be extended to include not only spikes from the current time slice but also
neighboring time slices. This will present computational challenges, however, as the DPP scales with
respect to the number of spikes. Finally, we see from Table 1 that the gain modulation and periodic
component are essential to model the hippocampal data. An interesting alternative to the periodic
modulation of individual neuron spiking probabilities would be to have the latent representation
of neurons itself be modulated by a periodic component. This would thus change the inhibitory
relationships to be a function of the theta rhythm, for example, rather than static in time.
References
Emery N. Brown. Theory of point processes for neural systems. In Methods and Models in Neurophysics, chapter 14, pages 691–726. 2005.
J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli.
Spatio-temporal correlations and visual signaling in a complete neuronal population. Nature, 454(7206):995–999, Aug 2008.
Elad Schneidman, Michael J. Berry, Ronen Segev, and William Bialek. Weak pairwise correlations
imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, April 2006.
David R. Brillinger. Maximum likelihood analysis of spike trains of interacting nerve cells. Biological Cybernetics, 59(3):189–200, August 1988.
E. S. Chornoboy, L. P. Schramm, and A. F. Karr. Maximum likelihood identification of neural point
process systems. Biological Cybernetics, 59(3):265–275, 1988.
Liam Paninski. Maximum likelihood estimation of cascade point-process neural encoding models.
Network: Computation in Neural Systems, 15(4):243–262, 2004.
W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology, 93(2):1074, 2005.
K. D. Harris, J. Csicsvari, H. Hirase, G. Dragoi, and G. Buzsáki. Organization of cell assemblies in
the hippocampus. Nature, 424:552–555, 2003.
J. Ben Hough, Manjunath Krishnapur, Yuval Peres, and Bálint Virág. Determinantal processes and
independence. Probability Surveys, 3:206–229, 2006.
Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. Foundations
and Trends in Machine Learning, 5(2–3), 2012.
James Zou and Ryan P. Adams. Priors for diversity in generative latent variable models. In Advances
in Neural Information Processing Systems, 2012.
Raja H. Affandi, Alex Kulesza, Emily Fox, and Ben Taskar. Nyström approximation for large-scale determinantal processes. In Artificial Intelligence and Statistics, 2013.
Alex Kulesza and Ben Taskar. Structured determinantal point processes. In Advances in Neural
Information Processing Systems, 2011.
Matteo Carandini and David J. Heeger. Normalization as a canonical neural computation. Nature
Reviews Neuroscience, 13(1):51–62, January 2012.
Martin J. Wainwright, Odelia Schwartz, and Eero P. Simoncelli. Natural image statistics and divisive
normalization: Modeling nonlinearity and adaptation in cortical neurons. In R. Rao, B. Olshausen,
and M. Lewicki, editors, Probabilistic Models of the Brain: Perception and Neural Function,
chapter 10, pages 203–222. MIT Press, February 2002.
J. Csicsvari, H. Hirase, A. Czurkó, A. Mamiya, and G. Buzsáki. Oscillatory coupling of hippocampal
pyramidal cells and interneurons in the behaving rat. The Journal of Neuroscience, 19(1):274–287, January 1999.
Carl E. Rasmussen and Christopher Williams. Gaussian Processes for Machine Learning. MIT
Press, 2006.
Kenji Mizuseki, Anton Sirota, Eva Pastalkova, and György Buzsáki. Theta oscillations provide
temporal windows for local circuit computation in the entorhinal-hippocampal loop. Neuron, 64(2):267–280, October 2009.
can a simple snippet-matching model take us?
Cheston Tan
Institute for Infocomm Research
Singapore
[email protected]
Jedediah M. Singer
Boston Children?s Hospital
Boston, MA 02115
[email protected]
Thomas Serre
David Sheinberg
Brown University
Providence, RI 02912
{Thomas Serre, David Sheinberg}@brown.edu
Tomaso A. Poggio
MIT
Cambridge, MA 02139
[email protected]
Abstract
The macaque Superior Temporal Sulcus (STS) is a brain area that receives and integrates inputs from both the ventral and dorsal visual processing streams (thought
to specialize in form and motion processing respectively). For the processing of
articulated actions, prior work has shown that even a small population of STS neurons contains sufficient information for the decoding of actor invariant to action,
action invariant to actor, as well as the specific conjunction of actor and action.
This paper addresses two questions. First, what are the invariance properties of individual neural representations (rather than the population representation) in STS?
Second, what are the neural encoding mechanisms that can produce such individual neural representations from streams of pixel images? We find that a simple
model, one that simply computes a linear weighted sum of ventral and dorsal responses to short action 'snippets', produces surprisingly good fits to the neural
data. Interestingly, even using inputs from a single stream, both actor-invariance
and action-invariance can be accounted for, by having different linear weights.
1 Introduction
For humans and other primates, action recognition is an important ability that facilitates social interaction, as well as recognition of threats and intentions. For action recognition, in addition to the
challenge of position and scale invariance (which are common to many forms of visual recognition),
there are additional challenges. The action being performed needs to be recognized in a manner
invariant to the actor performing it. Conversely, the actor also needs to be recognized in a manner
invariant to the action being performed. Ultimately, however, both the particular action and actor
also need to be ?bound? together by the visual system, so that the specific conjunction of a particular
actor performing a particular action is recognized and experienced as a coherent percept.
For the 'what is where' vision problem, one common simplification of the primate visual system
is that the ventral stream handles the 'what' problem, while the dorsal stream handles the 'where'
problem [1]. Here, we investigate the analogous 'who is doing what' problem. Prior work has
found that brain cells in the macaque Superior Temporal Sulcus (STS) ? a brain area that receives
converging inputs from dorsal and ventral streams ? play a major role in solving the problem. Even
with a small population subset of only about 120 neurons, STS contains sufficient information for
action and actor to be decoded independently of one another [2]. Moreover, the particular conjunction of actor and action (i.e. stimulus-specific information) can also be decoded. In other words,
STS neurons have been shown to have successfully tackled the three challenges of actor-invariance,
action-invariance and actor-action binding.
What sort of neural computations are performed by the visual system to achieve this feat is still an
unsolved question. Singer and Sheinberg [2] performed population decoding from a collection of
single neurons. However, they did not investigate the computational mechanisms underlying the
individual neuron representations. In addition, they utilized a decoding model (i.e. one that models
the usage of the STS neural information by downstream neurons). An encoding model ? one that
models the transformation of pixel inputs into the STS neural representation ? was not investigated.
Here, we further analyze the neural data of [2] to investigate the characteristics of the neural representation at the level of individual neurons, rather than at the population level. We find that instead
of distinct clusters of actor-invariant and action-invariant neurons, the neurons cover a broad, continuous range of invariance.
To the best of our knowledge, there have not been any prior attempts to predict single-neuron responses at such a high level in the visual processing hierarchy. Furthermore, attempts at time-series
prediction for visual processing are also rare. Therefore, as a baseline, we propose a very simple
and biologically-plausible encoding model and explore how far this model can go in terms of reproducing the neural responses in the STS. Despite its simplicity, modeling STS neurons as a linear
weighted sum of inputs over a short temporal window produces surprisingly good fits to the data.
2 Background: the Superior Temporal Sulcus
The macaque visual system is commonly described as being separated into the ventral ('what')
and dorsal ('where') streams [1]. The Superior Temporal Sulcus (STS) is a high-level brain area
that receive inputs from both streams [3, 4]. In particular, it receives inputs from the highest levels
of the processing hierarchy of either stream ? inferotemporal (IT) cortex for the ventral stream,
and the Medial Superior Temporal (MST) cortex for the dorsal stream. Accordingly, neurons that
are biased more towards either encoding form information or motion information have been found
in the STS [5]. The upper bank of the STS has been found to contain neurons more selective for
motion, with some invariance to form [6, 7]. Relative to the upper bank, neurons in the lower bank
of the STS have been found to be more sensitive to form, with some 'snapshot' neurons selective
for static poses within action sequences [7]. Using functional MRI (fMRI), neurons in the lower
bank were found to respond to point-light figures [8] performing biological actions [9], consistent
with the idea that actions can be recognized from distinctive static poses [10]. However, there is no
clear, quantitative evidence for a neat separation between motion-sensitive, form-invariant neurons
in the upper bank and form-sensitive, motion-invariant neurons in the lower bank. STS neurons have
been found to be selective for specific combinations of form and motion [3, 11]. Similarly, based on
fMRI data, the STS responds to both point-light display and video displays, consistent with the idea
that the STS integrates both form and motion [12].
3 Materials and methods
Neural recordings. The neural data used in this work has previously been published by Singer and
Sheinberg [2]. We summarize the key points here, and refer the reader to [2] for details. Two male
rhesus macaques (monkeys G and S) were trained to perform an action recognition task, while neural activity from a total of 119 single neurons (59 and 60 from G and S respectively) was recorded
during task performance. The mean firing rate (FR) over repeated stimulus presentations was calculated, and the mean FR over time is termed the response 'waveform' (Fig. 3). The monkeys' heads
were fixed, but their eyes were free to move (other than fixating at the start of each trial).
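A trial-averaged waveform of this kind can be computed by binning spike times and averaging rates across repeated presentations. The function below is a generic sketch; the bin width and argument names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mean_firing_rate_waveform(spike_times_per_trial, t_start, t_end, bin_ms=10.0):
    """Trial-averaged firing rate ("waveform") from spike-time lists.

    spike_times_per_trial: list of 1-D arrays of spike times in ms, one per
    repeated presentation of the same stimulus.
    Returns (bin_centers_ms, mean_rate_hz).
    """
    edges = np.arange(t_start, t_end + bin_ms, bin_ms)
    # Count spikes per bin for each trial.
    counts = np.stack([np.histogram(st, bins=edges)[0]
                       for st in spike_times_per_trial])
    # Convert counts per bin to Hz, then average over trials.
    rate_hz = counts / (bin_ms / 1000.0)
    return edges[:-1] + bin_ms / 2.0, rate_hz.mean(axis=0)
```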
Stimuli and task. The stimuli consisted of 64 movie clips (8 humanoid computer-generated 'actors'
each performing 8 actions; see Fig. 1). A sample movie of one actor performing the 8 actions
can be found at http://www.jneurosci.org/content/30/8/3133/suppl/DC1 (see Supplemental Movie
1 therein). The monkeys? task was to categorize the action in the displayed clip into two predetermined but arbitrary groups, pressing one of two buttons to indicate their decision. At the start
of each trial, after the monkey maintained fixation for 450ms, a blank screen was shown for 500ms,
and then one of the actors was displayed (subtending 6° of visual angle vertically). Regardless of
action, the actor was first displayed motionless in an upright neutral pose for 300ms, then began
performing one of the 8 actions. Each clip ended back at the initial neutral pose after 1900ms of
motion. A button-press response at any point by the monkey immediately ended the trial, and the
screen was blanked. In this paper, we considered only the data corresponding to the actions (i.e.
excluding the motionless neutral pose). Similar to [2], we assumed that all neurons had a response
latency of 130ms.
Figure 1: Depiction of stimuli used. Top row: the 8 ?actors? in the neutral pose. Bottom row:
sample frames of actor 5 performing the 8 actions; frames are from the same time-point within each
action. The 64 stimuli were an 8-by-8 cross of each actor performing each action.
Actor- and action-invariance indices. We characterized each neuron's response characteristics
along two dimensions: invariance to actor and to action. For the actor-invariance index, a neuron's
average response waveform to each of the 8 actions was first calculated by averaging over all actors.
Then, we calculated the Pearson correlation between the neuron's actual responses and the responses
that would be seen if the neuron were completely actor-invariant (i.e. if it always responded with
the average waveform calculated in the previous step). The action-invariance index was calculated
similarly. The calculation of these indices bear similarities to that for the pattern and component
indices of cells in area MT [13].
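Concretely, both indices reduce to a correlation between the actual waveforms and their marginal averages tiled back out. A minimal sketch, assuming one neuron's trial-averaged responses are stored as an (actors × actions × time) array (this layout is our assumption, not the paper's):

```python
import numpy as np

def invariance_indices(responses):
    """Actor- and action-invariance indices for one neuron.

    responses: array of shape (n_actors, n_actions, T) holding the mean
    firing-rate waveform for each actor/action pair.
    Returns (actor_index, action_index): Pearson correlations between the
    actual responses and the fully invariant predictions.
    """
    actual = responses.ravel()

    # Actor-invariant prediction: average over actors, tiled back out.
    over_actors = responses.mean(axis=0, keepdims=True)
    actor_pred = np.broadcast_to(over_actors, responses.shape).ravel()

    # Action-invariant prediction: average over actions, tiled back out.
    over_actions = responses.mean(axis=1, keepdims=True)
    action_pred = np.broadcast_to(over_actions, responses.shape).ravel()

    actor_index = np.corrcoef(actual, actor_pred)[0, 1]
    action_index = np.corrcoef(actual, action_pred)[0, 1]
    return actor_index, action_index
```

A neuron whose response depends only on the action (identical across actors) scores an actor-invariance index of 1 under this definition.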
Ventral and dorsal stream encoding models. We utilize existing models of brain areas that provide
input to the STS. Specifically, we use the HMAX family of models, which include models of the
ventral [14] and dorsal [15] streams. These models receive pixel images as input, and simulate visual
processing up to areas V4/IT (ventral) and areas MT/MST (dorsal). Such models build hierarchies
of increasingly complex and invariant representations, similar to convolutional and deep-learning
networks. While ventral stream processing has traditionally been modeled as producing outputs in
response to static images, in practice, neurons in the ventral stream are also sensitive to temporal
aspects [16]. As such, we extend the ventral stream model to be more biologically realistic. Specifically, the V1 neurons that project to the ventral stream now have temporal receptive fields (RFs) [17],
not just spatial ones. These temporal and spatial RFs are separable, unlike those for V1 neurons that
project to the dorsal stream [18]. Such space-time separable V1 neurons that project to the ventral
stream are not directionally-selective and are not sensitive to motion per se. They are still sensitive
to form rather than motion, but are better models of form processing, since in reality input to the
visual system consists of a continuous stream of images. Importantly, the parameters of dorsal and
ventral encoding models were fixed, and there was no optimization done to produce better fits to the
current data. We used only the highest-level (C2) outputs of these models.
STS encoding model. As a first approximation, we model the neural processing by STS neurons
as a linear weighted sum of inputs. The weights are fixed, and do not change over time. In other
words, at any point in time, the output of a model STS neuron is a linear combination of the C2
outputs produced by the ventral and dorsal encoding models. We do not take into account temporal
phenomena such as adaptation. We make the simplifying (but unrealistic) assumptions that synaptic
efficacy is constant (i.e. no 'neural fatigue'), and time-points are all independent.
Each model neuron has its own set of static weights that determine its unique pattern of neural
responses to the 64 action clips. The weights are learned using leave-one-out cross-validation. Of
the 64 stimuli, we use 63 for training, and use the learnt weights to predict the neural response
waveform to the left-out stimulus. This procedure is repeated 64 times, leaving out a different
stimulus each time. The 64 sets of predicted waveforms are collectively compared to the original
neural responses. The goodness-of-fit metric is the Pearson correlation (r) between predicted and
actual responses.
The weights are learned using simple linear regression. For number of input features F , there are
F + 1 unknown weights (including a constant bias term). The inputs to the STS model neuron are
represented as a (T × 63) by (F + 1) matrix, where T is the number of timesteps. The output
is a (T × 63) by 1 vector, which is simply a concatenation of the 63 neural response waveforms
corresponding to the 63 training stimuli. This simple linear system of equations, with (F + 1)
unknowns and (T × 63) equations, can be solved using various methods. In practice, we used the
least-squares method. Importantly, at no point are ground-truth actor or action labels used.
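The whole fitting procedure is ordinary least squares inside a leave-one-stimulus-out loop. The following sketch mirrors the description above with illustrative array shapes; in the paper the inputs would be the dimensionality-reduced C2 features and the recorded waveforms.

```python
import numpy as np

def fit_and_predict(features, waveforms):
    """Leave-one-stimulus-out linear fit of one model STS neuron.

    features:  (n_stim, T, F) array of input features per stimulus/timestep.
    waveforms: (n_stim, T) array of the neuron's mean firing rate.
    Returns (n_stim, T) predicted waveforms, each predicted by a model
    trained on the other n_stim - 1 stimuli.
    """
    n_stim, T, F = features.shape
    preds = np.empty_like(waveforms, dtype=float)
    for held_out in range(n_stim):
        train = [s for s in range(n_stim) if s != held_out]
        # Stack training timesteps into a ((n_stim-1)*T, F+1) design matrix
        # with a constant bias column, as in the text.
        X = features[train].reshape(-1, F)
        X = np.hstack([X, np.ones((X.shape[0], 1))])
        y = waveforms[train].reshape(-1)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        # Predict the held-out stimulus with the learned weights.
        X_test = np.hstack([features[held_out], np.ones((T, 1))])
        preds[held_out] = X_test @ w
    return preds
```

The goodness-of-fit metric is then the Pearson correlation between `preds` and `waveforms`, flattened over all stimuli and timesteps.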
Rather than use the 960 dorsal and/or 960 ventral C2 features directly as inputs to the linear regression, we first performed PCA on these features (separately for the two streams) to reduce the
dimensionality. Only the first 300 principal components (accounting for 95% or more of the variance) were used; the rest was discarded. Therefore, F = 300. Fitting was also performed using the
combination of dorsal and ventral C2 features. As before, PCA was performed, and only the first
300 principal components were retained. Keeping F constant at 300, rather than setting it to 600,
allowed for a fairer comparison to using either stream alone.
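A PCA step of this kind can be sketched with a plain SVD; the function and parameter names here are illustrative, not the authors' implementation.

```python
import numpy as np

def pca_reduce(X, n_components=300):
    """Project feature vectors onto their first n_components principal axes.

    X: (n_samples, n_features) matrix (e.g. rows are timesteps of C2 outputs).
    Returns the reduced (n_samples, k) matrix and the fraction of variance
    retained, where k = min(n_components, rank of the centered data).
    """
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_components, Vt.shape[0])
    reduced = Xc @ Vt[:k].T
    retained = (S[:k] ** 2).sum() / (S ** 2).sum()
    return reduced, retained
```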
4 What is the individual neural representation like?
In this section, we examine the neural representation at the level of individual neurons. Figure 2
shows the invariance characteristics of the population of 119 neurons. Overall, the neurons span a
broad range of action and actor invariance (95% of invariance index values span the ranges [0.301
0.873] and [0.396 0.894] respectively). The correlation between the two indices is low (r=0.26).
Considering each monkey separately, the correlations between the two indices were 0.55 (monkey
G) and -0.09 (monkey S). This difference could be linked to slightly different recording regions [2].
Figure 2: Actor- and action-invariance indices for 59 neurons from monkey G (blue) and 60 neurons
from monkey S (red). Blue and red crosses indicate mean values.
Figure 3 shows the response waveforms of some example neurons to give a sense of what response
patterns correspond to low and high invariance indices. The average over actors, average over actions
and the overall average are also shown. Neuron S09 is highly action-invariant but not actor-invariant,
while G54 is the opposite. Neuron G20 is highly invariant to both action and actor, while the
invariance of S10 is close to the mean invariance of the population.
We find that there are no distinct clusters of neurons with high actor-invariance or action-invariance.
Such clusters would correspond to a representation scheme in which certain neurons specialize in
coding for action invariant to actor, and vice-versa. A cluster of neurons with both low actor- and
action-invariance could correspond to cells that code for a specific conjunction (binding) of actor and
action, but no such cluster is seen. Rather, Fig. 2 indicates that instead of the ?cell specialization?
approach to neural representation, the visual system adopts a more continuous and distributed representation scheme, one that is perhaps more universal and generalizes better to novel stimuli. In the
4
Figure 3: Plots of waveforms (mean firing rate in Hz vs. time in secs) for four example neurons.
Rows are actors, columns are actions. Red lines: mean firing rate (FR). Light red shading: ?1 SEM
of FR. Black lines (row 9 and column 9): waveforms averaged over actors, actions, or both.
rest of this paper, we explore how well a linear, feedforward encoding model of STS ventral/dorsal
integration can reproduce the neural responses and invariance properties found here.
5 The 'snippet-matching' model
In their paper, Singer and Sheinberg found evidence for the neural population representing actions
as 'sequences of integrated poses' [2]. Each pose contains visual information integrated over a
window of about 120ms. However, it was unclear what the representation was for each individual
neuron. For example, does each neuron encode just a single pose (i.e., a 'snippet'), or can it encode
more than one? What are the neural computations underlying this encoding?
In this paper, we examine what is probably the simplest model of such neural computations, which
we call the 'snippet-matching' model. According to this model, each individual STS neuron compares its incoming input over a single time step to its preferred stimulus. Due to hierarchical organization, this single time step at the STS level contains information processed from roughly 120 ms of
raw visual input. For example, a neuron matches the incoming visual input to one particular short
segment of the human walking gait cycle, and its output at any time is in effect how similar the
visual input (from the previous 120ms up to the current time) is compared to that preferred stimulus
(represented by linear weights; see sub-section on STS encoding model in Section 3).
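The memoryless matching idea can be sketched minimally: a model neuron's output at each time step is a linear match between a fixed weight template and the most recent window of input features. The window length, feature dimensionality, and function name below are hypothetical choices for illustration, not the paper's parameters.

```python
import numpy as np

def snippet_response(features, w, window=12):
    # Output at time t: linear match between template w and the flattened
    # feature window covering the previous `window` time steps.
    T, d = features.shape
    out = np.zeros(T)
    for t in range(window - 1, T):
        out[t] = np.dot(w, features[t - window + 1 : t + 1].ravel())
    return out

# Toy check: embed the preferred snippet in a random feature stream.
rng = np.random.default_rng(0)
snippet = rng.normal(size=(12, 4))   # hypothetical preferred 12-step snippet
stream = rng.normal(size=(100, 4))
stream[50:62] = snippet              # snippet occupies time steps 50..61
resp = snippet_response(stream, snippet.ravel())
```

Because the matching is purely feedforward, the response at time t depends only on the current window, never on the neuron's own past outputs.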
Such a model is purely feedforward and does not rely on any lateral or recurrent neural connections.
The temporal-window matching is straightforward to implement neurally, e.g., using the same 'delay-line' mechanisms [19] proposed for motion-selective cells in V1 and MT. Neurons implementing
this model are said to be 'memoryless' or 'stateless', because their outputs are solely dependent on
their current inputs, and not on their own previous outputs. It is important to note that the inputs
to be matched could, in theory, be very short. In the extreme, the temporal window is very small,
and the visual input to be matched could simply be the current frame. In this extreme case, action
recognition is performed by the matching of individual frames to the neuron?s preferred stimulus.
Such a 'snippet' framework (memoryless matching over a single time step) is consistent with prior
findings regarding recognition of biological motion. For instance, it has been found that humans can
easily recognize videos of people walking from short clips of point-light stimuli [8]. This is consistent with the idea that action recognition is performed via matching of snippets. Neurons sensitive
to such action snippets have been found using techniques such as fMRI [9, 12] and electrophysiology [7]. However, such snippet encoding models have not been investigated in much detail.
While there is some evidence for the snippet model in terms of the existence of neurons responsive
and selective for short action sequences, it is still unclear how feasible such an encoding model is.
For instance, given some visual input, if a neuron simply tries to match that sequence to its preferred
stimulus, how exactly does the neuron ignore the motion aspects (to recognize actor invariant to
action) or ignore the form aspects (to recognize action invariant to actors)? Given the broad range of
actor- and action-invariances found in the previous section, it is crucial to see if the snippet model
can in fact reproduce such characteristics.
6 How far can snippet-matching go?
In this section, we explore how well the simple snippet-matching model can predict the response
waveforms of our population of STS neurons. This is a challenging task. STS is high up in the
visual processing hierarchy, meaning that there are more unknown processing steps and parameters
between the retina and STS, as compared to a lower-level visual area. Furthermore, there is a diversity of neural response patterns, both between different neurons (see Figs. 2 and 3) and sometimes
also between different stimuli for a neuron (e.g. S10, Fig. 3).
The snippet-matching process can utilize a variety of matching functions. Again, we try the simplest
possible function: a linear weighted sum. First, we examine the results of the leave-one-out fitting
procedure when the inputs to STS model neurons are from either the dorsal or ventral streams alone.
For monkey G, the mean goodness-of-fit (correlation between actual and predicted neural responses
on left-out test stimuli) over all 59 neurons are 0.50 and 0.43 for the dorsal and ventral stream
inputs respectively. The goodness-of-fit is highly correlated between the two streams (r=0.94).
For monkey S, the mean goodness-of-fit over all 60 neurons is 0.33 for either stream (correlation
between streams, r=0.91). Averaged over all neurons and both streams, the mean goodness-of-fit is
0.40. As a sanity check, when either the linear weights or the predictions are randomly re-ordered,
mean goodness-of-fit is 0.00.
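The leave-one-out fitting procedure can be sketched as ordinary least squares with one stimulus held out per fold, with goodness-of-fit reported as the correlation between held-out predictions and actual responses. The absence of regularization and the synthetic data below are assumptions for illustration only.

```python
import numpy as np

def loo_goodness_of_fit(X, y):
    # Leave-one-out linear fit: for each held-out stimulus, fit least-squares
    # weights on the remaining stimuli, predict the held-out response, then
    # report the correlation between held-out predictions and the responses.
    n = X.shape[0]
    Xb = np.hstack([X, np.ones((n, 1))])      # append a bias column
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        w, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        preds[i] = Xb[i] @ w
    return np.corrcoef(preds, y)[0, 1]

# Sanity check on synthetic data: a truly linear response is recovered.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 5))                  # 64 stimuli, 5 features
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=64)
r = loo_goodness_of_fit(X, y)
```

As in the text's sanity check, shuffling the responses (or the weights) destroys the relationship and drives this correlation toward zero.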
Figure 4 shows the predicted and actual responses for two example neurons, one with a relatively
good fit (G22 fit to dorsal, r=0.70) and one with an average fit (S10 fit to dorsal, r=0.39). In the case
of G22 (Fig. 4 left), which is not even the best-fit neuron, there is a surprisingly good fit despite the
clear complexity in the neural responses. This complexity is seen most clearly from the responses
to the 8 actions averaged over actors, where the number and height of peaks in the waveform vary
considerably from one action to another. The fit is remarkable considering the simplicity of the
snippet model, in which there is only one set of static linear weights; all fluctuations in the predicted
waveforms arise purely from changes in the inputs to this model STS neuron.
Over the whole population, the fits to the dorsal model (mean r=0.42) are better than to the ventral
model (mean r=0.38). Is there a systematic relationship between the difference in goodness-of-fit to
the two streams and the invariance indices calculated in Section 4? For instance, one might expect
that neurons with high actor-invariance would be better fit to the dorsal than ventral model. From
Fig. 5, we see that this is exactly the case for actor invariance. There is a strong positive correlation between actor invariance and difference (dorsal minus ventral) in goodness-of-fit (monkey G:
r=0.72; monkey S: r=0.69). For action invariance, as expected, there is a negative correlation (i.e.
strong action invariance predicts better fit to ventral model) for monkey S (r=-0.35). However, for
monkey G, the correlation is moderately positive (r=0.43), contrary to expectation. It is unclear why
this is the case, but it may be linked to the robust correlation between actor- and action-invariance
indices for monkey G (r=0.55), seen in Fig. 2. This is not the case for monkey S (r=-0.09).
Figure 4: Predicted (blue) and actual (red) waveforms for two example neurons, both fit to the dorsal
stream. G22: r=0.70, S10: r=0.39. For each of the 64 sub-plots, the prediction for that test stimulus
used the other 63 stimuli for training. Solely for visualization purposes, predictions were smoothed
using a moving average window of 4 timesteps (total length 89 timesteps).
Figure 5: Relationship between goodness-of-fit and invariance. Y-axis: difference between r from
fitting to dorsal versus ventral streams. X-axis: actor (left) and action (right) invariance indices.
Interestingly, either stream can produce actor-invariant and action-invariant responses (Fig. 6).
While G54 is better fit to the dorsal than ventral stream (0.77 vs. 0.67), both fits are relatively
good ? and are actor-invariant. The converse is true for S48. These results are consistent with the
reality that both streams are interconnected and the what/where distinction is a simplification.
So far, we have performed linear fitting using the dorsal and ventral streams separately. Does fitting
to a combination of both models improve the fit? For monkey G, the mean goodness-of-fit is 0.53;
for monkey S it is 0.38. The improvements over the better of either model alone are moderate (6%
for G, 15% for S). Interestingly, this fitting to a combination of streams without prior knowledge of
which stream is more suitable, produces fits that are as good or better than if we knew a priori which
stream would produce a better fit for a specific neuron (0.53 vs. 0.51 for G; 0.38 vs. 0.36 for S).
How much better compared to low-level controls does our snippet model fit to the combined outputs
of dorsal and ventral stream models? To answer this question, we instead fit our snippet model to a
low-level pixel representation while keeping all else constant. The stimuli were resized to be 32 x
32 pixels, so that the number of features (1024 = 32 x 32) was roughly the same number as the C2
features. This was then reduced to 300 principal components, as was done for C2 features. Fitting
our snippet model to this pixel-derived representation produced worse fits (G: 0.40, S: 0.32). These
were 25% (G) and 16% (S) worse than fitting to the combination of dorsal and ventral models.
Furthermore, the monkeys were free to move their eyes during the task (apart from a fixation period at the start of each trial). Even slight random shifts in the pixel-derived representation of less than 0.25° of visual angle (on the order of micro-saccades) dramatically reduced the fits to 0.25 (G) and 0.21 (S). In contrast, the same random shifts did not change the average fit numbers for the combination of dorsal and ventral models (0.53 for G, 0.39 for S). These results suggest that the fitting process does in fact learn meaningful weights, and that biologically-realistic, robust encoding models are important in providing suitable inputs to the fitting process.
Figure 6: Either stream can produce actor-invariant (G54) and action-invariant (S48) responses.
Finally, how do the actor- and action-invariance indices calculated from the predicted responses
compare to those calculated from the ground-truth data? Averaged over all 119 neurons fitted to
a combination of dorsal and ventral streams, the actor- and action-invariance indices are within
0.0524 and 0.0542 of their true values (mean absolute error). In contrast, using the pixel-derived
representation, the results are much worse (0.0944 and 0.1193 respectively, i.e. the error is double).
7 Conclusions
We found that at the level of individual neurons, the neuronal representation in STS spans a broad,
continuous range of actor- and action-invariance, rather than having groups of neurons with distinct
invariance properties. Simply as a baseline model, we investigated how well a linear weighted sum
of dorsal and ventral stream responses to action 'snippets' could reproduce the neural response
patterns found in these STS neurons. The results are surprisingly good for such a simple model,
consistent with findings from computer vision [20]. Clearly, however, more complex models should,
in theory, be able to better fit the data. For example, a non-linear operation can be added, as in the
LN family of models [13]. Other models include those with nonlinear dynamics, as well as lateral
and feedback connections [21, 22]. Other ventral and dorsal models can also be tested (e.g. [23]),
including computer vision models [24, 25]. Nonetheless, this simple 'snippet-matching' model is
able to grossly reproduce the pattern of neural responses and invariance properties found in the STS.
References
[1] L. G. Ungerleider and J. V. Haxby, "'What' and 'where' in the human brain." Current Opinion in Neurobiology, vol. 4, no. 2, pp. 157-65, 1994.
[2] J. M. Singer and D. L. Sheinberg, "Temporal cortex neurons encode articulated actions as slow sequences of integrated poses." Journal of Neuroscience, vol. 30, no. 8, pp. 3133-45, 2010.
[3] M. W. Oram and D. I. Perrett, "Integration of form and motion in the anterior superior temporal polysensory area of the macaque monkey." Journal of Neurophysiology, vol. 76, no. 1, pp. 109-29, 1996.
[4] J. S. Baizer, L. G. Ungerleider, and R. Desimone, "Organization of visual inputs to the inferior temporal and posterior parietal cortex in macaques." Journal of Neuroscience, vol. 11, no. 1, pp. 168-90, 1991.
[5] T. Jellema, G. Maassen, and D. I. Perrett, "Single cell integration of animate form, motion and location in the superior temporal cortex of the macaque monkey." Cerebral Cortex, vol. 14, no. 7, pp. 781-90, 2004.
[6] C. Bruce, R. Desimone, and C. G. Gross, "Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque." Journal of Neurophysiology, vol. 46, no. 2, pp. 369-84, 1981.
[7] J. Vangeneugden, F. Pollick, and R. Vogels, "Functional differentiation of macaque visual temporal cortical neurons using a parametric action space." Cerebral Cortex, vol. 19, no. 3, pp. 593-611, 2009.
[8] G. Johansson, "Visual perception of biological motion and a model for its analysis." Perception & Psychophysics, vol. 14, pp. 201-211, 1973.
[9] E. Grossman, M. Donnelly, R. Price, D. Pickens, V. Morgan, G. Neighbor, and R. Blake, "Brain areas involved in perception of biological motion." Journal of Cognitive Neuroscience, vol. 12, no. 5, pp. 711-20, 2000.
[10] J. A. Beintema and M. Lappe, "Perception of biological motion without local image motion." Proceedings of the National Academy of Sciences of the United States of America, vol. 99, no. 8, pp. 5661-3, 2002.
[11] D. I. Perrett, P. A. Smith, A. J. Mistlin, A. J. Chitty, A. S. Head, D. D. Potter, R. Broennimann, A. D. Milner, and M. A. Jeeves, "Visual analysis of body movements by neurones in the temporal cortex of the macaque monkey: a preliminary report." Behavioural Brain Research, vol. 16, no. 2-3, pp. 153-70, 1985.
[12] M. S. Beauchamp, K. E. Lee, J. V. Haxby, and A. Martin, "FMRI responses to video and point-light displays of moving humans and manipulable objects." Journal of Cognitive Neuroscience, vol. 15, no. 7, pp. 991-1001, 2003.
[13] N. C. Rust, V. Mante, E. P. Simoncelli, and J. A. Movshon, "How MT cells analyze the motion of visual patterns." Nature Neuroscience, vol. 9, no. 11, pp. 1421-31, 2006.
[14] M. Riesenhuber and T. Poggio, "Hierarchical models of object recognition in cortex." Nature Neuroscience, vol. 2, no. 11, pp. 1019-25, 1999.
[15] H. Jhuang, T. Serre, L. Wolf, and T. Poggio, "A Biologically Inspired System for Action Recognition," in 2007 IEEE 11th International Conference on Computer Vision. IEEE, 2007.
[16] C. G. Gross, C. E. Rocha-Miranda, and D. B. Bender, "Visual properties of neurons in inferotemporal cortex of the macaque." Journal of Neurophysiology, vol. 35, no. 1, pp. 96-111, 1972.
[17] P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA: The MIT Press, 2005.
[18] J. A. Movshon and W. T. Newsome, "Visual response properties of striate cortical neurons projecting to area MT in macaque monkeys." Journal of Neuroscience, vol. 16, no. 23, pp. 7733-41, 1996.
[19] W. Reichardt, "Autocorrelation, a principle for the evaluation of sensory information by the central nervous system," Sensory Communication, pp. 303-17, 1961.
[20] K. Schindler and L. van Gool, "Action snippets: How many frames does human action recognition require?" in 2008 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2008.
[21] M. A. Giese and T. Poggio, "Neural mechanisms for the recognition of biological movements." Nature Reviews Neuroscience, vol. 4, no. 3, pp. 179-92, 2003.
[22] J. Lange and M. Lappe, "A model of biological motion perception from configural form cues." Journal of Neuroscience, vol. 26, no. 11, pp. 2894-906, 2006.
[23] P. J. Mineault, F. A. Khawaja, D. A. Butts, and C. C. Pack, "Hierarchical processing of complex motion along the primate dorsal visual pathway." Proceedings of the National Academy of Sciences of the United States of America, vol. 109, no. 16, pp. E972-80, 2012.
[24] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, "Learning realistic human actions from movies," in 2008 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2008.
[25] A. Bobick and J. Davis, "The recognition of human movement using temporal templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257-267, 2001.
Firing rate predictions in optimal balanced networks
David G.T. Barrett
Group for Neural Theory
École Normale Supérieure
Paris, France
[email protected]

Sophie Denève
Group for Neural Theory
École Normale Supérieure
Paris, France
[email protected]

Christian K. Machens
Champalimaud Neuroscience Programme
Champalimaud Centre for the Unknown
Lisbon, Portugal
[email protected]
Abstract
How are firing rates in a spiking network related to neural input, connectivity and
network function? This is an important problem because firing rates are a key
measure of network activity, in both the study of neural computation and neural
network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons
interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting
networks because they provide an optimal spike-based signal representation while
producing cortex-like spiking activity through a dynamic balance of excitation and
inhibition. We can calculate firing rates by treating balanced network dynamics
as an algorithm for optimising signal representation. We identify this algorithm
and then calculate firing rates by finding the solution to the algorithm. Our firing
rate calculation relates network firing rates directly to network input, connectivity
and function. This allows us to explain the function and underlying mechanism of
tuning curves in a variety of systems.
1 Introduction
The firing rate of a neuron is arguably the most important characterisation of both neural network
dynamics and neural computation, and has been ever since the seminal recordings of Adrian and
Zotterman [1] in which the firing rate of a neuron was observed to increase with muscle tension. A
large, sometimes bewildering, diversity of firing rate responses to stimuli have since been observed
[2], ranging from sigmoidal-shaped tuning curves [3, 4], to bump-shaped tuning curves [5], with
much diversity in between [6]. What is the computational role of these firing rate responses and how
are firing rates determined by neuron dynamics, network connectivity and neural input?
There have been many attempts to answer these questions, using a variety of experimental and
theoretical techniques. However, most approaches have struggled to deal with the non-linearity of
neural spike-generation mechanisms and the strong interaction between neurons as mediated through
network connectivity. Significant progress has been made using linear approximations. For example,
experimentally recorded firing rates in a variety of systems have been described using the linear
receptive field, which captures the linear relationship between stimulus and firing rate response [7].
However, in recent years, it has been found that this linear approximation often fails to capture
important aspects of neural activity [8]. Similarly, in theoretical studies, linear approximations
have been used to simplify non-linear firing rate calculations in a variety of network models, using
Taylor Series approximations [9], and more recently, using linear response theory [10, 11]. These
calculations have led to important insights into how neural network connectivity and input determine
firing rates. Again, however, these calculations only apply to a restricted subset of situations, where
the linearising assumptions apply.
We develop a new technique for calculating firing rates, by directly identifying the non-linear structure of tightly balanced networks. Balanced network theory has come to be regarded as the standard
model of cortical activity [12, 13], accounting for a large proportion of observed activity through
a dynamic balance of excitation and inhibition [14]. Recently, it was found that tightly balanced
networks are synonymous with efficient coding, in which a signal is represented optimally subject
to metabolic costs [15]. This observation allows us, here, to interpret balanced network activity as
an optimisation algorithm. We can then directly identify that the non-linear relationship between
firing rates, input, connectivity and neural computation is provided by this algorithm. We use this
technique to calculate firing rates in a variety of balanced network models, thereby exploring the
computational role and underlying network mechanisms of monotonic firing rate tuning curves,
bump-shaped tuning curves and tuning curve inhomogeneity.
2 Optimal balanced network models
We calculate firing rates in a balanced network consisting of N recurrently connected leaky integrate-and-fire neurons (Fig. 1a). The network is driven by an input signal I = (I_1, ..., I_k, ..., I_M), where I_k is the k-th input and M is the dimension of the input. In response to this input, neurons produce spike trains, denoted by s = (s_1, ..., s_i, ..., s_N), where s_i(t) = Σ_k δ(t - t_ik) is the spike train of neuron i with spike times t_ik. A spike is produced
whenever the membrane potential Vi exceeds the spiking threshold Ti of neuron i. This simple
spike rule captures the essence of a neural spike-generation mechanism. The membrane potential
has the following dynamics:
\frac{dV_i}{dt} = -\lambda V_i + \sum_{k=1}^{N} \Omega_{ik} s_k + \sum_{j=1}^{M} F_{ij} I_j, \qquad (1)
where λ is the neuron leak, Ω_ik is the connection strength from neuron k to neuron i and F_ij is the connection strength from input j to neuron i [16]. When a neuron spikes, the membrane potential is reset to R_i ≡ T_i + Ω_ii. This is written in equation 1 as a self-connection. Throughout this work, we focus on networks where connectivity Ω is symmetric; this simplifies our analysis, although in certain cases we can generalise to non-symmetric matrices.
We are interested in networks where a balance of excitation and inhibition coincides with optimal signal representation. Not all choices of network connectivity and spiking thresholds will give
both [12, 13], but if certain conditions are satisfied, this can be possible. Before we proceed to our
firing rate calculation, we must derive these conditions.
We begin by calculating the sum total of excitatory and inhibitory input received by neurons in our
network. This is given by solving equation 1 implicitly:
V_i = \sum_{k=1}^{N} \Omega_{ik} r_k + \sum_{j=1}^{M} F_{ij} x_j, \qquad (2)
where r_k is a temporal filtering of the k-th neuron's spike train
r_k = \int_0^{\infty} e^{-\lambda t'} s_k(t - t')\, dt', \qquad (3)
and x_j is a temporal filtering of the j-th input
x_j = \int_0^{\infty} e^{-\lambda t'} I_j(t - t')\, dt'. \qquad (4)
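These temporal filters are exponential (leaky) integrators and can be computed recursively; a minimal sketch, with the time step and decay rate as illustrative choices:

```python
import numpy as np

def filtered_rate(spike_train, lam=1.0, dt=1e-3):
    # Recursive implementation of r(t) = integral of exp(-lam*t') s(t - t') dt':
    # between samples the filtered value simply decays by exp(-lam*dt), and
    # each spike adds 1 to it.
    r = np.empty(len(spike_train))
    acc = 0.0
    decay = np.exp(-lam * dt)
    for t, s in enumerate(spike_train):
        acc = acc * decay + s
        r[t] = acc
    return r

# A single spike decays exponentially with time constant 1/lam.
s = np.zeros(1000)
s[100] = 1.0
r = filtered_rate(s)
```

The same recursion applied to the input I_j yields x_j, so both filtered quantities can be updated online during a simulation.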
All the excitatory and inhibitory inputs received by neuron i are included in this summation (Eqn.
2). This can be rewritten as the slope of a loss function as follows:
V_i = -\frac{1}{2} \frac{dE(r)}{dr_i}, \qquad (5)
where
E(r) = -r^T \Omega r - 2 r^T F x + c \qquad (6)
and c is a constant.
Now, we can use this expression to derive the conditions that connectivity must satisfy so that the
network operates in an optimal balanced state. In balanced networks, excitation and inhibition cancel
to produce an input that is the same order of magnitude as the spiking threshold. This is very small,
relative to the magnitude of excitation or inhibition alone [12, 13]. In tightly balanced networks,
which we consider, this cancellation is so precise that V_i ≈ 0 in the large network limit (for all
active neurons) [15, 17, 18]. Now, using equation 5, we can see that this tight balance condition is
equivalent to saying that our loss function (Eqn. 6) is minimised.
This has two implications for our choice of network connectivity and spiking thresholds. First,
the loss function must have a minimum. To guarantee this, we require -Ω to be positive definite.
Secondly, the spiking threshold of each neuron must be chosen so that each spike acts to minimise
the cost function. This spiking condition can be written as E(no spike) > E(with spike). Using
equation 6, this can be rewritten as E(no spike) > E(no spike) - 2[Ωr]_k - 2[Fx]_k - Ω_kk. Finally,
Figure 1: Optimal balanced network example. (A) Schematic of a balanced neural network providing an optimal spike-based representation x̂ of a signal x. (B) A tightly balanced network can produce an output x̂_1 (blue, top panel) that closely matches the signal x_1 (black, top panel). Population spiking activity is represented here using a raster plot (middle panel), where each spike is
represented with a dot. For a randomly chosen neuron (red, middle panel), we plot the total excitatory input (green, bottom panel) and the total inhibitory input (red, bottom panel). The sum of
excitation and inhibition (black, bottom panel) fluctuates about the spiking threshold (thin black line,
bottom panel) indicating that this network is tightly balanced. A spike is produced whenever this
sum exceeds the spiking threshold. (C) Firing rate tuning curves are measured during simulations of
our balanced network. Each line represents the tuning curve of a single neuron. The representation
error at each value of x1 is given by equation 7.
cancelling terms, and using equation 2, we can write our spiking condition as Vk > −Ωkk/2. Therefore, the spiking threshold for each neuron must be set to Tk ≡ −Ωkk/2, though this condition can be relaxed considerably if our loss function has an additional linear cost term¹. Once these conditions are satisfied, our network is tightly balanced.
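The sign of the loss change caused by a single spike can be checked numerically. The sketch below (hypothetical sizes and values; `Omega` for Ω, `mu` for μ) confirms that a spike of neuron k, which increments rk by 1, lowers E exactly when Vk exceeds the threshold −Ωkk/2:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, mu = 5, 2, 0.1
F = rng.normal(size=(N, M))
Omega = -(F @ F.T + mu * np.eye(N))
r = rng.uniform(size=N)
x = rng.normal(size=M)

def E(r):
    return -r @ Omega @ r - 2 * r @ F @ x   # Eqn 6 (constant c omitted)

V = Omega @ r + F @ x                       # Eqn 2
T = -np.diag(Omega) / 2                     # thresholds T_k = -Omega_kk / 2

for k in range(N):
    e_k = np.eye(N)[k]
    dE = E(r + e_k) - E(r)                  # loss change caused by one spike
    # Algebraically dE = -2 V_k - Omega_kk, so dE < 0 iff V_k > T_k.
    assert np.isclose(dE, -2 * V[k] - Omega[k, k])
    assert (dE < 0) == (V[k] > T[k])
```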
We are interested in networks that are both tightly balanced and optimal. Now, we can see from
equation 5 that the balance of excitation and inhibition coincides with the optimisation of our loss
function (Eqn. 6). This is an important result, because it relates balanced network dynamics to a
neural computation. Specifically, it allows us to interpret the spiking activity of our tightly balanced
network as an algorithm that optimises a loss function (Eqn. 6).
This is interesting because this optimisation can be easily mapped onto many useful computations.
A particularly interesting example is given by Ω = −FF^T − μI, where I is the identity matrix [15,
17, 18]. In recent work, it was shown that this connectivity can be learnt using a spike timingdependent plasticity rule [15]. Here, we use this connectivity to rewrite our loss function (Eqn. 6)
as follows:
E = (x − x̂)² + μ Σ_{i=1}^{N} ri² ,    (7)

where

x̂ = F^T r .    (8)
The second term of equation 7 is a metabolic cost term that penalises neurons for spiking excessively, and the first term quantifies the difference between the signal value x and a linear read-out, x̂, where x̂ is computed using the linear decoder F^T (Eqn. 8). Therefore, a network with this connectivity produces spike trains that optimise equation 7, thereby producing an output x̂ that is close to the signal value x. Throughout the remainder of this work, we will focus on optimal balanced networks with this form of connectivity.
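For this connectivity, equation 6 with the constant chosen as c = ||x||² is identical, term by term, to equation 7. A minimal numerical check (sizes and values hypothetical; `Omega` for Ω, `mu` for μ):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, mu = 6, 2, 0.2
F = rng.normal(size=(N, M))
Omega = -(F @ F.T + mu * np.eye(N))    # the connectivity used in the text
r = rng.uniform(size=N)
x = rng.normal(size=M)

E_quadratic = -r @ Omega @ r - 2 * r @ F @ x + x @ x   # Eqn 6 with c = ||x||^2
x_hat = F.T @ r                                        # Eqn 8: linear read-out
E_coding = np.sum((x - x_hat) ** 2) + mu * np.sum(r ** 2)  # Eqn 7
assert np.isclose(E_quadratic, E_coding)
```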
We illustrate the properties of this system by simulating a network of 30 neurons. We find that
our network produces spike trains (Fig. 1 b, middle panel) that represent x with great accuracy,
across a broad range of signal values (Fig. 1 b, top panel). As expected, this optimal performance
coincides with a tight balance of excitation and inhibition (Fig. 1 b, bottom panel), reminiscent
of cortical observations [14]. In this example, our network has been optimised to represent a 2-dimensional signal x = (x1, x2). We measure firing rate tuning curves using a fixed value of x2
while varying x1 . We use this signal because it can produce interesting, non-linear tuning curves
(Fig. 1 c), especially at signal values where neurons fall silent. In the next section, we will attempt
to understand this tuning curve non-linearity by calculating firing rates analytically.
3 Firing rate analysis with quadratic programming

Our goal is to calculate the firing rates f of all the neurons in these tightly balanced network models as a function of the network input, the recurrent network connectivity Ω, and the feedforward
connectivity F. On the surface, this may seem to be a difficult problem, because individual neurons
have complicated non-linear integrate-and-fire dynamics and they interact strongly through network
connectivity. However, the loss function relationship that we developed above allows us now to
circumvent these problems.
There are many possible firing rate measures used in experiments and theoretical studies. Usually, a
box-shaped temporal averaging window is used. We define the firing rate of a neuron to be:
fk = λ ∫_0^∞ e^{−λt'} sk(t − t') dt' .    (9)
This is an exponentially weighted temporal average², with timescale λ⁻¹. We have chosen this temporal average because it matches the dynamics of synaptic filters in our neural network (Eqn. 3), allowing us to write fi(t) = λri(t). Here, we need to multiply by λ to ensure that our firing rates are reported in units of spikes per second.

¹ Suppose that our network optimises the following cost function: E(r) = −r^T Ω r − 2r^T F x + c + b^T r, where b is a vector of positive linear weights. Then, we find that the optimal spiking thresholds for this network are given by Ti ≡ (−Ωii + bi)/2 ≥ −Ωii/2. Therefore, we can apply our techniques to all networks with thresholds Ti ≥ −Ωii/2.

² In this case, the firing rate timescale is very short, because λ is the membrane potential leak. However, we can easily generalise our framework so that this timescale can be as long as the slowest synaptic process [17, 18].
We can now calculate firing rates using this relationship and by exploiting the algorithmic nature
of tightly balanced networks. These networks produce spike trains that minimise our loss function
E(r) (Eqn. 6). Therefore, the firing rates of our network are those that minimise E(f/λ), under the constraint that firing rates must be positive:

{fi} = arg min_{fi ≥ 0} E(f/λ) .    (10)
This firing rate prediction is the solution to a constrained optimisation problem known as quadratic
programming [19]. The optimisation is quadratic, because our loss function is a quadratic function
of f , and it is constrained because firing rates are positive valued quantities, by definition.
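For the connectivity Ω = −FF^T − μI, the constrained problem in equation 10 reduces to non-negative ridge regression, min_{r ≥ 0} ||x − F^T r||² + μ||r||² with f = λr, which an off-the-shelf non-negative least squares routine can solve. A minimal sketch (all sizes and values hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
N, M, mu, lam = 8, 2, 0.05, 50.0      # mu for the cost weight, lam for the leak
F = rng.normal(size=(N, M))
x = np.array([1.0, 0.5])

# Augmented least-squares form of ||x - F^T r||^2 + mu ||r||^2.
A = np.vstack([F.T, np.sqrt(mu) * np.eye(N)])
b = np.concatenate([x, np.zeros(N)])
r_opt, _ = nnls(A, b)                 # non-negative least squares
f = lam * r_opt                       # predicted firing rates (Eqn 10)

# KKT optimality check: the gradient is zero on active components and
# non-negative on silent ones.
g = 2 * ((F @ F.T + mu * np.eye(N)) @ r_opt - F @ x)
assert np.all(f >= 0)
assert np.all(g >= -1e-6)
assert np.allclose(g[r_opt > 1e-9], 0, atol=1e-6)
```

The augmented-matrix trick simply stacks the ridge penalty underneath the read-out equations, so any NNLS solver handles the whole problem at once.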
We illustrate this firing rate prediction using a simple two-neuron network, with recurrent connectivity given by Ω = −FF^T − μI as before. We simulate this system and measure the spike-train firing rates for both neurons (Fig. 2 a, left panel). We then use equation 10 to obtain a theoretical prediction for firing rates. We find that our firing rate prediction matches the spike-train measurement with great accuracy (Fig. 2 a, middle panel and right panel).
We can now use our firing rate solution to understand the relationship between firing rates, input,
connectivity and function. When both neurons are active, we can solve equation 10 exactly, to see
that firing rates are related to network connectivity according to f = −λΩ⁻¹Fx. When one of the neurons becomes silent, the other neuron must compensate by adjusting its firing rate slope. For example, when neuron 1 becomes silent, we have f1 = 0 and the firing rate of neuron 2 increases to f2 = λF2x/(F2F2^T + μ), where F2 denotes the second row of F. Similarly, when neuron 2
Figure 2: Calculating firing rates in a two-neuron example. (A) Tuning curve measurements are
obtained from a simulation of a two-neuron network (left, top). The representation error E for
this network is given at each signal value x (left, bottom). Tuning curve predictions are obtained
using quadratic programming (middle, top), with predicted representation error E (middle, bottom).
Predicted firing rates closely match measured firing rates for both neurons, and for all signal values
(right). (B) A phase diagram of the network activity during a simulation (left panel). Firing rates
evolve from a silent state towards the minimum of the cost function E(x1 = 0) (red cross, left
panel). Here, they fluctuate about the minimum, increasing in discrete steps of size λ and decreasing exponentially (left panel, inset). We also measure the firing rate trajectory (right panel) as the network
evolves towards the minimum of the cost function E(x1 = 1) (blue cross, right panel), where neuron
2 is silent.
becomes silent, we have f2 = 0, and the firing rate of neuron 1 increases to f1 = λF1x/(F1F1^T + μ), where F1 is the first row of F. This non-linear change in firing rates is caused by the positivity
constraint. It can be understood functionally, as an attempt by the network to represent x accurately,
within the constraints of the system.
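These analytic expressions can be checked against the numerical quadratic programming solution. The sketch below uses a hypothetical two-neuron weight matrix and two signal values, one that keeps both neurons active and one that silences neuron 1:

```python
import numpy as np
from scipy.optimize import nnls

lam, mu = 50.0, 0.1
F = np.array([[1.0, 0.3],
              [-0.8, 0.4]])              # hypothetical feedforward weights
Omega = -(F @ F.T + mu * np.eye(2))

def predict_rates(x):
    # Solve Eqn 10 as non-negative ridge regression (see Section 3).
    A = np.vstack([F.T, np.sqrt(mu) * np.eye(2)])
    r, _ = nnls(A, np.concatenate([x, np.zeros(2)]))
    return lam * r

# Both neurons active: f = -lam * Omega^{-1} F x.
x_both = np.array([0.1, 1.0])
f = predict_rates(x_both)
f_analytic = -lam * np.linalg.solve(Omega, F @ x_both)
assert np.allclose(f, f_analytic, atol=1e-6)

# Neuron 1 silent: f1 = 0 and f2 = lam * F2 x / (F2 F2^T + mu).
x_one = np.array([-1.0, 0.5])
f = predict_rates(x_one)
f2_analytic = lam * (F[1] @ x_one) / (F[1] @ F[1] + mu)
assert f[0] == 0.0
assert np.isclose(f[1], f2_analytic, atol=1e-6)
```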
In larger networks, our firing rate prediction is more difficult to write down analytically because there
are so many interactions between individual neurons and the positivity constraint. Nonetheless, we
can make a number of general observations about tuning curve shape. In general, we can interpret
tuning curve shape to be the solution of a quadratic programming problem, which can be written as
a piece-wise linear function f = M(x)·x, where M(x) is a matrix whose entries depend on the
region of signal space occupied by x. For example, in the two-neuron system that we just discussed,
the signal space is partitioned into three regions: one region where neuron 1 is active and where
neuron 2 is silent, a second region where both neurons are active and a third region where neuron
1 is silent and neuron 2 is active (Fig. 2 a, left panel). In each region there is a different linear
relationship between the signal and the firing rates. The boundaries of these regions occur at points
in signal space where an active neuron becomes silent (or where a silent neuron becomes active). At
most, there will be N + 1 such regions.
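This region structure can be probed numerically for a hypothetical two-neuron model: sweep x1, record the active set returned by the quadratic program, and count the distinct linear regions (at most N + 1 = 3 here):

```python
import numpy as np
from scipy.optimize import nnls

mu = 0.1
F = np.array([[1.0, 0.3],
              [-0.8, 0.4]])             # hypothetical feedforward weights
A = np.vstack([F.T, np.sqrt(mu) * np.eye(2)])

def active_set(x):
    r, _ = nnls(A, np.concatenate([x, np.zeros(2)]))
    return tuple(r > 1e-9)              # which neurons fire for this signal

# Sweep x1 with x2 fixed and record the sequence of distinct active sets.
regions = []
for x1 in np.linspace(-2, 2, 401):
    s = active_set(np.array([x1, 0.5]))
    if not regions or regions[-1] != s:
        regions.append(s)

assert len(regions) <= 3                # at most N + 1 linear regions
assert regions[0] == (False, True)      # leftmost region: neuron 1 silent
assert regions[-1] == (True, False)     # rightmost region: neuron 2 silent
```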
We can also use quadratic programming to describe the spiking dynamics underlying these nonlinear networks. Returning to our two-neuron example, we measure the temporal evolution of the
firing rates f1 and f2 . We find that if we initialise the network to a sub-optimal state, the firing rates
rapidly evolve toward the optimum in a series of discrete steps of size λ (Fig. 2 b, left panel). The step-size is λ because when neuron i spikes, ri → ri + 1, according to equation 3, and therefore, fi → fi + λ, according to equation 9. Once the network has reached the optimal state, it is impossible
for it to remain there. The firing rates begin to decay exponentially, because our firing rate definition
is an exponentially weighted summation (Eqn. 9) (Fig. 2 b, middle panel). Eventually, when the
firing rate has decayed too far from the optimal solution, another spike is fired and the network moves
closer to the optimum. In this way, spiking dynamics can be interpreted as a quadratic programming
algorithm. The firing rate continues to fluctuate around the optimal spiking value. These fluctuations
are noisy, in that they are dependent on initial conditions of the network. However, this noise has an
unusual algorithmic structure that is not well characterised by standard probabilistic descriptions
of spiking irregularity.
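The spiking dynamics described above can be sketched in discrete time: the input V = Ωr + Fx is monitored, the neuron furthest above threshold fires (ri → ri + 1), the filtered spike trains decay, and the time-averaged rates are compared with the quadratic programming prediction. All parameter values are hypothetical, and the one-spike-per-step scheduling is a simplification of the continuous-time dynamics:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
N, M, mu, lam, dt = 8, 2, 0.05, 10.0, 1e-4
F = rng.normal(size=(N, M))
F /= np.linalg.norm(F, axis=1, keepdims=True)   # unit-norm decoding weights
Omega = -(F @ F.T + mu * np.eye(N))
T = -np.diag(Omega) / 2                          # thresholds T_k = -Omega_kk / 2
x = np.array([20.0, 10.0])

r = np.zeros(N)
r_sum = np.zeros(N)
steps = int(2.0 / dt)                            # 2 seconds of simulated time
for _ in range(steps):
    V = Omega @ r + F @ x                        # Eqn 2
    k = int(np.argmax(V - T))
    if V[k] > T[k]:
        r[k] += 1.0                              # spike of neuron k (Eqn 3)
    r *= 1.0 - lam * dt                          # leaky temporal filtering
    r_sum += r
f_sim = lam * r_sum / steps                      # time-averaged rates, f = lam * r

# Quadratic programming prediction (Eqn 10) for the same input.
A = np.vstack([F.T, np.sqrt(mu) * np.eye(N)])
r_opt, _ = nnls(A, np.concatenate([x, np.zeros(N)]))
f_pred = lam * r_opt

assert np.all(f_sim >= 0)
# The population read-out tracks the signal closely...
assert np.linalg.norm(x - F.T @ (f_sim / lam)) < 0.15 * np.linalg.norm(x)
# ...and the simulated rate profile follows the predicted one.
assert np.corrcoef(f_sim, f_pred)[0, 1] > 0.9
```

Each accepted spike lowers E(r) by construction, so the trajectory steps toward the constrained optimum and then fluctuates about it, exactly as described for Fig. 2 b.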
4 Analysing tuning curve shape with quadratic programming
Now that we have a framework for relating firing rates to network connectivity and input, we can
explore the computational function of tuning curve shapes and the network mechanisms that generate these tuning curves. We will investigate systems that have monotonic tuning curves and systems
that have bump-shaped tuning curves, which together constitute a large proportion of firing rate
observations [2, 3, 4, 5].
We begin by considering a system of monotonic tuning curves, similar to the examples that we have
considered already where recurrent connectivity is given by Ω = −FF^T − μI. In these systems,
the recurrent connectivity and hence the tuning curve shape is largely determined by the form of the
feedforward matrix F. This matrix also determines the contribution of tuning curves to computational function, through its role as a linear decoder for signal representation (Eqn. 8). We illustrate
this by simulating the response of our network to a 2-dimensional signal x = (x1 , x2 ), where x1
is varied and x2 is fixed, using three different configurations of F (Fig. 3). This system produces
monotonically increasing and decreasing tuning curves (Fig. 3a). We find that neurons with positive
values of F have positive firing rate slopes (Fig. 3, blue tuning curves), and neurons with negative
F values have negative firing rate slopes (Fig. 3, red tuning curves). If the values of F are regularly
spaced, then the tuning curves of individual neurons are regularly spaced, and, if we manipulate this
regularity by adding some random noise to the connectivity, we obtain inhomogeneous and highly
irregular tuning curves (Fig. 3 b). This inhomogeneity has little effect on the representation error.
This inhomogeneous monotonic tuning is reminiscent of tuning in many neural systems, including
the oculomotor system [4]. The oculomotor system represents eye position, using neurons with
negative slopes to represent left side eye positions and neurons with positive slopes to represent
right side eye positions. To relate our model to this system, the signal variable x1 can be interpreted
as eye-position, with zero representing the central eye position, and with positive and negative values
Figure 3: The relationship between firing rates, stimulus and connectivity in a network of 16 neurons. (A) Each dot represents the contribution of a neuron to a signal representation (when the firing rate is 10 − 16 Hz) (1st column). Here, we consider signals along a straight line (thin black line). We simulate a network of neurons and measure firing rates (2nd column). These measurements closely match our algorithmically predicted firing rates (3rd column), where each point in the 4th column represents the firing rate of an individual neuron for a given stimulus. (B) Similar to (A), except that some noise is added to the connectivity. The representation error (bottom panels, column 2 and column 3) is similar to the network without connectivity noise. (C) Similar to (B), except that we consider signals along a circle (thin black line). Each dot represents the contribution of a neuron to a signal representation (when the firing rate is 20 − 16 Hz) (1st column). This signal produces bump-shaped tuning curves (2nd column), which we can also predict accurately (3rd and 4th column).
Figure 4: Performance of quadratic programming in firing rate prediction. (A) The mean prediction error (absolute difference between each prediction and measurement, averaged over neurons and over 0.5 seconds) increases with λ (bottom line). The standard deviation of the prediction becomes much larger with λ (top line). (B) The mean prediction error (bottom line) and standard deviation of the prediction error (top line) also increase with noise. However, the prediction error remains less than 1 Hz.
of x1 representing right and left side eye positions, respectively. Now, we can use the relationship
that we have developed between tuning curves and computational function to interpret oculomotor
tuning as an attempt to represent eye positions optimally.
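A sketch of this monotonic tuning experiment, computing tuning curves from the quadratic program rather than from a spiking simulation. The 16 feedforward weights F[:, 0] are regularly spaced between −1 and 1; all values here are illustrative, not those used in the figure:

```python
import numpy as np
from scipy.optimize import nnls

mu, lam = 0.1, 50.0
N = 16
F = np.stack([np.linspace(-1, 1, N), np.full(N, 0.5)], axis=1)
A = np.vstack([F.T, np.sqrt(mu) * np.eye(N)])

def rates(x):
    r, _ = nnls(A, np.concatenate([x, np.zeros(N)]))
    return lam * r

# Sweep x1 with x2 fixed, as in Fig. 3 a, to obtain tuning curves.
x1_grid = np.linspace(-2, 2, 81)
curves = np.array([rates(np.array([x1, 1.0])) for x1 in x1_grid])  # (81, N)

assert np.all(curves >= 0)
# Neurons with positive F[:, 0] have rising tuning curves, neurons with
# negative F[:, 0] have falling ones (compare the end points of the sweep).
assert curves[-1, -1] > curves[0, -1]   # most positive weight: rising
assert curves[0, 0] > curves[-1, 0]     # most negative weight: falling
```

Adding random noise to F (as in Fig. 3 b) would make the same curves irregular without materially changing the representation error.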
Bump-shaped tuning curves can be produced by networks representing circular variables x1 = cos θ, x2 = sin θ, where θ is the orientation of the signal (Fig. 3 c). As before, the tuning curves of
individual neurons are regularly spaced if the values of F are regularly spaced. If we add some
noise to the connectivity F, the tuning curves become inhomogeneous and highly irregular. Again,
this inhomogeneity has little effect on the signal representation error.
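The bump-tuning case can be sketched the same way: preferred directions regularly spaced on the circle, and the signal x = (cos θ, sin θ) swept over orientation (all parameter values hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

mu, lam = 0.1, 50.0
N = 16
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred angles
F = np.stack([np.cos(phi), np.sin(phi)], axis=1)
A = np.vstack([F.T, np.sqrt(mu) * np.eye(N)])

def rates(theta):
    x = np.array([np.cos(theta), np.sin(theta)])
    r, _ = nnls(A, np.concatenate([x, np.zeros(N)]))
    return lam * r

thetas = np.linspace(0, 2 * np.pi, 120, endpoint=False)
curves = np.array([rates(t) for t in thetas])        # (120, N) tuning curves

# Each tuning curve peaks near the neuron's preferred angle and the neuron
# falls silent for the opposite direction: the bump shape of Fig. 3 c.
for k in range(N):
    peak = thetas[np.argmax(curves[:, k])]
    d = abs(np.angle(np.exp(1j * (peak - phi[k]))))   # circular distance
    assert d < 0.5
    opp = np.argmin(np.abs(np.angle(np.exp(1j * (thetas - phi[k] - np.pi)))))
    assert curves[opp, k] < 1e-6
```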
In all the above examples, our firing rate predictions closely match firing rate measurements from
network simulations (Fig. 3). The success of our algorithmic approach in calculating firing rates
depends on the success of spiking networks in algorithmically optimising a cost function. The
resolution of this spiking algorithm is determined by the leak λ and membrane potential noise. If λ is large, the firing rate prediction error will have large fluctuations about the optimal firing rate value (Fig. 4 a). However, the average prediction error (averaged over time and neurons) remains small. Similarly, membrane potential noise³ increases fluctuations about the optimal firing rate but the average prediction error remains small (until the noise is large enough to generate spikes without any input) (Fig. 4 b).
5 Discussion and Conclusions
We have developed a new algorithmic technique for calculating firing rates in tightly balanced networks. Our approach does not require us to make any linearising approximations. Rather, we directly identify the non-linear relationship between firing rates, connectivity, input and optimal signal
representation. Identifying such relationships is a long-standing problem in systems neuroscience,
largely because the mathematical language that we use to describe information representation is
very different to the language that we use to describe neural network spiking statistics. For tightly
balanced networks, we have essentially solved this problem, by matching the firing rate statistics of
neural activity to the structure of neural signal representation. The non-linear relationship that we
identify is the solution to a quadratic programming problem.
Previous studies have also interpreted firing rates to be the result of a constrained optimisation
problem [21], but for a population coding model, not for a network of spiking neurons. In a more
recent study, a spiking network was used to solve an optimisation problem, although this network
required positive and negative spikes, which is difficult to reconcile with biological spiking [22].
The firing rate tuning curves that we calculate have allowed us to investigate poorly understood
features of experimentally recorded tuning curves. In particular, we have been able to evaluate
the impact of tuning curve inhomogeneity on neural computation. This inhomogeneity often goes
unreported in experimental studies because it is difficult to interpret [6], and in theoretical studies, it
is often treated as a form of noise that must be averaged out. We find that tuning curve inhomogeneity
is not necessarily noise because it does not necessarily harm signal representation. Therefore, we
propose that tuning curves are inhomogeneous simply because they can be.
Beyond the interpretation of tuning curve shape, our quadratic programming approach to firing rate
calculations promises to be useful in other areas of neuroscience - from data analysis, where it may
be possible to train our framework using neural data so as to predict firing rate responses to sensory
stimuli - to the study of computational neurodegeneration, where the impact of neural damage on
tuning curves and computation may be characterised.
Acknowledgements
We would like to thank Nuno Calaim for helpful comments on the manuscript. Also, we are grateful for generous funding from the Emmy-Noether grant of the Deutsche Forschungsgemeinschaft (CKM) and the Chaire d'excellence of the Agence Nationale de la Recherche (CKM, DB), as well as a James McDonnell Foundation Award (SD) and EU grants BACS FP6-IST-027140, BIND MECTCT-20095-024831, and ERC FP7-PREDSPIKE (SD).
³ Membrane potential noise can be included in our network model by adding a Wiener process noise term to our membrane potential equation (Eqn. 1). We parametrise this noise with a constant σ.
References
[1] Adrian, E.D. and Zotterman, Y. (1926) The impulses produced by sensory nerve endings. The
Journal of physiology 49(61): 156-193
[2] Wohrer A., Humphries M.D. and Machens C.K. (2012) Population-wide distributions of neural
activity during perceptual decision-making. Progress in neurobiology 103: 156-193
[3] Sclar, G. and Freeman, R.D. (1982) Orientation selectivity in the cat's striate cortex is invariant
with stimulus contrast. Experimental brain research 46(3): 457-61.
[4] Aksay E., Olasagasti I., Mensh B.D., Baker R., Goldman, M.S. and Tank, D.W. (2007) Functional dissection of circuitry in a neural integrator. Nature neuroscience 10(4): 494-504.
[5] Hubel D.H. and Wiesel T.N. (1962) Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology 160(1): 106-154.
[6] Olshausen B.A. and Field D.J. (2005) How close are we to understanding V1? Neural computation 8(17): 470-3.
[7] Aertsen A., Johannesma P.I.M. and Hermes D.J. (1980) Spectro-temporal receptive fields of
auditory neurons in the grassfrog. Biological Cybernetics
[8] Machens C.K., Wehr M.S. and Zador A.M. (2004) Linearity of cortical receptive fields measured with natural sounds. The Journal of neuroscience : the official journal of the Society for
Neuroscience 5(24): 1089-100.
[9] Ginzburg I. and Sompolinsky H. (1994) Theory of correlations in stochastic neural networks.
Physical Review E 4(50): 3171-3191.
[10] Trousdale J., Hu Y., Shea-Brown E. and Josić K. (2012) Impact of network structure and
cellular response on spike time correlations. PLoS computational biology 3(8): e1002408
[11] Beck J., Bejjanki V.R. and Pouget A. (2011) Insights from a simple expression for linear fisher
information in a recurrently connected population of spiking neurons. Neural computation
6(23): 1484-502
[12] van Vreeswijk C. and Sompolinsky H. (1996) Chaos in neuronal networks with balanced
excitatory and inhibitory activity. Neural computation 5293(274): 1724-1726
[13] van Vreeswijk C. and Sompolinsky H. (1998) Chaotic balanced state in a model of cortical
circuits. Neural computation 6(10): 1321-1371
[14] Haider, B., Duque, A., Hasenstaub, A.R. and McCormick, D.A. (2006) Neocortical network
activity in vivo is generated through a dynamic balance of excitation and inhibition. The Journal of neuroscience : the official journal of the Society for Neuroscience 17(26): 4535-45
[15] Bourdoukan R., Barrett D.G.T., Machens C. and Deneve S. (2012) Learning optimal spikebased representations Advances in Neural Information Processing Systems 25: 2294-2302.
[16] Knight B.W. (1972) Dynamics of encoding in a population of neurons. The Journal of general
physiology 6(59): 734-66
[17] Boerlin M., Machens, C.K. and Deneve S. (2012) Predictive coding of dynamical variables in
balanced spiking networks. PLoS computational biology, in press.
[18] Boerlin M., Deneve S. (2011) Spike-based population coding and working memory. PLoS
Comput Biol 7, e1001080.
[19] Boyd S. and Vandenberghe L. (2004) Convex optimization. Cambridge University Press.
[20] Braitenberg V. and Schüz A. (1991) Anatomy of the Cortex: Statistics and Geometry. Springer.
[21] Salinas E. (2006) How behavioral constraints may determine optimal sensory representations. PLoS Biology 4(12): e387.
[22] Rozell C.J., Johnson D.H., Baraniuk R.G. and Olshausen B.A. (2008) Sparse coding via thresholding and local competition in neural circuits. Neural Computation 20(10): 2526-2563.
Spike-Timing-Dependent Plasticity
Maren Westkott
Institute of Theoretical Physics
University of Bremen
28359 Bremen, Germany
[email protected]
Christian Albers
Institute of Theoretical Physics
University of Bremen
28359 Bremen, Germany
[email protected]
Klaus Pawelzik
Institute of Theoretical Physics
University of Bremen
28359 Bremen, Germany
[email protected]
Abstract
Recent extensions of the Perceptron, such as the Tempotron and the Chronotron, suggest that this theoretical concept is highly relevant for understanding networks of
spiking neurons in the brain. It is not known, however, how the computational
power of the Perceptron might be accomplished by the plasticity mechanisms of
real synapses. Here we prove that spike-timing-dependent plasticity having an
anti-Hebbian form for excitatory synapses as well as a spike-timing-dependent
plasticity of Hebbian shape for inhibitory synapses are sufficient for realizing the
original Perceptron Learning Rule if these respective plasticity mechanisms act in
concert with the hyperpolarisation of the post-synaptic neurons. We also show that
with these simple yet biologically realistic dynamics Tempotrons and Chronotrons
are learned. The proposed mechanism enables incremental associative learning
from a continuous stream of patterns and might therefore underly the acquisition
of long term memories in cortex. Our results underline that learning processes
in realistic networks of spiking neurons depend crucially on the interactions of
synaptic plasticity mechanisms with the dynamics of participating neurons.
1 Introduction
Perceptrons are paradigmatic building blocks of neural networks [1]. The original Perceptron Learning Rule (PLR) is a supervised learning rule that employs a threshold to control weight changes,
which also serves as a margin to enhance robustness [2, 3]. If the learning set is separable, the PLR
algorithm is guaranteed to converge in a finite number of steps [1], which justifies the term "perfect learning".
Associative learning can be considered a special case of supervised learning where the activity of the
output neuron is used as a teacher signal such that after learning missing activities are filled in. For
this reason the PLR is very useful for building associative memories in recurrent networks where
it can serve to learn arbitrary patterns in a "quasi-unsupervised" way. Here it turned out to be far
more efficient than the simple Hebb rule, leading to a superior memory capacity and non-symmetric
weights [4]. Note also that over-learning from repetitions of training examples is not possible with
the PLR because weight changes vanish as soon as the accumulated inputs are sufficient, a property
which in contrast to the naïve Hebb rule makes it suitable also for incremental learning of associative
memories from sequential presentation of patterns.
On the other hand, it is not known if and how real synaptic mechanisms might realize the success-dependent self-regulation of the PLR in networks of spiking neurons in the brain. For example, in the Tempotron [5], a generalization of the Perceptron to spatio-temporal patterns, learning was conceived in an even somewhat less biological way than the PLR, since here it not only depends on the potential
classification success, but also on a process that is not local in time, namely the localization of the
absolute maximum of the (virtual) postsynaptic membrane potential of the post-synaptic neuron.
The simplified tempotron learning rule, while biologically more plausible, still relies on a reward
signal which tells each neuron specifically that it should have spiked or not. Taken together, while
highly desirable, the feature of self regulation in the PLR still poses a challenge for biologically
realistic synaptic mechanisms.
The classical form of spike-timing-dependent plasticity (STDP) for excitatory synapses (here denoted CSTDP) states that the causal temporal order of first pre-synaptic activity and then postsynaptic activity leads to long-term potentiation of the synapse (LTP) while the reverse order leads to
long-term depression (LTD) [6, 7, 8]. More recently, however, it became clear that STDP can exhibit
different dependencies on the temporal order of spikes. In particular, it was found that the reversed
temporal order (first post- then presynaptic spiking) could lead to LTP (and vice versa; RSTDP),
depending on the location on the dendrite [9, 10]. For inhibitory synapses some experiments were
performed which indicate that here STDP exists as well and has the form of CSTDP [11]. Note that
CSTDP of inhibitory synapses in its effect on the postsynaptic neuron is equivalent to RSTDP of
excitatory synapses. Additionally it has been shown that CSTDP does not always rely on spikes, but
that strong subthreshold depolarization can replace the postsynaptic spike for LTD while keeping
the usual timing dependence [12]. We therefore assume that there exists a second threshold for the
induction of timing dependent LTD. For simplicity and without loss of generality, we restrict the
study to RSTDP for synapses that in contradiction to Dale's law can change their sign.
It is very likely that plasticity rules and dynamical properties of neurons co-evolved to take advantage of each other. Combining them could reveal new and desirable effects. A modeling example
for a beneficial effect of such an interplay was investigated in [13], where CSTDP interacted with
spike-frequency adaptation of the postsynaptic neuron to perform a gradient descent on a square
error. Several other studies investigate the effect of STDP on network function, however mostly
with a focus on stability issues (e.g. [14, 15, 16]). In contrast, we here focus on the constructive role of STDP for associative learning. First we prove that RSTDP of excitatory synapses (or
CSTDP on inhibitory synapses) when acting in concert with neuronal after-hyperpolarisation and
depolarization-dependent LTD is sufficient for realizing the classical Perceptron learning rule, and
then show that this plasticity dynamics realizes a learning rule suited for the Tempotron and the
Chronotron [17].
2 Ingredients
2.1 Neuron model and network structure
We assume a feed-forward network of N presynaptic neurons and one postsynaptic integrate-and-fire neuron with a membrane potential U governed by

τ_U dU/dt = −U + I_syn + I_ext ,    (1)

where I_syn denotes the input from the presynaptic neurons, and I_ext is an input which can be used to drive the postsynaptic neuron to spike at certain times. When the neuron reaches a threshold potential U_thr, it is reset to a reset potential U_reset < 0, from where it decays back to the resting potential U_∞ = 0 with time constant τ_U. Spikes and other signals (depolarization) take finite times to travel down the axon (τ_a) and the dendrite (τ_d). Synaptic transmission takes the form of delta pulses, which reach the soma of the postsynaptic neuron after time τ_a + τ_d, and are modulated by the synaptic weight w. We denote the presynaptic spike train of neuron i as x_i with spike times t_pre^i:

x_i(t) = Σ_{t_pre^i} δ(t − t_pre^i).    (2)
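As a concrete illustration of Eqs. (1)-(2), the following minimal Python sketch integrates the membrane equation with the Euler method. It is not code from the paper: the parameter values are arbitrary, and each δ-pulse input is simplified to an instantaneous voltage jump of size w.

```python
import numpy as np

def simulate_lif(spike_inputs, T=0.1, dt=1e-4, tau_U=0.015,
                 U_thr=20.0, U_reset=-20.0):
    """Euler sketch of Eq. (1): tau_U dU/dt = -U between inputs.
    Each presynaptic delta pulse of Eq. (2) is simplified to an
    instantaneous jump of its synaptic weight in U; crossing U_thr
    resets U to U_reset (after-hyperpolarization). Illustrative only."""
    n = int(T / dt)
    jumps = np.zeros(n)
    for t, w in spike_inputs:          # (arrival time, synaptic weight)
        k = int(round(t / dt))
        if k < n:
            jumps[k] += w
    U, out = 0.0, []
    for k in range(n):
        U += dt / tau_U * (-U)         # leak toward the resting potential 0
        U += jumps[k]                  # delta-pulse synaptic input
        if U >= U_thr:
            out.append(k * dt)         # postsynaptic spike time
            U = U_reset                # reset / hyperpolarization
    return out
```

With these (made-up) numbers, two coincident inputs of weight 12 cross the threshold of 20 and elicit a spike, while a single one does not.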
Figure 1: Illustration of STDP mechanism. A: Upper trace (red) is the membrane potential of the postsynaptic neuron. Shown are the firing threshold U_thr and the threshold for LTD, U_st. Middle trace (black) is the variable z(t), the train of LTD threshold crossing events. Please note that the first spike in z(t) occurs at a different time than the neuronal spike. Bottom traces show w(t) (yellow) and x̄ (blue) of a selected synapse. The second event in z reads out the trace of the presynaptic spike x̄, leading to LTD. B: Learning rule (4) is equivalent to RSTDP. A postsynaptic spike leads to an instantaneous jump in the trace ȳ (top left, red line), which decays exponentially. Subsequent presynaptic spikes (dark blue bars and corresponding thin gray bars in the STDP window) "read" out the state of the trace for the respective Δt = t_pre − t_post. Similarly, z(t) reads out the presynaptic trace x̄ (lower left, blue line). Sampling for all possible times results in the STDP window (right).
A postsynaptic neuron receives the input I_syn(t) = Σ_i w_i x_i(t − τ_a − τ_d). The postsynaptic spike train is similarly denoted by y(t) = Σ_{t_post} δ(t − t_post).
2.2 The plasticity rule
The plasticity rule we employ mimics reverse STDP: A postsynaptic spike which arrives at the synapse shortly before a presynaptic spike leads to synaptic potentiation. For synaptic depression the relevant signal is not the spike, but the point in time where U(t) crosses an additional threshold U_st from below, with U_∞ < U_st < U_thr ("subthreshold threshold"). These events are modelled as δ-pulses in the function z(t) = Σ_k δ(t − t_k), where t_k are the times of the aforementioned threshold crossing events (see Fig. 1 A for an illustration of the principle). The temporal characteristic of (reverse) STDP is preserved: If a presynaptic spike occurs shortly before the membrane potential crosses this threshold, the synapse depresses. Timing dependent LTD without postsynaptic spiking has been observed, although with classical timing requirements [12].
We formalize this by letting pre- and postsynaptic spikes each drive a synaptic trace:

τ_pre dx̄/dt = −x̄ + x(t − τ_a),
τ_post dȳ/dt = −ȳ + y(t − τ_d).    (3)

The learning rule is a read-out of the traces by spiking and threshold crossing events, respectively:

dw/dt ∝ ȳ x(t − τ_a) − α x̄ z(t − τ_d),    (4)

where α is a factor which scales depression and potentiation relative to each other. Fig. 1 B shows how this plasticity rule creates RSTDP.
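Read literally for an isolated pre/post spike pair, Eqs. (3)-(4) yield the reversed STDP window of Fig. 1 B. The sketch below is illustrative only: it sets all delays to zero and assumes the LTD event in z coincides with the postsynaptic spike, which is not generally true in the model.

```python
import numpy as np

def delta_w(dt_pair, tau_pre=0.015, tau_post=0.2, alpha=1.0):
    """Pairwise weight change from the trace rule, Eqs. (3)-(4), with
    delays set to zero and the z event assumed to coincide with the
    postsynaptic spike (an illustrative simplification).
    dt_pair = t_pre - t_post."""
    if dt_pair >= 0:
        # post (and z) precede pre: the pre spike reads the trace ybar -> LTP
        return np.exp(-dt_pair / tau_post) / tau_post
    # pre precedes z: the z event reads the trace xbar -> LTD
    return -alpha * np.exp(dt_pair / tau_pre) / tau_pre
```

Sampling dt_pair over both signs reproduces the reversed-STDP window: potentiation for post-before-pre, depression for pre-before-post, each decaying with its trace time constant.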
3 Equivalence to Perceptron Learning Rule
The Perceptron Learning Rule (PLR) for positive binary inputs and outputs is given by

Δw_i^μ ∝ x_{0,i}^μ (2y_0^μ − 1) Θ[γ − (2y_0^μ − 1)(h^μ − ϑ)],    (5)

where x_{0,i}^μ ∈ {0, 1} denotes the activity of presynaptic neuron i in pattern μ ∈ {1, . . . , P}, y_0^μ ∈ {0, 1} signals the desired response to pattern μ, γ > 0 is a margin which ensures a certain robustness against noise after convergence, h^μ = Σ_i w_i x_{0,i}^μ is the input to a postsynaptic neuron, ϑ denotes the firing threshold, and Θ(x) denotes the Heaviside step function. If the P patterns are linearly separable, the perceptron will converge to a correct solution of the weights in a finite number of steps. For random patterns this is generally the case for P < 2N. A finite margin γ reduces the capacity.
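For reference, the PLR of Eq. (5) can be stated in a few lines of Python. This is not the paper's code; the pattern statistics, learning rate, and margin values are arbitrary choices for illustration.

```python
import numpy as np

def train_plr(X, y0, theta=1.0, margin=0.2, eta=0.1, max_epochs=10000):
    """Perceptron Learning Rule, Eq. (5): update w by eta*(2*y0-1)*x
    whenever (2*y0-1)*(h - theta) < margin. Returns the weights and a
    flag indicating convergence (a full sweep with no update)."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        updated = False
        for x, y in zip(X, y0):
            if (2 * y - 1) * (w @ x - theta) < margin:
                w += eta * (2 * y - 1) * x
                updated = True
        if not updated:
            return w, True
    return w, False

rng = np.random.default_rng(0)
N, P = 40, 20                      # P < 2N: random patterns are separable
X = rng.integers(0, 2, size=(P, N)).astype(float)
y0 = rng.integers(0, 2, size=P)
w, converged = train_plr(X, y0)
```

After convergence every pattern satisfies the margin condition (2y_0 − 1)(h − ϑ) ≥ γ, which is the robustness against noise mentioned in the text.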
Interestingly, for the case of temporally well separated synchronous spike patterns the combination of RSTDP-like synaptic plasticity dynamics with depolarization-dependent LTD and neuronal hyperpolarization leads to a plasticity rule which can be mapped to the Perceptron Learning Rule. To cut down unnecessary notation in the derivation, we drop the indices i and μ except where necessary and consider only times 0 ≤ t ≤ τ_a + 2τ_d.

We consider a single postsynaptic neuron with N presynaptic neurons, with the condition τ_d < τ_a. During learning, presynaptic spike patterns consisting of synchronous spikes at time t = 0 are induced, concurrent with a possibly occurring postsynaptic spike which signals the class the presynaptic pattern belongs to. This is equivalent to the setting of a single layered perceptron with binary neurons. With x_0 and y_0 used as above we can write the pre- and postsynaptic activity as x(t) = x_0 δ(t) and y(t) = y_0 δ(t). The membrane potential of the postsynaptic neuron depends on y_0:
U(t) = y_0 U_reset exp(−t/τ_U),
U(τ_a + τ_d) = y_0 U_reset exp(−(τ_a + τ_d)/τ_U) = y_0 U_ad.    (6)

Similarly, the synaptic current is

I_syn(t) = Σ_i w_i x_{0,i} δ(t − τ_a − τ_d),
I_syn(τ_a + τ_d) = Σ_i w_i x_{0,i} = I_ad.    (7)
The activity traces at the synapses are

x̄(t) = x_0 Θ(t − τ_a) exp(−(t − τ_a)/τ_pre) / τ_pre ,
ȳ(t) = y_0 Θ(t − τ_d) exp(−(t − τ_d)/τ_post) / τ_post .    (8)
The variable of threshold crossing z(t) depends on the history of the postsynaptic neuron, which again can be written with the aid of y_0 as:

z(t) = Θ(I_ad + y_0 U_ad − U_st) δ(t − τ_a − τ_d).    (9)

Here, Θ reflects the condition for induction of LTD. Only when the postsynaptic input at time t = τ_a + τ_d is greater than the residual hyperpolarization (U_ad < 0!) plus the threshold U_st does a potential LTD event get registered. These are the ingredients for the plasticity rule (4):
Δw ∝ ∫ [ȳ x(t − τ_a) − α x̄ z(t − τ_d)] dt    (10)
   = x_0 y_0 exp(−(τ_a + τ_d)/τ_post) / τ_post − α x_0 exp(−2τ_d/τ_pre) / τ_pre · Θ(I_ad + y_0 U_ad − U_st).
We shorten this expression by choosing α such that the factors of both terms are equal, which we can drop subsequently:

Δw ∝ x_0 (y_0 − Θ(I_ad + y_0 U_ad − U_st)).    (11)
We expand the equation by adding and subtracting y_0 Θ(I_ad + y_0 U_ad − U_st):

Δw ∝ x_0 [y_0 (1 − Θ(I_ad + y_0 U_ad − U_st)) − (1 − y_0) Θ(I_ad + y_0 U_ad − U_st)]
   = x_0 [y_0 Θ(−I_ad − U_ad + U_st) − (1 − y_0) Θ(I_ad − U_st)].    (12)

We used 1 − Θ(x) = Θ(−x) in the last transformation, and dropped y_0 from the argument of the Heaviside functions, as the two terms are separated into the two cases y_0 = 0 and y_0 = 1. We do a
similar transformation to construct an expression G that turns either into the argument of the left or right Heaviside function depending on y_0. That expression is

G = I_ad − U_st + y_0 (−2 I_ad − U_ad + 2 U_st),    (13)

with which we replace the arguments:

Δw ∝ x_0 y_0 Θ(G) − x_0 (1 − y_0) Θ(G) = x_0 (2 y_0 − 1) Θ(G).    (14)

The last task is to show that G and the argument of the Heaviside function in equation (5) are equivalent. For this we choose I_ad = h, U_ad = −2γ and U_st = ϑ − γ, and keep in mind that ϑ = U_thr is the firing threshold. If we put this into G we get

G = I_ad − U_st + y_0 (−2 I_ad − U_ad + 2 U_st)
  = h − ϑ + γ − 2 y_0 h + 2 y_0 γ + 2 y_0 ϑ − 2 y_0 γ
  = γ − (2 y_0 − 1)(h − ϑ),    (15)

which is the same as the argument of the Heaviside function in equation (5). Therefore, we have shown the equivalence of both learning rules.
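The algebra of Eqs. (11)-(15) can also be checked numerically. The snippet below (our own, not from the paper) draws random values of h, ϑ, γ and y_0, applies the substitutions I_ad = h, U_ad = −2γ, U_st = ϑ − γ, and verifies that the STDP update of Eq. (11) equals the PLR update of Eq. (5), and that G reduces to γ − (2y_0 − 1)(h − ϑ).

```python
import numpy as np

def step(x):
    """Heaviside step; ties (x == 0) occur with probability zero here."""
    return 1.0 if x > 0 else 0.0

rng = np.random.default_rng(1)
for _ in range(10000):
    h, theta = rng.normal(), rng.normal()
    gamma = abs(rng.normal()) + 1e-12   # margin must be positive
    y0 = int(rng.integers(0, 2))
    I_ad, U_ad, U_st = h, -2.0 * gamma, theta - gamma  # substitutions of Sec. 3
    stdp_update = y0 - step(I_ad + y0 * U_ad - U_st)   # Eq. (11), x0 factored out
    G = I_ad - U_st + y0 * (-2.0 * I_ad - U_ad + 2.0 * U_st)  # Eq. (13)
    plr_update = (2 * y0 - 1) * step(gamma - (2 * y0 - 1) * (h - theta))  # Eq. (5)
    assert np.isclose(G, gamma - (2 * y0 - 1) * (h - theta))  # Eq. (15)
    assert stdp_update == plr_update
```

The agreement holds for every draw, up to the measure-zero boundary cases where a Heaviside argument is exactly zero.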
4 Associative learning of spatio-temporal spike patterns
4.1 Tempotron learning with RSTDP
The condition of exact spike synchrony used for the above equivalence proof can be relaxed to
include the association of spatio-temporal spike patterns with a desired postsynaptic activity. In the
following we take the perspective of the postsynaptic neuron which during learning is externally
activated (or not) to signal the respective class by spiking at time t = 0 (or not). During learning in
each trial presynaptic spatio-temporal spike patterns are presented in the time span 0 < t < T , and
plasticity is ruled by (4). For these conditions the resulting synaptic weights realize a Tempotron
with substantial memory capacity.
A Tempotron is an integrate-and-fire neuron with input weights adjusted to perform arbitrary classifications of (sparse) spike patterns [5, 18]. To implement a Tempotron, we make two changes to the model. First, we separate the time scales of membrane potential and hyperpolarization by introducing a variable ν:

τ_ν dν/dt = −ν.    (16)

Immediately after a postsynaptic spike, ν is reset to ν_spike < 0. The reason is that the length of hyperpolarization determines the time window where significant learning can take place. To improve comparability with the Tempotron as presented originally in [5], we set T = 0.5 s and τ_ν = τ_post = 0.2 s, so that the postsynaptic neuron can learn to spike almost anywhere over the time window, and we introduce postsynaptic potentials (PSPs) with a finite rise time:

τ_s dI_syn/dt = −I_syn + Σ_i w_i x_i(t − τ_a),    (17)

where w_i denotes the synaptic weight of presynaptic neuron i. With τ_s = 3 ms and τ_U = 15 ms the PSPs match the ones used in the original Tempotron study. This second change has little impact on the capacity or otherwise. With these changes, the membrane potential is governed by

τ_U dU/dt = (ν − U) + I_syn(t − τ_d).    (18)

A postsynaptic spike resets ν to ν_spike = U_reset < 0. U_reset is the initial hyperpolarization which is induced after a spike, which relaxes back to zero with the time constant τ_ν ≫ τ_U. Presynaptic spikes add up linearly, and for simplicity we assume that both the axonal and the dendritic delay are negligibly small: τ_a = τ_d = 0.

It is a natural choice to set τ_U = τ_pre and τ_ν = τ_post. τ_U sets the time scale for the summation of EPSPs contributing to spurious spikes, and τ_ν sets the time window where the desired spikes can lie. They therefore should coincide with LTD and LTP, respectively.
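The effect of the second change can be seen directly in the PSP shape: Eq. (17) low-pass filters the δ-input, and Eq. (18) filters it again, producing the finite rise time mentioned above. A small Euler sketch (illustrative only, taken subthreshold with ν = 0 and no spiking):

```python
import numpy as np

def psp(w=1.0, dt=1e-4, T=0.1, tau_s=0.003, tau_U=0.015):
    """Single-PSP shape under Eqs. (17)-(18), subthreshold and with
    nu = 0: a presynaptic delta pulse at t = 0 loads I_syn, which is
    then filtered by the membrane. Euler integration, illustrative."""
    n = int(T / dt)
    I, U = w / tau_s, 0.0          # delta pulse instantaneously loads I_syn
    trace = np.empty(n)
    for k in range(n):
        trace[k] = U
        U += dt / tau_U * (I - U)  # Eq. (18) with nu = 0
        I += dt / tau_s * (-I)     # Eq. (17) between input spikes
    return trace
```

With τ_s = 3 ms and τ_U = 15 ms the resulting double-exponential PSP rises from zero, peaks after roughly 6 ms, and then decays with the membrane time constant.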
Figure 2: Illustration of Perceptron learning with RSTDP with subthreshold LTD and postsynaptic hyperpolarization. Shown are the traces x̄, ȳ and U. Pre- and postsynaptic spikes are displayed as black bars at t = 0. A: Learning in the case of y_0 = 1, i.e. a postsynaptic spike as the desired output. Initially the weights are too low and the synaptic current (summed PSPs) is smaller than U_st. Weight change is LTP only, until during pattern presentation the membrane potential hits U_st. At this point LTP and LTD cancel exactly, and learning stops. B: Pattern completion for y_0 = 1. Shown are the same traces as in A in the absence of an initial postsynaptic spike. The membrane potential after learning is drawn as a dashed line to highlight the amplitude. Without the initial hyperpolarization, the synaptic current after learning is large enough to cross the spiking threshold, and the postsynaptic neuron fires the desired spike. Learning until U_st is reached ensures a minimum height of synaptic currents and therefore robustness against noise. C: Pattern presentation and completion for y_0 = 0. Initially, the synaptic current during pattern presentation causes a spike and consequently LTD. Learning stops when the membrane potential stays below U_st. Again, this ensures a certain robustness against noise, analogous to the margin in the PLR.
Figure 3: Performance of Tempotron and Chronotron after convergence. A: Classification performance of the Tempotron. Shown is the fraction of patterns which elicit the desired postsynaptic activity upon presentation. Perfect recall for all N is achieved up to α = 0.18. Beyond that mark, some of the patterns become incorrectly classified. The inset shows the learning curves for α = 7/16. The final fraction of correctly classified patterns is the average fraction over the last 500 blocks of each run. B: Performance of the Chronotron. Shown is the fraction of patterns which during recall succeed in producing the correct postsynaptic spike time in a window of length 30 ms after the teacher spike. See supplemental material for a detailed description. Please note that the scale of the load axis is different in A and B.
Table 1: Parameters for Tempotron learning

τ_U, τ_pre: 15 ms | τ_ν, τ_post: 200 ms | τ_s: 3 ms | U_thr: 20 mV | U_st: 19 mV | ν_spike: −20 mV | λ: 10⁻⁵ | α: 2
4.1.1 Learning performance
We test the performance of networks of N input neurons at classifying spatio-temporal spike patterns by generating P = αN patterns, which we repeatedly present to the network. In each pattern, each presynaptic neuron spikes exactly once at a fixed time in each presentation, with spike times uniformly distributed over the trial. Learning is organized in learning blocks. In each block all P patterns are presented in randomized order. Synaptic weights are initialized as zero, and are updated after each pattern presentation. After each block, we test if the postsynaptic output matches the desired activity for each pattern. If during training a postsynaptic spike at t = 0 was induced, the output can lie anytime in the testing trial for a positive outcome. To test scaling of the capacity, we generate networks of 100 to 600 neurons and present the patterns until the classification error reaches a plateau. Examples of learning curves (classification error over time) are shown in Fig. 3. For each combination of α and N, we run 40 simulations. The final classification error is the mean over the last 500 blocks, averaged over all runs. The parameters we use in the simulations are shown in Tab. 1. Fig. 3 shows the final classification performance as a function of the memory load α, for all network sizes we use. Up to a load of 0.18, the network learns to perfectly classify each pattern. Higher loads leave a residual error which increases with load. The drop in performance is steeper for larger networks. In comparison, the simplified Tempotron learning rule proposed in [5] achieves perfect classification up to α ≈ 1.5, i.e. one order of magnitude higher.
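The benchmark above is easy to reproduce. Here is a hypothetical generator for it; the function and variable names and the label convention are our own, not the paper's:

```python
import numpy as np

def make_patterns(N=100, load=0.18, T=0.5, seed=0):
    """Spike patterns as in Sec. 4.1.1: P = load*N patterns, each
    presynaptic neuron firing exactly once at a fixed time drawn
    uniformly over the trial of length T, plus a random binary label."""
    rng = np.random.default_rng(seed)
    P = int(load * N)
    spike_times = rng.uniform(0.0, T, size=(P, N))  # one spike per neuron
    labels = rng.integers(0, 2, size=P)             # desired postsynaptic class
    return spike_times, labels
```

Because every neuron spikes exactly once, a pattern is fully specified by an N-vector of spike times, which keeps the benchmark compact for all network sizes used above.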
4.2 Chronotron learning with RSTDP
In the Chronotron [17] input spike patterns become associated with desired spike trains. There are different learning rules which can achieve this mapping, including E-learning, I-learning, ReSuMe and PBSNLR [17, 19, 20]. The plasticity mechanism presented here has the tendency to generate postsynaptic spikes as close in time as possible to the teacher spike during recall. The presented learning principle is therefore a candidate for Chronotron learning. The average distance of these
spikes depends on the time constants of hyperpolarization and the learning window, especially τ_post. The modifications of the model necessary to implement Chronotron learning are described in the supplement. The resulting capacity, i.e. the ability to generate the desired spike times within a short window in time, is shown in Fig. 3 B. Up to a load of α = 0.01, the recall is perfect within the limits of the learning window of 30 ms. Inspection of the spike times reveals that the average distance of output spikes to the respective teacher spike is much shorter than the learning window (≈ 2 ms for α = 0.01, see supplemental Fig. 1).
5 Discussion
We present a new and biologically highly plausible approach to learning in neuronal networks. RSTDP with subthreshold LTD in concert with hyperpolarization is shown to be mathematically equivalent to the Perceptron learning rule for activity patterns consisting of synchronous spikes, thereby inheriting the highly desirable properties of the PLR (convergence in finite time, a stop condition once performance is sufficient, and robustness against noise). This provides a biologically plausible mechanism to build associative memories with a capacity close to the theoretical maximum. Equivalence of STDP with the PLR was shown before in [21], but that equivalence only holds on average. We would like to stress that we here present a novel approach that ensures exact mathematical equivalence to the PLR.
The mechanism proposed here is complementary to a previous approach [13] which uses CSTDP
in combination with spike frequency adaptation to perform gradient descent learning on a squared
error. However, that approach relies on an explicit teacher signal, and is not applicable to auto-associative memories in recurrent networks. Most importantly, the approach presented here inherits the important features of self-regulation and fast convergence from the original Perceptron, which are absent in [13].
For sparse spatio-temporal spike patterns extensive simulations show that the same mechanism is
able to learn Tempotrons and Chronotrons with substantial memory capacity. In the case of the
Tempotron, the capacity achieved with this mechanism is lower than with a comparably plausible
learning rule. However, in the case of the Chronotron the capacity comes close to the one obtained
with a commonly employed, supervised spike time learning rule. Moreover, such rules are biologically quite unrealistic. A prototypical example of such a supervised learning rule is the Tempotron rule proposed by Gütig and Sompolinsky [5]. Essentially, after a pattern presentation the complete time course of the membrane potential during the presentation is examined, and if classification was erroneous, the synaptic weights which contributed most to the absolute maximum of the potential are changed. In other words, the neurons would have to be able to retrospectively disentangle contributions to their membrane potential at a certain time in the past. As we showed here, RSTDP with
subthreshold LTD together with postsynaptic hyperpolarization for the first time provides a realistic
mechanism for Tempotron and Chronotron learning.
Spike after-hyperpolarization is often neglected in theoretical studies or assumed to only play a role
in network stabilization by providing refractoriness. Depolarization dependent STDP receives little
attention in modeling studies (but see [22]), possibly because there are only few studies which show
that such a mechanism exists [12, 23]. The novelty of the learning mechanism presented here lies
in the constructive roles both play in concert. After-hyperpolarization allows synaptic potentiation
for presynaptic inputs immediately after the teacher spike without causing additional non-teacher
spikes, which would be detrimental for learning. During recall, the absence of the hyperpolarization
ensures the then desired threshold crossing of the membrane potential (see Fig. 2 B). Subthreshold
LTD guarantees convergence of learning. It counteracts synaptic potentiation when the membrane
potential becomes sufficiently high after the teacher spike. The combination of both provides the
learning margin, which makes the resulting network robust against noise in the input. Taken together,
our results show that the interplay of neuronal dynamics and synaptic plasticity rules can give rise
to powerful learning dynamics.
Acknowledgments
This work was in part funded by the German Federal Ministry of Education and Research (BMBF), grant number 01GQ0964. We are grateful to the anonymous reviewers who pointed out an error in the first version of the proof.
References
[1] Hertz J, Krogh A, Palmer RG (1991) Introduction to the Theory of Neural Computation., Addison-Wesley.
[2] Rosenblatt F (1957) The Perceptron – a perceiving and recognizing automaton. Report 85-460-1.
[3] Minsky ML, Papert SA (1969) Perceptrons. Cambridge, MA: MIT Press.
[4] Diederich S, Opper M (1987) Learning of correlated patterns in spin-glass networks by local learning rules.
Physical Review Letters 58(9):949-952.
[5] Gütig R, Sompolinsky H (2006) The Tempotron: a neuron that learns spike timing-based decisions. Nature
Neuroscience 9(3):420-8.
[6] Dan Y, Poo M (2004) Spike Timing-Dependent Plasticity of Neural Circuits. Neuron 44:23–30.
[7] Dan Y, Poo M (2006) Spike timing-dependent plasticity: from synapse to perception. Physiological Reviews 86(3):1033-48.
[8] Caporale N, Dan Y (2008) Spike Timing-Dependent Plasticity: A Hebbian Learning Rule. Annual Review of Neuroscience 31:25–46.
[9] Froemke RC, Poo MM, Dan Y (2005) Spike-timing-dependent synaptic plasticity depends on dendritic
location. Nature 434:221-225.
[10] Sjöström PJ, Häusser M (2006) A Cooperative Switch Determines the Sign of Synaptic Plasticity in Distal
Dendrites of Neocortical Pyramidal Neurons. Neuron 51:227-238.
[11] Haas JS, Nowotny T, Abarbanel HDI (2006) Spike-Timing-Dependent Plasticity of Inhibitory Synapses
in the Entorhinal Cortex. Journal of Neurophysiology 96(6):3305-3313.
[12] Sjöström PJ, Turrigiano GG, Nelson SB (2004) Endocannabinoid-Dependent Neocortical Layer-5 LTD
in the Absence of Postsynaptic Spiking. J Neurophysiol 92:3338-3343
[13] D'Souza P, Liu SC, Hahnloser RHR (2010) Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity. PNAS 107(10):4722–4727.
[14] Song S, Miller KD, Abbott LF (2000) Competitive Hebbian learning through spike-timing-dependent
synaptic plasticity. Nature Neuroscience 3:919-926.
[15] Izhikevich EM, Desai NS (2003) Relating STDP to BCM. Neural Computation 15:1511-1523
[16] Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W (2011) Inhibitory Plasticity Balances Excitation
and Inhibition in Sensory Pathways and Memory Networks. Science 334(6062):1569-1573.
[17] Florian RV (2012) The Chronotron: A Neuron That Learns to Fire Temporally Precise Spike Patterns.
PLoS ONE 7(8): e40233
[18] Rubin R, Monasson R, Sompolinsky H (2010) Theory of Spike Timing-Based Neural Classifiers. Physical
Review Letters 105(21): 218102.
[19] Ponulak F, Kasinski A (2010) Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence
Learning, Classification, and Spike Shifting. Neural Computation 22:467-510
[20] Xu Y, Zeng X, Zhong S (2013) A New Supervised Learning Algorithm for Spiking Neurons. Neural
Computation 25: 1475-1511
[21] Legenstein R, Naeger C, Maass W (2005) What Can a Neuron Learn with Spike-Timing-Dependent
Plasticity? Neural Computation 17:2337-2382
[22] Clopath C, Büsing L, Vasilaki E, Gerstner W (2010) Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience 13:344-355
[23] Fino E, Deniau JM, Venance L (2009) Brief Subthreshold Events Can Act as Hebbian Signals for LongTerm Plasticity. PLoS ONE 4(8): e6557
Reciprocally Coupled Local Estimators Implement
Bayesian Information Integration Distributively
Wen-hao Zhang^{1,2,3}, Si Wu^1
1 State Key Laboratory of Cognitive Neuroscience and Learning, and IDG/McGovern Institute for Brain Research, Beijing Normal University, China.
2 Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China.
3 University of Chinese Academy of Sciences, Shanghai, China.
[email protected], [email protected]
Abstract
Psychophysical experiments have demonstrated that the brain integrates information from multiple sensory cues in a near Bayesian optimal manner. The present
study proposes a novel mechanism to achieve this. We consider two reciprocally
connected networks, mimicking the integration of heading direction information
between the dorsal medial superior temporal (MSTd) and the ventral intraparietal
(VIP) areas. Each network serves as a local estimator and receives an independent
cue, either the visual or the vestibular, as direct input for the external stimulus.
We find that positive reciprocal interactions can improve the decoding accuracy
of each individual network as if it implements Bayesian inference from two cues.
Our model successfully explains the experimental finding that both MSTd and VIP
achieve Bayesian multisensory integration, though each of them only receives a
single cue as direct external input. Our result suggests that the brain may implement optimal information integration distributively at each local estimator through
the reciprocal connections between cortical regions.
1 Introduction
In our daily life, we sense the world through multiple sensory systems. For instance, while walking, we perceive heading direction through either the visual cue (optic flow), or the vestibular cue
generated by body movement, or both of them [1, 2]. In reality, because of noise, which arises due to signal ambiguity and/or fluctuations in neural transmission, our perception of the input information is often uncertain. In order to achieve an accurate or improved representation of the input
information, it is critical for the brain to integrate information from multiple sensory modalities.
Mathematically, Bayesian inference provides an optimal way to estimate the stimulus value based on multiple uncertain information sources. Consider the task of inferring heading direction θ based on the visual and vestibular cues. Suppose that with a single cue c_l (l = vi, ve correspond to the visual and the vestibular cues, respectively), the estimation of heading direction satisfies the Gaussian distribution p(c_l|θ), which has the mean μ_l and the variance σ_l². Under the condition that noises from different cues are independent of each other, Bayes' theorem states that

p(θ|c_vi, c_ve) ∝ p(c_vi|θ) p(c_ve|θ) p(θ),    (1)

where p(θ|c_vi, c_ve) is the posterior distribution of the stimulus when two cues are presented, and p(θ) the prior distribution. In the case of no prior knowledge, i.e., p(θ) is uniform, p(θ|c_vi, c_ve) also
satisfies the Gaussian distribution with the mean and variance given by

μ_b = (σ_ve² / (σ_vi² + σ_ve²)) μ_vi + (σ_vi² / (σ_vi² + σ_ve²)) μ_ve,    (2)

1/σ_b² = 1/σ_vi² + 1/σ_ve².    (3)
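As a concrete illustration of Eqs.(2) and (3), the sketch below fuses two Gaussian cue likelihoods; the cue readings are made-up numbers, not values from the paper:

```python
def fuse(mu_vi, var_vi, mu_ve, var_ve):
    """Bayesian fusion of two independent Gaussian cues, Eqs. (2)-(3)."""
    w_vi = var_ve / (var_vi + var_ve)   # weight of the visual cue
    w_ve = var_vi / (var_vi + var_ve)   # weight of the vestibular cue
    mu_b = w_vi * mu_vi + w_ve * mu_ve            # Eq. (2)
    var_b = 1.0 / (1.0 / var_vi + 1.0 / var_ve)   # Eq. (3)
    return mu_b, var_b

# Hypothetical readings: the vestibular cue is four times more reliable.
mu_b, var_b = fuse(mu_vi=10.0, var_vi=4.0, mu_ve=20.0, var_ve=1.0)
# The fused mean leans toward the more reliable cue, and the fused
# variance (Eq. 3) is smaller than either single-cue variance.
```

Note that the fused variance never exceeds the smaller of the two single-cue variances, which is the quantitative signature of optimal integration used throughout the paper.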
A number of elegant psychophysical experiments have demonstrated that humans and animals integrate multisensory information in an optimal Bayesian way. These include, for instance, using visual and auditory cues together to infer object location [3], getting the hand position from the visual and proprioceptive cues simultaneously [4], the combination of visual and haptic input to perceive object height [5], the integration of visual and vestibular cues to derive heading direction [6, 7], and the integration of texture and motion information to obtain depth [8]. Nevertheless, the detailed neural mechanism underlying Bayesian information integration remains largely unclear. Ma et al. proposed a feed-forward mechanism to achieve Bayesian integration [9]. In their framework, a centralized network integrates information from multiple sources. In particular, in their model, the improved decoding accuracy after combining input cues (i.e., the decreased uncertainty given by Eq.(3)) depends on the linear response nature of neurons, a feature in accordance with the statistics of Poisson spike trains. However, it is unclear how well this result can be extended to non-Poisson statistics. Moreover, it is not clear where this centralized network responsible for information integration is located in the cortex.
In this work, we propose a novel mechanism to implement Bayesian information integration, which
relies on the excitatory reciprocal interactions between local estimators, with each local estimator
receiving an independent cue as external input. Although our idea may be applicable to general cases, the present study focuses on two reciprocally connected networks, mimicking the integration of
heading direction information between the dorsal medial superior temporal (MSTd) area and ventral
intraparietal (VIP) area. It is known that MSTd and VIP receive the visual and the vestibular cues
as external input, respectively. We model each network as a continuous attractor neural network
(CANN), reflecting the property that neurons in MSTd and VIP are widely tuned by heading direction [10, 11]. Interestingly, we find that with positive reciprocal interactions, both networks read out heading direction optimally in the Bayesian sense, despite the fact that each network only receives a single cue as direct external input. This agrees well with the experimental finding that both MSTd
and VIP integrate the visual and vestibular cues optimally [6, 7]. Our result suggests that the brain
may implement Bayesian information integration distributively at each local area through reciprocal
connections between cortical regions.
2 The Model
We consider two reciprocally connected networks, each of which receives the stimulus information
from an independent sensory cue (see Fig.1A). The two networks may be regarded as representing,
respectively, the neural circuits in MSTd and VIP. Anatomical and fMRI data have revealed that
there exist abundant reciprocal interactions between MSTd and VIP [12-14]. Neurons in MSTd and
VIP are tuned to heading direction, relying on the visual and the vestibular cues [10, 15].
CANNs, also known as neural field models, have been successfully applied to describe the encoding of head-direction in neural systems [16]. Therefore, we build each network as a CANN. Denote θ to be the stimulus value (i.e., the heading direction) encoded by both networks; the neuronal preferred stimuli are in the range −π < θ ≤ π with periodic boundary condition. Denote U_l(θ, t), for l = 1, 2, the synaptic input at time t to the neurons having the preferred stimulus θ in the l-th network. The dynamics of U_l(θ, t) is determined by the recurrent inputs from other neurons in the same network, the reciprocal inputs from neurons in the other network, the external input I_l^ext(θ, t), and its own relaxation. It is written as
τ ∂/∂t [U_1(θ, t); U_2(θ, t)] = −[U_1(θ, t); U_2(θ, t)] + ρ ∫ [W_11  W_12; W_21  W_22] [r_1(θ′, t); r_2(θ′, t)] dθ′ + [I_1^ext(θ, t); I_2^ext(θ, t)],    (4)
where τ is the time constant of the synaptic current, which is typically on the order of 2-5 ms, and ρ is the neural density. r_l(θ, t) is the firing rate of neurons, which increases with the synaptic input but saturates when the synaptic input is sufficiently large. The saturation is mediated by the contribution
[Figure 1 graphic omitted.]
Figure 1: Network structure and stationary state. (A) The two networks are reciprocally connected and each of them forms a CANN. Each disk represents an excitatory neuron with its preferred heading direction indicated by the arrow inside. The gray disk in the middle of each network represents the inhibitory neuron pool, which sums the total activities of the excitatory neurons and generates divisive normalization (Eq.(5)). The solid line with an arrow is an excitatory connection, with the gray level indicating the strength. The gray dashed line with dots represents an inhibitory connection. (B) The stationary states of the two networks, which can locate anywhere in the perceptual space. Parameters: N = 100, k = 10⁻³, a = 0.5, L = 7, J_11 = J_22 = 1.5J_c, J_12 = J_21 = 0.5J_11.
of inhibitory neurons not explicitly presented in our framework. A solvable model that captures these features is given by divisive normalization [17, 18],
r_l(θ, t) = [U_l(θ, t)]_+² / (1 + kρ ∫ [U_l(θ′, t)]_+² dθ′),    (5)
where the symbol [x]_+ denotes a half-rectifying function, i.e., [x]_+ = 0 for x ≤ 0 and [x]_+ = x for x > 0, and k reflects the strength of global inhibition.
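A minimal numerical rendering of the divisive normalization in Eq.(5); the parameter values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def firing_rate(U, theta, k, rho):
    # Eq. (5): half-rectify, square, then normalize by the total squared activity
    Up2 = np.clip(U, 0.0, None) ** 2
    dth = theta[1] - theta[0]
    return Up2 / (1.0 + k * rho * Up2.sum() * dth)

N = 200
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
rho = N / (2 * np.pi)
U = 10.0 * np.exp(-theta**2 / (4 * 0.5**2))      # a Gaussian bump of height 10

r1 = firing_rate(U, theta, k=1e-3, rho=rho)
r2 = firing_rate(2 * U, theta, k=1e-3, rho=rho)  # double the synaptic input
# The rate grows sublinearly in U^2: doubling U less than quadruples the peak.
```

The sublinear growth is the saturation discussed above: for large bumps, the denominator in Eq.(5) dominates and the peak rate approaches a ceiling set by k and ρ.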
W_lm(θ, θ′) denotes the connection from the neurons θ′ in network m to the neurons θ in network l. W_11(θ, θ′) and W_22(θ, θ′) are the recurrent connections within the same network, and W_12(θ, θ′) and W_21(θ, θ′) the reciprocal connections between the networks. We assume they are of the Gaussian form, i.e.,

W_lm(θ, θ′) = (J_lm / (√(2π) a_lm)) exp(−(θ − θ′)² / (2 a_lm²)),    (6)
where a_lm determines the neuronal interaction range. In the text below, we consider a_lm ≪ π and effectively take −∞ < θ < ∞ in the theoretical analysis. We choose J_lm > 0, for l, m = 1, 2, implying excitatory recurrent and reciprocal neuronal interactions. The contribution of inhibitory neurons is implicitly included in the divisive normalization.
The external inputs to the two networks are given by

I_l^ext(θ, t) = α_l exp(−(θ − θ_l)² / (4 a_ll²)) + σ_l ξ_l(θ, t),    (7)
where θ_l denotes the stimulus value conveyed to network l by the corresponding sensory cue. This can be understood as: I_l^ext drives network l to be stable at θ_l when no reciprocal interaction and no noise exist. α_l is the input strength, and ξ_l(θ, t) is Gaussian white noise of zero mean and unit variance, with σ_l the noise amplitude. The noise term causes the uncertainty of the input information, which induces fluctuations of the network state. The exact form of I_l^ext is not critical here, as long as it has a unimodal form.
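To make the model concrete, the sketch below relaxes a single, uncoupled CANN obeying Eqs.(4)-(7) under a noise-free cue and reads out the decoded direction from the bump peak. The recurrent strength J, the input strength, and the integration step are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

N = 100
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
dth = 2 * np.pi / N
rho = N / (2 * np.pi)                  # neural density
a, k, J = 0.5, 1e-3, 1.0               # width, inhibition, recurrent strength (assumed)

d = theta[:, None] - theta[None, :]
d = (d + np.pi) % (2 * np.pi) - np.pi  # periodic distance on the ring
W = J / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))   # Eq. (6)

def rate(U):                           # Eq. (5), divisive normalization
    Up2 = np.clip(U, 0.0, None) ** 2
    return Up2 / (1.0 + k * rho * Up2.sum() * dth)

z_true = 0.8                           # stimulus conveyed by the cue
I_ext = 0.5 * np.exp(-(theta - z_true) ** 2 / (4 * a**2))       # Eq. (7), no noise

U, dt, tau = np.zeros(N), 0.05, 1.0
for _ in range(3000):                  # forward-Euler integration of Eq. (4)
    U = U + dt / tau * (-U + rho * (W @ rate(U)) * dth + I_ext)

z_hat = theta[np.argmax(U)]            # bump peak = decoded heading direction
```

With the cue switched on, the bump settles with its peak at the cued direction; with the input removed, a bump would persist only if the recurrent strength exceeds the critical value discussed in the next subsection.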
2.1 The dynamics of uncoupled networks
It is instructive to first review the dynamics of the two networks without reciprocal connections (by setting W_lm = 0 for l ≠ m in Eq.(4)). In this case, the dynamics of each network is independent of the other. Because of the translation-invariance of the recurrent connections W_ll(θ, θ′), each network can support a continuous family of active stationary states even when the external input is
removed [19]. These attractor states are of Gaussian shape, called bumps, and are given by

U_l(θ) = U_l⁰ exp(−(θ − z_l)² / (4 a_ll²)),    (8)

where z_l is a free parameter representing the peak position of the bump, and U_l⁰ = ρ [1 + (1 − J_c/J_ll)^(1/2)] J_ll / (4 a_ll k √π). The bumps are stable for J_ll > J_c, with J_c = 2√2 (2π)^(1/4) √(k a_ll) / ρ the critical connection strength below which only silent states, U_l⁰ = 0, exist.
In response to external inputs, the bump position z_l is interpreted as the population decoding result of the network. It has been proven that for a strong transient or a weak constant input, the network bump will move toward, and stabilize at, the position having the maximum overlap with the noisy input, realizing the so-called template-matching operation [17, 18]. For temporally fluctuating inputs, the bump position also fluctuates in time, and the variance of the bump position measures the network decoding uncertainty.
In a CANN, its stationary states form a continuous manifold in which the network is neutrally
stable, i.e., the network state can translate smoothly when the external input changes continuously
[18, 20]. This neutral stability is the key that enables the neural system to track moving direction,
head-direction and spatial location of objects smoothly [16, 21, 22]. Due to the special structure of
a CANN, it has been proved that the dynamics of a CANN is dominated by a few motion modes,
corresponding to distortions in the height, position and other higher order features of the Gaussian
bump [19]. In the weak input limit, it is enough to project the network dynamics onto the first few dominating motion modes and neglect the higher-order ones, which simplifies the network dynamics significantly. The first two dominating motion modes we are going to use are
height: φ_0(θ|z) = exp(−(θ − z)² / (4a²)),    (9)

position: φ_1(θ|z) = ((θ − z)/a) exp(−(θ − z)² / (4a²)),    (10)
where a is the width of the basis functions, whose value is determined by the bump width the network holds. By projecting a function f(θ) on a motion mode φ(θ|z), we mean to compute the quantity ∫ f(θ) φ(θ|z) dθ / ∫ φ(θ|z) dθ.
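The role of the two modes can be checked numerically: they are mutually orthogonal, and an infinitesimal shift of the bump moves the state exactly along the position mode. This is an illustrative sketch; the width a = 0.5 is an assumed value:

```python
import numpy as np

a = 0.5
theta = np.linspace(-np.pi, np.pi, 4001)
dth = theta[1] - theta[0]

phi0 = np.exp(-theta**2 / (4 * a**2))                 # height mode, Eq. (9), z = 0
phi1 = (theta / a) * np.exp(-theta**2 / (4 * a**2))   # position mode, Eq. (10), z = 0

# The two modes are orthogonal, so height and position distortions decouple.
overlap = np.sum(phi0 * phi1) * dth

# A small shift of the bump U = exp(-(theta - z)^2 / (4 a^2)) changes it, to
# first order, exactly along the position mode: dU/dz at z = 0 is phi1 / (2a).
dU_dz = theta / (2 * a**2) * np.exp(-theta**2 / (4 * a**2))
```

This is why tracking the coefficients of φ_0 and φ_1 suffices to follow the bump's height and position in the weak input limit.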
When reciprocal connections are included, the dynamics of the two networks interact with each
other. The bump position of each network is no longer solely determined by its own input, but is
also affected by the input to the other network, enabling both networks to integrate two sensory
cues via reciprocal connections. We consider the reciprocal connections, W_lm(θ, θ′), for l ≠ m, also translation-invariant (Eq.(6)), so that the two networks still hold the key property of CANNs. That
is, they can hold a continuous family of stationary states and track time-varying inputs smoothly
(Fig.1B).
3 Dynamics of Coupled Networks
It is in general difficult to analyze the dynamics of two coupled networks. In the text below, we
will consider the weak input limit and use a projection method to simplify the network dynamics.
The simplified model allows us to solve the network decoding performance analytically and gives us insight into how reciprocal connections help both networks to integrate information optimally from independent cues.
For simplicity, we consider two networks that are completely symmetric: they have the same structure, i.e., J_11 = J_22 ≡ J_rc, J_12 = J_21 ≡ J_rp, and a_lm = a; and they receive the same mean input value and input strength, i.e., θ_1 = θ_2 ≡ θ, α_1 = α_2 ≡ α and σ_1 = σ_2 ≡ σ. They receive, however, independent noises, i.e., ⟨ξ_1 ξ_2⟩ = 0, implying that the two cues are independent of each other given the stimulus.
In the weak input limit (i.e., for small enough α), we find that the network states have approximately Gaussian shape and their variations are dominated by the height and position changes of the bump
[Figure 2 graphic omitted.]
Figure 2: Characteristics of the network dynamics. (A) The two networks receive the same external input, whose value jumps abruptly from −1 to 1. The network states move smoothly from the initial to the target position, and their main changes are the height and the position of the Gaussian bumps. (B) The basis functions for the two dominating motion modes. (C) The simplified network dynamics after projecting onto the two dominating motion modes. Parameters: α_1 = α_2 = 0.2U⁰, σ_1 = σ_2 = 0, and others are the same as Fig.1.
(see Fig.2). Thus, we take the Gaussian ansatz and assume the network states to be

U_l(θ, t) ≈ A(t) exp(−(θ − z_l(t))² / (4a²)),    (11)

r_l(θ, t) ≈ B(t) exp(−(θ − z_l(t))² / (2a²)),    (12)
where A(t) represents the bump height, z_l(t) the bump position in network l, a the bump width, and B = [A]_+² / (1 + √(2π) kρa [A]_+²) according to Eq.(5). Note that the bumps in the two networks have the same shape but different positions due to the independent noises.

Substituting Eqs.(11,12) and (7) into the network dynamics Eq.(4), and projecting them onto the height and position motion modes (9-10), we obtain the dynamics for the height and position of the bumps in the two networks (see Supplemental information 1), which are
τ dA/dt = −A + (J̃_rc + J̃_rp) B + α,    (13)

τ dz_1/dt = (J̃_rp B/A)(z_2 − z_1) + (α/A)(θ − z_1) + (2σ√a / ((2π)^(1/4) A)) ξ_1(t),    (14)

τ dz_2/dt = (J̃_rp B/A)(z_1 − z_2) + (α/A)(θ − z_2) + (2σ√a / ((2π)^(1/4) A)) ξ_2(t),    (15)
where J̃ ≡ ρJ/√2 for simplifying the notation. By removing the external inputs (setting α = 0 in Eq.(13)), we can get the necessary condition for the networks to hold self-sustained bump states, which is (see Supplemental information 2)

J_rc + J_rp ≥ 2√2 (2π)^(1/4) √(ka) / ρ.    (16)
It indicates that positive reciprocal interactions Jrp help the networks to retain attractor states.
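The reduced system (13)-(15) is easy to integrate numerically. The sketch below uses forward Euler with assumed parameter values and the noise switched off (σ = 0), so that the deterministic fixed point (both bumps settling at the stimulus value θ) can be verified:

```python
import numpy as np

tau, a, k, rho = 1.0, 0.5, 1e-3, 100 / (2 * np.pi)
Jrc_t, Jrp_t = 1.0, 0.5            # the J-tilde couplings (assumed values)
alpha, theta_s = 0.5, 0.3          # input strength and stimulus value

def B_of(A):
    # Firing-rate amplitude implied by the Gaussian ansatz, Eqs. (5), (11), (12);
    # assumes A > 0 so the rectification can be dropped.
    return A**2 / (1.0 + np.sqrt(2 * np.pi) * k * rho * a * A**2)

A, z1, z2, dt = 1.0, -1.0, 1.0, 0.05
for _ in range(40_000):
    B = B_of(A)
    dA = -A + (Jrc_t + Jrp_t) * B + alpha                          # Eq. (13)
    dz1 = Jrp_t * B / A * (z2 - z1) + alpha / A * (theta_s - z1)   # Eq. (14)
    dz2 = Jrp_t * B / A * (z1 - z2) + alpha / A * (theta_s - z2)   # Eq. (15)
    A, z1, z2 = A + dt / tau * dA, z1 + dt / tau * dz1, z2 + dt / tau * dz2
# Both bump positions converge to theta_s; A settles at a positive height.
```

Restoring the noise terms of Eqs.(14)-(15) turns the difference and sum coordinates into the Ornstein-Uhlenbeck processes analyzed next.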
To get a clear understanding of the effect of reciprocal connections, we decouple the dynamics of z_1 and z_2 by studying the dynamics of their difference, z_d = z_1 − z_2, and their summation, z_s = z_1 + z_2. From Eqs.(14) and (15), we obtain
τ dz_d/dt = −((α + 2J̃_rp B)/A) z_d + (2√2 σ√a / ((2π)^(1/4) A)) ξ_d(t),    (17)

τ dz_s/dt = −(α/A) z_s + (2α/A) θ + (2√2 σ√a / ((2π)^(1/4) A)) ξ_s(t),    (18)
where ξ_d(t) and ξ_s(t) are independent Gaussian white noises re-organized from ξ_1(t) and ξ_2(t) (√2 ξ_d = ξ_1 − ξ_2).

By solving the above stochastic differential equations, we get the means and variances of z_d and z_s in the limit t → ∞, which are
⟨z_d⟩ = 0,  ⟨z_s⟩ = 2θ,    (19)

Var(z_d) ≡ ⟨(z_d − ⟨z_d⟩)²⟩ = (4σ²a / (√(2π) τ A)) · 1/(α + 2J̃_rp B),    (20)

Var(z_s) ≡ ⟨(z_s − ⟨z_s⟩)²⟩ = 4σ²a / (√(2π) τ A α),    (21)
where the symbol ⟨·⟩ represents averaging over many trials. Eq.(20) indicates that positive reciprocal connections J̃_rp tend to decrease the variance of z_d, i.e., the difference between the states of the two networks (in practice, varying J̃_rp also induces mild changes in A, B and a; we have confirmed in simulation that, for a wide range of parameters, increasing J̃_rp indeed decreases Var(z_d)).
The decoding error of each network, measured by the variance of zl , is calculated to be (two networks have the same result due to the symmetry),
⟨z_l⟩ = θ,  for l = 1, 2,    (22)

Var(z_l) = [Var(z_d) + Var(z_s)] / 4 = (σ²a / (√(2π) τ A)) · (1/α + 1/(α + 2J̃_rp B)).    (23)
We see that the network decoding is unbiased and that the errors tend to decrease with the reciprocal connection strength J̃_rp (see the second term on the right-hand side of Eq.(23)). It is easy to check that in the extreme cases, and assuming the bump shape is unchanged (which is not exactly true but is still a good indication), the network decoding variance with vanishing reciprocal interaction (J̃_rp = 0) is twice that with infinitely strong reciprocal interactions (J̃_rp = ∞). Thus, reciprocal connections between networks do provide an effective way to integrate information from independent input cues.
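The twofold claim can be checked directly from Eq.(23), holding the bump shape fixed; all numbers below are assumed, illustrative values:

```python
import math

def var_z(JrpB, alpha, sigma2=1.0, a=0.5, tau=1.0, A=10.0):
    """Eq. (23) with the product J-tilde_rp * B passed as one number JrpB
    (bump shape held fixed; the default parameters are assumed values)."""
    pref = sigma2 * a / (math.sqrt(2 * math.pi) * tau * A)
    return pref * (1.0 / alpha + 1.0 / (alpha + 2.0 * JrpB))

alpha = 0.5
v_uncoupled = var_z(0.0, alpha)   # no reciprocal connections (J-tilde_rp = 0)
v_strong = var_z(1e12, alpha)     # proxy for the infinitely-strong-coupling limit
ratio = v_uncoupled / v_strong    # tends to 2
```

At J̃_rp = 0 both terms in Eq.(23) equal 1/α, while at J̃_rp → ∞ the second term vanishes, so the ratio of the two variances approaches exactly 2.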
To further display the advantage of reciprocal connections, we also calculate the situation where a single network receives both input cues. This equals setting the external input to a single CANN to be I^ext(x, t) = 2α exp(−(x − θ)²/(4a²)) + √2 σ ξ(x, t) (see Eq.(7) and consider the independence between the two cues). The result in this case can be obtained straightforwardly from Eq.(23) by choosing J̃_rp = 0 and replacing α with 2α and σ with √2σ, which gives Var(z)_single = 2σ²a / (√(2π) τ A α). This result equals the error when the two networks are uncoupled, and is larger than that of the coupled case.
In the weak input limit, the decoding errors in the general situation where the two networks are not symmetric can also be calculated (see Supplemental information 3):

Var(z_1) = (2a / (√(2π) τ)) · { [(J̃_12 B_2 α_2 + J̃_21 B_1 α_1 + α_1 α_2) A_2/A_1 + (J̃_21 B_1 + α_2)²] σ_1² + (J̃_12 B_2)² σ_2² } / { (J̃_12 B_2 A_2 + α_1 A_2 + J̃_21 B_1 A_1 + α_2 A_1)(J̃_12 B_2 α_2 + J̃_21 B_1 α_1 + α_1 α_2) }.    (24)

Var(z_2) has the same form as Var(z_1) except that the indexes 1 and 2 are interchanged.
4 Coupled Networks Implement Bayesian Information Integration
In this section, we compare the network performances with experimental findings. Mimicking the
experimental setting for exploring the integration of visual and vestibular cues in the inference of
heading direction [6, 7], we apply three input conditions to two networks (see Fig.3A), which are:
- Only visual cue:  α_1 = α, α_2 = 0.
- Only vestibular cue:  α_1 = 0, α_2 = α.
- Combined cues:  α_1 = α, α_2 = α.
In all three conditions, the noise amplitude is unchanged and the reciprocal connections are intact.
[Figure 3 graphic omitted.]
Figure 3: Two coupled networks implement (nearly) Bayesian inference. (A) The three input conditions to the two networks. (B) The bump position of network 1 fluctuates around the true stimulus value 0. The right panel displays the bump position distributions in the three input conditions, from which we estimate the mean and variance of the decoding results. (C),(D) Comparison of the network decoding results with two cues against the predictions of Bayesian inference: (C) for the mean value and (D) for the variance. Different combinations of the input strengths α_l and the reciprocal connection strengths J_rp are chosen. Parameters: θ_1 = −0.07, θ_2 = 0.07, σ_1 = σ_2 = 0.5, α_i ∈ [0.1, 0.5]U⁰, J_rp ∈ [0.3, 1]J_rc, and the others are the same as Fig.1.
Considering the symmetric structures of two networks and ignoring the mild changes in the bump
shape in the weak input limit, we can obtain from Eq.(24) the decoding variance in the three input
conditions, which are (because of the symmetry, only the results for the network 1 are shown)
Var(z_1|c_vi) = 2aσ² / (√(2π) τ α A),    (25)

Var(z_1|c_ve) = (2aσ² / (√(2π) τ α A)) · (J̃_rp B + α) / (J̃_rp B),    (26)

Var(z_1|c_b) = (2aσ² / (√(2π) τ α A)) · (J̃_rp B + α) / (2J̃_rp B + α),    (27)
where Var(z_1|c_vi), Var(z_1|c_ve) and Var(z_1|c_b) denote, respectively, the decoding errors when only the visual cue, only the vestibular cue, and both cues are presented. It is straightforward to check that

1/Var(z_1|c_b) = 1/Var(z_1|c_vi) + 1/Var(z_1|c_ve).    (28)

Thus, in the weak input limit, the coupled CANNs implement Bayesian inference perfectly (compare Eq.(28) to the Bayesian criterion Eq.(3)).
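The additivity of inverse variances in Eq.(28) follows algebraically from Eqs.(25)-(27); the sketch below checks it numerically, with the shared prefactor and the product J̃_rp B collapsed into single assumed numbers:

```python
def variances(JB, alpha, pref=1.0):
    """Eqs. (25)-(27), with the common prefactor 2*a*sigma^2/(sqrt(2*pi)*tau*alpha*A)
    collapsed into pref and J-tilde_rp * B into JB (both assumed numbers)."""
    v_vi = pref                                    # Eq. (25)
    v_ve = pref * (JB + alpha) / JB                # Eq. (26)
    v_b = pref * (JB + alpha) / (2 * JB + alpha)   # Eq. (27)
    return v_vi, v_ve, v_b

v_vi, v_ve, v_b = variances(JB=0.8, alpha=0.3)
lhs = 1 / v_b
rhs = 1 / v_vi + 1 / v_ve   # Eq. (28): the inverse variances add
# The combined-cue variance is below both single-cue variances, as in Eq. (3).
```

Because the identity holds for any positive JB and alpha, the Bayesian criterion is met throughout the weak-input regime rather than at isolated parameter settings.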
We carry out simulations to further confirm the above theoretical analysis. We run the network
dynamics under three input conditions for many trials, and calculate the means and variances of
the bump positions in each condition. Fig.3B shows that the bump position fluctuations become
[Figure 4 graphic omitted.]
Figure 4: The decoding mean of network 1 shifts toward the more reliable cue. The color encodes the ratio of the input strengths to the two networks, α_1/α_2, which generates varied reliability for the two cues. As the ratio Var(z_1|c_vi)/Var(z_1|c_ve) increases, i.e., the vestibular cue becomes more reliable than the visual one, the network estimate shifts toward the stimulus value θ_2 conveyed by the vestibular cue. Parameters: θ_1 = 0.07, θ_2 = −0.07, σ_1 = σ_2 = 0.01 and the others are the same as Fig.1.
narrower in the combined cue input condition, indicating greater accuracy in the decoding. We
compare the result when both cues are presented with the prediction of Bayesian inference,
obtained by using Eqs.(2, 3). Fig.3C and D show that two networks indeed achieve near Bayesian
optimal inference for a wide range of input amplitudes and reciprocal connection strengths.
A salient feature of Bayesian inference is that its decoded value is biased toward the more reliable cue. The reliability of the cues is quantified by their variance ratio; e.g., (σ_vi)² < (σ_ve)² means that the visual cue is more reliable than the vestibular one. From Eq.(2), we see that Bayesian inference gives a larger weight to the more reliable cue. This property has been used as a criterion in experiments to check the implementation of Bayesian inference, called "reliability-based cue weighting" [23]. We also test this property in our model. To achieve different reliabilities of the cues, we adjust the input strength α_1 and keep the other input parameters unchanged, mimicking the experimental finding that the firing rate of an MT neuron, an earlier stage before MSTd, increases with the input coherence for its preferred stimuli [24]. With varying input strengths α_1, and hence varied ratios Var(z_1|c_vi)/Var(z_1|c_ve), we calculate the mean of the network decoding. Fig.4 shows that the decoded mean in the combined-cues condition indeed shifts toward the more reliable cue, agreeing with the experimental finding and the property of Bayesian inference.
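The same shift already follows from the Bayesian criterion Eq.(2). The sketch below uses the conflicting cue values of Fig.4 with assumed variances:

```python
def fused_mean(mu_vi, var_vi, mu_ve, var_ve):
    """Eq. (2): each cue is weighted by the other cue's variance."""
    return (var_ve * mu_vi + var_vi * mu_ve) / (var_vi + var_ve)

mu_vi, mu_ve = 0.07, -0.07   # the conflicting cue values used in Fig. 4
m_equal = fused_mean(mu_vi, 1.0, mu_ve, 1.0)    # equal reliability: no bias
m_vi_bad = fused_mean(mu_vi, 4.0, mu_ve, 1.0)   # visual cue degraded
# The estimate moves from 0 toward mu_ve as the vestibular cue dominates.
```

With equal variances the fused estimate sits at the midpoint of the cue conflict; degrading one cue pulls the estimate toward the other, which is exactly the behavior the network reproduces in Fig.4.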
5 Conclusion
In the present study, we have proposed a novel mechanism to implement Bayesian information integration. We consider two networks which are reciprocally connected, and each of them is modeled
as a CANN receiving the stimulus information from an independent cue. Our network model may
be regarded as mimicking the information integration on heading direction between the neural circuits in MSTd and VIP. Experimental data has revealed that the two areas are densely connected in
reciprocity and that neurons in both areas are widely tuned by heading direction, favoring our model
assumptions.
We use a projection method to solve the network dynamics in the weak input limit analytically
and get insights into how positive reciprocal connections enable one network to effectively integrate information from the other. We then carry out simulations to confirm the theoretical analysis,
following the experimental protocols. Our results show that both networks realize near Bayesian optimal decoding for a wide range of parameters, supporting the experimental finding that both MSTd
and VIP optimally integrate the visual and the vestibular cues in heading direction inference, though
each of them only receives a single cue directly.
Our study may have far-reaching implications for neural information processing. It suggests that
the brain can implement efficient information integration in a distributive manner through reciprocal
connections between cortical regions. Compared to centralized information integration, distributive
processing is more robust to local failures and facilitates parallel computation.
6 Acknowledgements
This work is supported by the National Natural Science Foundation of China (No.91132702 and
No.31261160495).
8
References
[1] L. R. Harris, M. Jenkin, D. C. Zikovitz, Experimental Brain Research 135, 12 (2000).
[2] R. Bertin, A. Berthoz, Experimental Brain Research 154, 11 (2004).
[3] B. E. Stein, T. R. Stanford, Nature Reviews Neuroscience 9, 255 (2008).
[4] R. J. van Beers, A. C. Sittig, J. J. D. van der Gon, Journal of Neurophysiology 81, 1355 (1999).
[5] M. O. Ernst, M. S. Banks, Nature 415, 429 (2002).
[6] Y. Gu, D. E. Angelaki, G. C. DeAngelis, Nature Neuroscience 11, 1201 (2008).
[7] A. Chen, G. C. DeAngelis, D. E. Angelaki, The Journal of Neuroscience 33, 3567 (2013).
[8] R. A. Jacobs, Vision Research 39, 3621 (1999).
[9] W. J. Ma, J. M. Beck, P. E. Latham, A. Pouget, Nature Neuroscience 9, 1432 (2006).
[10] Y. Gu, P. V. Watkins, D. E. Angelaki, G. C. DeAngelis, The Journal of Neuroscience 26, 73
(2006).
[11] A. Chen, G. C. DeAngelis, D. E. Angelaki, The Journal of Neuroscience 31, 3082 (2011).
[12] D. Boussaoud, L. G. Ungerleider, R. Desimone, Journal of Comparative Neurology 296, 462
(1990).
[13] J. S. Baizer, L. G. Ungerleider, R. Desimone, The Journal of Neuroscience 11, 168 (1991).
[14] J. Vincent, et al., Nature 447, 83 (2007).
[15] A. Chen, G. C. DeAngelis, D. E. Angelaki, The Journal of Neuroscience 31, 12036 (2011).
[16] K. Zhang, The Journal of Neuroscience 16, 2112 (1996).
[17] S. Deneve, P. Latham, A. Pouget, Nature Neuroscience 2, 740 (1999).
[18] S. Wu, S.-I. Amari, H. Nakahara, Neural Computation 14, 999 (2002).
[19] C. A. Fung, K. M. Wong, S. Wu, Neural Computation 22, 752 (2010).
[20] S.-I. Amari, Biological Cybernetics 27, 77 (1977).
[21] A. P. Georgopoulos, M. Taira, A. Lukashin, et al., Science 260, 47 (1993).
[22] A. Samsonovich, B. L. McNaughton, The Journal of Neuroscience 17, 5900 (1997).
[23] C. R. Fetsch, A. Pouget, G. C. DeAngelis, D. E. Angelaki, Nature Neuroscience 15, 146 (2011).
[24] K. H. Britten, M. N. Shadlen, W. T. Newsome, J. A. Movshon, et al., Visual Neuroscience 10,
1157 (1993).
4,482 | 5,056 | Multisensory Encoding, Decoding, and Identification
Yevgeniy B. Slutskiy?
Department of Electrical Engineering
Columbia University
New York, NY 10027
[email protected]
Aurel A. Lazar
Department of Electrical Engineering
Columbia University
New York, NY 10027
[email protected]
Abstract
We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of
biophysical spike generators. We demonstrate that stimuli of different dimensions
can be faithfully multiplexed and encoded in the spike domain and derive tractable
algorithms for decoding each stimulus from the common pool of spikes. We also
show that the identification of multisensory processing in a single neuron is dual
to the recovery of stimuli encoded with a population of multisensory neurons, and
prove that only a projection of the circuit onto input stimuli can be identified. We
provide an example of multisensory integration using natural audio and video and
discuss the performance of the proposed decoding and identification algorithms.
1
Introduction
Most organisms employ a multitude of sensory systems to create an internal representation of their
environment. While the advantages of functionally specialized neural circuits are numerous, many
benefits can also be obtained by integrating sensory modalities [1, 2, 3]. The perceptual advantages
of combining multiple sensory streams that provide distinct measurements of the same physical
event are compelling, as each sensory modality can inform the other in environmentally unfavorable
circumstances [4]. For example, combining visual and auditory stimuli corresponding to a person
talking at a cocktail party can substantially enhance the accuracy of the auditory percept [5].
Interestingly, recent studies demonstrated that multisensory integration takes place in brain areas that
were traditionally considered to be unisensory [2, 6, 7]. This is in contrast to classical brain models in
which multisensory integration is relegated to anatomically established sensory convergence regions,
after extensive unisensory processing has already taken place [4]. Moreover, multisensory effects
were shown to arise not solely due to feedback from higher cortical areas. Rather, they seem to be
carried by feedforward pathways at the early stages of the processing hierarchy [2, 7, 8].
The computational principles of multisensory integration are still poorly understood. In part, this is
because most of the experimental data comes from psychophysical and functional imaging experiments which do not provide the resolution necessary to study sensory integration at the cellular level
[2, 7, 9, 10, 11]. Moreover, although multisensory neuron responses depend on several concurrently
received stimuli, existing identification methods typically require separate experimental trials for
each of the sensory modalities involved [4, 12, 13, 14]. Doing so creates major challenges, especially when unisensory responses are weak or together do not account for the multisensory response.
Here we present a biophysically-grounded spiking neural circuit and a tractable mathematical
methodology that together allow one to study the problems of multisensory encoding, decoding,
and identification within a unified theoretical framework. Our neural circuit is comprised of a bank
* The authors' names are listed in alphabetical order.
Figure 1: Multisensory encoding on the neuronal level. (a) Each neuron $i = 1, ..., N$ receives multiple stimuli $u^{im}_{n_m}$, $m = 1, ..., M$, of different modalities and encodes them into a single spike train $(t^i_k)_{k \in \mathbb{Z}}$. (b) A spiking point neuron model, e.g., the IAF model, describes the mapping of the current $v^i(t) = \sum_m v^{im}(t)$ into spikes.
of multisensory receptive fields in cascade with a population of neurons that implement stimulus
multiplexing in the spike domain. The circuit architecture is quite flexible in that it can incorporate
complex connectivity [15] and a number different spike generation models [16], [17].
Our approach is grounded in the theory of sampling in Hilbert spaces. Using this theory, we show
that signals of different modalities, having different dimensions and dynamics, can be faithfully
encoded into a single multidimensional spike train by a common population of neurons. Some
benefits of using a common population include (a) built-in redundancy, whereby, by rerouting, a
circuit could take over the function of another faulty circuit (e.g., after a stroke) (b) capability to
dynamically allocate resources for the encoding of a given signal of interest (e.g., during attention)
(c) joint processing and storage of multisensory signals/stimuli (e.g., in associative memory tasks).
First we show that, under appropriate conditions, each of the stimuli processed by a multisensory
circuit can be decoded loss-free from a common, unlabeled set of spikes. These conditions provide
clear lower bounds on the size of the population of multisensory neurons and the total number of
spikes generated by the entire circuit. We then discuss the open problem of identifying multisensory processing using concurrently presented sensory stimuli. We show that the identification of
multisensory processing in a single neuron is elegantly related to the recovery of stimuli encoded
with a population of multisensory neurons. Moreover, we prove that only a projection of the circuit
onto the space of input stimuli can be identified. Finally, we present examples of both decoding and
identification algorithms and demonstrate their performance using natural stimuli.
2
Modeling Sensory Stimuli, their Processing and Encoding
Our formal model of multisensory encoding, called the multisensory Time Encoding Machine
(mTEM) is closely related to traditional TEMs [18]. TEMs are real-time asynchronous mechanisms for encoding continuous and discrete signals into a time sequence. They arise as models of
early sensory systems in neuroscience [17, 19] as well as nonlinear sampling circuits and analog-to-discrete (A/D) converters in communication systems [17, 18]. However, in contrast to traditional
TEMs that encode one or more stimuli of the same dimension n, a general mTEM receives M input stimuli $u^1_{n_1}, ..., u^M_{n_M}$ of different dimensions $n_m \in \mathbb{N}$, $m = 1, ..., M$, and possibly different dynamics (Fig. 1a). The mTEM processes and encodes these signals into a multidimensional spike train using a population of N neurons. For each neuron $i = 1, ..., N$, the results of this processing are aggregated into the dendritic current $v^i$ flowing into the spike initiation zone, where it is encoded into a time sequence $(t^i_k)_{k \in \mathbb{Z}}$, with $t^i_k$ denoting the timing of the k-th spike of neuron i.
Similarly to traditional TEMs, mTEMs can employ a myriad of spiking neuron models. Several
examples include conductance-based models such as Hodgkin-Huxley, Morris-Lecar, FitzHugh-Nagumo, Wang-Buzsaki, Hindmarsh-Rose [20] as well as simpler models such as the ideal and
leaky integrate-and-fire (IAF) neurons [15]. For clarity, we will limit our discussion to the ideal IAF
neuron, since other models can be handled as described previously [20, 21]. For an ideal IAF neuron
with a bias $b^i \in \mathbb{R}_+$, capacitance $C^i \in \mathbb{R}_+$ and threshold $\delta^i \in \mathbb{R}_+$ (Fig. 1b), the mapping of the current $v^i$ into spikes is described by a set of equations formally known as the t-transform [18]:
$$\int_{t^i_k}^{t^i_{k+1}} v^i(s)\, ds = q^i_k, \qquad k \in \mathbb{Z}, \qquad (1)$$
where $q^i_k = C^i \delta^i - b^i (t^i_{k+1} - t^i_k)$. Intuitively, at every spike time $t^i_{k+1}$, the ideal IAF neuron provides a measurement $q^i_k$ of the current $v^i(t)$ on the time interval $[t^i_k, t^i_{k+1})$.
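To make the t-transform concrete, here is a minimal numerical sketch (not the paper's implementation; the input current, the parameter values, and the forward-Euler step are illustrative assumptions): an ideal IAF neuron integrates the biased current $(b + v(t))/C$, fires whenever the membrane potential reaches the threshold $\delta$, and resets to zero. Each resulting inter-spike interval then satisfies Eq. (1) up to discretization error.

```python
import numpy as np

def iaf_encode(v, dt, b, C, delta):
    """Ideal IAF neuron: integrate (b + v(t))/C, spike and reset at threshold delta.

    v is the input current sampled at resolution dt; returns the spike times.
    """
    spikes, y = [], 0.0
    for k, vk in enumerate(v):
        y += dt * (b + vk) / C          # forward-Euler membrane integration
        if y >= delta:                  # threshold crossing -> spike
            spikes.append((k + 1) * dt)
            y = 0.0                     # voltage reset to 0 (Fig. 1b)
    return np.array(spikes)

# illustrative input current and neuron parameters (assumptions, not from the paper)
dt = 1e-5
t = np.arange(0, 1.0, dt)
v = 0.5 * np.sin(2 * np.pi * 4 * t)
b, C, delta = 1.0, 1.0, 0.02
spikes = iaf_encode(v, dt, b, C, delta)

# t-transform check: the integral of v over [t_k, t_{k+1}) should equal
# q_k = C*delta - b*(t_{k+1} - t_k) up to the integration step
idx = np.rint(spikes / dt).astype(int)
residual = max(
    abs(np.sum(v[idx[k]:idx[k + 1]]) * dt
        - (C * delta - b * (spikes[k + 1] - spikes[k])))
    for k in range(len(idx) - 1)
)
```

The residual is on the order of the Euler step, confirming that every inter-spike interval carries exactly one quantal measurement $q_k$ of the current.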
2.1
Modeling Sensory Inputs
We model input signals as elements of reproducing kernel Hilbert spaces (RKHSs) [22]. Most real-world signals, including natural stimuli, can be described by an appropriately chosen RKHS [23]. For practical and computational reasons we choose to work with the space of trigonometric polynomials $H_{n_m}$ defined below, where each element of the space is a function in $n_m$ variables ($n_m \in \mathbb{N}$, $m = 1, 2, ..., M$). However, we note that the results obtained in this paper are not limited to this particular choice of RKHS (see, e.g., [24]).

Definition 1. The space of trigonometric polynomials $H_{n_m}$ is a Hilbert space of complex-valued functions
$$u^m_{n_m}(x_1, ..., x_{n_m}) = \sum_{l_1=-L_1}^{L_1} \cdots \sum_{l_{n_m}=-L_{n_m}}^{L_{n_m}} u^m_{l_1...l_{n_m}}\, e_{l_1...l_{n_m}}(x_1, ..., x_{n_m}),$$
over the domain $D_{n_m} = \prod_{n=1}^{n_m} [0, T_n]$, where $u^m_{l_1...l_{n_m}} \in \mathbb{C}$ and the functions $e_{l_1...l_{n_m}}(x_1, ..., x_{n_m}) = \exp\big(\sum_{n=1}^{n_m} j l_n \Omega_n x_n / L_n\big) / \sqrt{T_1 \cdots T_{n_m}}$, with j denoting the imaginary number. Here $\Omega_n$ is the bandwidth, $L_n$ is the order, and $T_n = 2\pi L_n / \Omega_n$ is the period in dimension $x_n$. $H_{n_m}$ is endowed with the inner product $\langle \cdot, \cdot \rangle : H_{n_m} \times H_{n_m} \to \mathbb{C}$, where
$$\langle u^m_{n_m}, w^m_{n_m} \rangle = \int_{D_{n_m}} u^m_{n_m}(x_1, ..., x_{n_m})\, \overline{w^m_{n_m}(x_1, ..., x_{n_m})}\, dx_1 \cdots dx_{n_m}. \qquad (2)$$
Given the inner product in (2), the set of elements $e_{l_1...l_{n_m}}(x_1, ..., x_{n_m})$ forms an orthonormal basis in $H_{n_m}$. Moreover, $H_{n_m}$ is an RKHS with the reproducing kernel (RK)
$$K_{n_m}(x_1, ..., x_{n_m};\, y_1, ..., y_{n_m}) = \sum_{l_1=-L_1}^{L_1} \cdots \sum_{l_{n_m}=-L_{n_m}}^{L_{n_m}} e_{l_1...l_{n_m}}(x_1, ..., x_{n_m})\, \overline{e_{l_1...l_{n_m}}(y_1, ..., y_{n_m})}.$$
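For the one-dimensional instance of this space ($n_m = 1$, used for audio below), the basis $e_l(t) = \exp(jl\Omega t/L)/\sqrt{T}$ is orthonormal under the inner product (2). A quick numerical check, with illustrative bandwidth and order (the values are assumptions, not taken from the paper), confirms that the Gram matrix of the $2L+1$ basis functions is the identity:

```python
import numpy as np

Omega, L = 2 * np.pi * 4, 4        # illustrative bandwidth (rad/s) and order
T = 2 * np.pi * L / Omega          # period T = 2*pi*L/Omega

N = 4096                           # uniform grid over one period [0, T)
dt = T / N
t = np.arange(N) * dt

def e(l):
    # basis function e_l(t) = exp(j*l*Omega*t/L) / sqrt(T)
    return np.exp(1j * l * Omega * t / L) / np.sqrt(T)

def inner(f, g):
    # inner product <f, g> = integral of f * conj(g) over [0, T), cf. Eq. (2);
    # a Riemann sum over a full period is exact for these exponentials
    return np.sum(f * np.conj(g)) * dt

# Gram matrix of the 2L+1 basis functions: should be the identity
G = np.array([[inner(e(l), e(m)) for m in range(-L, L + 1)]
              for l in range(-L, L + 1)])
```

Orthonormality is what lets the decoding and identification machinery below work directly with the coefficient vectors.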
Remark 1. In what follows, we will primarily be concerned with time-varying stimuli, and the dimension $x_{n_m}$ will denote the temporal dimension t of the stimulus $u^m_{n_m}$, i.e., $x_{n_m} = t$.

Remark 2. For M concurrently received stimuli, we have $T_{n_1} = T_{n_2} = \cdots = T_{n_M}$.

Example 1. We model audio stimuli $u^m_1 = u^m_1(t)$ as elements of the RKHS $H_1$ over the domain $D_1 = [0, T_1]$. For notational convenience, we drop the dimensionality subscript and use T, $\Omega$ and L to denote the period, bandwidth and order of the space $H_1$. An audio signal $u^m_1 \in H_1$ can be written as $u^m_1(t) = \sum_{l=-L}^{L} u^m_l\, e_l(t)$, where the coefficients $u^m_l \in \mathbb{C}$ and $e_l(t) = \exp(jl\Omega t/L)/\sqrt{T}$.

Example 2. We model video stimuli $u^m_3 = u^m_3(x, y, t)$ as elements of the RKHS $H_3$ defined on $D_3 = [0, T_1] \times [0, T_2] \times [0, T_3]$, where $T_1 = 2\pi L_1/\Omega_1$, $T_2 = 2\pi L_2/\Omega_2$, $T_3 = 2\pi L_3/\Omega_3$, with $(\Omega_1, L_1)$, $(\Omega_2, L_2)$ and $(\Omega_3, L_3)$ denoting the (bandwidth, order) pairs in the spatial directions x, y and in time t, respectively. A video signal $u^m_3 \in H_3$ can be written as $u^m_3(x, y, t) = \sum_{l_1=-L_1}^{L_1} \sum_{l_2=-L_2}^{L_2} \sum_{l_3=-L_3}^{L_3} u^m_{l_1 l_2 l_3}\, e_{l_1 l_2 l_3}(x, y, t)$, where the coefficients $u^m_{l_1 l_2 l_3} \in \mathbb{C}$ and the functions $e_{l_1 l_2 l_3}(x, y, t) = \exp(jl_1\Omega_1 x/L_1 + jl_2\Omega_2 y/L_2 + jl_3\Omega_3 t/L_3)/\sqrt{T_1 T_2 T_3}$.
2.2
Modeling Sensory Processing
Multisensory processing can be described by a nonlinear dynamical system capable of modeling linear and nonlinear stimulus transformations, including cross-talk between stimuli [25]. For clarity, here we will consider only the case of linear transformations that can be described by a linear filter having an impulse response, or kernel, $h^m_{n_m}(x_1, ..., x_{n_m})$. The kernel is assumed to be bounded-input bounded-output (BIBO)-stable and causal. Without loss of generality, we assume that such transformations involve convolution in the time domain (temporal dimension $x_{n_m}$) and integration in dimensions $x_1, ..., x_{n_m-1}$. We also assume that the kernel has a finite support in each direction $x_n$, $n = 1, ..., n_m$. In other words, the kernel $h^m_{n_m}$ belongs to the space $\mathcal{H}_{n_m}$ defined below.

Definition 2. The filter kernel space $\mathcal{H}_{n_m} = \big\{ h^m_{n_m} \in L^1(\mathbb{R}^{n_m}) \mid \mathrm{supp}(h^m_{n_m}) \subseteq D_{n_m} \big\}$.

Definition 3. The projection operator $\mathcal{P} : \mathcal{H}_{n_m} \to H_{n_m}$ is given (by abuse of notation) by
$$(\mathcal{P}h^m_{n_m})(x_1, ..., x_{n_m}) = \big\langle h^m_{n_m}(\cdot, ..., \cdot),\, K_{n_m}(\cdot, ..., \cdot;\, x_1, ..., x_{n_m}) \big\rangle. \qquad (3)$$
Since $\mathcal{P}h^m_{n_m} \in H_{n_m}$, $(\mathcal{P}h^m_{n_m})(x_1, ..., x_{n_m}) = \sum_{l_1=-L_1}^{L_1} \cdots \sum_{l_{n_m}=-L_{n_m}}^{L_{n_m}} h^m_{l_1...l_{n_m}}\, e_{l_1...l_{n_m}}(x_1, ..., x_{n_m})$.

3
Multisensory Decoding
Consider an mTEM comprised of a population of N ideal IAF neurons receiving M input signals $u^m_{n_m}$ of dimensions $n_m$, $m = 1, ..., M$. Assuming that the multisensory processing is given by kernels $h^{im}_{n_m}$, $m = 1, ..., M$, $i = 1, ..., N$, the t-transform in (1) can be rewritten as
$$\mathcal{T}^{i1}_k[u^1_{n_1}] + \mathcal{T}^{i2}_k[u^2_{n_2}] + ... + \mathcal{T}^{iM}_k[u^M_{n_M}] = q^i_k, \qquad k \in \mathbb{Z}, \qquad (4)$$
where $\mathcal{T}^{im}_k : H_{n_m} \to \mathbb{R}$ are linear functionals defined by
$$\mathcal{T}^{im}_k[u^m_{n_m}] = \int_{t^i_k}^{t^i_{k+1}} \int_{D_{n_m}} h^{im}_{n_m}(x_1, ..., x_{n_m-1}, s)\, u^m_{n_m}(x_1, ..., x_{n_m-1}, t - s)\, dx_1 \cdots dx_{n_m-1}\, ds\, dt.$$
We observe that each $q^i_k$ in (4) is a real number representing a quantal measurement of all M stimuli, taken by neuron i on the interval $[t^i_k, t^i_{k+1})$. These measurements are produced in an asynchronous fashion and can be computed directly from the spike times $(t^i_k)_{k \in \mathbb{Z}}$ using (1). We now demonstrate that it is possible to reconstruct the stimuli $u^m_{n_m}$, $m = 1, ..., M$, from $(t^i_k)_{k \in \mathbb{Z}}$, $i = 1, ..., N$.

Theorem 1. (Multisensory Time Decoding Machine (mTDM))
Let M signals $u^m_{n_m} \in H_{n_m}$ be encoded by a multisensory TEM comprised of N ideal IAF neurons and $N \times M$ receptive fields with full spectral support. Assume that the IAF neurons do not have the same parameters, and/or the receptive fields for each modality are linearly independent. Then, given the filter kernel coefficients $h^{im}_{l_1...l_{n_m}}$, $i = 1, ..., N$, all inputs $u^m_{n_m}$ can be perfectly recovered as
$$u^m_{n_m}(x_1, ..., x_{n_m}) = \sum_{l_1=-L_1}^{L_1} \cdots \sum_{l_{n_m}=-L_{n_m}}^{L_{n_m}} u^m_{l_1...l_{n_m}}\, e_{l_1...l_{n_m}}(x_1, ..., x_{n_m}), \qquad (5)$$
where the $u^m_{l_1...l_{n_m}}$ are elements of $u = \Phi^+ q$, and $\Phi^+$ denotes the pseudoinverse of $\Phi$. Furthermore, $\Phi = [\Phi^1; \Phi^2; ...; \Phi^N]$, $q = [q^1; q^2; ...; q^N]$ and $[q^i]_k = q^i_k$. Each matrix $\Phi^i = [\Phi^{i1}, \Phi^{i2}, ..., \Phi^{iM}]$, with
$$[\Phi^{im}]_{kl} = \begin{cases} h^{im}_{-l_1, -l_2, ..., -l_{n_m-1},\, l_{n_m}}\, (t^i_{k+1} - t^i_k), & l_{n_m} = 0, \\[4pt] h^{im}_{-l_1, -l_2, ..., -l_{n_m-1},\, l_{n_m}}\, \dfrac{\sqrt{L_{n_m} T_{n_m}}\, \big( e_{l_{n_m}}(t^i_{k+1}) - e_{l_{n_m}}(t^i_k) \big)}{j l_{n_m} \Omega_{n_m}}, & l_{n_m} \neq 0, \end{cases} \qquad (6)$$
where the column index l traverses all possible subscript combinations of $l_1, l_2, ..., l_{n_m}$. A necessary condition for recovery is that the total number of spikes generated by all neurons in an interval of length $T_{n_1} = T_{n_2} = \cdots = T_{n_M}$ is larger than $\sum_{m=1}^{M} \prod_{n=1}^{n_m} (2L_n + 1) + N$. If each neuron produces $\nu$ spikes, a sufficient condition is $N \geq \big\lceil \sum_{m=1}^{M} \prod_{n=1}^{n_m} (2L_n + 1) \big/ \min(\nu - 1,\, 2L_{n_m} + 1) \big\rceil$, where $\lceil x \rceil$ denotes the smallest integer greater than x.

Proof: Substituting (5) into (4),
$$q^i_k = \mathcal{T}^{i1}_k[u^1_{n_1}] + ... + \mathcal{T}^{iM}_k[u^M_{n_M}] = \langle u^1_{n_1}, \phi^{i1}_{1k} \rangle + ... + \langle u^M_{n_M}, \phi^{iM}_{Mk} \rangle = \sum_{l_1} \cdots \sum_{l_{n_1}} u^1_{-l_1, ..., -l_{n_1-1},\, l_{n_1}}\, \phi^{i1}_{l_1...l_{n_1}k} + ... + \sum_{l_1} \cdots \sum_{l_{n_M}} u^M_{-l_1, ..., -l_{n_M-1},\, l_{n_M}}\, \phi^{iM}_{l_1...l_{n_M}k},$$
where $k \in \mathbb{Z}$ and the second equality follows from the Riesz representation theorem with $\phi^{im}_{n_m k} \in H_{n_m}$, $m = 1, ..., M$. In matrix form the above equality can be written as $q^i = \Phi^i u$, with $[q^i]_k = q^i_k$ and $\Phi^i = [\Phi^{i1}, \Phi^{i2}, ..., \Phi^{iM}]$, where the elements $[\Phi^{im}]_{kl}$ are given by $[\Phi^{im}]_{kl} = \phi^{im}_{l_1...l_{n_m}k}$, with the index l traversing all possible subscript combinations of $l_1, l_2, ..., l_{n_m}$. To find the coefficients $\phi^{im}_{l_1...l_{n_m}k}$, we note that $\phi^{im}_{l_1...l_{n_m}k} = \mathcal{T}^{im}_k(e_{l_1...l_{n_m}})$, $m = 1, ..., M$, $i = 1, ..., N$. The column vector $u = [u^1; u^2; ...; u^M]$, with each vector $u^m$ containing $\prod_{n=1}^{n_m} (2L_n + 1)$ entries corresponding to the coefficients $u^m_{l_1 l_2 ... l_{n_m}}$. Repeating for all neurons $i = 1, ..., N$, we obtain $q = \Phi u$ with $\Phi = [\Phi^1; \Phi^2; ...; \Phi^N]$ and $q = [q^1; q^2; ...; q^N]$. This system of linear equations can be solved for u, provided that the rank $r(\Phi)$ of the matrix $\Phi$ satisfies $r(\Phi) = \sum_m \prod_{n=1}^{n_m} (2L_n + 1)$. A necessary condition for the latter is that the total number of measurements generated by all N neurons is greater than or equal to $\prod_{n=1}^{n_m} (2L_n + 1)$. Equivalently, the total number of spikes produced by all N neurons should be greater than $\prod_{n=1}^{n_m} (2L_n + 1) + N$. Then u can be uniquely specified as the solution to a convex optimization problem, e.g., $u = \Phi^+ q$. To find the sufficient condition, we note
Figure 2: Multimodal TEM and TDM for audio and video integration. (a) Block diagram of the multimodal TEM. (b) Block diagram of the multimodal TDM.
that the m-th component $v^{im}$ of the dendritic current $v^i$ has a maximal bandwidth of $\Omega_{n_m}$, and we need only $2L_{n_m} + 1$ measurements to specify it. Thus each neuron can produce a maximum of only $2\sum_m L_{n_m} + 1$ informative measurements, or equivalently, $2\sum_m L_{n_m} + 2$ informative spikes, on a time interval $[0, T_{n_m}]$. It follows that for each modality we require at least $\prod_{n=1}^{n_m} (2L_n + 1) / (2L_{n_m} + 1)$ neurons if $\nu \geq 2L_{n_m} + 2$, and at least $\big\lceil \prod_{n=1}^{n_m} (2L_n + 1) / (\nu - 1) \big\rceil$ neurons if $\nu < 2L_{n_m} + 2$.
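As a sanity check on Theorem 1, the following sketch decodes a single one-dimensional stimulus $u \in H_1$ (the simplest case: $M = 1$ and trivial receptive fields, so the neurons see the stimulus directly) from the spikes of three ideal IAF neurons with distinct biases. All parameter values are illustrative assumptions; the measurement matrix $\Phi$ is assembled from the integrals of the basis functions over the inter-spike intervals, and $u = \Phi^+ q$ recovers the coefficients.

```python
import numpy as np

Omega, L = 2 * np.pi * 4, 4                  # illustrative bandwidth and order of H_1
T = 2 * np.pi * L / Omega                    # stimulus period

def e(l, t):
    # orthonormal basis e_l(t) = exp(j*l*Omega*t/L) / sqrt(T)
    return np.exp(1j * l * Omega * t / L) / np.sqrt(T)

# random real-valued stimulus u(t) = sum_l u_l e_l(t); enforce u_{-l} = conj(u_l)
rng = np.random.default_rng(0)
u = 0.2 * (rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1))
u[L] = u[L].real                             # index L holds the l = 0 coefficient
for l in range(1, L + 1):
    u[L - l] = np.conj(u[L + l])

dt = 1e-5
t = np.arange(0, T, dt)
ut = sum(u[L + l] * e(l, t) for l in range(-L, L + 1)).real

def iaf_encode(v, dt, b, C, delta):
    # ideal IAF neuron: integrate (b + v)/C, spike and reset at threshold delta
    spikes, y = [], 0.0
    for k, vk in enumerate(v):
        y += dt * (b + vk) / C
        if y >= delta:
            spikes.append((k + 1) * dt)
            y = 0.0
    return np.array(spikes)

rows, q = [], []
for b in (2.0, 2.5, 3.0):                    # three neurons with distinct biases
    C, delta = 1.0, 0.05
    s = iaf_encode(ut, dt, b, C, delta)
    for k in range(len(s) - 1):
        q.append(C * delta - b * (s[k + 1] - s[k]))   # q_k from the t-transform
        # row of Phi: exact integrals of e_l over [t_k, t_{k+1})
        rows.append([(s[k + 1] - s[k]) / np.sqrt(T) if l == 0
                     else (L / (1j * l * Omega)) * (e(l, s[k + 1]) - e(l, s[k]))
                     for l in range(-L, L + 1)])

u_hat = np.linalg.pinv(np.array(rows)) @ np.array(q)  # u = Phi^+ q
```

With a few hundred measurements for the 9 unknown coefficients, the recovered coefficients match the originals up to the spike-timing quantization error of the simulation.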
4
Multisensory Identification
We now investigate the following nonlinear neural identification problem: given stimuli $u^m_{n_m}$, $m = 1, ..., M$, at the input to a multisensory neuron i and spikes at its output, find the multisensory receptive field kernels $h^{im}_{n_m}$, $m = 1, ..., M$. We will show that this problem is mathematically dual to the decoding problem discussed above. Specifically, we will demonstrate that the identification problem can be converted into a neural encoding problem, where each spike train $(t^i_k)_{k \in \mathbb{Z}}$ produced during an experimental trial i, $i = 1, ..., N$, is interpreted to be generated by the i-th neuron in a population of N neurons. We consider identifying kernels for only one multisensory neuron (identification for multiple neurons can be performed in a serial fashion) and drop the superscript i in $h^{im}_{n_m}$ throughout this section. Instead, we introduce the natural notion of performing multiple experimental trials and use the same superscript i to index the stimuli $u^{im}_{n_m}$ on different trials $i = 1, ..., N$.

Consider the multisensory neuron depicted in Fig. 1. Since for every trial i an input signal $u^{im}_{n_m}$, $m = 1, ..., M$, can be modeled as an element of some space $H_{n_m}$, we have $u^{im}_{n_m}(x_1, ..., x_{n_m}) = \langle u^{im}_{n_m}(\cdot, ..., \cdot),\, K_{n_m}(\cdot, ..., \cdot;\, x_1, ..., x_{n_m}) \rangle$ by the reproducing property of the RK $K_{n_m}$. It follows that
$$\int_{D_{n_m}} h^m_{n_m}(s_1, ..., s_{n_m-1}, s_{n_m})\, u^{im}_{n_m}(s_1, ..., s_{n_m-1}, t - s_{n_m})\, ds_1 \cdots ds_{n_m-1}\, ds_{n_m}$$
$$\overset{(a)}{=} \int_{D_{n_m}} u^{im}_{n_m}(s_1, ..., s_{n_m-1}, s_{n_m})\, \big\langle h^m_{n_m}(\cdot, ..., \cdot),\, K_{n_m}(\cdot, ..., \cdot;\, s_1, ..., s_{n_m-1}, t - s_{n_m}) \big\rangle\, ds_1 \cdots ds_{n_m}$$
$$\overset{(b)}{=} \int_{D_{n_m}} u^{im}_{n_m}(s_1, ..., s_{n_m-1}, s_{n_m})\, (\mathcal{P}h^m_{n_m})(s_1, ..., s_{n_m-1}, t - s_{n_m})\, ds_1 \cdots ds_{n_m-1}\, ds_{n_m},$$
where (a) follows from the reproducing property and symmetry of $K_{n_m}$ and Definition 2, and (b) from the definition of $\mathcal{P}h^m_{n_m}$ in (3). The t-transform of the mTEM in Fig. 1 can then be written as
$$\mathcal{L}^{i1}_k[\mathcal{P}h^1_{n_1}] + \mathcal{L}^{i2}_k[\mathcal{P}h^2_{n_2}] + ... + \mathcal{L}^{iM}_k[\mathcal{P}h^M_{n_M}] = q^i_k, \qquad (7)$$
Figure 3: Multimodal CIM for audio and video integration. (a) Time encoding interpretation of the multimodal CIM. (b) Block diagram of the multimodal CIM.
where $\mathcal{L}^{im}_k : H_{n_m} \to \mathbb{R}$, $m = 1, ..., M$, $k \in \mathbb{Z}$, are linear functionals defined by
$$\mathcal{L}^{im}_k[\mathcal{P}h^m_{n_m}] = \int_{t^i_k}^{t^i_{k+1}} \int_{D_{n_m}} u^{im}_{n_m}(s_1, ..., s_{n_m})\, (\mathcal{P}h^m_{n_m})(s_1, ..., t - s_{n_m})\, ds_1 \cdots ds_{n_m}\, dt.$$

Remark 3. Intuitively, each inter-spike interval $[t^i_k, t^i_{k+1})$ produced by the IAF neuron is a time measurement $q^i_k$ of the (weighted) sum of all kernel projections $\mathcal{P}h^m_{n_m}$, $m = 1, ..., M$.

Remark 4. Each projection $\mathcal{P}h^m_{n_m}$ is determined by the corresponding stimuli $u^{im}_{n_m}$, $i = 1, ..., N$, employed during identification and can be substantially different from the underlying kernel $h^m_{n_m}$.

It follows that we should be able to identify the projections $\mathcal{P}h^m_{n_m}$, $m = 1, ..., M$, from the measurements $(q^i_k)_{k \in \mathbb{Z}}$. Since we are free to choose any of the spaces $H_{n_m}$, an arbitrarily-close identification of the original kernels is possible, provided that the bandwidth of the test signals is sufficiently large.

Theorem 2. (Multisensory Channel Identification Machine (mCIM))
Let $\{u^i\}_{i=1}^{N}$, $u^i = [u^{i1}_{n_1}, ..., u^{iM}_{n_M}]^T$, $u^{im}_{n_m} \in H_{n_m}$, $m = 1, ..., M$, be a collection of N linearly independent stimuli at the input to an mTEM circuit comprised of receptive fields with kernels $h^m_{n_m} \in \mathcal{H}_{n_m}$, $m = 1, ..., M$, in cascade with an ideal IAF neuron. Given the coefficients $u^{im}_{l_1...l_{n_m}}$ of the stimuli $u^{im}_{n_m}$, $i = 1, ..., N$, $m = 1, ..., M$, the kernel projections $\mathcal{P}h^m_{n_m}$, $m = 1, ..., M$, can be perfectly identified as
$$(\mathcal{P}h^m_{n_m})(x_1, ..., x_{n_m}) = \sum_{l_1=-L_1}^{L_1} \cdots \sum_{l_{n_m}=-L_{n_m}}^{L_{n_m}} h^m_{l_1...l_{n_m}}\, e_{l_1...l_{n_m}}(x_1, ..., x_{n_m}),$$
where the $h^m_{l_1...l_{n_m}}$ are elements of $h = \Phi^+ q$, and $\Phi^+$ denotes the pseudoinverse of $\Phi$. Furthermore, $\Phi = [\Phi^1; \Phi^2; ...; \Phi^N]$, $q = [q^1; q^2; ...; q^N]$ and $[q^i]_k = q^i_k$. Each matrix $\Phi^i = [\Phi^{i1}, \Phi^{i2}, ..., \Phi^{iM}]$, with
$$[\Phi^{im}]_{kl} = \begin{cases} u^{im}_{-l_1, -l_2, ..., -l_{n_m-1},\, l_{n_m}}\, (t^i_{k+1} - t^i_k), & l_{n_m} = 0, \\[4pt] u^{im}_{-l_1, -l_2, ..., -l_{n_m-1},\, l_{n_m}}\, \dfrac{\sqrt{L_{n_m} T_{n_m}}\, \big( e_{l_{n_m}}(t^i_{k+1}) - e_{l_{n_m}}(t^i_k) \big)}{j l_{n_m} \Omega_{n_m}}, & l_{n_m} \neq 0, \end{cases} \qquad (8)$$
where l traverses all subscript combinations of $l_1, l_2, ..., l_{n_m}$. A necessary condition for identification is that the total number of spikes generated in response to all N trials is larger than $\sum_{m=1}^{M} \prod_{n=1}^{n_m} (2L_n + 1) + N$. If the neuron produces $\nu$ spikes on each trial, a sufficient condition is that the number of trials $N \geq \big\lceil \sum_{m=1}^{M} \prod_{n=1}^{n_m} (2L_n + 1) \big/ \min(\nu - 1,\, 2L_{n_m} + 1) \big\rceil$.

Proof: The equivalent representation of the t-transform in equations (4) and (7) implies that the decoding of the stimulus $u^m_{n_m}$ (in Theorem 1) and the identification of the filter projections $\mathcal{P}h^m_{n_m}$ encountered here are dual problems. Therefore, the receptive field identification problem is equivalent to a neural encoding problem: the projections $\mathcal{P}h^m_{n_m}$, $m = 1, ..., M$, are encoded with an mTEM comprised of N neurons and receptive fields $u^{im}_{n_m}$, $i = 1, ..., N$, $m = 1, ..., M$. The algorithm for finding the coefficients $h^m_{l_1...l_{n_m}}$ is analogous to the one for $u^m_{l_1...l_{n_m}}$ in Theorem 1.
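The duality can be exercised numerically. The sketch below is a simplified one-dimensional illustration, not the paper's implementation: one modality, an unknown temporal filter chosen inside $H_1$ itself (so its projection equals the filter; in general only $\mathcal{P}h$ is recoverable, cf. Remark 4), and illustrative parameter values throughout. N trials with independent random stimuli $u^i$ play the role that N neurons played in decoding: for $u^i = \sum_l u^i_l e_l$ and $h = \sum_l h_l e_l$, the filtered current is $(h * u^i)(t) = \sum_l \sqrt{T}\, u^i_l h_l\, e_l(t)$, so the measurement matrix is the decoding matrix with stimulus coefficients in place of filter coefficients, and $h = \Phi^+ q$.

```python
import numpy as np

Omega, L = 2 * np.pi * 4, 4                  # illustrative bandwidth/order; here Ph = h
T = 2 * np.pi * L / Omega

def e(l, t):
    # orthonormal basis e_l(t) = exp(j*l*Omega*t/L) / sqrt(T)
    return np.exp(1j * l * Omega * t / L) / np.sqrt(T)

def real_coeffs(rng, scale):
    # random coefficients with c_{-l} = conj(c_l), so the synthesized signal is real
    c = scale * (rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1))
    c[L] = c[L].real
    for l in range(1, L + 1):
        c[L - l] = np.conj(c[L + l])
    return c

def iaf_encode(v, dt, b, C, delta):
    # ideal IAF neuron: integrate (b + v)/C, spike and reset at threshold delta
    spikes, y = [], 0.0
    for k, vk in enumerate(v):
        y += dt * (b + vk) / C
        if y >= delta:
            spikes.append((k + 1) * dt)
            y = 0.0
    return np.array(spikes)

rng = np.random.default_rng(1)
h = real_coeffs(rng, 0.3)                    # unknown filter coefficients h_l = <h, e_l>
dt, b, C, delta = 1e-5, 2.0, 1.0, 0.05
t = np.arange(0, T, dt)

rows, q = [], []
for i in range(3):                           # three experimental trials
    u = real_coeffs(rng, 0.5)                # known trial stimulus u^i
    # filtered dendritic current: (h * u^i)(t) = sum_l sqrt(T) u^i_l h_l e_l(t)
    v = sum(np.sqrt(T) * u[L + l] * h[L + l] * e(l, t) for l in range(-L, L + 1)).real
    s = iaf_encode(v, dt, b, C, delta)
    for k in range(len(s) - 1):
        q.append(C * delta - b * (s[k + 1] - s[k]))
        # row of Phi: stimulus coefficients times integrals of e_l, cf. Eq. (8)
        rows.append([u[L] * (s[k + 1] - s[k]) if l == 0
                     else np.sqrt(T) * u[L + l] * (L / (1j * l * Omega))
                          * (e(l, s[k + 1]) - e(l, s[k]))
                     for l in range(-L, L + 1)])

h_hat = np.linalg.pinv(np.array(rows)) @ np.array(q)   # h = Phi^+ q
```

The only structural change relative to the decoding sketch is which factor of each matrix entry is known: there the filter coefficients, here the stimulus coefficients.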
5
Examples
A simple (mono) audio/video TEM realized using a bank of temporal and spatiotemporal linear
filters and a population of integrate-and-fire neurons, is shown in Fig. 2. An analog audio signal
$u^1_1(t)$ and an analog video signal $u^2_3(x, y, t)$ appear as inputs to temporal filters with kernels $h^{i1}_1(t)$ and spatiotemporal filters with kernels $h^{i2}_3(x, y, t)$, $i = 1, ..., N$. Each temporal and spatiotemporal
filter could be realized in a number of ways, e.g., using gammatone and Gabor filter banks. For
simplicity, we assume that the number of temporal and spatiotemporal filters in Fig. 2 is the same.
In practice, the number of components could be different and would be determined by the bandwidth
of input stimuli $\Omega$, or equivalently the order L, and the number of spikes produced (Theorems 1-2).
For each neuron i, $i = 1, ..., N$, the filter outputs $v^{i1}$ and $v^{i2}$ are summed to form the aggregate dendritic current $v^i$, which is encoded into a sequence of spike times $(t^i_k)_{k \in \mathbb{Z}}$ by the i-th integrate-and-fire neuron. Thus each spike train $(t^i_k)_{k \in \mathbb{Z}}$ carries information about two stimuli of completely different modalities (audio and video) and, under certain conditions, the entire collection of spike trains $\{t^i_k\}_{i=1}^{N}$, $k \in \mathbb{Z}$, can provide a faithful representation of both signals.
To demonstrate the performance of the algorithm presented in Theorem 1, we simulated a multisensory TEM with each neuron having a non-separable spatiotemporal receptive field for video stimuli
and a temporal receptive field for audio stimuli. Spatiotemporal receptive fields were chosen randomly and had a bandwidth of 4 Hz in temporal direction t and 2 Hz in each spatial direction x and
y. Similarly, temporal receptive fields were chosen randomly from functions bandlimited to 4 kHz.
Thus, two distinct stimuli having different dimensions (three for video, one for audio) and dynamics (2-4 cycles vs. 4, 000 cycles in each direction) were multiplexed at the level of every spiking
neuron and encoded into an unlabeled set of spikes. The mTEM produced a total of 360, 000 spikes
in response to a 6-second-long grayscale video and mono audio of Albert Einstein explaining the
mass-energy equivalence formula E = mc²: "... [a] very small amount of mass may be converted into a very large amount of energy." A multisensory TDM was then used to reconstruct the video and
audio stimuli from the produced set of spikes. Fig. 4a-b shows the original (top row) and recovered
(middle row) video and audio, respectively, together with the error between them (bottom row).
The neural encoding interpretation of the identification problem for the grayscale video/mono audio
TEM is shown in Fig. 3a. The block diagram of the corresponding mCIM appears in Fig. 3b.
Comparing this diagram to the one in Fig. 2, we note that neuron blocks have been replaced by trial
blocks. Furthermore, the stimuli now appear as kernels describing the filters and the inputs to the
circuit are kernel projections Phm
nm , m = 1, ..., M . In other words, identification of a single neuron
has been converted into a population encoding problem, where the artificially constructed population
of N neurons is associated with the N spike trains generated in response to N experimental trials.
The performance of the mCIM algorithm is visualized in Fig. 5. Fig. 5a-b shows the original
(top row) and recovered (middle row) spatio-temporal and temporal receptive fields, respectively,
together with the error between them (bottom row).
6
Conclusion
We presented a spiking neural circuit for multisensory integration that encodes multiple information
streams, e.g., audio and video, into a single spike train at the level of individual neurons. We derived
conditions for inverting the nonlinear operator describing the multiplexing and encoding in the spike
domain and developed methods for identifying multisensory processing using concurrent stimulus
presentations. We provided explicit algorithms for multisensory decoding and identification and
evaluated their performance using natural audio and video stimuli. Our investigations brought to
light a key duality between identification of multisensory processing in a single neuron and the recovery of stimuli encoded with a population of multisensory neurons. Given the powerful machinery
of the employed RKHSs, extensions to neural circuits with noisy neurons are straightforward [15, 23].
Acknowledgement
The work presented here was supported in part by AFOSR under grant #FA9550-12-1-0232 and, in
part, by NIH under the grant #R021 DC012440001.
Figure 4: Multisensory decoding. (a) Grayscale video recovery. (top row) Three frames of the original grayscale video $u^2_3$. (middle row) Corresponding three frames of the decoded video projection $\mathcal{P}_3 u^2_3$. (bottom row) Error between three frames of the original and decoded video. $\Omega_1 = 2\pi \cdot 2$ rad/s, $L_1 = 30$, $\Omega_2 = 2\pi \cdot 36/19$ rad/s, $L_2 = 36$, $\Omega_3 = 2\pi \cdot 4$ rad/s, $L_3 = 4$. (b) Mono audio recovery. (top row) Original mono audio signal $u^1_1$. (middle row) Decoded projection $\mathcal{P}_1 u^1_1$. (bottom row) Error between the original and decoded audio. $\Omega = 2\pi \cdot 4000$ rad/s, $L = 4000$. Click here to see and hear the decoded video and audio stimuli.
Figure 5: Multisensory identification. (a) Identification of the spatiotemporal RF. (top row) Three frames of the original spatiotemporal kernel $h^2_3(x, y, t)$. Here $h^2_3$ is a spatial Gabor function rotating clockwise in space as a function of time. (middle row) Corresponding three frames of the identified kernel $\mathcal{P}h^{2*}_3(x, y, t)$. (bottom row) Error between three frames of the original and identified kernel. $\Omega_1 = 2\pi \cdot 12$ rad/s, $L_1 = 9$, $\Omega_2 = 2\pi \cdot 12$ rad/s, $L_2 = 9$, $\Omega_3 = 2\pi \cdot 100$ rad/s, $L_3 = 5$. (b) Identification of the temporal RF. (top row) Original temporal kernel $h^1_1(t)$. (middle row) Identified projection $\mathcal{P}h^{1*}_1(t)$. (bottom row) Error between $h^1_1$ and $\mathcal{P}h^{1*}_1$. $\Omega = 2\pi \cdot 200$ rad/s, $L = 10$.
References
[1] Barry E. Stein and Terrence R. Stanford. Multisensory integration: Current issues from the perspective of a single neuron. Nature Reviews Neuroscience, 9:255–266, April 2008.
[2] Christoph Kayser, Christopher I. Petkov, and Nikos K. Logothetis. Multisensory interactions in primate auditory cortex: fMRI and electrophysiology. Hearing Research, 258:80–88, March 2009.
[3] Stephen J. Huston and Vivek Jayaraman. Studying sensorimotor integration in insects. Current Opinion in Neurobiology, 21:527–534, June 2011.
[4] Barry E. Stein and M. Alex Meredith. The merging of the senses. The MIT Press, 1993.
[5] David A. Bulkin and Jennifer M. Groh. Seeing sounds: Visual and auditory interactions in the brain. Current Opinion in Neurobiology, 16:415–419, July 2006.
[6] Jon Driver and Toemme Noesselt. Multisensory interplay reveals crossmodal influences on 'sensory-specific' brain regions, natural responses, and judgments. Neuron, 57:11–23, January 2008.
[7] Christoph Kayser, Nikos K. Logothetis, and Stefano Panzeri. Visual enhancement of the information representation in auditory cortex. Current Biology, pages 19–24, January 2010.
[8] Asif A. Ghazanfar and Charles E. Schroeder. Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10:278–285, June 2006.
[9] Paul J. Laurienti, Thomas J. Perrault, Terrence R. Stanford, Mark T. Wallace, and Barry E. Stein. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Experimental Brain Research, 166:289–297, 2005.
[10] Konrad P. Körding and Joshua B. Tenenbaum. Causal inference in sensorimotor integration. Advances in Neural Information Processing Systems 19, 2007.
[11] Ulrik R. Beierholm, Konrad P. Körding, Ladan Shams, and Wei Ji Ma. Comparing Bayesian models for multisensory cue combination without mandatory integration. Advances in Neural Information Processing Systems 20, 2008.
[12] Daniel C. Kadunce, J. William Vaughan, Mark T. Wallace, and Barry E. Stein. The influence of visual and auditory receptive field organization on multisensory integration in the superior colliculus. Experimental Brain Research, 2001.
[13] Wei Ji Ma and Alexandre Pouget. Linking neurons to behavior in multisensory perception: A computational review. Brain Research, 1242:4–12, 2008.
[14] Mark A. Frye. Multisensory systems integration for high-performance motor control in flies. Current Opinion in Neurobiology, 20:347–352, 2010.
[15] Aurel A. Lazar and Yevgeniy B. Slutskiy. Channel identification machines. Computational Intelligence and Neuroscience, 2012.
[16] Aurel A. Lazar. Time encoding with an integrate-and-fire neuron with a refractory period. Neurocomputing, 58-60:53–58, June 2004.
[17] Aurel A. Lazar. Population encoding with Hodgkin-Huxley neurons. IEEE Transactions on Information Theory, 56(2), February 2010.
[18] Aurel A. Lazar and Laszlo T. Tóth. Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Transactions on Circuits and Systems-I: Regular Papers, 51(10):2060–2073, 2004.
[19] Aurel A. Lazar and Eftychios A. Pnevmatikakis. Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation, 20(11):2715–2744, November 2008.
[20] Aurel A. Lazar and Yevgeniy B. Slutskiy. Functional identification of spike-processing neural circuits. Neural Computation, in press, 2013.
[21] Anmo J. Kim and Aurel A. Lazar. Recovery of stimuli encoded with a Hodgkin-Huxley neuron using conditional PRCs. In N.W. Schultheiss, A.A. Prinz, and R.J. Butera, editors, Phase Response Curves in Neuroscience. Springer, 2011.
[22] Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[23] Aurel A. Lazar, Eftychios A. Pnevmatikakis, and Yiyin Zhou. Encoding natural scenes with neural circuits with random thresholds. Vision Research, 2010. Special Issue on Mathematical Models of Visual Coding.
[24] Aurel A. Lazar and Eftychios A. Pnevmatikakis. Reconstruction of sensory stimuli encoded with integrate-and-fire neurons with random thresholds. EURASIP Journal on Advances in Signal Processing, 2009, 2009.
[25] Yevgeniy B. Slutskiy. Identification of Dendritic Processing in Spiking Neural Circuits. PhD thesis, Columbia University, 2013.
Recurrent networks of coupled Winner-Take-All
oscillators for solving constraint satisfaction problems
Hesham Mostafa, Lorenz K. Müller, and Giacomo Indiveri
Institute for Neuroinformatics
University of Zurich and ETH Zurich
{hesham,lorenz,giacomo}@ini.uzh.ch
Abstract
We present a recurrent neuronal network, modeled as a continuous-time dynamical system, that can solve constraint satisfaction problems. Discrete variables are
represented by coupled Winner-Take-All (WTA) networks, and their values are encoded in localized patterns of oscillations that are learned by the recurrent weights
in these networks. Constraints over the variables are encoded in the network connectivity. Although there are no sources of noise, the network can escape from
local optima in its search for solutions that satisfy all constraints by modifying
the effective network connectivity through oscillations. If there is no solution that
satisfies all constraints, the network state changes in a seemingly random manner
and its trajectory approximates a sampling procedure that selects a variable assignment with a probability that increases with the fraction of constraints satisfied by
this assignment. External evidence, or input to the network, can force variables to
specific values. When new inputs are applied, the network re-evaluates the entire
set of variables in its search for states that satisfy the maximum number of constraints, while being consistent with the external input. Our results demonstrate
that the proposed network architecture can perform a deterministic search for the
optimal solution to problems with non-convex cost functions. The network is
inspired by canonical microcircuit models of the cortex and suggests possible dynamical mechanisms to solve constraint satisfaction problems that can be present
in biological networks, or implemented in neuromorphic electronic circuits.
1 Introduction
The brain is able to integrate noisy and partial information from both sensory inputs and internal
states to construct a consistent interpretation of the actual state of the environment. Consistency
among different interpretations is likely to be inferred according to an internal model constructed
from prior experience [1]. If we assume that a consistent interpretation is specified by a proper configuration of discrete variables, then it is possible to build an internal model by providing a set of
constraints on the configurations that these variables are allowed to take. Searching for consistent
interpretations under this internal model is equivalent to solving a max-constraint satisfaction problem (max-CSP). In this paper, we propose a recurrent neural network architecture with cortically
inspired connectivity that can represent such an internal model, and we show that the network dynamics solve max-CSPs by searching for the optimal variable assignment that satisfies the maximum
number of constraints, while being consistent with external evidence.
Although there are many efficient algorithmic approaches to solving max-CSPs, it is still not clear
how these algorithms can be implemented as biologically realistic dynamical systems. In particular,
a challenging problem in systems whose dynamics embody a search for the optimal solution of a
max-CSP is escaping from local optima. One possible approach is to formulate a stochastic neural
network that samples from a probability distribution in which the correct solutions have higher
probability [2]. However, the stochastic network will continuously explore the solution space and
will not stabilize at fully consistent solutions. Another possible solution is to use simulated annealing
techniques [3]. Simulated annealing techniques, however, cannot be easily mapped to plausible
biological neural circuits due to the cooling schedule used to control the exploratory aspect of the
search process. An alternative deterministic dynamical systems approach for solving combinatorial
optimization problems is to formulate a quadratic cost function for the problem and construct a
Hopfield network whose Lyapunov function is this cost function [4]. Considerable parameter tuning
is needed to get such networks to converge to good solutions and to avoid local optima [5]. The
addition of noise [6] or the inclusion of an initial chaotic exploratory phase [7] in Hopfield networks
partially mitigates the problem of getting stuck in local optima.
The recurrent neural network we propose does not need a noise source to carry out the search process. Its deterministic dynamics directly realize a form of "usable computation" [8] that is suitable
for solving max-CSPs. The form of computation implemented is distributed and "executive-free" [9]
in the sense that there is no central controller managing the dynamics or the flow of information. The
network is cortically inspired as it is composed of coupled Winner-Take-All (WTA) circuits. The
WTA circuit is a possible cortical circuit motif [10] as its dynamics can explain the amplification
of genico-cortical inputs that was observed in intracellular recordings in cat visual cortex [11]. In
addition to elucidating possible computational mechanisms in the brain, implementing ?usable computation? with the dynamics of a neural network holds a number of advantages over conventional
digital computation, including massive parallelism and fault tolerance. In particular, by following such dynamical systems approach, we can exploit the rich behavior of physical devices such
as transistors to directly emulate these dynamics, and obtain more dense and power efficient computation [12]. For example, the network proposed could be implemented using low-power analog
current-mode WTA circuits [13], or by appropriately coupling silicon neurons in neuromorphic Very
Large Scale Integration (VLSI) chips [14].
In the next section we describe the architecture of the proposed network and the models that we use
for the network elements. Section 3 contains simulation results showing how the proposed network
architecture solves a number of max-CSPs with binary variables. We discuss the network dynamics
in Section 4 and present our conclusions in Section 5.
2 Network Architecture
The basic building block of the proposed network is the WTA circuit in which multiple excitatory
populations are competing through a common inhibitory population as shown in Fig. 1a. When the
excitatory populations of the WTA network receive inputs of different amplitudes, their activity will
increase and be amplified due to the recurrent excitatory connections. This will in turn activate the
inhibitory population which will suppress activity in the excitatory populations until an equilibrium
is reached. Typically, the excitatory population that receives the strongest external input is the only
one that remains active (the network has selected a winner). By properly tuning the connection
strengths, it is possible to configure the network so that it settles into a stable state of activity (or an
attractor) that persists after input removal [15].
2.1 Neuronal and Synaptic Dynamics
The network that we propose is a population-level, rate-based network. Each population is modeled
as a linear threshold unit (LTU) which has the following dynamics:
\tau_i\,\dot{x}_i(t) + x_i(t) = \max\left(0,\ \sum_j w_{ji}(t)\,x_j(t) - T_i\right) \qquad (1)
where x_i(t) is the average firing rate in population i, w_ji(t) is the connection weight from population
j to population i, and τ_i and T_i are the time constant and the threshold of population i respectively.
The steady state population activity in eq. 1 is a good approximation of the steady state average
firing rate in a population of integrate and fire neurons receiving noisy, uncorrelated inputs [16].
For a step increase in mean input, the actual average firing rate in a population settles into a steady
state after a number of transient modes have died out [17] but in eq. 1, we assume the firing rate
approaches steady state only through first order dynamics.
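The winner-selection behavior implied by eq. (1) can be checked numerically. The sketch below (not the authors' code) Euler-integrates the LTU dynamics for a toy WTA of two excitatory populations competing through one shared inhibitory population; all weights, gains, time constants, and inputs are illustrative assumptions chosen so that a stable winner exists.

```python
import numpy as np

def simulate_wta(I, T=5.0, dt=0.001, tau=0.1,
                 w_exc=1.2, w_inh=2.0, w_ei=0.5, thresh=0.0):
    """Euler integration of eq. (1) for a toy WTA: excitatory
    populations x[0], x[1] compete via inhibitory population x[2].
    All parameter values are illustrative assumptions."""
    x = np.zeros(3)
    for _ in range(int(T / dt)):
        drive = np.array([
            w_exc * x[0] - w_inh * x[2] + I[0],   # excitatory pop. 1
            w_exc * x[1] - w_inh * x[2] + I[1],   # excitatory pop. 2
            w_ei * (x[0] + x[1]),                 # shared inhibition
        ])
        # tau * x_dot = -x + max(0, total input - threshold)
        x += dt / tau * (-x + np.maximum(0.0, drive - thresh))
    return x

rates = simulate_wta(I=[2.0, 1.0])
print(rates)  # winner settles near I[0]/(1 - w_exc + w_inh*w_ei) = 2.5, loser -> 0
```

With these gains the population receiving the stronger input is amplified by recurrent excitation while shared inhibition drives the other population to zero, reproducing the categorical selection described above.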
[Figure 1 graphics: (a) schematic of a single WTA network; (b) schematic of three coupled WTA circuits (excitatory populations A, B, C, D; plastic weights WA, WB), with legend: excitatory population, inhibitory population, fixed connection, plastic connection, input stimulus; annotations: "network activity during stimulus presentation", "network activity after stimulus removal"; (c), (d) plots of population rates (Hz) and weights vs. time (s).]
Figure 1: (a) A single WTA network. (b) Three coupled WTA circuits form the network representation of a single binary variable. Circles labeled A,B,C, and D are excitatory populations. Red circles
on the right are inhibitory populations. (c) Simulation results of the network in (b) showing activity
in the four excitatory populations. Shaded rectangles indicate the time intervals in which the state of
the oscillator can be changed by external input. (d) Switching the state of the oscillator. The bottom
plot shows the activity of the A and B populations. External input is applied to the A population in
the time intervals denoted by the shaded rectangles. While the first input has no effect, the second
input is applied at the right time and triggers a change in the variable/oscillator state. The top plot
shows time evolution of the weights WA and WB.
The plastic connections in the proposed network obey a learning rule analogous to the Bienenstock-Cooper-Munro (BCM) rule [18]:
\dot{w}(t) = K\,u(t)\left(\frac{(w_{max} - w(t))\,[v(t) - v_{th}]_+}{\tau_{pot}} + \frac{(w(t) - w_{min})\,[v_{th} - v(t)]_-}{\tau_{dep}}\right) \qquad (2)
where [x]_+ = max(0, x), and [x]_- = min(0, x). w(t) is the connection weight, and u(t) and v(t)
are the activities of the source and target populations respectively. The parameters w_min and w_max
are soft bounds on the weight, τ_dep and τ_pot are the depression and potentiation time constants
respectively, vth is a threshold on the activity of the target population that delimits the transition
between potentiation and depression, and K is a term that controls the overall speed of learning
or the plasticity rate. The learning rule captures the dependence of potentiation and depression
induction on the postsynaptic firing rate [19].
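As a sanity check on eq. (2), the instantaneous weight change can be written out directly. The function below is a sketch of the rule as transcribed above; all parameter values are illustrative assumptions, not taken from the text.

```python
def bcm_like_dw(w, u, v, K=1.0, w_min=0.0, w_max=1.0,
                v_th=20.0, tau_pot=1.0, tau_dep=1.0):
    """Instantaneous weight change dw/dt of the BCM-like rule, eq. (2).
    Parameter values are illustrative assumptions."""
    pot = (w_max - w) * max(0.0, v - v_th) / tau_pot   # [v - v_th]_+ term
    dep = (w - w_min) * min(0.0, v_th - v) / tau_dep   # [v_th - v]_- term
    return K * u * (pot + dep)

# Below threshold the weight is static; above it the soft bounds decide
# whether potentiation or depression dominates.
print(bcm_like_dw(w=0.1, u=1.0, v=50.0))   # 24.0: potentiation dominates at small w
print(bcm_like_dw(w=0.1, u=1.0, v=10.0))   # 0.0: no change below v_th
```

Note how the soft bounds w_min and w_max shrink each term as the weight approaches the corresponding limit, so the weight stays in [w_min, w_max] without hard clipping.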
2.2 Variable Representation
Point attractor states in WTA networks like the one shown in Fig. 1a are computationally useful
as they enable the network to disambiguate the inputs to the excitatory populations by making a
categorical choice based on the relative strengths of these inputs. Point attractor dominated dynamics
promote noise robustness at the expense of reduced input sensitivity: external input has to be large
to move the network state out of the basin of attraction of one point attractor, and into the basin of
attraction of another.
In this work, instead of using distinct point attractors to represent different variable values, we
use limit cycle attractors. To obtain limit cycle attractors, we asymmetrically couple a number of
WTA circuits to form a loop as shown in Fig. 1b. This has the effect of destroying the fixed point
attractors in each WTA stage. As a consequence, persistent activity can no longer appear in a single
WTA stage if there is no input. If we apply a short input pulse to the bottom WTA stage of Fig. 1b,
we start oscillatory activity and we observe the following sequence of events: (1) the activity in
the bottom WTA stage ramps up due to recurrent excitation, and when it is high enough it begins
activating the middle WTA stage; (2) activity in the middle WTA stage ramps up and as activity in
the inhibitory population of this stage rises, it shuts down the bottom stage activity; activity in the
middle WTA stage keeps on increasing until it activates the top stage; (3) activity in the top WTA
stage increases, shuts down the middle stage, and provides input back into the bottom stage via the
plastic connections. As a consequence, a bump of activity continuously jumps from one WTA stage
to the next. Since the stages are connected in a loop, the network will exhibit oscillatory activity.
There are two stable limit cycles that the network trajectory can follow. The limit cycle chosen by the
network depends on the outcome of the winner selection process in the bottom WTA stage. The limit
cycles are stable as the weak coupling between the stages leaves the signal restoration properties of
the destroyed attractors intact allowing activity in each WTA stage to be restored to a point close
to that of the destroyed attractor. The winner selection process takes place at the beginning of each
oscillation period in the bottom WTA stage. In the absence of external input, the dynamics of the
winner selection process in the bottom stage will favor the population that receives the stronger
projection weight from D. These projection weights obey the plasticity rule given by eq. 2.
The oscillatory network in Fig. 1b can represent one binary variable whose value is encoded in the
identity of the winning population in the bottom WTA stage, which determines the limit cycle the
network follows. The identity of the winning population is a reflection of the relative strengths of WA
and WB. More than two values can be encoded by increasing the number of excitatory populations
in the bottom WTA stage. Fig. 1c shows the simulation results of the network in Fig. 1b when the
weight WB is larger than WA. This is expressed by a limit cycle in which populations B,C, and D
are periodically activated.
During the winner selection process in the bottom WTA stage, the WTA circuit is very sensitive to
external input, which can bias the competition towards a particular limit cycle. Once the winner
selection process is complete, i.e, activity in the winning population has ramped up to a high level,
the WTA circuit is relatively insensitive to external input. This is illustrated in Fig. 1d, where input
is applied in two different intervals. The first external input to population A arrives after the winner,
B, has already been selected so it is ineffective. A second external input having the same strength
and duration as the first input arrives during the winner selection phase and biases the competition
towards A. As soon as A wins, the plasticity rule in eq. 2 causes WA to potentiate and WB to depress
so that activity in the network continues to follow the new limit cycle even after the input is removed.
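The switching behavior of Fig. 1d can be caricatured in discrete time: once per oscillation period, the bottom-stage winner is the state receiving the largest projection weight from D unless an external input arrives during the selection window, and plasticity then moves the weights toward the winning branch. This is a loose abstraction of the continuous dynamics, not the model itself; the bias magnitude and learning rate are illustrative.

```python
def step_oscillator(weights, ext_input=None, lr=0.2):
    """One oscillation period of the variable of Fig. 1b, reduced to a
    discrete caricature: select the winner of the bottom WTA stage,
    then potentiate the winning branch and depress the other."""
    drive = dict(weights)
    if ext_input is not None:
        drive[ext_input] += 1.0          # bias during the selection window
    winner = max(drive, key=drive.get)
    for s in weights:                    # soft-bounded weight update
        target = 1.0 if s == winner else 0.0
        weights[s] += lr * (target - weights[s])
    return winner

w = {"A": 0.04, "B": 0.06}                        # WB > WA: starts in state B
assert step_oscillator(w) == "B"
assert step_oscillator(w, ext_input="A") == "A"   # well-timed input flips state
assert step_oscillator(w) == "A"                  # new state persists after removal
```

The last assertion mirrors the key property of Fig. 1d: after the input has biased one winner selection, the potentiated weight alone sustains the new limit cycle.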
2.3 Constraint Representation
Each variable, as represented by the network in Fig. 1b, is a multi-stable oscillator. Pair-wise constraints can be implemented by coupling the excitatory populations of the bottom WTA stages of
two variables. Fig. 2a shows the implementation of a constraint that requires two variables to be
unequal, i.e., one variable should oscillate in the cycle involving the A population, and the other
in the cycle involving the B population. Variable X1 will maximally affect X2 when the activity
peak in the bottom WTA stage of X1 coincides with the winner selection interval of X2 and vice
versa. The coupling of the middle and top WTA stages of the two variables in Fig. 2a is not related
to the constraint, but it is there to prevent coupled variables in large networks from phase locking.
We explain why this is important in the next section. We define the zero phase point of a variable as
the point at which activity in the winning excitatory population in the bottom WTA stage reaches a
peak and we assume the phase changes linearly during an oscillation period (from one peak to the
next). The phase difference between two coupled variables determines the direction and strength of
mutual influence. This can be seen in Fig. 2b. Initially the constraint is violated as both variables
are oscillating in the A cycle. X1 gradually begins to lead X2 until at a particular phase difference,
input from X1 is able to bias the competition in X2 so that the B population in X2 wins even though
the A population is receiving a stronger projection from the D population in X2.
A constraint involving more than two variables can be implemented by introducing an intermediate
variable which will in general have a higher cardinality than the variables in the constraint (the
cardinality of a variable is reflected in the number of excitatory populations in the bottom WTA
stage; the middle and top WTA stages have the same structure irrespective of cardinality). An
example is shown in Fig. 2c where three binary variables are related by an XOR relation and the
[Figure 2 graphics: (a) schematic coupling the bottom WTA stages of X1 and X2 (populations A, B, C, D; weights WA, WB); (b) plots of the rates (Hz) of populations X1:A, X1:B, X2:A, X2:B vs. time (s); (c) schematic of the four-variable coupling implementing X1 XOR X2 = X3.]
Figure 2: (a) Coupling X1 and X2 to implement the constraint X1 ≠ X2. (b) Activity in the
A and B populations of X1 and X2 that are coupled as shown in (a). (c) Constraint involving
three variables: X1 XOR X2 = X3. Only the bottom WTA stages of the four variables and the
inter-variable connections coupling the bottom WTA stages are shown.
intermediate variable has four possible states. The tertiary XOR constraint has been effectively
broken down into three pair-wise constraints. The only states, or oscillatory modes, of X1, X2, and
X3 that are stable under arbitrary phase relations with the intermediate variable are the states which
satisfy the constraint X1 XOR X2 = X3.
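This decomposition can be verified by enumeration. In the sketch below (the binary encoding of the intermediate variable's four states is an illustrative assumption), an assignment of (X1, X2, X3) admits a compatible state of the intermediate variable exactly when it satisfies X1 XOR X2 = X3.

```python
from itertools import product

# Intermediate variable M has four states, one per joint value of (X1, X2).
M_STATES = {m: (m & 1, (m >> 1) & 1) for m in range(4)}

def pairwise_ok(x1, x2, x3, m):
    """Three pair-wise constraints tying X1, X2, X3 to the intermediate M."""
    a, b = M_STATES[m]
    return x1 == a and x2 == b and x3 == (a ^ b)

# Assignments of (X1, X2, X3) for which some state of M satisfies all
# pair-wise constraints...
via_m = {(x1, x2, x3)
         for x1, x2, x3 in product((0, 1), repeat=3)
         if any(pairwise_ok(x1, x2, x3, m) for m in range(4))}

# ...are exactly the assignments satisfying the tertiary constraint.
direct = {(x1, x2, x3)
          for x1, x2, x3 in product((0, 1), repeat=3)
          if (x1 ^ x2) == x3}

assert via_m == direct
print(sorted(via_m))  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```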
3 Solving max-CSPs
From simulations, we observe that the phase differences between the variables/oscillators are highly
irregular in large networks comprised of many variables and constraints. These irregular phase relations enable the network to search for the optimal solution of a max-CSP. The weight attached
to a constraint is an analogue quantity that is a function of the phase differences between the variables in the constraint. The phase differences also determine which of the variables in a violated
constraint changes in order to satisfy the constraint (see Fig. 2b). The irregular phase relations result
in a continuous perturbation of the strengths of the different constraints by modulating the effective network connectivity embodying these constraints. This is what allows the network to escape
from the local optima of the underlying max-CSP. At a local optimum, the ongoing perturbation of
constraint strengths will eventually lead to a configuration that de-emphasizes the currently satisfied
constraints and emphasizes the unsatisfied constraints. The transiently dominant unsatisfied constraints will reassign the values of the variables in their domain and pull the network out of the local
optimum. The network thus searches for optimal solutions by effectively perturbing the underlying
max-CSP. Under this search scheme, states that satisfy all constraints are dynamically stable since
any perturbation of the strengths of the constraints defining the max-CSP will result in a constraints
configuration that reinforces the current fully consistent state of the network.
In principle, if some variables/oscillators phase-lock, then the weights of the constraint(s) among
these variables will not change anymore, which will impact the ability of the network to find good
solutions. In practice, however, we see that this happens only in very small networks, and not in
large ones, such as the networks described in the following sections.
3.1 Network Behavior in the Presence of a Fully Consistent Variable Assignment
We simulated a recurrent neuronal network that represents a CSP that has ten binary variables and
nine tertiary constraints (see Fig. 3a). Each variable is represented by the network in Fig. 1b. Each
tertiary constraint is implemented by introducing an intermediate variable and using a coupling
scheme similar to the one in Fig. 2c. We constructed the problem so that only two variable assignments are fully consistent. The problem is thus at the boundary between over-constrained and
under-constrained problems which makes it difficult for a search algorithm to find the optimum [20].
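The claim that exactly two assignments are fully consistent can be checked by brute force over all 2^10 assignments of the constraints in Fig. 3a. The enumeration below is an illustrative sketch, not part of the network itself; the constraint list is transcribed from the figure.

```python
from itertools import product

AND  = lambda a, b: a & b
XOR  = lambda a, b: a ^ b
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)

# (op, i, j, k) means op(X[i], X[j]) == X[k], with X1..X10 0-indexed.
CONSTRAINTS = [
    (AND,  0, 1, 4),       # X1  AND X2  = X5
    (AND,  2, 3, 5),       # X3  AND X4  = X6
    (XOR,  4, 5, 6),       # X5  XOR X6  = X7
    (NAND, 4, 6, 7),       # X5 NAND X7  = X8
    (NOR,  5, 7, 8),       # X6  NOR X8  = X9
    (AND,  3, 8, 9),       # X4  AND X9  = X10
    (XOR,  7, 9, 0),       # X8  XOR X10 = X1
    (XOR,  7, 8, 1),       # X8  XOR X9  = X2
    (AND,  1, 7, 2),       # X2  AND X8  = X3
]

def violations(x):
    """Number of constraints violated by assignment x."""
    return sum(op(x[i], x[j]) != x[k] for op, i, j, k in CONSTRAINTS)

solutions = [x for x in product((0, 1), repeat=10) if violations(x) == 0]
print(len(solutions))  # 2
```

The same `violations` function gives the cost that the network's dynamics effectively minimize; the two zero-violation assignments correspond to the two consistent states tracked in Fig. 3c.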
We ran 1000 trials starting from random values for the synaptic weights within each variable (each
variable effectively starts with a random value). The network always converges to one of the optimal
variable assignments. Fig. 3b shows a histogram of the number of oscillation cycles needed to
converge to an optimal solution in the 1000 trials. The number of cycles is averaged over the ten
variables as the number of cycles needed to converge to an optimal solution is not the same for
[Figure 3 graphics: (a) CSP definition: X1 AND X2 = X5; X3 AND X4 = X6; X5 XOR X6 = X7; X5 NAND X7 = X8; X6 NOR X8 = X9; X4 AND X9 = X10; X8 XOR X10 = X1; X8 XOR X9 = X2; X2 AND X8 = X3. (b) Histogram of the average number of cycles to convergence (mean = 646, median = 436). (c) Number of violated constraints and Hamming distance to consistent states 1 and 2 vs. average number of cycles. (d) Same quantities when one variable is externally forced to be incompatible with consistent state 2 but compatible with consistent state 1.]
Figure 3: Solving a CSP with ten binary variables and nine tertiary constraints. (a) CSP definition.
(b) Histogram of the number of cycles needed for convergence, averaged over all ten variables,
in 1000 trials. (c) Evolution of network state in a sample trial. The top plot shows the number
of constraints violated by the variable assignment decoded from the network state. The bottom
plot shows the Hamming distance between the decoded variable assignment to each of the two fully
consistent solutions. (d) One variable is externally forced to take a value that is incompatible with the
current fully consistent variable assignment. The search resumes to find a fully consistent variable
assignment that is compatible with the external input.
all variables. Although the sub-networks representing the variables are identical, each oscillates
at a different instantaneous frequency due to the non-uniform coupling and switching dynamics.
Fig. 3c shows how the network state evolves in a sample trial. Due to the continuous perturbation
of the weights caused by the irregular phase relations between the variables/oscillators, the network
sometimes takes steps that lead to the violation of more constraints. This prevents the network from
getting stuck in local optima.
We model the arrival of external evidence by activating an additional variable/oscillator that has only
one state, or limit cycle, and which is coupled to one of the original problem variables. External
evidence in this case is sparse since it only affects one problem variable. External evidence also
does not completely fix the value of that one problem variable; rather, the single-state "evidence variable" affects the problem variable only at particular phase differences between the two. Fig. 3d
shows that the network is able to take the external evidence into account by searching for, and finally
settling into, the only remaining fully consistent state that accommodates the external evidence.
3.2 Network Behavior in the Absence of Fully Consistent Variable Assignments
As shown in the previous section, if a fully consistent solution exists, the network state will end up
in that solution and stay there. If no such solution exists, the network will never settle into one variable assignment, but will keep exploring possible assignments and will spend more time in solutions
that satisfy more constraints. This behavior can be interpreted as a sampling process where each
oscillation cycle lets one variable re-sample its current state; at any point in time, the network state
represents a sample from a probability distribution defined over the space of all possible solutions
[Figure 4 graphics: two constraint graphs over binary variables labeled A–L, with solid edges for equality constraints and dashed edges for inequality constraints.]
Figure 4: Ising model type problems. Each square indicates a binary variable like in Fig. 1b; solid
black lines denote a constraint requiring two variables to be equal, dashed red lines a constraint that
requires two variables to be unequal. In both problems, all states violate at least one constraint.
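For small graphs of this kind, the claim that every assignment violates at least one constraint can be verified by exhaustive enumeration. The sketch below uses a hypothetical three-variable instance (not one of the graphs in Fig. 4) whose frustrated cycle forces every assignment to violate at least one constraint:

```python
from itertools import product

# Constraint graph: 'eq' edges must have equal endpoints,
# 'neq' edges must have unequal endpoints.  A 3-cycle with two
# equality edges and one inequality edge is frustrated: no
# assignment can satisfy all three constraints at once.
constraints = [("A", "B", "eq"), ("B", "C", "eq"), ("A", "C", "neq")]
variables = ["A", "B", "C"]

def energy(assignment):
    """Number of violated constraints for a dict var -> {0, 1}."""
    violated = 0
    for u, v, kind in constraints:
        equal = assignment[u] == assignment[v]
        if (kind == "eq" and not equal) or (kind == "neq" and equal):
            violated += 1
    return violated

energies = {}
for values in product([0, 1], repeat=len(variables)):
    energies[values] = energy(dict(zip(variables, values)))

print(min(energies.values()))  # 1: every assignment violates >= 1 constraint
```

For the dozen or so variables in the problems of Fig. 4, this brute-force check is still only a few thousand states.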
[Figure 5 graphics: log-scale plots of the time spent in each state against its energy (number of violated constraints, ranging over 1–7 for the problem in Fig. 4a and 1–8 for the problem in Fig. 4b), showing the data points, the average time at each energy, and an exponential fit to the averages.]
Figure 5: Behavior of two networks representing the CSPs in Fig. 4. Red squares are data points (the
time the network spent in one particular state), a blue star is the average time spent in states of equal
energy and the green line is an exponential fit to the blue stars. (a) Note that at energies 1 and 2 there
are two complementary states each that are visited almost equally often. (b) Not all assignments of
energy 2 are equally probable in this case (not a finite samples artifact, but systematic) as can be seen
in the bimodal distribution there. This is caused by variables that are part of only one constraint.
to the max-CSP, where more consistent solutions have higher probability. The oscillatory dynamics
thus give rise to a decentralized, deterministic, and time-continuous sampling process. This sampling analogy is only valid when there are no fully consistent solutions. To illustrate this behavior,
we consider two max-CSPs having an Ising-model like structure as shown in Figs. 4a, 4b. We
describe the behavior of two networks that represent the max-CSPs embodied by these two graphs.
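The oscillator network itself is deterministic, but the time-in-state statistics described below resemble those of a conventional single-site Gibbs sampler. The following sketch is an analogy only, not the network dynamics: one variable per step is re-sampled with probabilities proportional to exp(-E), on a hypothetical frustrated three-variable problem, and low-energy states dominate the visit counts:

```python
import math
import random

# Hypothetical frustrated problem: 3 binary variables with constraints
# A=B, B=C, A!=C.  Six assignments have energy 1, two have energy 3.
constraints = [(0, 1, True), (1, 2, True), (0, 2, False)]  # (i, j, must_be_equal)

def energy(state):
    return sum(1 for i, j, eq in constraints if (state[i] == state[j]) != eq)

random.seed(0)
state = [0, 0, 0]
visits = {}
for step in range(20000):
    i = random.randrange(3)           # one variable re-sampled per "cycle"
    weights = []
    for value in (0, 1):
        trial = list(state)
        trial[i] = value
        weights.append(math.exp(-energy(trial)))
    state[i] = random.choices((0, 1), weights=weights)[0]
    e = energy(state)
    visits[e] = visits.get(e, 0) + 1

print(visits)  # energy-1 states dominate the visit counts
```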
Let E(s) be a function that maps a network state s to the number of constraints it violates; this
is analogous to an energy function and we will refer to E(s) as the energy of state s. For the
problem in Fig. 4a, we observe that the average time the network spends in states with energy E is t(E) = c1 exp(−c2 E), as can be seen in Fig. 5a. The network spends almost equal times in
complementary states that have low energy. Complementary states are maximally different but the
network is able to traverse the space of intervening states, which can have higher energy, in order to
visit the complementary states almost equally often.
We expect the network to spend less time in less consistent states; the higher the number of violated constraints, the more rapidly the variable values change because there are more possible phase
relations that can emphasize a violated constraint. However, we do not have an analytical explanation for the good exponential fit to the energy-time spent relation. We expect a worse fit for high
energies. For example, the network can never go into states where all constraints are violated even
though they have finite energies.
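An exponential fit of the kind shown in Fig. 5 reduces to linear least squares in log space. A minimal sketch on synthetic data (the constants c1 and c2 and the noise level are invented for illustration, not taken from the experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.arange(1, 8)                 # energies (#violated constraints)
c1, c2 = 1.0e5, 1.8                 # hypothetical true constants
t = c1 * np.exp(-c2 * E) * rng.lognormal(0.0, 0.1, size=E.size)

# log t = log c1 - c2 * E, so fit a straight line to (E, log t)
slope, intercept = np.polyfit(E, np.log(t), 1)
c2_hat, c1_hat = -slope, np.exp(intercept)
print(round(c2_hat, 2))             # close to the true c2 = 1.8
```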
For the problem in Fig. 4b, not all states of equally low energy are equally likely as can be seen in
Fig. 5b. For example, the states of energy 2, where C and D (or K and L) are unequal, are less likely
than other assignments of the same energy. This is not surprising. When C is in some state, D has no
reason to be in a different state (no other variables try to force it to be different from C) apart from
the memory in its plastic weights. We expect that this effect becomes small for sufficiently densely
connected constraint graphs. The exponential fit to the averages is still very good in Fig. 5b.
4 Discussion
Oscillations are ubiquitous in cortex. Local field potential measurements as well as intracellular recordings point to a plethora of oscillatory dynamics operating in many distinct frequency
bands [21]. One possible functional role for oscillatory activity is that it rhythmically modulates
the sensitivity of neuronal circuits to external influence [22, 23]. Attending to a periodic stimulus
has been shown to result in the entrainment of delta-band oscillations (2-4 Hz) so that intervals of
high excitability coincide with relevant events in the stimulus [24]. We have used the idea of oscillatory modulation of sensitivity to construct multi-stable neural oscillators whose state, or limit
cycle, can be changed by external inputs only in narrow, periodically recurring temporal windows.
Selection between multiple limit cycles is done through competitive dynamics which are thought to
underlie many cognitive processes such as decision making in prefrontal cortex [25].
External input to the network can be interpreted as an additional constraint that immediately affects
the search for maximally consistent states. Continuous reformulation of the problem, by adding new
constraints, is problematic for any approach that works by having an initial exploratory phase that
slowly morphs into a greedy search for optimal solutions, as the exploratory phase has to be restarted
after a change in the problem. For a biological system that has to deal with a continuously changing
set of constraints, the search algorithm should not exhibit an exploratory/greedy behavior dichotomy.
The search procedure used in the proposed networks does not exhibit this dichotomy. The search is
driven solely by the violated constraints. This can be seen in the sampling-like behavior in Fig. 5
where the network spends less time in a state that violates more constraints.
The size of the proposed network grows linearly with the number of variables in the problem. CSPs
are in general NP-complete, hence convergence time of networks embodying CSPs will grow exponentially (in the worst case) with the size of the problem. We observed that in addition to problem
size, time to convergence/solution depends heavily on the density of solutions in the search space.
We used the network to solve a graph coloring problem with 17 nodes and 4 colors (each oscillator/variable representing a node had 4 possible stable limit cycles). The problem was chosen so that
there is an abundance of solutions. This led to a faster convergence to an optimal solution compared
to the problem in Fig. 3a even though the graph coloring problem had a much larger search space.
5 Conclusions and Future Work
By combining two basic dynamical mechanisms observed in many brain areas, oscillation and competition, we constructed a recurrent neuronal network that can solve constraint satisfaction problems.
The proposed network deterministically searches for optimal solutions by modulating the effective
network connectivity through oscillations. This, in turn, perturbs the effective weights of the constraints. The network can take into account partial external evidence that constrains the values of
some variables and extrapolate from this partial evidence to reach states that are maximally consistent with the external evidence and the internal constraints. For sample problems, we have shown
empirically that the network searches for, and settles into, a state that satisfies all constraints if there
is one, otherwise it explores the space of highly consistent states with a stronger bias towards states
that satisfy more constraints. An analytic framework for understanding the search scheme employed
by the network is a topic for future work.
The proposed network exploits its temporal dynamics and analog properties to solve a class of
computationally intensive problems. The WTA modules making up the network can be efficiently
implemented using neuromorphic VLSI circuits [26]. The results presented in this work encourage
the design of neuromorphic circuits and components that implement the full network in order to
solve constraint satisfaction problems in compact and ultra-low power VLSI systems.
Acknowledgments
This work was supported by the European CHIST-ERA program, via the "Plasticity in NEUral Memristive Architectures" (PNEUMA) project, and by the European Research Council, via the "Neuromorphic Processors" (neuroP) project, under ERC grant number 257219.
References
[1] Pietro Berkes, Gergő Orbán, Máté Lengyel, and József Fiser. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331(6013):83–87, 2011.
[2] Stefan Habenschuss, Zeno Jonke, and Wolfgang Maass. Stochastic computations in cortical microcircuit models. PLoS Computational Biology, 9:e1003311, 2013.
[3] Scott Kirkpatrick, C. D. Gelatt Jr., and Mario P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
[4] John J. Hopfield and David W. Tank. "Neural" computation of decisions in optimization problems. Biological Cybernetics, 52(3):141–152, 1985.
[5] Behzad Kamgar-Parsi. Dynamical stability and parameter selection in neural optimization. In International Joint Conference on Neural Networks (IJCNN), volume 4, pages 566–571. IEEE, 1992.
[6] Kate Smith, Marimuthu Palaniswami, and Mohan Krishnamoorthy. Neural techniques for combinatorial optimization with applications. IEEE Transactions on Neural Networks, 9(6):1301–1318, 1998.
[7] Luonan Chen and Kazuyuki Aihara. Chaotic simulated annealing by a neural network model with transient chaos. Neural Networks, 8(6):915–930, 1995.
[8] James P. Crutchfield. Critical computation, phase transitions, and hierarchical learning. In Towards the Harnessing of Chaos, Amsterdam, 1994.
[9] Jerry A. Fodor. Information and association. Notre Dame Journal of Formal Logic, 27(3):307–323, 1986.
[10] Rodney J. Douglas and Kevan A. Martin. Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27(1):419–451, 2004.
[11] R. J. Douglas and K. A. Martin. A functional microcircuit for cat visual cortex. The Journal of Physiology, 440(1):735–769, January 1991.
[12] Carver Mead. Neuromorphic electronic systems. Proceedings of the IEEE, 78(10):1629–1636, 1990.
[13] J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead. Winner-take-all networks of O(n) complexity. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems, volume 2, pages 703–711, San Mateo, CA, 1989. Morgan Kaufmann.
[14] G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud, J. Schemmel, G. Cauwenberghs, J. Arthur, K. Hynna, F. Folowosele, S. Saighi, T. Serrano-Gotarredona, J. Wijekoon, Y. Wang, and K. Boahen. Neuromorphic silicon neuron circuits. Frontiers in Neuroscience, 5:1–23, 2011.
[15] Ueli Rutishauser, Rodney J. Douglas, and Jean-Jacques Slotine. Collective stability of networks of winner-take-all circuits. Neural Computation, 23(3):735–773, 2011.
[16] Stefano Fusi and Maurizio Mattia. Collective behavior of networks with linear (VLSI) integrate-and-fire neurons. Neural Computation, 11(3):633–652, April 1999.
[17] Maurizio Mattia and Paolo Del Giudice. Population dynamics of interacting spiking neurons. Physical Review E, 66(5):051917, November 2002.
[18] E. L. Bienenstock, L. N. Cooper, and P. W. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience, 2(1):32–48, 1982.
[19] Per J. Sjöström, Gina G. Turrigiano, and Sacha B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32(6):1149–1164, December 2001.
[20] Bob Kanefsky and W. Taylor. Where the really hard problems are. In Proceedings of IJCAI, volume 91, pages 163–169, 1991.
[21] Xiao-Jing Wang. Neurophysiological and computational principles of cortical rhythms in cognition. Physiological Reviews, 90(3):1195–1268, 2010.
[22] Pascal Fries et al. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10):474–480, 2005.
[23] Thilo Womelsdorf, Jan-Mathijs Schoffelen, Robert Oostenveld, Wolf Singer, Robert Desimone, Andreas K. Engel, and Pascal Fries. Modulation of neuronal interactions through neuronal synchronization. Science, 316(5831):1609–1612, 2007.
[24] Peter Lakatos, George Karmos, Ashesh D. Mehta, Istvan Ulbert, and Charles E. Schroeder. Entrainment of neuronal oscillations as a mechanism of attentional selection. Science, 320(5872):110–113, 2008.
[25] Xiao-Jing Wang. Decision making in recurrent neuronal circuits. Neuron, 60(2):215–234, 2008.
[26] G. Indiveri. A current-mode hysteretic winner-take-all network, with excitatory and inhibitory coupling. Analog Integrated Circuits and Signal Processing, 28(3):279–291, September 2001.
Capacity of strong attractor patterns to model
behavioural and cognitive prototypes
Abbas Edalat
Department of Computing
Imperial College London
London SW72RH, UK
[email protected]
Abstract
We solve the mean field equations for a stochastic Hopfield network with temperature (noise) in the presence of strong, i.e., multiply stored, patterns, and use
this solution to obtain the storage capacity of such a network. Our result provides
for the first time a rigorous solution of the mean field equations for the standard
Hopfield model and is in contrast to the mathematically unjustifiable replica technique that has been used hitherto for this derivation. We show that the critical
temperature for stability of a strong pattern is equal to its degree or multiplicity,
when the sum of the squares of degrees of the patterns is negligible compared
to the network size. In the case of a single strong pattern, when the ratio of the
number of all stored pattens and the network size is a positive constant, we obtain
the distribution of the overlaps of the patterns with the mean field and deduce that
the storage capacity for retrieving a strong pattern exceeds that for retrieving a
simple pattern by a multiplicative factor equal to the square of the degree of the
strong pattern. This square law property provides justification for using strong
patterns to model attachment types and behavioural prototypes in psychology and
psychotherapy.
1 Introduction: Multiply learned patterns in Hopfield networks
The Hopfield network as a model of associative memory and unsupervised learning was introduced
in [23] and has been intensively studied from a wide range of viewpoints in the past thirty years.
However, properties of a strong pattern, as a pattern that has been multiply stored or learned in
these networks, have only been examined very recently, a surprising delay given that repetition of an
activity is the basis of learning by the Hebbian rule and long term potentiation. In particular, while
the storage capacity of a Hopfield network with certain correlated patterns has been tackled [13, 25],
the storage capacity of a Hopfield network in the presence of strong as well as random patterns has
not been hitherto addressed.
The notion of a strong pattern of a Hopfield network has been proposed in [15] to model attachment
types and behavioural prototypes in developmental psychology and psychotherapy. This suggestion has been motivated by reviewing the pioneering work of Bowlby [9] in attachment theory and
highlighting how a number of academic biologists, psychiatrists, psychologists, sociologists and
neuroscientists have consistently regarded Hopfield-like artificial neural networks as suitable tools
to model cognitive and behavioural constructs as patterns that are deeply and repeatedly learned by
individuals [11, 22, 24, 30, 29, 10].
A number of mathematical properties of strong patterns in Hopfield networks, which give rise to
strong attractors, have been derived in [15]. These show in particular that strong attractors are
strongly stable; a series of experiments have also been carried out which confirm the mathematical
results and also indicate that a strong pattern stored in the network can be retrieved even in the presence of a large number of simple patterns, far exceeding the well-known maximum load parameter
or storage capacity of the Hopfield network with random patterns ($\alpha_c \approx 0.138$).
In this paper, we consider strong patterns in stochastic Hopfield model with temperature, which accounts for various types of noise in the network. In these networks, the updating rule is probabilistic
and depend on the temperature. Since analytical solution of such a system is not possible in general,
one strives to obtain the average behaviour of the network when the input to each node, the so-called
field at the node, is replaced with its mean. This is the basis of mean field theory for these networks.
Due to the close connection between the Hopfield network and the Ising model in ferromagnetism [1,
8], the mean field approach for the Hopfield network and its variations has been tackled using the
replica method, starting with the pioneering work of Amit, Gutfreund and Sompolinsky [3, 2, 4, 19,
31, 1, 13]. Although this method has been widely used in the theory of spin glasses in statistical
physics [26, 16] its mathematical justification has proved to be elusive as we will discuss in the next
section; see for example [20, page 264], [14, page 27], and [7, page 9].
In [17] and independently in [27], an alternative technique to the replica method for solving the
mean field equations has been proposed which is reproduced and characterised as heuristic in [20,
section 2.5] since it relies on a number of assumptions that are not later justified and uses a number
of mathematical steps that are not validated.
Here, we use the basic idea of the above heuristic to develop a verifiable mathematical framework
with provable results grounded on elements of probability theory, with which we assume the reader
is familiar. This technique allows us to solve the mean field equations for the Hopfield network in
the presence of strong patterns and use the results to study, first, the stability of these patterns in the
presence of temperature (noise) and, second, the storage capacity of the network with a single strong
pattern at temperature zero.
We show that the critical temperature for the stability of a strong pattern is equal to its degree (i.e.,
its multiplicity) when the ratio of the sum of the squares of degrees of the patterns to the network
size tends to zero when the latter tends to infinity. In the case that there is only one strong pattern
present with its degree small compared to the number of patterns and the latter is a fixed multiple of
the number of nodes, we find the distribution of the overlap of the mean field and the patterns when
the strong pattern is being retrieved. We use these distributions to prove that the storage capacity
for retrieving a strong pattern exceeds that for a simple pattern by a multiplicative factor equal to
the square of the degree of the strong attractor. This result matches the finding in [15] regarding the
capacity of a network to recall strong patterns as mentioned above. Our results therefore show that
strong patterns are robust and persistent in the network memory as attachment types and behavioural
prototypes are in the human memory system.
In this paper, we will use Lyapunov's theorem in probability several times; it provides a simple
sufficient condition to generalise the Central Limit theorem when we deal with independent but
not necessarily identically distributed random variables. We require a general form of this theorem
as follows. Let $Y_n = \sum_{i=1}^{k_n} Y_{ni}$, for $n \in \mathbb{N}$, be a triangular array of random variables such that for each $n$, the random variables $Y_{ni}$, for $1 \le i \le k_n$, are independent with $E(Y_{ni}) = 0$ and $E(Y_{ni}^2) = \sigma_{ni}^2$, where $E(X)$ stands for the expected value of the random variable $X$. Let $s_n^2 = \sum_{i=1}^{k_n} \sigma_{ni}^2$. We use the notation $X \sim Y$ when the two random variables $X$ and $Y$ have the same distribution (for large $n$ if either or both of them depend on $n$).

Theorem 1.1 (Lyapunov's theorem [6, page 368]) If for some $\delta > 0$, we have the condition:
$$\frac{1}{s_n^{2+\delta}} \sum_{i=1}^{k_n} E\left(|Y_{ni}|^{2+\delta}\right) \to 0 \quad \text{as } n \to \infty,$$
then $\frac{1}{s_n} Y_n \xrightarrow{d} N(0,1)$ as $n \to \infty$, where $\xrightarrow{d}$ denotes convergence in distribution, and we denote by $N(a, \sigma^2)$ the normal distribution with mean $a$ and variance $\sigma^2$. Thus, for large $n$ we have $Y_n \sim N(0, s_n^2)$.
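As a quick numerical illustration of the theorem (a sketch with an arbitrarily chosen triangular array, not part of the paper's argument), one can sum independent but non-identically distributed variables and check that $Y_n/s_n$ is approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(42)
k_n = 500
# Heterogeneous scales: Y_ni uniform on [-a_i, a_i], so E(Y_ni) = 0
# and sigma_ni^2 = a_i^2 / 3.  Lyapunov's condition holds here.
a = 1.0 + np.arange(k_n) / k_n
sigma2 = a**2 / 3.0
s_n = np.sqrt(sigma2.sum())

# 10000 independent replicates of Y_n = sum_i Y_ni
Y = (rng.uniform(-1.0, 1.0, size=(10000, k_n)) * a).sum(axis=1)
Z = Y / s_n

print(Z.mean(), Z.std())  # approximately 0 and 1
```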
2 Mean field theory
We consider a Hopfield network with $N$ neurons $i = 1, \ldots, N$ with values $S_i = \pm 1$ and follow the
notations in [20]. As in [15], we assume patterns can be multiply stored and the degree of a pattern
is defined as its multiplicity. The total number of patterns, counting their multiplicity, is denoted by
1
n
p and we assume there
Pn are n patterns ? , . . . , ? with degrees d1 , . . . , dn ? 1 respectively and that
the remaining p ? k=1 dk ? 0 patterns are simple, i.e., each has degree one. Note that by our
assumptions there are precisely
n
X
p0 = p + n ?
dk
k=1
distinct patterns, which we assume are independent and identically distributed with equal probability
of taking value ?1 for each node. More generally, for any non-negative integer k ? IN , we let
pk =
p0
X
dk? .
?=1
Pp0
We use the generalized Hebbian rule for the synaptic couplings: wij = N1 ?=1
d? ?i? ?j? for i 6= j
with wii = 0 for 1 ? i, j ? N . As in the standard stochastic Hopfield model [20], we use Glauber
dynamics [18] for the stochastic updating rule with pseudo-temperature T > 0, which accounts for
various types of noise in the network, and assume zero bias in the local field. Putting ? = 1/T
(i.e., with the Boltzmann constant kB = 1) and letting f? (h) = 1/(1 + exp(?2?h)), the stochastic
updating rule at time t is given by:
Pr(Si (t + 1) = ?1) = f? (?hi (t)),
where hi (t) =
N
X
wij Sj (t),
(1)
j=1
is the local field at i at time t. The updating is implemented asynchronously in a random way.
The energy of the network in the configuration S = (S_i)_{i=1}^N is given by

H(S) = −(1/2) Σ_{i,j=1}^N S_i S_j w_ij.
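To make the setup concrete, here is a minimal simulation sketch (not from the paper; the network size, temperature, and degrees below are illustrative choices) of a small Hopfield network with the generalized Hebbian couplings above, asynchronous Glauber updates, and the overlap of Equation (4) used to monitor retrieval of the strong pattern:

```python
import math, random

random.seed(1)
N, T = 120, 0.5
beta = 1.0 / T

# Patterns: one strong pattern of degree d1 = 3 plus a few simple ones.
degrees = [3, 1, 1, 1]
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in degrees]

# Generalized Hebbian couplings: w_ij = (1/N) * sum_l d_l xi_i^l xi_j^l, w_ii = 0.
w = [[0.0] * N for _ in range(N)]
for d, xi in zip(degrees, patterns):
    for i in range(N):
        for j in range(N):
            if i != j:
                w[i][j] += d * xi[i] * xi[j] / N

def local_field(S, i):
    return sum(w[i][j] * S[j] for j in range(N))

def glauber_sweep(S):
    # Asynchronous Glauber updates: Pr(S_i = +1) = 1 / (1 + exp(-2*beta*h_i)).
    for i in random.sample(range(N), N):
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * local_field(S, i)))
        S[i] = 1 if random.random() < p_plus else -1

def overlap(S, xi):
    # The overlap m of Equation (4) for one pattern.
    return sum(s * x for s, x in zip(S, xi)) / N

# Start from a noisy version of the strong pattern and let the network relax.
S = [x if random.random() < 0.8 else -x for x in patterns[0]]
for _ in range(20):
    glauber_sweep(S)
print(round(overlap(S, patterns[0]), 2))  # overlap with the strong pattern, typically close to 1
```

Here p_2 = Σ d_λ² = 12 is small compared to N = 120 and T is below the degree of the strong pattern, so the noisy initial state relaxes to a high overlap with ξ^1, in line with the analysis of Section 3.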
For large N, this specifies a complex system, with an underlying state space of dimension 2^N, which
in general cannot be solved exactly. However, mean field theory has proved very useful in studying
Hopfield networks. The average updated value of S_i(t + 1) in Equation (1) is

⟨S_i(t + 1)⟩ = 1/(1 + e^{−2βh_i(t)}) − 1/(1 + e^{2βh_i(t)}) = tanh(βh_i(t)),   (2)

where ⟨. . .⟩ denotes taking the average with respect to the probability distribution in the updating rule
in Equation (1). The stationary solution for the mean field thus satisfies:

⟨S_i⟩ = ⟨tanh(βh_i)⟩.   (3)
The average overlap of pattern ξ^λ with the mean field at the nodes of the network is given by:

m_λ = (1/N) Σ_{i=1}^N ξ_i^λ ⟨S_i⟩.   (4)
The replica technique for solving the mean field problem, used in the case p/N = α > 0 as N → ∞,
seeks to obtain the average of the overlaps in Equation (4) by evaluating the partition function of the
system, namely,

Z = Tr_S exp(−βH(S)),

where the trace Tr_S stands for taking the sum over all possible configurations S = (S_i)_{i=1}^N. As is
generally the case in statistical physics, once the partition function of the system is obtained,
all required physical quantities can in principle be computed. However, in this case, the partition
function is very difficult to compute since it entails computing the average ⟨⟨log Z⟩⟩ of log Z, where
⟨⟨. . .⟩⟩ indicates averaging over the random distribution of the stored patterns ξ^λ. To overcome this
problem, the identity

log Z = lim_{k→0} (Z^k − 1)/k

is used to reduce the problem to finding the average ⟨⟨Z^k⟩⟩ of Z^k, which is then computed for
positive integer values of k. For such k, we have:

Z^k = Tr_{S^1} Tr_{S^2} . . . Tr_{S^k} exp(−β(H(S^1) + H(S^2) + . . . + H(S^k))),
where for each i = 1, . . . , k the super-scripted configuration S i is a replica of the configuration
state. In computing the trace over each replica, various parameters are obtained and the replica
symmetry condition assumes that these parameters are independent of the particular replica under
consideration. Apart from this assumption, there are two basic mathematical problems with the technique which makes it unjustifiable [20, page 264]. Firstly, the positive integer k above is eventually
treated as a real number near zero without any mathematical justification. Secondly, the order of
taking limits, in particular the order of taking the two limits k ? 0 and N ? ?, are several times
interchanged again without any mathematical justification.
Here, we develop a mathematically rigorous method for solving the mean field problem, i.e., computing
the average of the overlaps in Equation (4) in the case of p/N = α > 0 as N → ∞. Our
method turns the basic idea of the heuristic presented in [17] and reproduced in [20] for solving
the mean field equation into a mathematically verifiable formalism, which for the standard Hopfield
network with random stored patterns gives the same result as the replica method, assuming replica
symmetry. In the presence of strong patterns we obtain a set of new results as explained in the next
two sections.
The mean field equation is obtained from Equation (3) by approximating the right hand side of
this equation by the value of tanh at the mean field ⟨h_i⟩ = Σ_{j=1}^N w_ij ⟨S_j⟩, ignoring the sum
Σ_{j=1}^N w_ij (S_j − ⟨S_j⟩) for large N [17, page 32]:

⟨S_i⟩ = tanh(β⟨h_i⟩) = tanh( (β/N) Σ_{j=1}^N Σ_{λ=1}^{p_0} d_λ ξ_i^λ ξ_j^λ ⟨S_j⟩ ).   (5)
Equation (5) gives the mean field equation for the Hopfield network with n possible strong patterns
ξ^λ (1 ≤ λ ≤ n) and p − Σ_{λ=1}^n d_λ simple patterns ξ^λ with n + 1 ≤ λ ≤ p_0. As in the standard
Hopfield model, where all patterns are simple, we have two cases to deal with. However, we now
have to account for the presence of strong attractors and our two cases will be as follows: (i) In the
first case we assume p_2 := Σ_{λ=1}^{p_0} d_λ² = o(N), which includes the simpler case p_2 ≪ N when p_2
is fixed and independent of N. (ii) In the second case we assume we have a single strong attractor
with the load parameter p/N = α > 0.
3 Stability of strong patterns with noise: p_2 = o(N)
The case of constant p and N → ∞ is usually referred to as α = 0 in the standard Hopfield
model. Here, we need to consider the sum of degrees of all stored patterns (and not just the number
of patterns) compared to N. We solve the mean field equation with T > 0 by using a method
similar in spirit to [20, page 33] for the standard Hopfield model, but in our case strong patterns
induce a sequence of independent but non-identically distributed random variables in the crosstalk
term, where the Central Limit Theorem cannot be used; we show however that Lyapunov's theorem
(Theorem 1.1) can be invoked. In retrieving pattern ξ^1, we look for a solution of the mean field
equation of the form ⟨S_i⟩ = mξ_i^1, where m > 0 is a constant. Using Equation (5) and separating
the contribution of ξ^1 in the argument of tanh, we obtain:
mξ_i^1 = tanh( β( d_1 m ξ_i^1 + (m/N) Σ_{j≠i, λ>1} d_λ ξ_i^λ ξ_j^λ ξ_j^1 ) ).   (6)
For each N, λ > 1 and j ≠ i, let

Y_{Nλj} = (d_λ/N) ξ_i^λ ξ_j^λ ξ_j^1.   (7)

This gives (p_0 − 1)(N − 1) independent random variables with E(Y_{Nλj}) = 0, E(Y_{Nλj}²) = d_λ²/N²,
and E(|Y_{Nλj}³|) = d_λ³/N³. We have:
s_N² := Σ_{λ>1, j≠i} E(Y_{Nλj}²) = ((N − 1)/N²) Σ_{λ>1} d_λ² ≈ (1/N) Σ_{λ>1} d_λ².   (8)
Thus, as N → ∞, we have:

(1/s_N³) Σ_{λ>1, j≠i} E(|Y_{Nλj}³|) ≈ Σ_{λ>1} d_λ³ / ( √N ( Σ_{λ>1} d_λ² )^{3/2} ) → 0.   (9)
as N → ∞, since for positive numbers d_λ we always have Σ_{λ>1} d_λ³ < (Σ_{λ>1} d_λ²)^{3/2}. Thus the
Lyapunov condition is satisfied for δ = 1. By Lyapunov's theorem we deduce:

(1/N) Σ_{λ>1, j≠i} d_λ ξ_i^λ ξ_j^λ ξ_j^1 ≈ N( 0, Σ_{λ>1} d_λ²/N ).   (10)
?>1,j6=i
Since we also have p2 = o(N ), it follows that we can ignore the second term, i.e., the crosstalk
term, in the argument of tanh in Equation (6) as N ? ?; we thus obtain:
m = tanh ?d1 m.
(11)
To examine the fixed points of Equation (11), we let d = d_1 for convenience and put x = βdm =
dm/T, so that tanh x = Tx/d; see Figure 1. It follows that T_c = d is the critical temperature. If
T < d then there is a non-zero (non-trivial) solution for m, whereas for T > d we only have the
trivial solution. For d = 1 our solution is that of the standard Hopfield network as in [20, page 34].
Figure 1: Stability of strong attractors with noise (the line y = Tx/d plotted against y = tanh x for T > d, T = d and T < d).
Theorem 3.1 The critical temperature for stability of a strong attractor is equal to its degree.
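Equation (11) and Theorem 3.1 can be checked numerically: iterating m ← tanh(βdm) converges to a non-zero overlap exactly when T < T_c = d. A minimal sketch (the parameter values are illustrative):

```python
import math

def fixed_point_m(beta, d, iters=500):
    # Iterate m <- tanh(beta * d * m) from a positive start; the limit is
    # the largest non-negative fixed point of Equation (11).
    m = 1.0
    for _ in range(iters):
        m = math.tanh(beta * d * m)
    return m

d = 3  # degree of the strong pattern, so T_c = d
for T in (1.0, 2.0, 6.0):  # two temperatures below T_c, one above
    print(T, round(fixed_point_m(1.0 / T, d), 3))
# below T_c the overlap m is non-zero; above T_c it vanishes
```

For T = 1 and T = 2 the iteration settles at a non-trivial overlap, while for T = 6 > d it decays to zero, matching the critical temperature T_c = d.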
4 Mean field equations for p/N = α > 0
The case p/N = α, as for the standard Hopfield model, is much harder and we here assume we
have only a single pattern ξ^1 with d_1 ≥ 1, and the rest of the patterns ξ^λ are simple with d_λ = 1 for
2 ≤ λ ≤ p_0. The case when there is more than one strong pattern is harder and will be dealt with
in a future paper. Moreover, we assume d_1 ≪ p_0, which is the interesting case in applications. If
d_1 > 1 then we have a single strong pattern, whereas if d_1 = 1 the network is reduced to the standard
Hopfield network. We recall that all patterns ξ^λ for 1 ≤ λ ≤ p_0 are independent and random. Since
p and N are assumed to be large and d_1 ≪ p_0, we will replace p_0 with p and approximate terms like
p − 2 with p.
We again consider the mean field equation (5) for retrieving pattern ξ^1, but now the crosstalk term
in (6) is large and can no longer be ignored. We therefore look at the overlaps, Equation (4), of the
mean field with all the stored patterns ξ^λ and not just ξ^1.
Combining Equations (5) and (4), we eliminate the mean field to obtain a recursive equation for the
overlaps as the new variables:

m_λ = (1/N) Σ_{i=1}^N ξ_i^λ tanh( β Σ_{γ=1}^p d_γ ξ_i^γ m_γ ).   (12)
We now have a family of p stochastic equations for the random variables m_λ with 1 ≤ λ ≤ p in
order to retrieve the random pattern ξ^1. Formally, we assume we have a probability space (Ω, F, P)
with the real-valued random variables m_λ : Ω → ℝ, which are measurable with respect to F and
the Borel sigma field B over the real line, and which take value m_λ(ω) ∈ ℝ for each sample point
ω ∈ Ω. The probability of an event A ∈ B is given by Pr{ω : m_λ(ω) ∈ A}. As usual, Ω can itself
be taken to be the real line with its Borel sigma field and we will usually drop all references to ω. We
need two lemmas to prove our main result. We write X_N →_{a.s.} X for the almost sure convergence
of the sequence of random variables X_N to X, whereas X_N →_d X indicates convergence in
distribution [6]. Recall that almost sure convergence implies convergence in distribution. To help
us compute the right hand side of Equation (12), we need the following lemma, which extends the
standard result for the Law of Large Numbers and its rate of convergence [5, pages 112 and 113].
Lemma 4.1 Let X be a random variable on ℝ such that its probability distribution F(x) =
Pr(X ≤ x) is differentiable with density F′(x) = f(x). If g : ℝ → ℝ is a bounded measurable
function and X_k (k ≥ 1) is a sequence of independent and identically distributed random
variables with distribution X, then

(1/N) Σ_{i=1}^N g(X_i) →_{a.s.} E g(X) = ∫_{−∞}^{∞} g(x) f(x) dx,   (13)
and for all ε > 0 and t > 1, we have:

Pr( sup_{k≥N} | (1/k) Σ_{i=1}^k g(X_i) − E(g(X)) | ≥ ε ) = o(1/N^{t−1}).   (14)
The proof of the above lemma is given on-line in the supplementary material.
Assume p/N = α > 0 with d_1 ≪ p_0 and d_λ = 1 for 1 < λ ≤ p_0. In the following theorem, we use
the basic idea of the heuristic in [17], which is reproduced in [20, section 2.5], to develop a verifiable
mathematical method with provable results to solve the mean field equation in the more general case
that we have a single strong pattern present in the network.
Theorem 4.2 There is a solution to the mean field equations (12) for retrieving ξ^1 with independent
random variables m_λ (for 1 ≤ λ ≤ p_0), where m_1 ≈ N(m, s/N) and m_λ ≈ N(0, r/N) (for
λ ≠ 1), if the real numbers m, s and r satisfy the four simultaneous equations:

(i)   m = ∫_{−∞}^{∞} (dz/√(2π)) e^{−z²/2} tanh(β(d_1 m + √(αr) z))
(ii)  s = q − m²
(iii) q = ∫_{−∞}^{∞} (dz/√(2π)) e^{−z²/2} tanh²(β(d_1 m + √(αr) z))
(iv)  r = q/(1 − β(1 − q))²   (15)
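The four simultaneous equations (15) can be solved by simple fixed-point iteration, with the Gaussian integrals evaluated by numerical quadrature. The sketch below is an illustrative solver, not the authors' code; the quadrature grid, iteration count, and parameter values are arbitrary choices. It recovers a non-trivial overlap m for a strong pattern of degree d_1 = 2 at a load above the simple-pattern capacity:

```python
import math

def gauss_avg(g, n=400, zmax=8.0):
    # Trapezoidal estimate of Integral dz/sqrt(2*pi) * exp(-z^2/2) * g(z).
    h = 2.0 * zmax / n
    total = 0.0
    for i in range(n + 1):
        z = -zmax + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-0.5 * z * z) * g(z)
    return total * h / math.sqrt(2.0 * math.pi)

def solve_mean_field(beta, d1, alpha, iters=200):
    # Fixed-point iteration of equations (i), (iii) and (iv) of (15);
    # s then follows from (ii) as s = q - m^2.
    m, q, r = 0.5, 0.5, 0.5
    for _ in range(iters):
        noise = math.sqrt(alpha * r)
        m = gauss_avg(lambda z: math.tanh(beta * (d1 * m + noise * z)))
        q = gauss_avg(lambda z: math.tanh(beta * (d1 * m + noise * z)) ** 2)
        r = q / (1.0 - beta * (1.0 - q)) ** 2
    return m, q, r

# A strong pattern of degree d1 = 2 at low temperature and load alpha = 0.2,
# i.e., above the simple-pattern capacity of about 0.138:
m, q, r = solve_mean_field(beta=10.0, d1=2, alpha=0.2)
print(round(m, 2))  # a non-trivial overlap m, close to 1
```

Whether the iteration converges depends on the starting point and parameters; for the values above it settles at a retrieval solution with m near 1, consistent with the capacity analysis below.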
In the proof of this theorem, as given below, we seek a solution of the mean field equations assuming
we have independent random variables m_λ (for 1 ≤ λ ≤ p_0) such that for large N and p with
p/N = α, we have m_1 ≈ N(m, s/N) and m_λ ≈ N(0, r/N) (λ ≠ 1), and then find conditions in
terms of m, s and r to ensure that such a solution exists. These assumptions are in effect equivalent
to the replica symmetry approximation [17, page 262], since they lead, as shown below, to the same
solution derived from the replica method when all stored patterns are simple. In analogy with the
replica technique, we call our solution symmetric. Since by our assumption about the distribution of
the overlaps m_λ, the standard deviation of each overlap is O(1/√N), we ignore terms of O(1/N)
and more generally terms of o(1/√N) compared to terms of O(1/√N) in the proof, including in
the lemma below, which enables us to compute the argument of tanh in Equation (12) for large N.
Lemma 4.3 If m_λ ≈ N(0, r/N) (for λ ≠ 1), then we have the equivalence of distributions:

Σ_{λ≠1} ξ_i^1 ξ_i^λ m_λ ≈ N(0, αr).
The proofs of the above lemma and Theorem 4.2 are given on-line in the supplementary material.
We note that in the heuristic described in [20] the distributions of m_1 and m_λ (λ ≠ 1) are not
eventually determined, yet an initial assumption about the variance of m_λ is made. Moreover, the
heuristic has no assumption on how m_λ is distributed, and no valid justification is provided for
computing the double summation to obtain m_λ, which is similar to the lack of justification for the
interchange of limits in the replica technique mentioned in Section 2.
Comparing the equations for m, q and r in Equations (15) with those obtained by the replica
method [20, pages 263-264] or the heuristic in [20, page 37], we see that m has been replaced by
d_1 m on the right hand side of the equations for m and q. It follows that for d_1 = 1, we obtain the
solution for random patterns in the standard Hopfield network produced by the replica method.
We can solve the simultaneous equations in (15) for m, q and r (and then for s) numerically. As
in [20, page 38], we examine when these equations have non-trivial solutions (i.e., m ≠ 0) when
T → 0, corresponding to β → ∞, where we also have q → 1 but C := β(1 − q) remains finite.
Using the relations:

∫_{−∞}^{∞} (dz/√(2π)) e^{−z²/2} (1 − tanh² β(az + b)) ≈ (1/(aβ)) √(2/π) e^{−b²/2a²}   as β → ∞,
∫_{−∞}^{∞} (dz/√(2π)) e^{−z²/2} tanh β(az + b) → erf(b/(√2 a))   as β → ∞,   (16)
where erf is the error function, the three equations for m, q and r become:

C := β(1 − q) = √(2/(παr)) exp(−(dm)²/2αr),
r = 1/(1 − C)²,
m = erf(dm/√(2αr)),   (17)
?
where we have put d := d1 . Let y = dm/ 2?r; then we obtain:
2
y ?
2
f?,d (y) := ( 2? + ? e?y ) = erf(y)
d
?
(17)
(18)
Figure 2, gives a schematic view of the solution of Equation (18). The dotted curve is the erf function
on the right hand side of the equation, whereas the three solid curves correspond to the graphs of the
function f?,d on the left hand side of the equation for a given value of d and three different values
of ?. The heights of these graphs increase with ?.
The critical load parameter ?c (d) is the threshold such that for ? < ?c (d) the strong pattern with
degree d can be retrieved whereas for ?c (d) < ? this memory is lost. Geometrically, ?c (d) corresponds to the curve that is tangent, say at yd , to the error function, i.e.,
f?0 c (d),d (yd ) = erf 0 (yd ).
For ? < ?c (d), the function f?,d has two non-trivial intersections (away from the origin) with erf
while for ?c (d) < ? there are no non-trivial intersections.
We can compare the storage capacity of strong patterns with that of simple patterns, assuming the
independence of m_λ (equivalently replica symmetry), by finding a lower bound for α_c(d) in terms
of α_c(1) as follows.

Figure 2: Capacity of strong attractors (schematic: the curves f_{α,d} against erf(y), with f_{α_c(d),d} tangent to erf at y_d).

We have:
f_{α,d}(y) = y( √(2α/d²) + (2/(d√π)) e^{−y²} ) ≤ y( √(2α/d²) + (2/√π) e^{−y²} ),   (19)
where equality holds iff d = 1. Putting α = d²α_c(1) and y = y_1, we have for d > 1:

f_{d²α_c(1),d}(y_1) < f_{α_c(1),1}(y_1) = erf(y_1).   (20)

Therefore, for a strong pattern, the graphs of f_{d²α_c(1),d} and erf intersect in two non-trivial points and
thus α_c(d) > d²α_c(1). Since α_c(1) = α_c ≈ 0.138, this yields α_c(d)/0.138 > d², i.e., the relative
increase in the storage capacity exceeds the square of the degree of the strong pattern.
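The bound α_c(d) > d²α_c(1) can also be checked numerically from Equation (18): binary-searching for the largest α at which f_{α,d} still crosses erf recovers α_c(1) ≈ 0.138 and ratios exceeding d². A sketch (the grid and search ranges are illustrative choices):

```python
import math

def f(y, alpha, d):
    # Left-hand side of Equation (18).
    return (y / d) * (math.sqrt(2.0 * alpha) + (2.0 / math.sqrt(math.pi)) * math.exp(-y * y))

def retrievable(alpha, d):
    # A non-trivial solution of f_{alpha,d}(y) = erf(y) exists iff the curve
    # dips below erf somewhere away from the origin.
    return any(f(0.01 * k, alpha, d) < math.erf(0.01 * k) for k in range(1, 500))

def alpha_c(d):
    # Largest alpha for which retrieval survives, by bisection.
    lo, hi = 0.0, 5.0 * d * d
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if retrievable(mid, d):
            lo = mid
        else:
            hi = mid
    return lo

a1 = alpha_c(1)
ratios = {d: alpha_c(d) / a1 for d in (2, 3)}
print(round(a1, 3))  # about 0.138, the standard Hopfield capacity
print({d: round(v, 2) for d, v in ratios.items()})  # each ratio exceeds d^2
```

The computed α_c(1) matches the classical value, and the ratios α_c(d)/α_c(1) for d = 2, 3 come out above 4 and 9 respectively, in agreement with the square property.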
In the case of the standard Hopfield network with simple patterns only, we have α_c(1) = α_c ≈
0.138, but simulation experiments show that for values in the narrow range 0.138 < α < 0.144
there are replica symmetry breaking solutions for which a stored pattern can still be retrieved [12].
We show that the square property holds when we take into account symmetry breaking solutions.
By [15, Theorem 1], it follows that the error probability of retrieving a single strong attractor is:

Pr_er ≤ (1/2)(1 − erf(d/√(2α))),

for α = p/N. Thus, this error will be constant if d/√α remains fixed, indicating that the critical
value of the load parameter is proportional to the square of the degree of the strong attractor.
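A quick numerical check of this scaling (the parameter values are illustrative): doubling the degree d while quadrupling the load α leaves d/√α, and hence the bound, unchanged.

```python
import math

def err_bound(d, alpha):
    # The bound 0.5 * (1 - erf(d / sqrt(2 * alpha))) on the retrieval error.
    return 0.5 * (1.0 - math.erf(d / math.sqrt(2.0 * alpha)))

# (d, alpha) = (1, 0.1) and (2, 0.4) share the same d/sqrt(alpha),
# so the bound is identical for both:
print(err_bound(1, 0.1))
print(err_bound(2, 0.4))
```

Both calls print the same small probability, illustrating that the critical load scales with d².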
Corollary 4.4 The storage capacity for retrieving a single strong pattern exceeds that of a simple
pattern by the square of the degree of the strong pattern.
This square property shows that a multiply learned pattern is retained in the memory in the presence
of a large number of other random patterns, proportional to the square of its multiplicity.
5 Conclusion
We have developed a mathematically justifiable method to derive the storage capacity of the Hopfield
network when the load parameter α = p/N remains a positive constant as the network size N → ∞.
For the standard model, our result confirms that of the replica technique, i.e., α_c ≈ 0.138. However,
our method also computes the storage capacity when retrieving a single strong pattern of degree d
in the presence of other random patterns, and we have shown that this capacity exceeds that of a
simple pattern by a multiplicative factor d², providing further justification for using strong patterns
of Hopfield networks to model attachment types and behavioural prototypes in psychology.
The storage capacity of Hopfield networks when there is more than a single strong pattern, and in
networks with low neural activation, will be addressed in future work. It is also of interest to examine
the behaviour of strong patterns in Boltzmann Machines [20], Restricted Boltzmann Machines [28]
and Deep Learning Networks [21].
References
[1] D. J. Amit. Modeling Brain Function: The World of Attractor Neural Networks. Cambridge, 1989.
[2] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Spin-glass models of neural networks. Phys. Rev. A, 32:1007-1018, 1985.
[3] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys. Rev. Lett., 55:1530-1533, Sep 1985.
[4] D. J. Amit, H. Gutfreund, and H. Sompolinsky. Information storage in neural networks with low levels of activity. Phys. Rev. A, 35:2293-2303, Mar 1987.
[5] L. E. Baum and M. Katz. Convergence rates in the law of large numbers. Transactions of the American Mathematical Society, 120(1):108-123, 1965.
[6] P. Billingsley. Probability and Measure. John Wiley & Sons, second edition, 1986.
[7] E. Bolthausen. Random media and spin glasses: An introduction into some mathematical results and problems. In E. Bolthausen and A. Bovier, editors, Spin Glasses, volume 1900 of Lecture Notes in Mathematics. Springer, 2007.
[8] A. Bovier and V. Gayrard. Hopfield models as generalized random mean field models. In A. Bovier and P. Picco, editors, Mathematical Aspects of Spin Glasses and Neural Networks, pages 3-89. Birkhäuser, 1998.
[9] John Bowlby. Attachment: Volume One of the Attachment and Loss Trilogy. Pimlico, second revised edition, 1997.
[10] L. Cozolino. The Neuroscience of Human Relationships. W. W. Norton, 2006.
[11] F. Crick and G. Mitchison. The function of dream sleep. Nature, 304:111-114, 1983.
[12] A. Crisanti, D. J. Amit, and H. Gutfreund. Saturation level of the Hopfield model for neural network. Europhys. Lett., 2(337), 1986.
[13] L. F. Cugliandolo and M. V. Tsodyks. Capacity of networks with correlated attractors. Journal of Physics A: Mathematical and General, 27(3):741, 1994.
[14] V. Dotsenko. An Introduction to the Theory of Spin Glasses and Neural Networks. World Scientific, 1994.
[15] A. Edalat and F. Mancinelli. Strong attractors of Hopfield neural networks to model attachment types and behavioural patterns. In IJCNN 2013 Conference Proceedings. IEEE, August 2013.
[16] K. H. Fischer and J. A. Hertz. Spin Glasses (Cambridge Studies in Magnetism). Cambridge, 1993.
[17] T. Geszti. Physical Models of Neural Networks. World Scientific, 1990.
[18] R. J. Glauber. Time-dependent statistics of the Ising model. J. Math. Phys., 4(2):294-307, 1963.
[19] H. Gutfreund. Neural networks with hierarchically correlated patterns. Phys. Rev. A, 37:570-577, 1988.
[20] J. A. Hertz, A. S. Krogh, and R. G. Palmer. Introduction to the Theory of Neural Computation. Westview Press, 1991.
[21] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[22] R. E. Hoffman. Computer simulations of neural information processing and the schizophrenia-mania dichotomy. Arch Gen Psychiatry, 44(2):178-88, 1987.
[23] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79:2554-2558, 1982.
[24] T. Lewis, F. Amini, and R. Richard. A General Theory of Love. Vintage, 2000.
[25] Matthias Löwe. On the storage capacity of Hopfield models with correlated patterns. Annals of Applied Probability, 8(4):1216-1250, 1998.
[26] M. Mezard, G. Parisi, and M. Virasoro, editors. Spin Glass Theory and Beyond. World Scientific, 1986.
[27] P. Peretto. On learning rules and memory storage abilities of asymmetrical neural networks. J. Phys. France, 49:711-726, 1988.
[28] R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th International Conference on Machine Learning, pages 791-798, 2007.
[29] A. N. Schore. Affect Dysregulation and Disorders of the Self. W. W. Norton, 2003.
[30] T. S. Smith, G. T. Stevens, and S. Caldwell. The familiar and the strange: Hopfield network models for prototype-entrained. In D. D. Franks and T. S. Smith, editors, Mind, Brain, and Society: Toward a Neurosociology of Emotion, volume 5 of Social Perspectives on Emotion. Elsevier/JAI, 1999.
[31] M. Tsodyks and M. Feigelman. Enhanced storage capacity in neural networks with low level of activity. Europhysics Letters, 6:101-105, 1988.
Compete to Compute
Rupesh Kumar Srivastava, Jonathan Masci, Sohrob Kazerounian,
Faustino Gomez, Jürgen Schmidhuber
IDSIA, USI-SUPSI
Manno-Lugano, Switzerland
{rupesh, jonathan, sohrob, tino, juergen}@idsia.ch
Abstract
Local competition among neighboring neurons is common in biological neural networks (NNs). In this paper, we apply the concept to gradient-based,
backprop-trained artificial multilayer NNs. NNs with competing linear
units tend to outperform those with non-competing nonlinear units, and
avoid catastrophic forgetting when training sets change over time.
1 Introduction
Although it is often useful for machine learning methods to consider how nature has arrived
at a particular solution, it is perhaps more instructive to first understand the functional
role of such biological constraints. Indeed, artificial neural networks, which now represent
the state-of-the-art in many pattern recognition tasks, not only resemble the brain in a
superficial sense, but also draw on many of its computational and functional properties.
One of the long-studied properties of biological neural circuits which has yet to fully impact
the machine learning community is the nature of local competition. That is, a common
finding across brain regions is that neurons exhibit on-center, off-surround organization
[1, 2, 3], and this organization has been argued to give rise to a number of interesting
properties across networks of neurons, such as winner-take-all dynamics, automatic gain
control, and noise suppression [4].
In this paper, we propose a biologically inspired mechanism for artificial neural networks
that is based on local competition, and ultimately relies on local winner-take-all (LWTA)
behavior. We demonstrate the benefit of LWTA across a number of different networks and
pattern recognition tasks by showing that LWTA not only enables performance comparable
to the state-of-the-art, but moreover, helps to prevent catastrophic forgetting [5, 6] common
to artificial neural networks when they are first trained on a particular task, then abruptly
trained on a new task. This property is desirable in continual learning wherein learning
regimes are not clearly delineated [7]. Our experiments also show evidence that a type of
modularity emerges in LWTA networks trained in a supervised setting, such that different
modules (subnetworks) respond to different inputs. This is beneficial when learning from
multimodal data distributions as compared to learning a monolithic model.
In the following, we first discuss some of the relevant neuroscience background motivating
local competition, then show how we incorporate it into artificial neural networks, and
how LWTA, as implemented here, compares to alternative methods. We then show how
LWTA networks perform on a variety of tasks, and how it helps buffer against catastrophic
forgetting.
2 Neuroscience Background
Competitive interactions between neurons and neural circuits have long played an important
role in biological models of brain processes. This is largely due to early studies showing that
many cortical [3] and sub-cortical (e.g., hippocampal [1] and cerebellar [2]) regions of the
brain exhibit a recurrent on-center, off-surround anatomy, where cells provide excitatory
feedback to nearby cells, while scattering inhibitory signals over a broader range. Biological
modeling has since tried to uncover the functional properties of this sort of organization,
and its role in the behavioral success of animals.
The earliest models to describe the emergence of winner-take-all (WTA) behavior from local
competition were based on Grossberg's shunting short-term memory equations [4], which
showed that a center-surround structure not only enables WTA dynamics, but also contrast
enhancement, and normalization. Analysis of their dynamics showed that networks with
slower-than-linear signal functions uniformize input patterns; linear signal functions preserve
and normalize input patterns; and faster-than-linear signal functions enable WTA dynamics.
Sigmoidal signal functions, which contain slower-than-linear, linear, and faster-than-linear regions, enable the suppression of noise in input patterns, while contrast-enhancing, normalizing, and storing the relevant portions of an input pattern (a form of soft WTA). The
functional properties of competitive interactions have been further studied to show, among
other things, the effects of distance-dependent kernels [8], inhibitory time lags [8, 9], development of self-organizing maps [10, 11, 12], and the role of WTA networks in attention [13].
Biological models have also been extended to show how competitive interactions in spiking
neural networks give rise to (soft) WTA dynamics [14], as well as how they may be efficiently
constructed in VLSI [15, 16].
Although competitive interactions, and WTA dynamics have been studied extensively in the
biological literature, it is only more recently that they have been considered from computational or machine learning perspectives. For example, Maass [17, 18] showed that feedforward
neural networks with WTA dynamics as the only non-linearity are as computationally powerful as networks with threshold or sigmoidal gates; and, networks employing only soft
WTA competition are universal function approximators. Moreover, these results hold, even
when the network weights are strictly positive, a finding which has ramifications for our
understanding of biological neural circuits, as well as the development of neural networks
for pattern recognition. The large body of evidence supporting the advantages of locally
competitive interactions makes it noteworthy that this simple mechanism has not provoked
more study by the machine learning community. Nonetheless, networks employing local
competition have existed since the late 80s [21], and, along with [22], serve as a primary
inspiration for the present work. More recently, maxout networks [19] have leveraged locally
competitive interactions in combination with a technique known as dropout [20] to obtain
the best results on certain benchmark problems.
3 Networks with local winner-take-all blocks
This section describes the general network architecture with locally competing neurons.
The network consists of B blocks which are organized into layers (Figure 1). Each block,
b_i, i = 1..B, contains n computational units (neurons), and produces an output vector y_i, determined by the local interactions between the individual neuron activations in the block:

    y_i^j = g(h_i^1, h_i^2, ..., h_i^n),    (1)

where g(·) is the competition/interaction function, encoding the effect of local interactions in each block, and h_i^j, j = 1..n, is the activation of the j-th neuron in block i, computed by:

    h_i^j = f(w_ij^T x),    (2)
where x is the input vector from neurons in the previous layer, w_ij is the weight vector of neuron j in block i, and f(·) is a (generally non-linear) activation function. The output
activations y are passed as inputs to the next layer. In this paper we use the winner-take-all
interaction function, inspired by studies in computational neuroscience. In particular, we
use the hard winner-take-all function:
    y_i^j = h_i^j   if h_i^j ≥ h_i^k for all k = 1..n,
    y_i^j = 0       otherwise.
In the case of multiple winners, ties are broken by index precedence. In order to investigate the capabilities of the hard winner-take-all interaction function in isolation, f(x) = x (identity) is used for the activation function in equation (2).

Figure 1: A Local Winner-Take-All (LWTA) network with blocks of size two showing the winning neuron in each block (shaded) for a given input example. Activations flow forward only through the winning neurons, errors are backpropagated through the active neurons. Greyed out connections do not propagate activations. The active neurons form a subnetwork of the full network which changes depending on the inputs.

The difference between this
Local Winner Take All (LWTA) network and a standard multilayer perceptron is that no
non-linear activation functions are used, and during the forward propagation of inputs, local
competition between the neurons in each block turns off the activation of all neurons except
the one with the highest activation. During training the error signal is only backpropagated
through the winning neurons.
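The forward pass described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation written from the definitions in the text, not the authors' code; names such as `lwta_forward` are ours:

```python
import numpy as np

def lwta_forward(x, W, block_size=2):
    """One LWTA layer with identity activation f(x) = x.

    x: input vector, shape (d,)
    W: weights, shape (d, n_blocks * block_size)
    Returns the output y (same size as the layer) and the winner mask,
    which a trainer would reuse to backpropagate errors only through
    the winning neurons.
    """
    h = x @ W                                  # linear activations, eq. (2)
    blocks = h.reshape(-1, block_size)         # group neurons into blocks
    # np.argmax returns the first maximal index, which matches the
    # "ties broken by index precedence" rule in the text
    winners = np.argmax(blocks, axis=1)
    mask = np.zeros_like(blocks, dtype=bool)
    mask[np.arange(blocks.shape[0]), winners] = True
    y = np.where(mask, blocks, 0.0).reshape(-1)
    return y, mask.reshape(-1)

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(4, 6))                    # three LWTA-2 blocks
y, mask = lwta_forward(x, W)
```

Note that a winner with activation exactly zero still produces a zero output, the corner case mentioned in footnote 1.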
In an LWTA layer, there are as many active neurons as there are blocks at any one time for a given input pattern.^1 We denote a layer with blocks of size n as LWTA-n. For each input
pattern presented to a network, only a subgraph of the full network is active, e.g. the highlighted neurons and synapses in figure 1. Training on a dataset consists of simultaneously
training an exponential number of models that share parameters, as well as learning which
model should be active for each pattern. Unlike networks with sigmoidal units, where all of
the free parameters need to be set properly for all input patterns, only a subset is used for
any given input, so that patterns coming from very different sub-distributions can potentially be modelled more efficiently through specialization. This modular property is similar
to that of networks with rectified linear units (ReLU) which have recently been shown to
be very good at several learning tasks (links with ReLU are discussed in section 4.3).
4 Comparison with related methods

4.1 Max-pooling
Neural networks with max-pooling layers [23] have been found to be very useful, especially
for image classification tasks where they have achieved state-of-the-art performance [24, 25].
These layers are usually used in convolutional neural networks to subsample the representation obtained after convolving the input with a learned filter, by dividing the representation
into pools and selecting the maximum in each one. Max-pooling lowers the computational
burden by reducing the number of connections in subsequent convolutional layers, and adds
translational/rotational invariance.
^1 However, there is always the possibility that the winning neuron in a block has an activation of exactly zero, so that the block has no output.
Figure 2: Max-pooling vs. LWTA. (a) In max-pooling, each group of neurons in a layer
has a single set of output weights that transmits the winning unit's activation (0.8 in this
case) to the next layer, i.e. the layer activations are subsampled. (b) In an LWTA block,
there is no subsampling. The activations flow into subsequent units via a different set of
connections depending on the winning unit.
At first glance, max-pooling seems very similar to a WTA operation; however, the two differ substantially: there is no downsampling in a WTA operation, and thus the number of features is not reduced; instead, the representation is "sparsified" (see figure 2).
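The distinction can be made concrete on a toy activation vector (illustrative code, not from the paper):

```python
import numpy as np

h = np.array([0.5, 0.8, -0.3, 0.1])        # 4 neurons, pools/blocks of size 2
blocks = h.reshape(-1, 2)

# max-pooling: each pool collapses to its maximum -> output is downsampled
pooled = blocks.max(axis=1)                # [0.8, 0.1], length 2

# LWTA: the winner keeps its value, the loser is zeroed -> same length, sparsified
winners = blocks.argmax(axis=1)            # ties broken by index precedence
lwta = np.zeros_like(blocks)
rows = np.arange(blocks.shape[0])
lwta[rows, winners] = blocks[rows, winners]
lwta = lwta.reshape(-1)                    # [0.0, 0.8, 0.0, 0.1], length 4
```

The pooled output has half the dimensionality; the LWTA output keeps the layer's full width with losing units set to zero.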
4.2 Dropout
Dropout [20] can be interpreted as a model-averaging technique that jointly trains several
models sharing subsets of parameters and input dimensions, or as data augmentation when
applied to the input layer [19, 20]. This is achieved by probabilistically omitting ("dropping") units from a network for each example during training, so that those neurons do not
participate in forward/backward propagation. Consider, hypothetically, training an LWTA
network with blocks of size two, and selecting the winner in each block at random. This
is similar to training a neural network with a dropout probability of 0.5. Nonetheless, the
two are fundamentally different. Dropout is a regularization technique while in LWTA the
interaction between neurons in a block replaces the per-neuron non-linear activation.
Dropout is believed to improve generalization performance since it forces the units to learn
independent features, without relying on other units being active. During testing, when
propagating an input through the network, all units in a layer trained with dropout are
used with their output weights suitably scaled. In an LWTA network, no output scaling is
required. A fraction of the units will be inactive for each input pattern depending on their
total inputs. Viewed this way, WTA is restrictive in that only a fraction of the parameters
are utilized for each input pattern. However, we hypothesize that the freedom to use different
subsets of parameters for different inputs allows the architecture to learn from multimodal
data distributions more accurately.
4.3 Rectified Linear units
Rectified Linear Units (ReLU) are simply linear neurons that clamp negative activations to
zero (f (x) = x if x > 0, f (x) = 0 otherwise). ReLU networks were shown to be useful for
Restricted Boltzmann Machines [26], outperformed sigmoidal activation functions in deep
neural networks [27], and have been used to obtain the best results on several benchmark
problems across multiple domains [24, 28].
Consider an LWTA block with two neurons compared to two ReLU neurons, where x1 and
x2 are the weighted sum of the inputs to each neuron. Table 1 shows the outputs y1 and
y2 in all combinations of positive and negative x1 and x2 , for ReLU and LWTA neurons.
For both ReLU and LWTA neurons, x1 and x2 are passed through as output in half of the
possible cases. The difference is that in LWTA both neurons are never active or inactive at
the same time, and the activations and errors flow through exactly one neuron in the block.
For ReLU neurons, being inactive (saturation) is a potential drawback since neurons that
Table 1: Comparison of rectified linear activation and LWTA-2.

x1         x2         Condition   ReLU y1   ReLU y2   LWTA y1   LWTA y2
Positive   Positive   x1 > x2     x1        x2        x1        0
Positive   Negative   x1 > x2     x1        0         x1        0
Negative   Negative   x1 > x2     0         0         x1        0
Positive   Positive   x2 > x1     x1        x2        0         x2
Negative   Positive   x2 > x1     0         x2        0         x2
Negative   Negative   x2 > x1     0         0         0         x2
do not get activated will not get trained, leading to wasted capacity. However, previous
work suggests that there is no negative impact on optimization, leading to the hypothesis
that such hard saturation helps in credit assignment, and, as long as errors flow through
certain paths, optimization is not affected adversely [27]. Continued research along these
lines validates this hypothesis [29], but it is expected that it is possible to train ReLU
networks better.
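The case analysis in Table 1 can be checked mechanically. The two helper functions below are ours, written directly from the definitions of ReLU and an LWTA-2 block in the text:

```python
def relu_pair(x1, x2):
    """Two independent rectified linear neurons."""
    return (max(x1, 0.0), max(x2, 0.0))

def lwta2_pair(x1, x2):
    """An LWTA-2 block: the winner passes its (possibly negative) value,
    the loser outputs zero; ties go to the first neuron (index precedence)."""
    return (x1, 0.0) if x1 >= x2 else (0.0, x2)

# Both negative with x1 > x2: ReLU silences the whole pair (saturation),
# while LWTA still propagates the winning activation.
assert relu_pair(-0.2, -0.9) == (0.0, 0.0)
assert lwta2_pair(-0.2, -0.9) == (-0.2, 0.0)

# Both positive: ReLU passes both values, LWTA keeps only the larger one.
assert relu_pair(0.4, 0.7) == (0.4, 0.7)
assert lwta2_pair(0.4, 0.7) == (0.0, 0.7)
```

The key contrast is visible in the first pair of assertions: an LWTA block never has both neurons inactive at once, so some error signal always flows through the block.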
While many of the above arguments for and against ReLU networks apply to LWTA networks, there is a notable difference. During training of an LWTA network, inactive neurons
can become active due to training of the other neurons in the same block. This suggests
that LWTA nets may be less sensitive to weight initialization, and a greater portion of the
network's capacity may be utilized.
5 Experiments
In the following experiments, LWTA networks were tested on various supervised learning
datasets, demonstrating their ability to learn useful internal representations without utilizing
any other non-linearities. In order to clearly assess the utility of local competition, no special
strategies such as augmenting data with transformations, noise or dropout were used. We
also did not encourage sparse representations in the hidden layers by adding activation
penalties to the objective function, a common technique also for ReLU units. Thus, our
objective is to evaluate the value of using LWTA rather than achieving the absolute best
testing scores. Blocks of size two are used in all the experiments.^2
All networks were trained using stochastic gradient descent with mini-batches, with learning rate λ_t and momentum m_t at epoch t given by

    λ_t = λ_0 γ^t   if λ_0 γ^t > λ_min
    λ_t = λ_min     otherwise

    m_t = (1 - t/T) m_i + (t/T) m_f   if t < T
    m_t = m_f                         if t ≥ T

where γ is the learning rate annealing factor, λ_min is the lower learning rate limit, and the momentum is scaled from m_i to m_f over T epochs, after which it remains constant at m_f. L2 weight decay was used for the convolutional network (section 5.2), and max-norm
normalization for other experiments. This setup is similar to that of [20].
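Concretely, this schedule (an annealed, floored learning rate, and momentum ramped linearly from m_i to m_f over T epochs) can be computed as below. The numeric defaults are placeholders for illustration, not the settings used in the paper:

```python
def schedule(t, lam0=0.1, gamma=0.99, lam_min=1e-4, m_i=0.5, m_f=0.99, T=500):
    """Learning rate and momentum at epoch t."""
    lam = max(lam0 * gamma ** t, lam_min)                       # annealed with a floor
    m = (1 - t / T) * m_i + (t / T) * m_f if t < T else m_f     # linear ramp, then constant
    return lam, m
```

For example, the learning rate decays geometrically until it hits lam_min, while the momentum increases linearly until epoch T and then stays at m_f.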
5.1 Permutation Invariant MNIST
The MNIST handwritten digit recognition task consists of 70,000 28x28 images (60,000
training, 10,000 test) of the 10 digits centered by their center of mass [33]. In the permutation
invariant setting of this task, we attempted to classify the digits without utilizing the 2D
structure of the images, e.g. every digit is a vector of pixels. The last 10,000 examples in the
training set were used for hyperparameter tuning. The model with the best hyperparameter
setting was trained until convergence on the full training set. Mini-batches of size 20 were
^2 To speed up our experiments, the Gnumpy [30] and CUDAMat [31] libraries were used.
Table 2: Test set errors on the permutation invariant MNIST dataset for methods without data augmentation or unsupervised pre-training.

Activation                                Test Error
Sigmoid [32]                              1.60%
ReLU [27]                                 1.43%
ReLU + dropout in hidden layers [20]      1.30%
LWTA-2                                    1.28%
Table 3: Test set errors on MNIST dataset for convolutional architectures with no data augmentation. Results marked with an asterisk use layer-wise unsupervised feature learning to pre-train the network and global fine tuning.

Architecture                                   Test Error
2-layer CNN + 2 layer MLP [34] *               0.60%
2-layer ReLU CNN + 2 layer LWTA-2              0.57%
3-layer ReLU CNN [35]                          0.55%
2-layer CNN + 2 layer MLP [36] *               0.53%
3-layer ReLU CNN + stochastic pooling [33]     0.47%
3-layer maxout + dropout [19]                  0.45%
used, the pixel values were rescaled to [0, 1] (no further preprocessing). The best model
obtained, which gave a test set error of 1.28%, consisted of three LWTA layers of 500
blocks followed by a 10-way softmax layer. To our knowledge, this is the best reported
error, without utilizing implicit/explicit model averaging, for this setting which does not use
deformations/noise to enhance the dataset or unsupervised pretraining. Table 2 compares
our results with other methods which do not use unsupervised pre-training. The performance
of LWTA is comparable to that of a ReLU network with dropout in the hidden layers. Using
dropout in input layers as well, lower error rates of 1.1% using ReLU [20] and 0.94% using
maxout [19] have been obtained.
5.2 Convolutional Network on MNIST
For this experiment, a convolutional network (CNN) was used consisting of 7 × 7 filters in the first layer followed by a second layer of 6 × 6 filters, with 16 and 32 maps respectively, and ReLU activation. Every convolutional layer is followed by a 2 × 2 max-pooling operation.
We then use two LWTA-2 layers each with 64 blocks and finally a 10-way softmax output
layer. A weight decay of 0.05 was found to be beneficial to improve generalization. The
results are summarized in Table 3 along with other state-of-the-art approaches which do not
use data augmentation (for details of convolutional architectures, see [33]).
5.3 Amazon Sentiment Analysis
LWTA networks were tested on the Amazon sentiment analysis dataset [37] since ReLU units
have been shown to perform well in this domain [27, 38]. We used the balanced subset of the
dataset consisting of reviews of four categories of products: Books, DVDs, Electronics and
Kitchen appliances. The task is to classify the reviews as positive or negative. The dataset
consists of 1000 positive and 1000 negative reviews in each category. The text of each review
was converted into a binary feature vector encoding the presence or absence of unigrams
and bigrams. Following [27], the 5000 most frequent vocabulary entries were retained as
features for classification. We then divided the data into 10 equal balanced folds, and
tested our network with cross-validation, reporting the mean test error over all folds. ReLU
activation was used on this dataset in the context of unsupervised learning with denoising
autoencoders to obtain sparse feature representations which were used for classification. We
trained an LWTA-2 network with three layers of 500 blocks each in a supervised setting to
directly classify each review as positive or negative using a 2-way softmax output layer. We
obtained mean accuracies of Books: 80%, DVDs: 81.05%, Electronics: 84.45% and Kitchen:
85.8%, giving a mean accuracy of 82.82%, compared to 78.95% reported in [27] for denoising
autoencoders using ReLU and unsupervised pre-training to find a good initialization.
Table 4: LWTA networks outperform sigmoid and ReLU activation in remembering dataset P1 after training on dataset P2.

Testing error on P1       LWTA             Sigmoid           ReLU
After training on P1      1.55 ± 0.20%     1.38 ± 0.06%      1.30 ± 0.13%
After training on P2      6.12 ± 3.39%     57.84 ± 1.13%     16.63 ± 6.07%

6 Implicit long term memory
This section examines the effect of the LWTA architecture on catastrophic forgetting. That
is, does the fact that the network implements multiple models allow it to retain information
about dataset A, even after being trained on a different dataset B? To test for this implicit
long term memory, the MNIST training and test sets were each divided into two parts, P1
containing only digits {0, 1, 2, 3, 4}, and P2 consisting of the remaining digits {5, 6, 7, 8, 9}.
Three different network architectures were compared: (1) three LWTA layers each with 500
blocks of size 2, (2) three layers each with 1000 sigmoidal neurons, and (3) three layers each
of 1000 ReLU neurons. All networks have a 5-way softmax output layer representing the
probability of an example belonging to each of the five classes. All networks were initialized
with the same parameters, and trained with a fixed learning rate and momentum.
Each network was first trained to reach a 0.03 log-likelihood error on the P1 training set.
This value was chosen heuristically to produce low test set errors in reasonable time for
all three network types. The weights for the output layer (corresponding to the softmax
classifier) were then stored, and the network was trained further, starting with new initial
random output layer weights, to reach the same log-likelihood value on P2. Finally, the
output layer weights saved from P1 were restored, and the network was evaluated on the
P1 test set. The experiment was repeated for 10 different initializations.
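A minimal sketch of the dataset split used in this protocol is given below. The code is ours; in particular, remapping P2's labels to classes 0-4 is our assumption, made so that both parts fit the same 5-way softmax described above:

```python
import numpy as np

def split_by_digit(images, labels):
    """Split an MNIST-style dataset into P1 (digits 0-4) and P2 (digits 5-9)."""
    in_p1 = labels < 5
    p1 = (images[in_p1], labels[in_p1])
    # assumed remapping: digits 5-9 become classes 0-4 for the 5-way softmax
    p2 = (images[~in_p1], labels[~in_p1] - 5)
    return p1, p2

# tiny fake dataset standing in for MNIST: six 1-"pixel" images
labels = np.array([0, 3, 7, 4, 9, 5])
images = np.arange(6)[:, None]
(p1_x, p1_y), (p2_x, p2_y) = split_by_digit(images, labels)
```

The experiment then trains on P1, swaps in fresh output-layer weights, trains on P2, restores the saved P1 output weights, and evaluates on the P1 test split.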
Table 4 shows that the LWTA network remembers what was learned from P1 much better
than sigmoid and ReLU networks, though it is notable that the sigmoid network performs
much worse than both LWTA and ReLU. While the test error values depend on the learning
rate and momentum used, LWTA networks tended to remember better than the ReLU
network by about a factor of two in most cases, and sigmoid networks always performed
much worse. Although standard network architectures are known to suffer from catastrophic
forgetting, we show here, for the first time, not only that ReLU networks are actually quite good in this regard, but also that they are outperformed by LWTA. We expect this
behavior to manifest itself in competitive models in general, and to become more pronounced
with increasingly complex datasets. The neurons encoding specific features in one dataset
are not affected much during training on another dataset, whereas neurons encoding common
features can be reused. Thus, LWTA may be a step forward towards models that do not
forget easily.
7 Analysis of subnetworks
A network with a single LWTA-m layer of N blocks consists of m^N subnetworks which can be
selected and trained for individual examples while training over a dataset. After training,
we expect the subnetworks consisting of active neurons for examples from the same class to
have more neurons in common compared to subnetworks being activated for different classes.
In the case of relatively simple datasets like MNIST, it is possible to examine the number
of common neurons between mean subnetworks which are used for each class. To do this,
we recorded which neurons were active in the layer for each example in a subset of 10,000 examples. For each class, the subnetwork consisting of neurons active for at least 90% of the
examples was designated the representative mean subnetwork, which was then compared to
all other class subnetworks by counting the number of neurons in common.
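The overlap computation can be sketched as follows. Normalizing the count of shared neurons by the size of the union (the Jaccard index) is one plausible reading of "fraction of neurons in common"; it is our choice for illustration, not something stated in the text:

```python
import numpy as np

def mean_subnetwork(active, threshold=0.9):
    """active: (n_examples, n_neurons) boolean winner matrix for one class.
    Returns the neurons active in at least `threshold` of the examples."""
    return active.mean(axis=0) >= threshold

def common_fraction(sub_a, sub_b):
    """Fraction of neurons shared by two mean subnetworks (Jaccard index)."""
    inter = np.logical_and(sub_a, sub_b).sum()
    union = np.logical_or(sub_a, sub_b).sum()
    return inter / union if union else 0.0

# toy example: two classes whose mean subnetworks share only neuron 0
a = mean_subnetwork(np.array([[1, 1, 0], [1, 1, 0], [1, 0, 0]], dtype=bool))
b = mean_subnetwork(np.array([[1, 0, 1], [1, 0, 1]], dtype=bool))
```

Applying `common_fraction` to every pair of class subnetworks yields a matrix like the one shown in Figure 3a.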
Figure 3a shows the fraction of neurons in common between the mean subnetworks of each
pair of digits. Digits that are morphologically similar, such as '3' and '8', have subnetworks with more neurons in common than the subnetworks for digits '1' and '2' or '1' and '5'
which are intuitively less similar. To verify that this subnetwork specialization is a result
of training, we looked at the fraction of common neurons between all pairs of digits for the
Figure 3: (a) Each entry in the matrix denotes the fraction of neurons that a pair of MNIST
digits has in common, on average, in the subnetworks that are most active for each of the
two digit classes. (b) The fraction of neurons in common in the subnetworks of each of the
55 possible digit pairs, before and after training.
same 10000 examples both before and after training (Figure 3b). Clearly, the subnetworks
were much more similar prior to training, and the full network has learned to partition its
parameters to reflect the structure of the data.
8 Conclusion and future research directions
Our LWTA networks automatically self-modularize into multiple parameter-sharing subnetworks responding to different input representations. Without significant degradation of
state-of-the-art results on digit recognition and sentiment analysis, LWTA networks also
avoid catastrophic forgetting, thus retaining useful representations of one set of inputs even
after being trained to classify another. This has implications for continual learning agents
that should not forget representations of parts of their environment when being exposed to
other parts. We hope to explore many promising applications of these ideas in the future.
Acknowledgments
This research was funded by EU projects WAY (FP7-ICT-288551), NeuralDynamics (FP7ICT-270247), and NASCENCE (FP7-ICT-317662); additional funding from ArcelorMittal.
References
[1] Per Anderson, Gary N. Gross, Terje Lømo, and Ola Sveen. Participation of inhibitory and
excitatory interneurones in the control of hippocampal cortical output. In Mary A.B. Brazier,
editor, The Interneuron, volume 11. University of California Press, Los Angeles, 1969.
[2] John Carew Eccles, Masao Ito, and János Szentágothai. The cerebellum as a neuronal machine.
Springer-Verlag New York, 1967.
[3] Costas Stefanis. Interneuronal mechanisms in the cortex. In Mary A.B. Brazier, editor, The
Interneuron, volume 11. University of California Press, Los Angeles, 1969.
[4] Stephen Grossberg. Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 52:213–257, 1973.
[5] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks:
The sequential learning problem. The Psychology of Learning and Motivation, 24:109–164,
1989.
[6] Gail A. Carpenter and Stephen Grossberg. The art of adaptive pattern recognition by a
self-organising neural network. Computer, 21(3):77–88, 1988.
[7] Mark B. Ring. Continual Learning in Reinforcement Environments. PhD thesis, Department
of Computer Sciences, The University of Texas at Austin, Austin, Texas 78712, August 1994.
[8] Samuel A. Ellias and Stephen Grossberg. Pattern formation, contrast control, and oscillations
in the short term memory of shunting on-center off-surround networks. Bio. Cybernetics, 1975.
[9] Brad Ermentrout. Complex dynamics in winner-take-all neural nets with slow inhibition.
Neural Networks, 5(1):415–431, 1992.
[10] Christoph von der Malsburg. Self-organization of orientation sensitive cells in the striate cortex.
Kybernetik, 14(2):85–100, December 1973.
[11] Teuvo Kohonen. Self-organized formation of topologically correct feature maps. Biological
cybernetics, 43(1):59–69, 1982.
[12] Risto Mikkulainen, James A. Bednar, Yoonsuck Choe, and Joseph Sirosh. Computational maps
in the visual cortex. Springer Science+ Business Media, 2005.
[13] Dale K. Lee, Laurent Itti, Christof Koch, and Jochen Braun. Attention activates winner-take-all competition among visual filters. Nature Neuroscience, 2(4):375–81, April 1999.
[14] Matthias Oster and Shih-Chii Liu. Spiking inputs to a winner-take-all network. In Proceedings
of NIPS, volume 18. MIT; 1998, 2006.
[15] John P. Lazzaro, Sylvie Ryckebusch, Misha Anne Mahowald, and Caver A. Mead. Winnertake-all networks of O(n) complexity. Technical report, 1988.
[16] Giacomo Indiveri. Modeling selective attention using a neuromorphic analog VLSI device.
Neural Computation, 12(12):2857–2880, 2000.
[17] Wolfgang Maass. Neural computation with winner-take-all as the only nonlinear operation. In
Proceedings of NIPS, volume 12, 1999.
[18] Wolfgang Maass. On the computational power of winner-take-all. Neural Computation,
12:2519–2535, 2000.
[19] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio.
Maxout networks. In Proceedings of the ICML, 2013.
[20] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R.
Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors,
2012. arXiv:1207.0580.
[21] Juergen Schmidhuber. A local learning algorithm for dynamic feedforward and recurrent
networks. Connection Science, 1(4):403?412, 1989.
[22] Rupesh K. Srivastava, Bas R. Steunebrink, and Juergen Schmidhuber. First experiments with
powerplay. Neural Networks, 2013.
[23] Maximilian Riesenhuber and Tomaso Poggio. Hierarchical models of object recognition in
cortex. Nature Neuroscience, 2(11), 1999.
[24] Alex Krizhevsky, Ilya Sutskever, and Goeffrey E. Hinton. Imagenet classification with deep
convolutional neural networks. In Proceedings of NIPS, pages 1–9, 2012.
[25] Dan Ciresan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for
image classification. Proceeedings of the CVPR, 2012.
[26] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the ICML, number 3, 2010.
[27] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier networks. In AISTATS, volume 15, pages 315–323, 2011.
[28] George E. Dahl, Tara N. Sainath, and Geoffrey E. Hinton. Improving Deep Neural Networks
for LVCSR using Rectified Linear Units and Dropout. In Proceedings of ICASSP, 2013.
[29] Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural
network acoustic models. In Proceedings of the ICML, 2013.
[30] Tijmen Tieleman. Gnumpy: an easy way to use GPU boards in Python. Department of
Computer Science, University of Toronto, 2010.
[31] Volodymyr Mnih. CUDAMat: a CUDA-based matrix class for Python. Department of Computer Science, University of Toronto, Tech. Rep. UTML TR, 4, 2009.
[32] Patrice Y. Simard, Dave Steinkraus, and John C. Platt. Best practices for convolutional
neural networks applied to visual document analysis. In International Conference on Document
Analysis and Recognition (ICDAR), 2003.
[33] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning
applied to document recognition. Proceedings of the IEEE, 1998.
[34] Marc?Aurelio Ranzato, Christopher Poultney, Sumit Chopra, and Yann LeCun. Efficient learning of sparse representations with an energy-based model. In Proceedings of NIPS, 2007.
[35] Matthew D. Zeiler and Rob Fergus. Stochastic pooling for regularization of deep convolutional
neural networks. In Proceedings of the ICLR, 2013.
[36] Kevin Jarrett, Koray Kavukcuoglu, Marc?Aurelio Ranzato, and Yann LeCun. What is the best
multi-stage architecture for object recognition? In Proc. of the ICCV, pages 2146–2153, 2009.
[37] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes and
blenders: Domain adaptation for sentiment classification. Annual Meeting-ACL, 2007.
[38] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the ICML, number 1, 2011.
Principles of Risk Minimization for Learning Theory
V. Vapnik
AT &T Bell Laboratories
Holmdel, NJ 07733, USA
Abstract
Learning is posed as a problem of function estimation, for which two principles of solution are considered: empirical risk minimization and structural
risk minimization. These two principles are applied to two different statements of the function estimation problem: global and local. Systematic
improvements in prediction power are illustrated in application to zip-code
recognition.
1 INTRODUCTION
The structure of the theory of learning differs from that of most other theories for
applied problems. The search for a solution to an applied problem usually requires
the three following steps:
1. State the problem in mathematical terms.
2. Formulate a general principle to look for a solution to the problem.
3. Develop an algorithm based on such general principle.
The first two steps of this procedure offer in general no major difficulties; the
third step requires most efforts, in developing computational algorithms to solve
the problem at hand.
In the case of learning theory, however, many algorithms have been developed, but
we still lack a clear understanding of the mathematical statement needed to describe
the learning procedure, and of the general principle on which the search for solutions
should be based. This paper is devoted to these first two steps, the statement of
the problem and the general principle of solution.
The paper is organized as follows. First, the problem of function estimation is
stated, and two principles of solution are discussed: the principle of empirical risk
minimization and the principle of structural risk minimization. A new statement
is then given: that of local estimation of function, to which the same principles are
applied. An application to zip-code recognition is used to illustrate these ideas.
2 FUNCTION ESTIMATION MODEL
The learning process is described through three components:
1. A generator of random vectors x, drawn independently from a fixed but unknown
distribution P(x).
2. A supervisor which returns an output vector y to every input vector x, according
to a conditional distribution function P(y|x), also fixed but unknown.
3. A learning machine capable of implementing a set of functions f(x, w), w ∈ W.
The problem of learning is that of choosing from the given set of functions the one
which approximates best the supervisor's response. The selection is based on a
training set of ℓ independent observations:

(x₁, y₁), ..., (x_ℓ, y_ℓ). (1)
The formulation given above implies that learning corresponds to the problem of
function approximation.
3 PROBLEM OF RISK MINIMIZATION
In order to choose the best available approximation to the supervisor's response,
we measure the loss or discrepancy L(y, f(x, w)) between the response y of the
supervisor to a given input x and the response f(x, w) provided by the learning
machine. Consider the expected value of the loss, given by the risk functional

R(w) = ∫ L(y, f(x, w)) dP(x, y). (2)
The goal is to minimize the risk functional R(w) over the class of functions
f(x, w), w ∈ W. But the joint probability distribution P(x, y) = P(y|x)P(x)
is unknown and the only available information is contained in the training set (1).
4 EMPIRICAL RISK MINIMIZATION
In order to solve this problem, the following induction principle is proposed: the
risk functional R( w) is replaced by the empirical risk functional
E(w) = (1/ℓ) Σ_{i=1}^{ℓ} L(y_i, f(x_i, w)). (3)
constructed on the basis of the training set (1). The induction principle of empirical
risk minimization (ERM) assumes that the function f(x, w*), which minimizes E(w)
over the set w ∈ W, results in a risk R(w*) which is close to its minimum.
This induction principle is quite general; many classical methods such as least square
or maximum likelihood are realizations of the ERM principle.
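As a concrete illustration of the ERM principle (this example is not from the paper; the data and the linear model are invented), least squares is ERM with the quadratic loss L(y, f(x, w)) = (y − f(x, w))² over the class f(x, w) = wx:

```python
# ERM sketch: least squares is empirical risk minimization with quadratic
# loss over the linear class f(x, w) = w * x.  Data invented for illustration.

def empirical_risk(w, data):
    """E(w) = (1/l) * sum_i L(y_i, f(x_i, w)) with quadratic loss."""
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

def erm_linear(data):
    """Closed-form minimizer of E(w): w* = sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, y in data)
    return sxy / sxx

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
w_star = erm_linear(data)
# w* minimizes the empirical risk over all w.
```

The minimizer depends only on the training set, which is exactly why the consistency questions below arise.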
The evaluation of the soundness of the ERM principle requires answers to the following two questions:
1. Is the principle consistent? (Does R(w*) converge to its minimum value on the
set w ∈ W when ℓ → ∞?)
2. How fast is the convergence as ℓ increases?
The answers to these two questions have been shown (Vapnik et al., 1989) to be
equivalent to the answers to the following two questions:
1. Does the empirical risk E(w) converge uniformly to the actual risk R(w) over
the full set f(x, w), w ∈ W? Uniform convergence is defined as

Prob{ sup_{w∈W} |R(w) − E(w)| > ε } → 0 as ℓ → ∞. (4)
2. What is the rate of convergence?
It is important to stress that uniform convergence (4) for the full set of functions is
a necessary and sufficient condition for the consistency of the ERM principle.
5 VC-DIMENSION OF THE SET OF FUNCTIONS
The theory of uniform convergence of empirical risk to actual risk developed in
the 70's and 80's, includes a description of necessary and sufficient conditions as
well as bounds for the rate of convergence (Vapnik, 1982). These bounds, which
are independent of the distribution function P(x, y), are based on a quantitative
measure of the capacity of the set of functions implemented by the learning machine:
the VC-dimension of the set.
For simplicity, these bounds will be discussed here only for the case of binary pattern recognition, for which y ∈ {0, 1} and f(x, w), w ∈ W is the class of indicator
functions. The loss function takes only two values: L(y, f(x, w)) = 0 if y = f(x, w)
and L(y, f(x, w)) = 1 otherwise. In this case, the risk functional (2) is the probability of error, denoted by P(w). The empirical risk functional (3), denoted by
v(w), is the frequency of error in the training set.
The VC-dimension of a set of indicator functions is the maximum number h of
vectors which can be shattered in all possible 2^h ways using functions in the set.
For instance, h = n + 1 for linear decision rules in n-dimensional space, since they
can shatter at most n + 1 points.
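The shattering definition can be checked mechanically. The sketch below (an invented, finite grid of classifiers; not from the paper) verifies that linear decision rules on the real line (n = 1) shatter 2 points but not 3, consistent with h = n + 1:

```python
def shatters(points, classifiers):
    """True if every 0/1 labeling of `points` is realized by some classifier."""
    achieved = {tuple(c(x) for x in points) for c in classifiers}
    return len(achieved) == 2 ** len(points)

# Linear decision rules on the line: f(x) = 1 if w1*x + w0 > 0 else 0.
# A small grid of (w0, w1) values suffices for this demonstration.
classifiers = [lambda x, a=w0, b=w1: 1 if b * x + a > 0 else 0
               for w0 in (-2.5, -1.5, -0.5, 0.5, 1.5, 2.5)
               for w1 in (-1, 1)]

can_shatter_two = shatters([0.0, 1.0], classifiers)            # all 4 labelings
cannot_shatter_three = not shatters([0.0, 1.0, 2.0], classifiers)
```

Enlarging the grid cannot help with three points: every rule of this form is monotone in x, so the labeling (1, 0, 1) is never realized.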
6 RATES OF UNIFORM CONVERGENCE
The notion of VC-dimension provides a bound to the rate of uniform convergence.
For a set of indicator functions with VC-dimension h, the following inequality holds:
Prob{ sup_{w∈W} |P(w) − v(w)| > ε } < (2ℓe/h)^h exp{−ε²ℓ}. (5)
It then follows that with probability 1 − η, simultaneously for all w ∈ W,

P(w) < v(w) + C₀(ℓ/h, η), (6)

with confidence interval

C₀(ℓ/h, η) = √( [h(ln(2ℓ/h) + 1) − ln η] / ℓ ). (7)
This important result provides a bound to the actual risk P(w) for all w ∈ W,
including the w* which minimizes the empirical risk v(w).
The deviation |P(w) − v(w)| in (5) is expected to be maximum for P(w) close
to 1/2, since it is this value of P(w) which maximizes the error variance σ²(w) =
P(w)(1 − P(w)). The worst case bound for the confidence interval (7) is thus
likely to be controlled by the worst decision rule. The bound (6) is achieved for the
worst case P(w) = 1/2, but not for small P(w), which is the case of interest. A
uniformly good approximation to P(w) follows from considering
Prob{ sup_{w∈W} (P(w) − v(w)) / σ(w) > ε }. (8)
The variance of the relative deviation (P(w) − v(w))/σ(w) is now independent of w.
A bound for the probability (8), if available, would yield a uniformly good bound
for actual risks for all P(w).
Such a bound has not yet been established. But for P(w) ≪ 1, the approximation
σ(w) ≈ √P(w) is true, and the following inequality holds:

Prob{ sup_{w∈W} (P(w) − v(w)) / √P(w) > ε } < (2ℓe/h)^h exp{−ε²ℓ/4}. (9)
It then follows that with probability 1 − η, simultaneously for all w ∈ W,

P(w) < v(w) + C₁(ℓ/h, v(w), η), (10)

with confidence interval

C₁(ℓ/h, v(w), η) = 2 ( [h(ln(2ℓ/h) + 1) − ln η] / ℓ ) ( 1 + √( 1 + v(w)ℓ / [h(ln(2ℓ/h) + 1) − ln η] ) ). (11)

Note that the confidence interval now depends on v(w), and that for v(w) = 0 it
reduces to

C₁(ℓ/h, 0, η) = 4C₀²(ℓ/h, η),

which provides a more precise bound for real case learning.
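Numerically, the confidence interval C₀ in (7) shrinks as the ratio ℓ/h grows; the sketch below (parameter values invented for illustration) makes that concrete:

```python
import math

def c0(l, h, eta):
    """Confidence interval (7): sqrt((h*(ln(2l/h) + 1) - ln(eta)) / l)."""
    return math.sqrt((h * (math.log(2 * l / h) + 1) - math.log(eta)) / l)

# The bound (6) reads P(w) < v(w) + c0(l, h, eta): the guaranteed excess
# over the empirical risk shrinks as the number of samples per VC-dimension
# unit grows.
loose = c0(l=1000, h=100, eta=0.05)     # few samples per unit of capacity
tight = c0(l=100000, h=100, eta=0.05)   # many samples per unit of capacity
```

When ℓ/h is small the interval dominates the bound, which is the regime that motivates the next section.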
7 STRUCTURAL RISK MINIMIZATION
The method of ERM can be theoretically justified by considering the inequalities
(6) or (10). When ℓ/h is large, the confidence intervals C₀ or C₁ become small, and
can be neglected. The actual risk is then bound by only the empirical risk, and the
probability of error on the test set can be expected to be small when the frequency
of error in the training set is small.
However, if ℓ/h is small, the confidence interval cannot be neglected, and even
v(w) = 0 does not guarantee a small probability of error. In this case the minimization of P(w) requires a new principle, based on the simultaneous minimization of
v(w) and the confidence interval. It is then necessary to control the VC-dimension
of the learning machine.
To do this, we introduce a nested structure of subsets S_p = {f(x, w), w ∈ W_p}, such
that

S₁ ⊂ S₂ ⊂ ... ⊂ S_n.

The corresponding VC-dimensions of the subsets satisfy

h₁ < h₂ < ... < h_n.
The principle of structural risk minimization (SRM) requires a two-step process: the
empirical risk has to be minimized for each element of the structure. The optimal
element S* is then selected to minimize the guaranteed risk, defined as the sum
of the empirical risk and the confidence interval. This process involves a trade-off:
as h increases the minimum empirical risk decreases, but the confidence interval
increases.
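A toy sketch of the SRM selection step (all risk values and VC-dimensions below are invented for illustration): each structure element reports its minimized empirical risk, and the element with the smallest guaranteed risk, empirical risk plus the confidence interval C₀ of (7), is selected:

```python
import math

def c0(l, h, eta):
    """Confidence interval (7) for a structure element of VC-dimension h."""
    return math.sqrt((h * (math.log(2 * l / h) + 1) - math.log(eta)) / l)

def srm_select(empirical_risks, vc_dims, l, eta=0.05):
    """Return the index minimizing guaranteed risk = v(w) + C0(l/h, eta)."""
    guaranteed = [v + c0(l, h, eta) for v, h in zip(empirical_risks, vc_dims)]
    best = min(range(len(guaranteed)), key=guaranteed.__getitem__)
    return best, guaranteed

risks = [0.30, 0.12, 0.08, 0.07]   # empirical risk falls as h grows...
dims = [5, 20, 100, 500]           # ...but the confidence interval widens
best, guaranteed = srm_select(risks, dims, l=2000)
```

With these invented numbers the second element (h = 20) wins the trade-off: its empirical risk is already low while its confidence interval is still moderate.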
8 EXAMPLES OF STRUCTURES FOR NEURAL NETS
The general principle of SRM can be implemented in many different ways. Here
we consider three different examples of structures built for the set of functions
implemented by a neural network.
1. Structure given by the architecture of the neural network. Consider an
ensemble of fully connected neural networks in which the number of units in one of
the hidden layers is monotonically increased. The set of implementable functions
makes a structure as the number of hidden units is increased.
2. Structure given by the learning procedure. Consider the set of functions
S = {f(x, w), w ∈ W} implementable by a neural net of fixed architecture. The
parameters {w} are the weights of the neural network. A structure is introduced
through S_p = {f(x, w), ‖w‖ < C_p} and C₁ < C₂ < ... < C_n. For a convex
loss function, the minimization of the empirical risk within the element S_p of the
structure is achieved through the minimization of

E(w, γ_p) = (1/ℓ) Σ_{i=1}^{ℓ} L(y_i, f(x_i, w)) + γ_p ‖w‖²

with appropriately chosen Lagrange multipliers γ₁ > γ₂ > ... > γ_n. The well-known
"weight decay" procedure refers to the minimization of this functional.
3. Structure given by preprocessing. Consider a neural net with fixed architecture. The input representation is modified by a transformation z = K(x, β),
where the parameter β controls the degree of the degeneracy introduced by this
transformation (for instance β could be the width of a smoothing kernel).
A structure is introduced in the set of functions S = {f(K(x, β), w), w ∈ W}
through β > C_p and C₁ > C₂ > ... > C_n.

9 PROBLEM OF LOCAL FUNCTION ESTIMATION
The problem of learning has been formulated as the problem of selecting from the
class of functions f(x, w), w ∈ W that which provides the best available approximation to the response of the supervisor. Such a statement of the learning problem
implies that a unique function f(x, w*) will be used for prediction over the full input
space X. This is not necessarily a good strategy: the set f(x, w), w ∈ W might
not contain a good predictor for the full input space, but might contain functions
In order to formulate the learning problem as a problem of local function approximation, consider a kernel K(x − x₀, b) ≥ 0 which selects a region of input space of
width b, centered at x₀. For example, consider the rectangular kernel

K_r(x − x₀, b) = 1 if |x − x₀| < b, and 0 otherwise,

and a more general continuous kernel, such as the Gaussian

K_g(x − x₀, b) = exp{ −(x − x₀)² / b² }.
The goal is to minimize the local risk functional

R(w, b, x₀) = ∫ L(y, f(x, w)) [ K(x − x₀, b) / K(x₀, b) ] dP(x, y). (12)

The normalization is defined by

K(x₀, b) = ∫ K(x − x₀, b) dP(x). (13)
The local risk functional (12) is to be minimized over the class of functions
f(x, w), w ∈ W and over all possible neighborhoods b ∈ (0, ∞) centered at x₀.
As before, the joint probability distribution P(x, y) is unknown, and the only available information is contained in the training set (1).
10 EMPIRICAL RISK MINIMIZATION FOR LOCAL ESTIMATION
In order to solve this problem, the following induction principle is proposed: for
fixed b, the local risk functional (12) is replaced by the empirical risk functional
E(w, b, x₀) = (1/ℓ) Σ_{i=1}^{ℓ} L(y_i, f(x_i, w)) K(x_i − x₀, b) / K(x₀, b), (14)
constructed on the basis of the training set. The empirical risk functional (14) is
to be minimized over w ∈ W. In the simplest case, the class of functions is that of
constant functions, f(x, w) = C(w). Consider the following examples:
1. K-Nearest Neighbors Method: For the case of binary pattern recognition, the class of constant indicator functions contains only two functions: either
f(x, w) = 0 for all x, or f(x, w) = 1 for all x. The minimization of the empirical
risk functional (14) with the rectangular kernel K_r(x − x₀, b) leads to the K-nearest
neighbors algorithm.
2. Watson-Nadaraya Method: For the case y ∈ R, the class of constant functions contains an infinite number of elements, f(x, w) = C(w), C(w) ∈ R. The
minimization of the empirical risk functional (14) for a general kernel and a quadratic
loss function L(y, f(x, w)) = (y − f(x, w))² leads to the estimator

f(x₀) = Σ_{i=1}^{ℓ} y_i K(x_i − x₀, b) / Σ_{i=1}^{ℓ} K(x_i − x₀, b),

which defines the Watson-Nadaraya algorithm.
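A runnable sketch of the Watson-Nadaraya estimator with the Gaussian kernel (toy data invented for illustration):

```python
import math

def watson_nadaraya(x0, data, b):
    """f(x0) = sum_i y_i K(x_i - x0, b) / sum_i K(x_i - x0, b),
    with the Gaussian kernel K(u, b) = exp(-u**2 / b**2)."""
    weights = [math.exp(-((x - x0) ** 2) / b ** 2) for x, _ in data]
    return sum(w * y for w, (_, y) in zip(weights, data)) / sum(weights)

# The estimate is a locally weighted average of the y_i, so it always lies
# between min(y_i) and max(y_i); points far from x0 get tiny weights.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
est = watson_nadaraya(1.5, data, b=1.0)
```

Note that the width b is fixed here, which is exactly the limitation the following paragraphs address.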
These classical methods minimize (14) with a fixed b over the class of constant
functions. The supervisor's response in the vicinity of x₀ is thus approximated by a
constant, and the characteristic size b of the neighborhood is kept fixed, independent
of x₀.
A truly local algorithm would adjust the parameter b to the characteristics of the
region in input space centered at x₀. Further improvement is possible by allowing
for a richer class of predictor functions f(x, w) within the selected neighborhood.
The SRM principle for local estimation provides a tool for incorporating these two
features.
11 STRUCTURAL RISK MINIMIZATION FOR LOCAL ESTIMATION
The arguments that lead to the inequality (6) for the risk functional (2) can be
extended to the local risk functional (12), to obtain the following result: with
probability 1 − η, and simultaneously for all w ∈ W and all b ∈ (0, ∞),

R(w, b, x₀) < E(w, b, x₀) + C₂(ℓ/h, b, η). (15)

The confidence interval C₂(ℓ/h, b, η) reduces to C₀(ℓ/h, η) in the b → ∞ limit.
As before, a nested structure is introduced in the class of functions, and the empirical
risk (14) is minimized with respect to both w ∈ W and b ∈ (0, ∞) for each element
of the structure. The optimal element is then selected to minimize the guaranteed
risk, defined as the sum of the empirical risk and the confidence interval. For fixed
b this process involves an already discussed trade-off: as h increases, the empirical
risk decreases but the confidence interval increases. A new trade-off appears by
varying b at fixed h: as b increases the empirical risk increases, but the confidence
interval decreases. The use of b as an additional free parameter allows us to find
deeper minima of the guaranteed risk.
12 APPLICATION TO ZIP-CODE RECOGNITION
We now discuss results for the recognition of the handwritten and printed digits in
the US Postal database, containing 9709 training examples and 2007 testing examples. Human recognition of this task results in an approximately 2.5% prediction
error (Sackinger et al., 1991).
The learning machine considered here is a five-layer neural network with shared
weights and limited receptive fields. When trained with a back-propagation algorithm for the minimization of the empirical risk, the network achieves 5.1% prediction error (Le Cun et al., 1990).
Further performance improvement with the same network architecture has required
the introduction of a new induction principle. Methods based on SRM have achieved
prediction errors of 4.1% (training based on a double-back-propagation algorithm
which incorporates a special form of weight decay (Drucker, 1991)) and 3.95% (using
a smoothing transformation in input space (Simard, 1991)).
The best result achieved so far, of 3.3% prediction error, is based on the use of the
SRM for local estimation of the predictor function (Bottou, 1991).
It is obvious from these results that dramatic gains cannot be achieved through
minor algorithmic modifications, but require the introduction of new principles.
Acknowledgements
I thank the members of the Neural Networks research group at Bell Labs, Holmdel,
for supportive and useful discussions. Sara Solla, Leon Bottou, and Larry Jackel
provided invaluable help to render my presentation more clear and accessible to the
neural networks community.
References
V. N. Vapnik (1982), Estimation of Dependencies Based on Empirical Data,
Springer-Verlag (New York).
V. N. Vapnik and A. Ja. Chervonenkis (1989) 'Necessary and sufficient conditions
for consistency of the method of empirical risk minimization' [in Russian], Yearbook of the Academy of Sciences of the USSR on Recognition, Classification, and
Forecasting, 2, 217-249, Nauka (Moscow) (English translation in preparation).
E. Sackinger and J. Bromley (1991), private communication.
Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and
L. D. Jackel (1990) 'Handwritten digit recognition with a back-propagation network', Neural Information Processing Systems 2, 396-404, ed. by D. S. Touretzky,
Morgan Kaufmann (California).
H. Drucker (1991), private communication.
P. Simard (1991), private communication.
L. Bottou (1991), private communication.
RNADE: The real-valued neural autoregressive
density-estimator
Benigno Uria and Iain Murray
School of Informatics
University of Edinburgh
{b.uria,i.murray}@ed.ac.uk
Hugo Larochelle
Département d'informatique
Université de Sherbrooke
[email protected]
Abstract
We introduce RNADE, a new model for joint density estimation of real-valued
vectors. Our model calculates the density of a datapoint as the product of onedimensional conditionals modeled using mixture density networks with shared
parameters. RNADE learns a distributed representation of the data, while having
a tractable expression for the calculation of densities. A tractable likelihood
allows direct comparison with other methods and training by standard gradientbased optimizers. We compare the performance of RNADE on several datasets of
heterogeneous and perceptual data, finding it outperforms mixture models in all
but one case.
1 Introduction
Probabilistic approaches to machine learning involve modeling the probability distributions over large
collections of variables. The number of parameters required to describe a general discrete distribution
grows exponentially in its dimensionality, so some structure or regularity must be imposed, often
through graphical models [e.g. 1]. Graphical models are also used to describe probability densities
over collections of real-valued variables.
Often parts of a task-specific probabilistic model are hard to specify, and are learned from data using
generic models. For example, the natural probabilistic approach to image restoration tasks (such as
denoising, deblurring, inpainting) requires a multivariate distribution over uncorrupted patches of
pixels. It has long been appreciated that large classes of densities can be estimated consistently by
kernel density estimation [2], and a large mixture of Gaussians can closely represent any density. In
practice, a parametric mixture of Gaussians seems to fit the distribution over patches of pixels and
obtains state-of-the-art restorations [3]. It may not be possible to fit small image patches significantly
better, but alternative models could further test this claim. Moreover, competitive alternatives to
mixture models might improve performance in other applications that have insufficient training data
to fit mixture models well.
Restricted Boltzmann Machines (RBMs), which are undirected graphical models, fit samples of
binary vectors from a range of sources better than mixture models [4, 5]. One explanation is that
RBMs form a distributed representation: many hidden units are active when explaining an observation,
which is a better match to most real data than a single mixture component. Another explanation is
that RBMs are mixture models, but the number of components is exponential in the number of hidden
units. Parameter tying among components allows these more flexible models to generalize better
from small numbers of examples. There are two practical difficulties with RBMs: the likelihood of
the model must be approximated, and samples can only be drawn from the model approximately
by Gibbs sampling. The Neural Autoregressive Distribution Estimator (NADE) overcomes these
difficulties [5]. NADE is a directed graphical model, or feed-forward neural network, initially derived
as an approximation to an RBM, but then fitted as a model in its own right.
In this work we introduce the Real-valued Neural Autoregressive Density-Estimator (RNADE), an extension
of NADE. An autoregressive model expresses the density of a vector as an ordered product of
one-dimensional distributions, each conditioned on the values of previous dimensions in the (perhaps
arbitrary) ordering. We use the parameter sharing previously introduced by NADE, combined with
mixture density networks [6], an existing flexible approach to modeling real-valued distributions with
neural networks. By construction, the density of a test point under RNADE is cheap to compute,
unlike RBM-based models. The neural network structure provides a flexible way to alter the mean
and variance of a mixture component depending on context, potentially modeling non-linear or
heteroscedastic data with fewer components than unconstrained mixture models.
2 Background: Autoregressive models
Both NADE [5] and our RNADE model are based on the chain rule (or product rule), which factorizes
any distribution over a vector of variables into a product of terms: p(x) = ∏_{d=1}^{D} p(x_d | x_{<d}),
where x_{<d} denotes all attributes preceding x_d in a fixed arbitrary ordering of the attributes. This
factorization corresponds to a Bayesian network where every variable is a parent of all variables after
it. As this model assumes no conditional independences, it says nothing about the distribution in
itself. However, the (perhaps arbitrary) ordering we choose will matter if the form of the conditionals
is constrained. If we assume tractable parametric forms for each of the conditional distributions, then
the joint distribution can be computed for any vector, and the parameters of the model can be locally
fitted to a penalized maximum likelihood objective using any gradient-based optimizer.
For binary data, each conditional distribution can be modeled with logistic regression, which is called
a fully visible sigmoid belief network (FVSBN) [7]. Neural networks can also be used for each
binary prediction task [8]. The neural autoregressive distribution estimator (NADE) also uses neural
networks for each conditional, but with parameter sharing inspired by a mean-field approximation to
Restricted Boltzmann Machines [5]. In detail, each conditional is given by a feed-forward neural
network with one hidden layer, h_d ∈ R^H:

p(x_d = 1 | x_{<d}) = sigm(v_d^T h_d + b_d), where h_d = sigm(W_{·,<d} x_{<d} + c), (1)

where v_d ∈ R^H, b_d ∈ R, c ∈ R^H, and W ∈ R^{H×(D−1)} are neural network parameters, and sigm
represents the logistic sigmoid function 1/(1 + e^{−x}).
The weights between the inputs and the hidden units for each neural network are tied: W_{·,<d} is
the first d − 1 columns of a shared weight matrix W. This parameter sharing reduces the total
number of parameters from quadratic in the number of input dimensions to linear, lessening the
need for regularisation. Computing the probability of a datapoint can also be done in time linear in
dimensionality, O(DH), by sharing the computation when calculating the hidden activation of each
neural network (a_d = W_{·,<d} x_{<d} + c):

a_1 = c,    a_{d+1} = a_d + x_d W_{·,d}. (2)
When approximating Restricted Boltzmann Machines, the output weights {v_d} in (1) were originally
tied to the input weights W. Untying these weights gave better statistical performance on a range of
tasks, with negligible extra computational cost [5].
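A minimal NumPy sketch (random toy parameters, sizes chosen arbitrarily; not the authors' code) of how the recurrence (2) shares the hidden-activation computation across all D conditionals, giving an O(DH) exact log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 5, 4                       # toy sizes
W = rng.normal(size=(H, D - 1))   # shared input-to-hidden weights
c = rng.normal(size=H)
V = rng.normal(size=(D, H))       # per-dimension output weights v_d
b = rng.normal(size=D)
x = rng.integers(0, 2, size=D).astype(float)   # one binary datapoint

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

log_p = 0.0
a = c.copy()                      # a_1 = c
for d in range(D):
    h = sigm(a)                                # hidden state h_d
    p = sigm(V[d] @ h + b[d])                  # p(x_d = 1 | x_<d)
    log_p += np.log(p if x[d] == 1 else 1 - p)
    if d < D - 1:
        a = a + x[d] * W[:, d]                 # a_{d+1} = a_d + x_d W_{:,d}

# log_p is the exact log-likelihood of x under this randomly initialized NADE.
```

The key point is that each a_{d+1} is a rank-one update of a_d, so the D hidden vectors cost the same as one dense matrix-vector product overall.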
NADE has recently been extended to count data [9]. The possibility of extending generic neural
autoregressive models to continuous data has been mentioned [8, 10], but has not been previously
explored to our knowledge. An autoregressive mixture of experts with scale mixture model experts has
been developed as part of a sophisticated multi-resolution model specifically for natural images [11].
In more general work, Gaussian processes have been used to model the conditional distributions of a
fully visible Bayesian network [12]. However, these "Gaussian process networks" cannot deal with
multimodal conditional distributions or with large datasets (currently $\approx 10^4$ points would require
further approximation). In the next section we propose a more flexible and scalable approach.
3 Real-valued neural autoregressive density estimators
The original derivation of NADE suggests deriving a real-valued version from a mean-field approximation to the conditionals of a Gaussian-RBM. However, we discarded this approach because the
limitations of the Gaussian-RBM are well documented [13, 14]: its isotropic conditional noise model
does not give competitive density estimates. Approximating a more capable RBM model, such as the
mean-covariance RBM [15] or the spike-and-slab RBM [16], might be a fruitful future direction.
The main characteristic of NADE is the tying of its input-to-hidden weights. The output layer was
"untied" from the approximation to the RBM to give the model greater flexibility. Taking this idea
further, we add more parameters to NADE to represent each one-dimensional conditional distribution
with a mixture of Gaussians instead of a Bernoulli distribution. That is, the outputs are mixture
density networks [6], with a shared hidden layer, using the same parameter tying as NADE.
Thus, our Real-valued Neural Autoregressive Density-Estimator or RNADE model represents the
probability density of a vector as:
$$p(x) = \prod_{d=1}^{D} p(x_d \mid x_{<d}) \quad \text{with} \quad p(x_d \mid x_{<d}) = p_{\mathcal{M}}(x_d \mid \theta_d), \tag{3}$$
where $p_{\mathcal{M}}$ is a mixture of Gaussians with parameters $\theta_d$. The mixture model parameters are calculated
using a neural network with all of the preceding dimensions, $x_{<d}$, as inputs. We now give the details.
RNADE computes the same hidden unit activations, $a_d$, as before using (2). As discussed by Bengio
[10], as an RNADE (or a NADE) with sigmoidal units progresses across the input dimensions
$d \in \{1, \ldots, D\}$, its hidden units will tend to become more and more saturated, due to their input
being a weighted sum of an increasing number of inputs. Bengio proposed alleviating this effect by
rescaling the hidden units' activation by a free factor $\rho_d$ at each step, making the hidden unit values
$$h_d = \mathrm{sigm}(\rho_d a_d). \tag{4}$$
Learning these extra rescaling parameters worked slightly better, and all of our experiments use them.
Previous work on neural networks with real-valued outputs has found that rectified linear units can
work better than sigmoidal non-linearities [17]. The hidden values for rectified linear units are:
$$h_d = \begin{cases} \rho_d a_d & \text{if } \rho_d a_d > 0 \\ 0 & \text{otherwise.} \end{cases} \tag{5}$$
In preliminary experiments we found that these hidden units worked better than sigmoidal units in
RNADE, and used them throughout (except for an example result with sigmoidal units in Table 2).
Finally, the mixture of Gaussians parameters for the $d$-th conditional, $\theta_d = \{\alpha_d, \mu_d, \sigma_d\}$, are set by:
$$\alpha_d = \mathrm{softmax}\left(V_d^{\alpha\top} h_d + b_d^\alpha\right) \qquad K \text{ mixing fractions,} \tag{6}$$
$$\mu_d = V_d^{\mu\top} h_d + b_d^\mu \qquad K \text{ component means,} \tag{7}$$
$$\sigma_d = \exp\left(V_d^{\sigma\top} h_d + b_d^\sigma\right) \qquad K \text{ component standard deviations,} \tag{8}$$
where the free parameters $V_d^\alpha$, $V_d^\mu$, $V_d^\sigma$ are $H \times K$ matrices, and $b_d^\alpha$, $b_d^\mu$, $b_d^\sigma$ are vectors of size $K$. The
softmax [18] ensures the mixing fractions are positive and sum to one, and the exponential ensures the
standard deviations are positive.
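The conditional parameterization in (6)-(8), together with the incremental activations of (2) and the rectified-linear units of (5), can be sketched end-to-end in a toy NumPy implementation. Parameter shapes and names here are illustrative assumptions, not the released RNADE code.

```python
import numpy as np

def rnade_mog_log_density(x, W, c, rho, Va, Vm, Vs, ba, bm, bs):
    """Log-density of a real vector under a toy RNADE with MoG conditionals.

    W is H x D (tied input-to-hidden weights), c the hidden bias, rho the
    per-step rescaling factors. Va/Vm/Vs are lists of H x K output matrices
    and ba/bm/bs lists of K-vectors, one per dimension d.
    """
    D = len(x)
    a = c.copy()
    logp = 0.0
    for d in range(D):
        h = np.maximum(rho[d] * a, 0.0)            # rectified linear units, eq. (5)
        za = Va[d].T @ h + ba[d]
        log_alpha = za - np.logaddexp.reduce(za)   # log softmax -> mixing fractions
        mu = Vm[d].T @ h + bm[d]                   # component means
        sigma = np.exp(Vs[d].T @ h + bs[d])        # positive standard deviations
        comp = (log_alpha - 0.5 * np.log(2.0 * np.pi)
                - np.log(sigma) - 0.5 * ((x[d] - mu) / sigma) ** 2)
        logp += np.logaddexp.reduce(comp)          # log of the 1-D MoG density
        a = a + x[d] * W[:, d]                     # incremental activations, eq. (2)
    return logp
```

Each one-dimensional conditional is a proper mixture of Gaussians, so for $D = 1$ the density integrates to one; that property makes a useful numerical check.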
Fitting an RNADE can be done using gradient ascent on the model's likelihood given a training set of
examples. We used minibatch stochastic gradient ascent in all our experiments. In those RNADE
models with MoG conditionals, we multiplied the gradient of each component mean by its standard
deviation (for a Gaussian, Newton's method multiplies the gradient by its variance, but empirically
multiplying by the standard deviation worked better). This gradient scaling makes tight components
move more slowly than broad ones, a heuristic that we found allows the use of higher learning rates.
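A one-line sketch of this gradient-scaling heuristic (a hypothetical helper assuming plain stochastic gradient ascent on the log-likelihood, not the authors' code):

```python
def scaled_mean_ascent_step(mu, grad_loglik_mu, sigma, lr):
    """Heuristic ascent update for a MoG component mean: the log-likelihood
    gradient is multiplied by the component's standard deviation, so tight
    (small-sigma) components move more slowly than broad ones. Exact Newton
    for a Gaussian mean would multiply by the variance instead.
    """
    return mu + lr * sigma * grad_loglik_mu
```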
Variants: Using a mixture of Gaussians to represent the conditional distributions in RNADE is an
arbitrary parametric choice. Given several components, the mixture model can represent a rich set
of skewed and multimodal distributions with different tail behaviors. However, other choices could
be appropriate in particular circumstances. For example, work on natural images often uses scale
mixtures, where components share a common mean. Conditional distributions of perceptual data
are often assumed to be Laplacian [e.g. 19]. We call our main variant with mixtures of Gaussians
RNADE-MoG, but also experiment with mixtures of Laplacian outputs, RNADE-MoL.
Table 1: Average test-set log-likelihood per datapoint for 4 different models on five UCI datasets.
Performances not in bold can be shown to be significantly worse than at least one of the results in
bold as per a paired t-test on the ten mean-likelihoods, with significance level 0.05.
Dataset          dim   size   Gaussian   MFA      FVBN     RNADE-MoG   RNADE-MoL
Red wine         11    1599   -13.18     -10.19   -11.03   -9.36       -9.46
White wine       11    4898   -13.20     -10.73   -10.52   -10.23      -10.38
Parkinsons       15    5875   -10.85     -1.99    -0.71    -0.90       -2.63
Ionosphere       32    351    -41.24     -17.55   -26.55   -2.50       -5.87
Boston housing   10    506    -11.37     -4.54    -3.41    -0.64       -4.04

4 Experiments
We compared RNADE to mixtures of Gaussians (MoG) and factor analyzers (MFA), which are
surprisingly strong baselines in some tasks [20, 21]. Given the known poor performance of discrete
mixtures [4, 5], we limited our experiments to modeling continuous attributes. However it would be
easy to include both discrete and continuous variables in a NADE-like architecture.
4.1 Low-dimensional data
We first considered five UCI datasets [22], previously used to study the performance of other density
estimators [23, 20]. These datasets have relatively low dimensionality, with between 10 and 32
attributes, but have hard thresholds and non-linear dependencies that may make it difficult to fit
mixtures of Gaussians or factor analyzers.
Following Tang et al. [20], we eliminated discrete-valued attributes and an attribute from every pair
with a Pearson correlation coefficient greater than 0.98. Each dimension of the data was normalized
by subtracting its training subset sample mean and dividing by its standard deviation. All results are
reported on the normalized data.
As baselines we fitted full-covariance Gaussians and mixtures of factor analysers. To measure the
performance of the different models, we calculated their log-likelihood on held-out test data. Because
these datasets are small, we used 10-folds, with 90% of the data for training, and 10% for testing.
We chose the hyperparameter values for each model by doing per-fold cross-validation, using a ninth
of the training data as validation data. Once the hyperparameter values had been chosen, we trained
each model using all the training data (including the validation data) and measured its performance
on the 10% of held-out testing data. In order to avoid overfitting, we stopped the training after
reaching a training likelihood higher than the one obtained on the best validation-wise iteration of the
corresponding validation run. Early stopping is crucial to avoid overfitting the RNADE models. It
also improves the results of the MFAs, but to a lesser degree.
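The stopping rule described above, when retraining on all of the data, can be sketched as follows (a hypothetical helper; the likelihood bookkeeping and epoch indexing are assumptions for illustration):

```python
def final_training_stop_epoch(train_lls, target_train_ll):
    """Return the epoch at which final training (on all data) stops: the
    first epoch whose training log-likelihood exceeds target_train_ll, the
    training log-likelihood recorded at the best validation-wise iteration
    of the earlier validation run. Falls back to the last epoch.
    """
    for epoch, ll in enumerate(train_lls):
        if ll > target_train_ll:
            return epoch
    return len(train_lls) - 1
```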
The MFA models were trained using the EM algorithm [24, 25]; the number of components and
factors were cross-validated. The number of factors was chosen from the even numbers from 2 to D,
where selecting D gives a mixture of Gaussians. The number of components was chosen among all
even numbers from 2 to 50 (cross-validation always selected fewer than 50 components).
RNADE-MoG and RNADE-MoL models were fitted using minibatch stochastic gradient descent,
using minibatches of size 100, for 500 epochs, each epoch comprising 10 minibatches. For each
experiment, the number of hidden units (50), the non-linear activation-function of the hidden units
(RLU), and the form of the conditionals were fixed. Three hyperparameters were crossvalidated
using grid-search: the number of components on each one-dimensional conditional was chosen from
the set {2, 5, 10, 20}; the weight-decay (used only to regularize the input to hidden weights) from
the set {2.0, 1.0, 0.1, 0.01, 0.001, 0}; and the learning rate from the set {0.1, 0.05, 0.025, 0.0125}.
Learning-rates were decreased linearly to reach 0 after the last epoch.
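The learning-rate schedule described above can be sketched as (a hypothetical helper; zero-based epoch indexing is an assumption):

```python
def linear_decay_lr(initial_lr, epoch, total_epochs):
    """Learning rate decreased linearly from initial_lr, reaching 0 after
    the last epoch (epoch = total_epochs)."""
    return initial_lr * (1.0 - epoch / total_epochs)
```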
We also trained fully-visible Bayesian networks (FVBN), an autoregressive model where each one-dimensional conditional is modelled by a separate mixture density network using no parameter tying.
Figure 1: Top: 15 8x8 patches from the BSDS test set. Center: 15 samples from Zoran and Weiss's
MoG model with 200 components. Bottom: 15 samples from an RNADE with 512 hidden units and
10 output components per dimension. All data and samples were drawn randomly.
The same cross-validation procedure and hyperparameters as for RNADE training were used. The
best validation-wise MDN for each one-dimensional conditional was chosen.
The results are shown in Table 1. Autoregressive methods obtained statistical performances superior
to mixture models on all datasets. An RNADE with mixture of Gaussian conditionals was among the
statistically significant group of best models on all datasets. Unfortunately, we could not reproduce
the data-folds used by previous work; however, our improvements are larger than those demonstrated
by a deep mixture of factor analyzers over standard MFA [20].
4.2 Natural image patches
We also measured the ability of RNADE to model small patches of natural images. Following the
recent work of Zoran and Weiss [3], we use 8-by-8-pixel patches of monochrome natural images,
obtained from the BSDS300 dataset [26] (Figure 1 gives examples).
Pixels in this dataset can take a finite number of brightness values ranging from 0 to 255. Modeling
discretized data using a real-valued distribution can lead to arbitrarily high density values, by locating
a narrow high-density spike on each of the possible discrete values. In order to avoid this "cheating"
solution, we added noise uniformly distributed between 0 and 1 to the value of each pixel. We then
divided by 256, making each pixel take a value in the range [0, 1].
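This dequantization step can be sketched as (hypothetical helper names):

```python
import numpy as np

def dequantize(pixels, rng):
    """Map integer pixel values in {0, ..., 255} to [0, 1): add uniform
    [0, 1) noise and divide by 256, preventing a real-valued density from
    collapsing onto the discrete brightness values.
    """
    return (pixels + rng.uniform(0.0, 1.0, size=pixels.shape)) / 256.0
```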
In previous experiments, Zoran and Weiss [3] subtracted the mean pixel value from each patch,
reducing the dimensionality of the data by one: the value of any pixel could be perfectly predicted
as minus the sum of all other pixel values. However, the original study still used a mixture of
full-covariance 64-dimensional Gaussians. Such a model could obtain arbitrarily high model likelihoods,
so unfortunately the likelihoods reported in previous work on this dataset [3, 20] are difficult to
interpret. In our preliminary experiment using RNADE, we observed that if we model the
64-dimensional data, the 64th pixel is always predicted by a very thin spike centered at its true value.
The ability of RNADE to capture this spurious dependency is reassuring, but we wouldn't want our
results to be dominated by it. Recent work by Zoran and Weiss [21] projects the data onto the leading
63 eigenvectors of each component when measuring the model likelihood [27]. For comparison
amongst a range of methods, we advocate simply discarding the 64th (bottom-right) pixel.
We trained our model using patches drawn randomly from 180 images in the training subset of
BSDS300. A validation dataset containing 1,000 random patches from the remaining 20 images in the
training subset was used for early-stopping when training RNADE. We measured the performance
of each model by measuring their log-likelihood on one million patches drawn randomly from the
test subset, which is composed of 100 images not present in the training subset. Given the larger
scale of this dataset, hyperparameters of the RNADE and MoG models were chosen manually using
the performance of preliminary runs on the validation data, rather than by an extensive search.
The RNADE model had 512 rectified-linear hidden units and a mixture of 20 one-dimensional
Gaussian components per output. Training was done by minibatch gradient descent, with 25 datapoints
per minibatch, for a total of 200 epochs, each comprising 1,000 minibatches. The learning-rate was
scheduled to start at 0.001 and linearly decreased to reach 0 after the last epoch. Gradient momentum
with momentum factor 0.9 was used, but initiated at the beginning of the second epoch. A weight
decay rate of 0.001 was applied to the input-to-hidden weight matrix only. Again, we found that
multiplying the gradient of the mean output parameters by the standard deviation improves results.
RNADE training was early-stopped but didn't show signs of overfitting. We produced a further run
Table 2: Average per-example log-likelihood of several mixture of Gaussian and RNADE models,
with mixture of Gaussian (MoG) or mixture of Laplace (MoL) conditionals, on 8-by-8 patches of
natural images. These results are measured in nats and were calculated using one million patches.
Standard errors due to the finite test sample size are lower than 0.1 in every case. K gives the number
of one-dimensional components for each conditional in RNADE, and the number of full-covariance
components for MoG.
Model                                       Training LogL   Test LogL
MoG K = 200 (Z&W)                           161.9           152.8
MoG K = 100                                 152.8           144.7
MoG K = 200                                 159.3           150.4
MoG K = 300                                 159.3           150.4
RNADE-MoG K = 5                             158.0           149.1
RNADE-MoG K = 10                            160.0           151.0
RNADE-MoG K = 20                            158.6           149.7
RNADE-MoL K = 5                             150.2           141.5
RNADE-MoL K = 10                            149.7           141.1
RNADE-MoL K = 20                            150.1           141.5
RNADE-MoG K = 10 (sigmoid h. units)         155.1           146.4
RNADE-MoG K = 10 (1024 units, 400 epochs)   161.1           152.1
with 1024 hidden units for 400 epochs, with still no signs of overfitting; even larger models might
perform better.
The MoG model was trained using minibatch EM, for 1,000 iterations. At each iteration 20,000
randomly sampled datapoints were used in an EM update. A step was taken from the previous mixture
model towards the parameters resulting from the M-step: $\theta_t = (1 - \eta)\theta_{t-1} + \eta\,\theta_{\mathrm{EM}}$, where the
step size ($\eta$) was scheduled to start at 0.1 and linearly decreased to reach 0 after the last update. The
training of the MoG was also early-stopped and also showed no signs of overfitting.
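The damped update above can be sketched as (a minimal helper for illustration; it applies elementwise to each parameter array):

```python
def damped_em_step(theta_prev, theta_em, eta):
    """One damped minibatch-EM update: move a fraction eta of the way from
    the previous parameters toward the M-step solution,
    theta_t = (1 - eta) * theta_prev + eta * theta_em.
    """
    return (1.0 - eta) * theta_prev + eta * theta_em
```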
The results are shown in Table 2. We compare RNADE with a mixture of Gaussians model trained
on 63 pixels, and with a MoG trained by Zoran and Weiss (downloaded from Daniel Zoran's website)
from which we removed the 64th row and column of each covariance matrix. The best RNADE test
log-likelihood is, on average, 0.7 nats per patch lower than Zoran and Weiss's MoG, which had a
different training procedure than our mixture of Gaussians.
Figure 1 shows a few examples from the test set, and samples from the MoG and RNADE models.
Some of the samples from RNADE are unnaturally noisy, with pixel values outside the legal range
(see fourth sample from the right in Figure 1). If we constrain the pixels values to a unit range, by
rejection sampling or otherwise, these artifacts go away. Limiting the output range of the model
would also improve test likelihood scores slightly, but not by much: log-likelihood does not strongly
penalize models for putting a small fraction of probability mass on "junk" images.
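The rejection-sampling fix mentioned above can be sketched as (hypothetical sampler interface, not the authors' code):

```python
import numpy as np

def rejection_sample_unit_range(sample_fn, rng, max_tries=1000):
    """Draw model samples until all values fall in [0, 1]: sample_fn is a
    hypothetical sampler taking a Generator and returning one patch as an
    array. Out-of-range samples are simply rejected and redrawn.
    """
    for _ in range(max_tries):
        x = sample_fn(rng)
        if np.all((x >= 0.0) & (x <= 1.0)):
            return x
    raise RuntimeError("no in-range sample found")
```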
All of the results in this section were obtained by fitting the pixels in a raster-scan order. Perhaps
surprisingly, but consistent with previous results on NADE [5] and by Frey [28], randomizing
the order of the pixels made little difference to these results. The difference in performance was
comparable to the differences between multiple runs with the same pixel ordering.
4.3 Speech acoustics
We also measured the ability of RNADE to model small patches of speech spectrograms, extracted
from the TIMIT dataset [29]. The patches contained 11 frames of 20 filter-banks plus energy; totaling
231 dimensions per datapoint. This filter-bank encoding is common in speech recognition, and
better for visualization than the more frequently used MFCC features. A good generative model of
speech could be used, for example, in denoising, or speech detection tasks.
We fitted the models using the standard TIMIT training subset, and compared RNADE with a MoG
by measuring their log-likelihood on the complete TIMIT core-test dataset.
Table 3: Log-likelihood of several MoG and RNADE models on the core-test set of TIMIT measured
in nats. Standard errors due to the finite test sample size are lower than 0.3 nats in every case. RNADE
obtained a higher (better) log-likelihood.
Model              Training LogL   Test LogL
MoG N = 50         111.6           110.4
MoG N = 100        113.4           112.0
MoG N = 200        113.9           112.5
MoG N = 300        114.1           112.5
RNADE-MoG K = 10   125.9           123.9
RNADE-MoG K = 20   126.7           124.5
RNADE-MoL K = 10   120.3           118.0
RNADE-MoL K = 20   122.2           119.8
Figure 2: Top: 15 datapoints from the TIMIT core-test set. Center: 15 samples from a MoG model
with 200 components. Bottom: 15 samples from an RNADE with 1024 hidden units and 20 output
components per dimension. On each plot, time is shown on the horizontal axis, the bottom row
displays the energy feature, while the others display the filter bank features (in ascending frequency
order from the bottom). All data and samples were drawn randomly.
The RNADE model has 1024 rectified-linear hidden units and a mixture of 20 one-dimensional
Gaussian components per output. Given the larger scale of this dataset, hyperparameter choices were
again made manually using validation data, and the same minibatch training procedures for RNADE
and MoG were used as for natural image patches.
The results are shown in Table 3. RNADE obtained, on average, 10 nats more per test example
than a mixture of Gaussians. In Figure 2 a few examples from the test set, and samples from the
MoG and RNADE models are shown. In contrast with the log-likelihood measure, there are no
marked differences between the samples from each model. Both set of samples look like blurred
spectrograms, but RNADE seems to capture sharper formant structures (peaks of energy at the lower
frequency bands characteristic of vowel sounds).
5 Discussion
Mixture Density Networks (MDNs) [6] are a flexible conditional model of probability densities,
that can capture skewed, heavy-tailed, and multi-modal distributions. In principle, MDNs can be
applied to multi-dimensional data. However, the number of parameters that the network has to output
grows quadratically with the number of targets, unless the targets are assumed independent. RNADE
exploits an autoregressive framework to apply practical, one-dimensional MDNs to unsupervised
density estimation.
To specify an RNADE we needed to set the parametric form for the output distribution of each
MDN. A sufficiently large mixture of Gaussians can closely represent any density, but it is hard to
learn the conditional densities found in some problems with this representation. The marginal for
the brightness of a pixel in natural image patches is heavy tailed, closer to a Laplace distribution
[Figure 3, panels (a)-(f): example test patch; densities and log-densities of the MoG and MoL conditionals $p(x_1 \mid x_{<1})$, $p(x_{19} \mid x_{<19})$, $p(x_{37} \mid x_{<37})$; and the per-pixel difference $\log p_{\mathrm{MoG}}(x_i \mid x_{<i}) - \log p_{\mathrm{MoL}}(x_i \mid x_{<i})$. Axis tick labels omitted; see the caption below.]
Figure 3: Comparison of Mixture of Gaussian (MoG) and Mixture of Laplace (MoL) conditionals.
(a) Example test patch. (b) Density of p(x1 ) under RNADE-MoG (dashed-red) and RNADE-MoL
(solid-blue), both with K = 10. RNADE-MoL closely matches a histogram of brightness values from
patches in the test-set (green). The vertical line indicates the value in (a). (c) Log-density of the
distributions in (b). (d) Log-density of MoG and MoL conditionals of pixel 19 in (a). (e) Log-density
of MoG and MoL conditionals of pixel 37 in (a). (f) Difference in predictive log-density between
MoG and MoL conditionals for each pixel, averaged over 10,000 test patches.
than Gaussian. Therefore, RNADE-MoG must fit predictions of the first pixel, p(x1 ), with several
Gaussians of different widths, that coincidentally have zero mean. This solution can be difficult to
fit, and RNADE with a mixture of Laplace outputs predicted the first pixel of image patches better
than with a mixture of Gaussians (Figure 3b and c). However, later pixels were predicted better
with Gaussian outputs (Figure 3f); the mixture of Laplace model is not suitable for predicting with
large contexts. For image patches, a scale mixture can work well [11], and could be explored within
our framework. However for general applications, scale mixtures within RNADE would be too
restrictive (e.g., p(x1 ) would be zero-mean and unimodal). More flexible one-dimensional forms
may aid RNADE to generalize better for different context sizes and across a range of applications.
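The variance-matched comparison behind this observation can be sketched with textbook one-dimensional log-densities (an illustration, not code from the paper): a Laplace with scale $b$ has variance $2b^2$, so $b = \sigma/\sqrt{2}$ matches a Gaussian with standard deviation $\sigma$, and far from the mean the Laplace assigns much higher log-density.

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian."""
    return (-0.5 * math.log(2.0 * math.pi) - math.log(sigma)
            - 0.5 * ((x - mu) / sigma) ** 2)

def laplace_logpdf(x, mu, b):
    """Log-density of a univariate Laplace with scale b."""
    return -math.log(2.0 * b) - abs(x - mu) / b

# Variance matching: Laplace variance is 2*b^2, so b = sigma / sqrt(2).
sigma = 1.0
b = sigma / math.sqrt(2.0)
```

At five standard deviations from the mean, the Laplace log-density is several nats higher than the Gaussian's, which is why a single heavy-tailed component can be easier to fit than a coincidental stack of zero-mean Gaussians of different widths.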
One of the main drawbacks of RNADE, and of neural networks in general, is the need to decide
the value of several training hyperparameters. The gradient descent learning rate can be adjusted
automatically using, for example, the techniques developed by Schaul et al. [30]. Also, methods for
choosing hyperparameters more efficiently than grid search have been recently developed [31, 32].
These, and several other recent improvements in the neural network field, like dropouts [33], should
be directly applicable to RNADE, and possibly obtain even better performance than shown in this
work. RNADE makes it relatively straight-forward to translate advances in the neural-network field
into better density estimators, or at least into new estimators with different inductive biases.
In summary, we have presented RNADE, a novel "black-box" density estimator. Both likelihood
computation time and the number of parameters scale linearly with the dataset dimensionality.
Generalization across a range of tasks, representing arbitrary feature vectors, image patches, and
auditory spectrograms is excellent. Performance on image patches was close to a recently reported
state-of-the-art mixture model [3], and RNADE outperformed mixture models on all other datasets
considered.
Acknowledgments
We thank John Bridle, Steve Renals, Amos Storkey, and Daniel Zoran for useful interactions.
References
[1] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
[2] T. Cacoullos. Estimation of a multivariate density. Annals of the Institute of Statistical Mathematics, 18(1):179–189, 1966.
[3] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In International Conference on Computer Vision, pages 479–486. IEEE, 2011.
[4] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872–879. Omnipress, 2008.
[5] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. Journal of Machine Learning Research W&CP, 15:29–37, 2011.
[6] C. M. Bishop. Mixture density networks. Technical Report NCRG 4288, Neural Computing Research Group, Aston University, Birmingham, 1994.
[7] B. J. Frey, G. E. Hinton, and P. Dayan. Does the wake-sleep algorithm produce good density estimators? In Advances in Neural Information Processing Systems 8, pages 661–670. MIT Press, 1996.
[8] Y. Bengio and S. Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. Advances in Neural Information Processing Systems, 12:400–406, 2000.
[9] H. Larochelle and S. Lauly. A neural autoregressive topic model. In Advances in Neural Information Processing Systems 25, 2012.
[10] Y. Bengio. Discussion of the neural autoregressive distribution estimator. Journal of Machine Learning Research W&CP, 15:38–39, 2011.
[11] L. Theis, R. Hosseini, and M. Bethge. Mixtures of conditional Gaussian scale mixtures applied to multiscale image representations. PLoS ONE, 7(7), 2012. doi: 10.1371/journal.pone.0039857.
[12] N. Friedman and I. Nachman. Gaussian process networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 211–219. Morgan Kaufmann Publishers Inc., 2000.
[13] I. Murray and R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems 21, pages 1137–1144, 2009.
[14] L. Theis, S. Gerwinn, F. Sinz, and M. Bethge. In all likelihood, deep belief is not enough. Journal of Machine Learning Research, 12:3071–3096, 2011.
[15] M. A. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In Computer Vision and Pattern Recognition, pages 2551–2558. IEEE, 2010.
[16] A. Courville, J. Bergstra, and Y. Bengio. A spike and slab restricted Boltzmann machine. Journal of Machine Learning Research W&CP, 15, 2011.
[17] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, pages 807–814. Omnipress, 2010.
[18] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neuro-computing: algorithms, architectures and applications, pages 227–236. Springer-Verlag, 1989.
[19] T. Robinson. SHORTEN: simple lossless and near-lossless waveform compression. Technical Report CUED/F-INFENG/TR.156, Engineering Department, Cambridge University, 1994.
[20] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep mixtures of factor analysers. In Proceedings of the 29th International Conference on Machine Learning, pages 505–512. Omnipress, 2012.
[21] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. Advances in Neural Information Processing Systems, 25:1745–1753, 2012.
[22] K. Bache and M. Lichman. UCI machine learning repository, 2013. http://archive.ics.uci.edu/ml.
[23] R. Silva, C. Blundell, and Y. W. Teh. Mixed cumulative distribution networks. Journal of Machine Learning Research W&CP, 15:670–678, 2011.
[24] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996.
[25] J. Verbeek. Mixture of factor analyzers Matlab implementation, 2005. http://lear.inrialpes.fr/ verbeek/code/.
[26] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In International Conference on Computer Vision, volume 2, pages 416–423. IEEE, July 2001.
[27] D. Zoran. Personal communication, 2013.
[28] B. Frey. Graphical models for machine learning and digital communication. MIT Press, 1998.
[29] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, N. L. Dahlgren, and V. Zue. TIMIT acoustic-phonetic continuous speech corpus. Linguistic Data Consortium, 10(5):0, 1993.
[30] T. Schaul, S. Zhang, and Y. LeCun. No More Pesky Learning Rates. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[31] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13:281–305, 2012.
[32] J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25, pages 2960–2968, 2012.
[33] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
explored:2 decay:2 ionosphere:1 conditioned:1 boston:1 rejection:1 simply:1 ordered:1 contained:1 springer:1 corresponds:1 dh:1 minibatches:3 extracted:1 reassuring:1 conditional:19 nair:1 marked:1 lear:1 towards:1 shared:3 fisher:1 hard:3 specifically:1 except:1 uniformly:1 reducing:1 denoising:2 called:1 total:2 scan:1 srivastava:1 |
Real-Time Inference for a Gamma Process
Model of Neural Spiking
David Carlson^1, Vinayak Rao^2, Joshua Vogelstein^2, Lawrence Carin^1
^1 Electrical and Computer Engineering Department, Duke University
^2 Statistics Department, Duke University
{dec18,lcarin}@duke.edu, {var11,jovo}@duke.edu
Abstract
With simultaneous measurements from ever increasing populations of neurons,
there is a growing need for sophisticated tools to recover signals from individual
neurons. In electrophysiology experiments, this classically proceeds in a two-step
process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the
waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma
process model. Importantly, we develop an online approximate inference scheme
enabling real-time analysis, with performance exceeding the previous state of the art. Via exploratory data analysis, using data with partial ground truth as well as
two novel data sets, we find that several features of our model collectively contribute
to our improved performance, including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring
many thousands of neurons and possibly adapting stimuli dynamically to probe
ever deeper into the mysteries of the brain.
1 Introduction
The recent heightened interest in understanding the brain calls for the development of technologies that will advance our understanding of neuroscience. Crucial for this endeavor is the advancement of our ability to understand the dynamics of the brain, via the measurement of large populations
of neural activity at the single neuron level. Such reverse engineering efforts benefit from real-time
decoding of neural activity, to facilitate effectively adapting the probing stimuli. Regardless of the
experimental apparati used (e.g., electrodes or calcium imaging), real-time decoding of individual
neuron responses requires identifying and labeling individual spikes from recordings from large
populations. In other words, real-time decoding requires real-time spike sorting.
Automatic spike sorting methods are continually evolving to deal with more sophisticated experiments. Most recently, several methods have been proposed to (i) learn the number of separable
neurons on each electrode or "multi-trode" [1, 2], or (ii) operate online to resolve overlapping spikes
from multiple neurons [3]. To our knowledge, no method to date is able to simultaneously address
both of these challenges.
We develop a nonparametric Bayesian continuous-time generative model of population activity.
Our model explains the continuous output of each neuron by a latent marked Poisson process, with
the ?marks? characterizing the shape of each spike. Previous efforts to address overlapping spiking
often assume a fixed kernel for each waveform, but joint intracellular and extracellular recordings
clearly indicate that this assumption is false (see Figure 3c). Thus, we assume that the statistics of
the marks are time-varying. We use the framework of completely random measures to infer how
many of a potentially infinite number of neurons (or single units) are responsible for the observed
data, simultaneously characterizing the spike times and waveforms of these neurons.
We describe an intuitive discrete-time approximation to the above infinite-dimensional
continuous-time stochastic process, then develop an online variational Bayesian inference algorithm
for this model. Via numerical simulations, we demonstrate that our inference procedure improves
over the previous state-of-the-art, even though we allow the other methods to use the entire dataset
for training, whereas we learn online. Moreover, we demonstrate that we can effectively track the
time-varying changes in waveform, and detect overlapping spikes. Indeed, it seems that the false
positive detections from our approach have indistinguishable first order statistics from the true positives, suggesting that second-order methods may be required to reduce the false positive rate (i.e.,
template methods may be inadequate). Our work therefore suggests that further improvements in
real-time decoding of activity may be most effective if directed at simultaneous real-time spike sorting and decoding. To facilitate such developments and support reproducible research, all code and
data associated with this work is provided in the Supplementary Materials.
2 Model
Our data is a time-series of multielectrode recordings $X \equiv (x_1, \ldots, x_T)$, and consists of $T$ recordings from $M$ channels. As in usual measurement systems, the recording times lie on a regular grid with interval length $\Delta$, and $x_t \in \mathbb{R}^M$ for all $t$. Underlying these observations is a
continuous-time electrical signal driven by an unknown number of neurons. Each neuron generates a continuous-time voltage trace, and the outputs of all neurons are superimposed and discretely
sampled to produce the recordings $X$. At a high level, in Section 2.1 we model the continuous-time output of each neuron as a series of idealized Poisson events smoothed with appropriate kernels, while Section 2.2 uses the Gamma process to develop a nonparametric prior for an entire population. Section 2.3 then describes a discrete-time approximation based on the Bernoulli approximation to the Poisson process. For conceptual clarity, we restrict ourselves to single-channel recordings until Section 2.4, where we
describe the complete model for multichannel data.
2.1 Modeling the continuous-time output of a single neuron
There is a rich literature characterizing the spiking activity of a single neuron [4] accounting
in detail for factors like non-stationarity, refractoriness and spike waveform. We however make a
number of simplifying assumptions (some of which we later relax). First, we model the spiking
activity of each neuron are stationary and memoryless, so that its set of spike times are distributed as
a homogeneous Poisson process (PP). We model the neurons themselves are heterogeneous, with the
ith neuron having an (unknown) firing rate i . Call the ordered set of spike times of the ith neuron
Ti = (?i1 , ?i2 , . . .); then the time between successive elements of Ti is exponentially distributed
with mean 1/ i . We write this as Ti ? PP( i ).
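This homogeneous-Poisson assumption can be illustrated with a short simulation that accumulates exponential inter-spike intervals (a minimal sketch, not the paper's code; the rate and horizon values below are arbitrary):

```python
import random

def sample_spike_times(rate, t_max, seed=0):
    """Spike times of one neuron as a homogeneous Poisson process
    PP(rate) on [0, t_max): successive inter-spike intervals are
    i.i.d. exponential with mean 1/rate."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # Exp(rate) interval, mean 1/rate
        if t >= t_max:
            return times
        times.append(t)

spikes = sample_spike_times(rate=5.0, t_max=10.0)  # roughly rate*t_max events
```

Superposing several such trains, one per neuron, gives the population spiking activity used below.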
The actual electrical output of a neuron is not binary; instead each spiking event is a smooth
perturbation in voltage about a resting state. This perturbation forms the shape of the spike, with the
spike shapes varying across neurons as well as across different spikes of the same neuron. However,
each neuron has its own characteristic distribution over shapes, and we let $\theta^*_i \in \Theta$ parametrize this
distribution for neuron i. Whenever this neuron emits a spike, a new shape is drawn independently
from the corresponding distribution. This waveform is then offset to the time of the spike, and
contributes to the voltage trace associated with that spike.
The complete recording from the neuron is the superposition of all these spike waveforms plus
noise. Rather than treating the noise as white as is common in the literature [5], we allow it to exhibit
temporal correlation, recognizing that the "noise" is in actual fact background neural activity. We
model it as a realization of a Gaussian process (GP) [6], with the covariance kernel K of the GP
determining the temporal structure. We use an exponential kernel, modeling the noise as Markov.
We model each spike shape as a weighted superposition of a dictionary of $K$ basis functions $d(t) \equiv (d_1(t), \ldots, d_K(t))^T$. The dictionary elements are shared across all neurons, and each is a real-valued function of time, i.e., $d_k \in L^2$. Each spike time $\tau_{ij}$ is associated with a random $K$-dimensional weight vector $y^*_{ij} \equiv (y^*_{ij1}, \ldots, y^*_{ijK})^T$, and the shape of this spike at time $t$ is given by the weighted sum $\sum_{k=1}^K y^*_{ijk}\, d_k(t - \tau_{ij})$. We assume $y^*_{ij} \sim \mathcal{N}_K(\mu^*_i, \Sigma^*_i)$, indicating a $K$-dimensional Gaussian distribution with mean and covariance given by $(\mu^*_i, \Sigma^*_i)$; we let $\theta^*_i \equiv (\mu^*_i, \Sigma^*_i)$. Then, at any time $t$, the output of neuron $i$ is $x_i(t) = \sum_{j=1}^{|T_i|} \sum_{k=1}^K y^*_{ijk}\, d_k(t - \tau_{ij})$.
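Expressed as code, a spike's shape is just this weighted sum over a discretized dictionary (a sketch only; the two length-4 basis functions below are made-up values):

```python
def spike_waveform(weights, dictionary):
    """Shape of one spike as the weighted sum of K basis functions,
    each discretized to L samples: s[t] = sum_k y_k * d_k[t]."""
    L = len(dictionary[0])
    return [sum(y * d[t] for y, d in zip(weights, dictionary))
            for t in range(L)]

# toy dictionary: K=2 basis functions of length L=4 (illustrative only)
D = [[0.0, 1.0, 0.0, -1.0],
     [1.0, 0.0, -1.0, 0.0]]
s = spike_waveform([2.0, 0.5], D)  # -> [0.5, 2.0, -0.5, -2.0]
```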
The total signal received by any electrode is the superposition of the outputs of all neurons. Assume for the moment there are $N$ neurons, and define $T \equiv \cup_{i \in [N]} T_i$ as the (ordered) union of the spike times of all neurons. Let $\tau_l \in T$ indicate the time of the $l$th overall spike, whereas $\tau_{ij} \in T_i$ is the time of the $j$th spike of neuron $i$. This defines a pair of mappings: $\nu : [|T|] \to [N]$ and $p : [|T|] \to \cup_i T_i$, with $\tau_l = \tau_{\nu_l p_l}$. In words, $\nu_l \in [N]$ is the neuron to which the $l$th element of $T$ belongs, while $p_l$ indexes this spike in the spike train $T_{\nu_l}$. Let $\theta_l \equiv (\mu_l, \Sigma_l)$ be the neuron parameter associated with spike $l$, so that $\theta_l = \theta^*_{\nu_l}$. Finally, define $y_l \equiv (y_{l1}, \ldots, y_{lK})^T \equiv y^*_{\nu_l p_l}$ as the weight vector of spike $\tau_l$. Then, we have that
$$x(t) = \sum_{i \in [N]} x_i(t) = \sum_{l \in [|T|]} \sum_{k \in [K]} y_{lk}\, d_k(t - \tau_l), \quad \text{where } y_l \sim \mathcal{N}_K(\mu_l, \Sigma_l). \qquad (1)$$
From the superposition property of the Poisson process [7], the overall spiking activity $T$ is Poisson with rate $\Lambda = \sum_{i \in [N]} \lambda_i$. Each event $\tau_l \in T$ has a pair of labels: its neuron parameter $\theta_l \equiv (\mu_l, \Sigma_l)$, and $y_l$, the weight-vector characterizing the spike shape. We view these weight-vectors as the "marks" of a marked Poisson process $T$. From the properties of the Poisson process, we have that the marks $\theta_l$ are drawn i.i.d. from a probability measure $G(d\theta) = (1/\Lambda) \sum_{i \in [N]} \lambda_i \delta_{\theta^*_i}$.
With probability one, the neurons have distinct parameters, so that the mark $\theta_l$ identifies the neuron which produced spike $l$: $G(\theta_l = \theta^*_i) = P(\nu_l = i) = \lambda_i / \Lambda$. Given $\theta_l$, $y_l$ is distributed as in Eq. (1). The output waveform $x(t)$ is then a linear functional of this marked Poisson process.
2.2 A nonparametric model of population activity
In practice, the number of neurons driving the recorded activity is unknown. We do not wish to
bound this number a priori, moreover we expect this number to increase as we record over longer
intervals. This suggests a nonparametric Bayesian approach: allow the total number of underlying
neurons to be infinite. Over any finite interval, only a finite subset of these will be active, and
typically, these dominate spiking activity over any interval. This elegant and flexible modeling
approach allows the data to suggest how many neurons are active, and has already proved successful
in neuroscience applications [8]. We use the framework of completely random measures (CRMs)
[9] to model our data. CRMs have been well studied in the Bayesian nonparametrics community,
and there is a wealth of literature on theoretical properties, as well as posterior computation; see e.g.
[10, 11, 12]. Recalling that each neuron is characterized by a pair of parameters $(\lambda_i, \theta^*_i)$, we map the infinite collection of pairs $\{(\lambda_i, \theta^*_i)\}$ to a random measure $\Lambda(\cdot)$ on $\Theta$: $\Lambda(d\theta) = \sum_{i=1}^{\infty} \lambda_i \delta_{\theta^*_i}$.
For a CRM, the distribution over measures is induced by distributions over the infinite sequence of weights, and the infinite sequence of their locations. The weights $\lambda_i$ are the jumps of a Lévy process [13], and their distribution is characterized by a Lévy measure $\nu(\lambda)$. The locations $\theta^*_i$ are drawn i.i.d. from a base probability measure $H(\theta^*)$. As is typical, we assume these to be independent.
We set the Lévy measure $\nu(\lambda) = \alpha \lambda^{-1} \exp(-\lambda)$, resulting in a CRM called the Gamma process ($\Gamma$P) [14]. The Gamma process has the convenient property that the total rate $\Lambda \equiv \Lambda(\Theta) = \sum_{i=1}^{\infty} \lambda_i$ is Gamma distributed (and thus conjugate to the Poisson process prior on $T$). The Gamma process is also closely connected with the Dirichlet process [15], which will prove useful later on. To complete the specification of the Gamma process, we set $H_\gamma(\theta^*)$ to the conjugate normal-Wishart distribution with hyperparameters $\gamma$.
It is easy to directly specify the resulting continuous-time model; we provide the equations in the Supplementary Material. However, it is more convenient to represent the model using the marked Poisson process of Eq. (1). There, the overall process $T$ is a rate-$\Lambda$ Poisson process, and under a Gamma process prior, $\Lambda$ is Gamma$(\alpha, 1)$ distributed [15]. The labels $\theta_l$ assigning events to neurons are drawn i.i.d. from a normalized Gamma process: $G(d\theta) = (1/\Lambda) \sum_{l=1}^{\infty} \lambda_l \delta_{\theta^*_l}$.
$G(d\theta)$ is a random probability measure (RPM) called a normalized random measure [10]. Crucially, a normalized Gamma process is the Dirichlet process (DP) [15], so that the spike parameters $\theta_l$ are i.i.d. draws from a DP-distributed RPM. For spike $l$, the shape vector is drawn from a normal with parameters $(\mu_l, \Sigma_l)$: these are thus draws from a DP mixture (DPM) of Gaussians [16].
We can exploit the connection with the DP to integrate out the infinite-dimensional measure $G(\cdot)$ (and thus $\Lambda(\cdot)$), and assign spikes to neurons via the so-called Chinese restaurant process (CRP)
[17]. Under this scheme, the lth spike is assigned the same parameter as an earlier spike with
probability proportional to the number of earlier spikes having that parameter. It is assigned a new
parameter (and thus, a new neuron is observed) with probability proportional to $\alpha$. Letting $C_t$ be the number of neurons observed until time $t$, and $T_i^t = T_i \cap [0, t)$ be the times of spikes produced by neuron $i$ before time $t$, we then have for spike $l$ at time $t = \tau_l$:
$$\theta_l = \theta^*_{\nu_l}, \quad \text{where } P(\nu_l = i) \propto \begin{cases} |T_i^t| & i \in [C_t], \\ \alpha & i = C_t + 1. \end{cases} \qquad (2)$$
This marginalization property of the DP allows us to integrate out the infinite-dimensional rate
vector ?(?), and sequentially assign spikes to neurons based on the assignments of earlier spikes.
This requires one last property: for the Gamma process, the RPM G(?) is independent of the total
mass ?. Consequently, the clustering of spikes (determined by G(?)) is independent of the rate ? at
which they are produced. We then have the following model:
$$T \sim \mathrm{PP}(\Lambda), \quad \text{where } \Lambda \sim \mathrm{Gamma}(\alpha, 1), \qquad (3a)$$
$$y_l \sim \mathcal{N}_K(\mu_l, \Sigma_l), \quad \text{where } (\mu_l, \Sigma_l) \sim \mathrm{CRP}(\alpha, H_\gamma(\cdot)), \quad l \in [|T|], \qquad (3b)$$
$$x(t) = \sum_{l \in [|T|]} \sum_{k \in [K]} y_{lk}\, d_k(t - \tau_l) + \varepsilon_t, \quad \text{where } \varepsilon \sim \mathrm{GP}(0, K). \qquad (3c)$$
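The clustering behavior induced by the CRP in (3b) can be sketched in a few lines of code; this illustrates the assignment rule of Eq. (2) only, not the paper's inference procedure:

```python
import random

def crp_assign(counts, alpha, rng):
    """Assign one new spike to a neuron: existing neuron i is chosen
    with probability proportional to its spike count counts[i], and a
    brand-new neuron with probability proportional to alpha."""
    total = sum(counts) + alpha
    u = rng.random() * total
    acc = 0.0
    for i, c in enumerate(counts):
        acc += c
        if u < acc:
            return i
    return len(counts)  # open a new neuron

rng = random.Random(1)
counts = []  # spikes per neuron so far
for _ in range(200):
    i = crp_assign(counts, alpha=1.0, rng=rng)
    if i == len(counts):
        counts.append(1)
    else:
        counts[i] += 1
# `counts` now partitions 200 spikes among a data-driven number of neurons
```

Note the rich-get-richer effect: a handful of neurons accumulate most spikes, while $\alpha$ controls how readily new neurons appear.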
2.3 A discrete-time approximation
The previous subsections modeled the continuous-time voltage output of a neural population. Our
data on the other hand consists of recordings at a discrete set of times. While it is possible to make
inferences about the continuous-time process underlying these discrete recordings, in this paper, we
restrict ourselves to the discrete case. The marked Poisson process characterization of Eq. 3 leads to
a simple discrete-time approximation of our model.
Recall first the Bernoulli approximation to the Poisson process: a sample from a Poisson process with rate $\lambda$ can be approximated by discretizing time at a granularity $\Delta$, and assigning each bin an event independently with probability $\lambda\Delta$ (the accuracy of the approximation increasing as $\Delta$ tends to 0). To approximate the marked Poisson process $T$, all that is additionally required is to assign marks $\theta_l$ and $y_l$ to each event in the Bernoulli approximation. Following Eqs. (3b) and (3c), the $\theta_l$'s are distributed according to a Chinese restaurant process, while each $y_l$ is drawn from a normal distribution parametrized by the corresponding $\theta_l$. We discretize the elements of the dictionary as well, yielding discrete dictionary elements $\tilde{d}_{k,:} = (\tilde{d}_{k,1}, \ldots, \tilde{d}_{k,L})^T$. These form the rows of a $K \times L$ matrix $\tilde{D}$ (we call its columns $\tilde{d}_{:,h}$). The shape of the $j$th spike is now a vector of length $L$, and for a weight vector $y$, is given by $\tilde{D}y$.
We can simplify notation a little for the discrete-time model. Let $t$ index time-bins (so that for an observation interval of length $T$, $t \in [T/\Delta]$). We use tildes for variables indexed by bin-position. Thus, $\tilde{\nu}_t$ and $\tilde{\theta}_t$ are the neuron and neuron parameter associated with time bin $t$, and $\tilde{y}_t$ is its weight-vector. Let the binary variable $\tilde{z}_t$ indicate whether or not a spike is present in time bin $t$ (recall that $\tilde{z}_t \sim \mathrm{Bernoulli}(\lambda\Delta)$). If there is no spike associated with bin $t$, then we ignore the marks $\tilde{\theta}$ and $\tilde{y}$. Thus the output at time $t$ is given by $x_t = \sum_{h=1}^{L} \tilde{z}_{t-h+1}\, \tilde{d}_{:,h}^T\, \tilde{y}_{t-h+1} + \varepsilon_t$. Note that the noise $\varepsilon_t$ is now a discrete-time Markov Gaussian process. Let $a$ and $r_t$ be the decay and innovation of the resulting autoregressive (AR) process, so that $\varepsilon_{t+1} = a\varepsilon_t + r_t$.
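A minimal simulation of this discrete-time generative process, with a single fixed template standing in for the dictionary expansion and AR(1) noise, might look like the following (all constants are illustrative assumptions):

```python
import random

def simulate_trace(T, lam_dt, template, a, noise_sd, seed=0):
    """Each bin spikes independently with probability lam_dt (the
    Bernoulli approximation to the Poisson process); a spike at bin t
    adds the length-L template starting at t; the noise follows the
    AR(1) recursion eps[t+1] = a*eps[t] + r[t]."""
    rng = random.Random(seed)
    L = len(template)
    x = [0.0] * T
    spikes = []
    eps = 0.0
    for t in range(T):
        if rng.random() < lam_dt:
            spikes.append(t)
            for h in range(min(L, T - t)):
                x[t + h] += template[h]
        eps = a * eps + rng.gauss(0.0, noise_sd)  # Markov (AR(1)) noise
        x[t] += eps
    return x, spikes

x, s = simulate_trace(T=1000, lam_dt=0.01, template=[0.2, 1.0, -0.6, -0.2],
                      a=0.9, noise_sd=0.05)
```

Overlapping spikes arise naturally here whenever two spiking bins fall within $L$ bins of each other.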
2.4 Correlations in time and across electrodes
So far, for simplicity, we restricted our model to recordings from a single channel. We now
describe the full model we use in experiments with multichannel recordings. We let every spike
affect the recordings at all channels, with the spike shape varying across channels. For spike l in
channel $m$, call the weight-vector $y_l^m$. All these vectors must be correlated as they correspond to the same spike; we do this simply by concatenating the set of vectors into a single $MK$-element vector $y_l = (y_l^1; \cdots; y_l^M)$, and modeling this as a multivariate normal. In principle, one might expect the associated covariance matrix to possess a block structure (corresponding to the subvector associated with each channel); however, rather than building this into the model, we allow the data to inform us about any such structure.
We also relax the requirement that the parameters $\theta^*$ of each neuron remain constant, and instead allow $\mu^*$, the mean of the weight-vector distribution, to evolve with time (we keep the covariance parameter $\Sigma^*_i$ fixed, however). Such flexibility can capture effects like changing cell characteristics or moving electrodes. Like the noise term, we model the time-evolution of this quantity as a realization of a Markov Gaussian process; again, in discrete time, this corresponds to a simple first-order AR process. With $B \in \mathbb{R}^{K \times K}$ the transition matrix, and $r_t \in \mathbb{R}^K$ independent Gaussian innovations, we have $\mu^*_{t+1} = B\mu^*_t + r_t$. Where we previously had a DP mixture of Gaussians, we now have a DP mixture of GPs. Each neuron is now associated with a vector-valued function $\theta^*(\cdot)$, rather than a constant. When a spike at time $\tau_l$ is assigned to neuron $i$, it is assigned a weight-vector $y_l$ drawn from a Gaussian with mean $\mu^*_i(\tau_l)$. Algorithm 1 in the Supplementary Material summarizes the full generative mechanism for the full discrete-time model.
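The AR(1) drift of a neuron's mean weight vector is just a linear map plus a Gaussian innovation; one step can be sketched as follows (K = 2 and the transition-matrix values are made up for illustration):

```python
def ar1_step(mu, B, innovation):
    """One step of the mean-waveform evolution mu[t+1] = B @ mu[t] + r[t],
    written with plain lists (K-dimensional mu, K-by-K transition B)."""
    K = len(mu)
    return [sum(B[i][j] * mu[j] for j in range(K)) + innovation[i]
            for i in range(K)]

mu = [1.0, -0.5]                       # current mean weights
B = [[0.99, 0.0], [0.0, 0.99]]         # slow decay toward zero
mu_next = ar1_step(mu, B, [0.0, 0.0])  # approx. [0.99, -0.495]
```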
3 Inference
There exists a vast literature on computational approaches to posterior inference for Bayesian nonparametric models, especially so for models based on the DP. Traditional approaches are sampling-based, typically involving Markov chain Monte Carlo techniques (see e.g. [18, 19]), and recently there has also been work on constructing deterministic approximations to the intractable posterior (e.g. [20, 21]). Our problem is complicated by two additional factors. The first is the convolutional nature of our observation process, where at each time, we observe a function of the previous observations drawn from the DPMM. This is in contrast to the usual situation where one directly observes
the DPMM outputs themselves. The second complication is a computational requirement: typical
inference schemes are batch methods that are slow and computationally expensive. Our ultimate
goal, on the other hand, is to perform inference in real time, making these approaches unsuitable.
Instead, we develop an online algorithm for posterior inference. Our algorithm is inspired by the
sequential update and greedy search (SUGS) algorithm of [22], though that work was concerned
with the usual case of i.i.d. observations from a DPMM. We generalize SUGS to our observation
process, also accounting for the time-evolution of the cluster parameters and correlated noise.
Below, we describe a single iteration of our algorithm for the case of a single electrode; generalizing
to the multielectrode case is straightforward. At each time t, our algorithm maintains the set of
times of the spikes it has inferred from the observations so far. It also maintains the identities of the
neurons that it assigned each of these spikes to, as well as the weight vectors determining the shapes
of the associated spike waveforms. We indicate these point estimates with the hat operator; so, for example, $\hat{T}_i^t$ is the set of estimated spike times before time $t$ assigned to neuron $i$. In addition to these point estimates, the algorithm also keeps a set of posterior distributions $q_{it}(\theta^*_i)$, where $i$ spans over the set of neurons seen so far (i.e., $i \in [\hat{C}_t]$). For each $i$, $q_{it}(\theta^*_i)$ approximates the distribution over the parameters $\theta^*_i \equiv (\mu^*_i, \Sigma^*_i)$ of neuron $i$ given the observations until time $t$.
Having identified the time and shape of spikes from earlier times, we can calculate their contribution to the recordings $x_t^L \equiv (x_t, \ldots, x_{t+L-1})^T$. Recalling that the basis functions $D$, and thus all spike waveforms, span $L$ time bins, the residual at time $t + t_1$ is then given by $\Delta x_{t+t_1} = x_{t+t_1} - \sum_{h \in [L - t_1]} \hat{z}_{t-h}\, \tilde{D}\, \hat{y}_{t-h}$ (at time $t$, for $t_1 > 0$, we define $\hat{z}_{t+t_1} = 0$). We treat the residual $\Delta x_t = (\Delta x_t, \ldots, \Delta x_{t+L})^T$ as an observation from a DP mixture model, and use this to make hard decisions about whether or not this was produced by an underlying spike, what neuron that spike belongs to (one of the earlier neurons or a new neuron), and what the shape of the associated spike waveform is. The latter is used to calculate $q_{i,t+1}(\theta^*_i)$, the new distribution over neuron parameters at time $t + 1$. Our algorithm proceeds recursively in this manner.
For the first step, we use Bayes' rule to decide whether there is a spike underlying the residual:
$$P(\tilde{z}_t = 1 \mid \Delta x_t) \propto \sum_{i \in [\hat{C}_t + 1]} P(\Delta x_t, \theta_t = i \mid \tilde{z}_t = 1)\, P(\tilde{z}_t = 1) \qquad (4)$$
Here, $P(\Delta x_t \mid \theta_t = i, \tilde{z}_t = 1) = \int_\Theta P(\Delta x_t \mid \theta_t)\, q_{it}(\theta_t)\, d\theta_t$, while $P(\theta_t = i \mid \tilde{z}_t = 1)$ follows from the CRP update rule (equation (2)). $P(\Delta x_t \mid \theta_t)$ is just the normal distribution, while we restrict $q_{it}(\cdot)$ to be in the family of normal-Wishart distributions. We can then evaluate the integral, and then the summation (4), to approximate $P(\tilde{z}_t = 1 \mid \Delta x_t)$. If this exceeds a threshold of 0.5 we decide that there is a spike present at time $t$; otherwise, we set $\hat{z}_t = 0$. Observe that making this decision involves marginalizing over all possible cluster assignments $\theta_t$, and all values of the weight vector $y_t$. On the other hand, having made this decision, we collapse these posterior distributions to point estimates $\hat{\theta}_t$ and $\hat{y}_t$ equal to their MAP values.
In the event of a spike ($\hat{z}_t = 1$), we use these point estimates to update the posterior distribution over the parameters of cluster $\hat{\nu}_t$, to obtain $q_{i,t+1}(\cdot)$ from $q_{i,t}(\cdot)$; this is straightforward because of conjugacy. We follow this up with an additional update step for the distributions of the means of all clusters: this is to account for the AR evolution of the cluster means. We use a variational update to keep $q_{i,t+1}(\cdot)$ in the normal-Wishart family. Finally, we take a stochastic gradient step to update any hyperparameters we wish to learn. We provide all details in the Supplementary Material.
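For intuition, here is a heavily simplified, one-dimensional sketch of the detection rule in Eq. (4): Gaussian likelihoods with a shared, known variance stand in for the normal-Wishart predictive densities, and every name and value below is an illustrative assumption rather than the paper's implementation:

```python
import math

def spike_posterior(residual, means, counts, alpha, sigma2, prior_spike):
    """P(spike | residual): marginalize a Gaussian likelihood over
    cluster assignments (existing neurons weighted by their spike
    counts, a new neuron by alpha, following the CRP rule), then
    compare against the no-spike, pure-noise explanation."""
    def gauss(x, mu):
        return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
    total = sum(counts) + alpha
    like_spike = sum(c * gauss(residual, m) for c, m in zip(counts, means))
    like_spike += alpha * gauss(residual, 0.0)  # new-neuron term (prior mean 0 assumed)
    like_spike /= total
    like_noise = gauss(residual, 0.0)
    num = like_spike * prior_spike
    return num / (num + like_noise * (1.0 - prior_spike))

p = spike_posterior(residual=3.0, means=[3.1], counts=[10],
                    alpha=0.5, sigma2=1.0, prior_spike=0.01)
detect = p > 0.5  # threshold of 0.5, as in the paper
```

A residual near an existing cluster mean raises the posterior; one near zero is explained away as noise.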
4 Experiments
Data: In the following, we refer to our algorithm as OPASS¹. We used two different datasets to demonstrate the efficacy of OPASS. First, the ever-popular, publicly available HC1 dataset as
described in [23]. We used the dataset d533101 that consisted of an extracellular tetrode and a single
intracellular electrode. The recording was made simultaneously on all electrodes and was set up such
that the cell with the intracellular electrode was also recorded on the extracellular array implanted in
the hippocampus of an anesthetized rat. The intracellular recording is relatively noiseless and gives
nearly certain firing times of the intracellular neuron. The extracellular recording contains the spike
waveforms from the intracellular neuron as well as an unknown number of additional neurons. The
data is a 4-minute recording at a 10 kHz sampling rate.
The second dataset comes from novel NeuroNexus devices implanted in the rat motor cortex.
The data was recorded at 32.5 kHz in freely-moving rats. The first device we consider is a set of
¹Online gamma Process Autoregressive Spike Sorting
3 channels of data (Fig. 7a). The neighboring electrode sites in these devices have 30 µm between electrode edges and 60 µm between electrode centers. These devices are close enough that a locally-firing neuron could appear on multiple electrode sites [2], so neighboring channels warrant joint
processing. The second device has 8-channels (see Fig. 10a), but is otherwise similar to the first. We
used a 15-minute segment of this data for our experiments.
For both datasets, we preprocessed with a high-pass filter at 800 Hz using a fourth-order Butterworth filter before we analyzed the time series. To define D, we used the first five principal
components of all spikes detected with a threshold (three times the standard deviation of the noise
above the mean) in the first five seconds. The noise standard deviation was estimated both over
the first five seconds of the recording as well as the entire recording, and the estimate was nearly
identical. Our results were also robust to minor variations in the choice of the number of principal
components. The autoregressive parameters were estimated by using lag-1 autocorrelation on the
same set of data. For the multichannel algorithms we estimate the covariance between channels and
normalize by our noise variance estimate.
Each algorithm gives a clustering of the detected spikes. In this dataset, we only have a partial
ground truth, so we can only verify accuracy for the neuron with the intracellular (IC) recording. We
define a detected spike to be an IC spike if the IC recording has a spike within 0.5 milliseconds (ms)
of the detected spike in the extracellular recording. We define the cluster with the greatest number
of intracellular spikes as the "IC cluster". We refer to these data as "partial ground truth data",
because we know the ground truth spike times for one of the neurons, but not all the others.
Algorithm Comparisons We compare a number of variants of OPASS, as well as several previously proposed methods, as described below. The vanilla version of OPASS operates on a single channel with colored noise. When using multiple channels, we prepend an "M" to obtain MOPASS. When we model the mean of the waveforms as an auto-regressive process, we "post-pend" an "R" to obtain OPASSR. We compare these variants of OPASS to Gaussian mixture models and k-means [5] with N components (GMM-N and K-N, respectively), where N indicates the number of components. We compare with a Dirichlet Process Mixture Model (DPMM) [8] as well as the Focused Mixture Model (FMM) [24], a recently proposed Bayesian generative model with state-of-the-art performance. Finally, we compare with OSORT [25], an online sorting algorithm. Only the OPASS and OSORT methods were online, as we desired to compare to the state-of-the-art batch algorithms which use all the data. Note that OPASS algorithms learned D from the first five seconds of data, whereas all other
algorithms used a dictionary learned from the entire data set.
The single-channel experiments were all run on channel 2 (the results were nearly identical for
all channels). The spike detections for the offline methods used a threshold of three times the noise
standard deviation [5] (unless stated otherwise), and windowed at a size L = 30. For multichannel
data, we concatenated the M channels for each waveform to obtain an M × L-dimensional vector.
The online algorithms were all run with weakly informative parameters. For the normal-Wishart, we used $\mu_0 = 0$, $\kappa_0 = 0.1$, $W = 10I$, and $\nu = 1$ ($I$ is the identity matrix). The AR process corresponded to a GP with length-scale 30 seconds, and variance 0.1. $\alpha$ was set to 0.1. The parameters
were insensitive to minor changes. Running time in unoptimized MATLAB code for 4 minutes of
data was 31 seconds for a single channel and 3 minutes for all 4 channels on a 3.2 GHz Intel Core
i5 machine with 6 GB of memory (see Supplementary Fig. 11 for details).
Performance on partial ground truth data The main empirical result of our contribution is that
all variants of OPASS detect more true positives with fewer false positives than any of the other algorithms on the partial ground truth data (see Fig. 1). The only comparable result is OSORT; however, the OSORT algorithm split the IC cluster into 2 different clusters and we combined the two clusters into one by hand. Our improved sensitivity and specificity is despite the fact that OPASS is fully online, whereas all the algorithms (besides OSORT) that we compare to are batch algorithms using all data for all spikes. Note that all the comparison algorithms pre-process the data via thresholding at some constant (which we set to three standard deviations above the mean). To assess the extent to which the performance of OPASS is due to not thresholding, we implement FAKE-OPASS, which thresholds the data. Indeed, FAKE-OPASS's performance is much like that of the batch algorithms. To get uncertainty estimates, we split the data into ten random two-minute segments and repeated this analysis; the results are qualitatively similar.
One possible explanation for the relatively poor performance of the batch algorithms as compared to OPASS is a poor choice of the important (but often overlooked) threshold parameter. The right panel of Fig. 1 shows the receiver operating characteristic (ROC) curves for the k-means algorithms as well as OPASS and MOPASS (where M indicates multichannel; see below for detail). Although we
[Figure panels: "Performance on the IC Cluster", "ROC Curves for the IC Cluster", "Overlapping Spikes", and "Overlapping Spike Residuals"; axis ticks and legend labels from the original figures are omitted.]
Figure 1: OPASS achieves improved sensitivity and specificity over all competing methods on partial ground truth data. (a) True positive and false positive rates for all variants of OPASS and several competing algorithms. (b) ROC curves demonstrating that OPASS outperforms all competitor algorithms, regardless of threshold (? indicates learning ? from the data).
Figure 2: OPASS detects multiple overlapping waveforms. (Top Left) The observed voltage (solid black), MAP waveform 1 (red), MAP waveform 2 (blue), and waveform from the sum (dashed black). (Bottom Left) Residuals from the same example snippet, showing a clear improvement in residuals.
typically run OPASS without tuning parameters, the prior on ? sets the expected number of spikes,
which we can vary in a kind of "empirical Bayes" strategy. Indeed, the OPASS curves are fully
above the batch curves for all thresholds and priors, suggesting that regardless of which threshold
one chooses for pre-processing, OPASS always does better on these data than all the competitor
algorithms. Moreover, in OPASS we are able to infer the parameter ? at a reasonable point; the
inferred ? is shown in the left panel of Fig. 1 and as the points along the curve in the right panel.
These figures also reveal that using the correlated noise model greatly improves performance.
The above analysis suggests that OPASS's ability to detect signals more reliably than thresholding
contributes to its success. In the following, we provide evidence suggesting how several of OPASS's
key features are fundamental to this improvement.
Overlapping Spike Detection A putative reason for the improved sensitivity and specificity of
OPASS over other algorithms is its ability to detect overlapping spikes. When spikes overlap, although the result can accurately be modeled as a linear sum in voltage space, the resulting waveform
often does not appear in any cluster in PC space (see [1]). However, our online approach can readily
find such overlapping spikes. Fig. 2 (top left panel) shows one of 135 examples where
OPASS believed that multiple waveforms were overlapping. Note that even though the waveform
peaks are approximately 1 ms from one another, thresholding algorithms do not pick up these spikes,
because they look different in PC space.
Indeed, by virtue of estimating the presence of multiple spikes, the residual squared error between
the expected voltage and observed voltage shrinks for this snippet (bottom left). The right panel
of Fig. 2 shows the density of the residual errors for all putative overlapping spikes. The mass
of this density is significantly smaller than the mass of the other scenarios. Of the 135 pairs of
overlapping spikes, 37 of those spikes came from the intracellular neuron. Thus, while it seems
detecting overlapping spikes helps, it does not fully explain the improvements over the competitor
algorithms.
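The hypothesis comparison behind Fig. 2 can be sketched as follows: model a snippet as a linear sum of templates and pick whichever of the four hypotheses (no spike, 1st only, 2nd only, both) minimizes the residual sum of squares. The Gaussian-bump templates and noise level are illustrative assumptions, not the model's actual inference.

```python
import numpy as np

def best_hypothesis(snippet, w1, w2):
    """Compare residual sum of squares under four hypotheses:
    no spike, waveform 1 only, waveform 2 only, or their linear sum."""
    hypotheses = {
        "none": np.zeros_like(snippet),
        "1st only": w1,
        "2nd only": w2,
        "both": w1 + w2,
    }
    rss = {name: float(np.sum((snippet - h) ** 2)) for name, h in hypotheses.items()}
    return min(rss, key=rss.get), rss

t = np.linspace(0, 3, 90)                      # 3 ms snippet
w1 = np.exp(-((t - 1.0) ** 2) / 0.02)          # template peaking near 1 ms
w2 = -0.8 * np.exp(-((t - 2.0) ** 2) / 0.02)   # template peaking near 2 ms
rng = np.random.default_rng(0)
snippet = w1 + w2 + 0.05 * rng.standard_normal(t.size)
best, rss = best_hypothesis(snippet, w1, w2)
print(best)  # prints "both"
```

The residual under "both" is close to the noise floor, while the single-waveform hypotheses leave the other template unexplained, mirroring the residual shrinkage reported above.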
Time-Varying Waveform Adaptation As has been demonstrated previously [26], the waveform
shape of a neuron may change over time. The mean waveform over time for the intracellular neuron
is shown in Fig. 3a. Clearly, the mean waveform is changing over time. Moreover, these changes are
reflected in the principal component space (Fig. 3b). We therefore compared the posterior means and variances
under OPASS with those under OPASS-R, which models the mean of the dictionary weights via an auto-regressive
process. Fig. 3c shows that the auto-regressive model for the mean dictionary weights yields a time-varying posterior (top), whereas the static prior yields a constant posterior mean with increasing
posterior marginal variances (bottom). More precisely, the mean of the posterior standard deviations
for the time-varying prior is about half of that for the static prior's posteriors. Indeed, OPASS-R
yields 11 more true detections than OPASS.
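The contrast between the auto-regressive and static priors can be illustrated with a scalar Kalman-style recursion on a single (hypothetical) dictionary weight: with process noise q > 0 the posterior mean tracks a drifting amplitude, while q = 0 recovers a static prior whose mean stops adapting. The AR coefficient, noise variances, and drift schedule below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ar1_posterior_track(obs, a=1.0, q=0.01, r=0.05, m0=1.0, v0=1.0):
    """Kalman-style recursion: AR(1) prior on a scalar dictionary weight."""
    m, v, means = m0, v0, []
    for y in obs:
        m_pred, v_pred = a * m, a * a * v + q   # propagate the prior
        k = v_pred / (v_pred + r)               # Kalman gain
        m = m_pred + k * (y - m_pred)           # posterior mean
        v = (1 - k) * v_pred                    # posterior variance
        means.append(m)
    return np.array(means)

amps = np.linspace(1.0, 0.5, 200)               # amplitude decays over time
tracked = ar1_posterior_track(amps, q=0.01)     # time-varying prior
static = ar1_posterior_track(amps, q=0.0)       # static prior
print(abs(tracked[-1] - 0.5) < abs(static[-1] - 0.5))  # -> True
```

The static posterior mean converges toward an average over the whole recording, while the AR posterior follows the decaying amplitude with a small lag, analogous to the behavior in Fig. 3c.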
Multielectrode Array OPASS achieved a heightened sensitivity by incorporating multiple channels (see the MOPASS point in Fig. 1). We further evaluate the impact of multiple channels using a three
[Figure 3 panels omitted: evolution of the IC waveform shape, its evolution in PC space (PCA components 1 and 2), and the IC cluster posterior parameters; see the caption below.]
Figure 3: The IC waveform changes over time, which our posterior parameters track. (a) Mean
IC waveforms over time. Each colored line represents the mean of the waveform averaged over 24
seconds, with color denoting the time interval. This neuron decreases in amplitude over the period
of the recording. (b) The same waveforms plotted in PC space still capture the temporal variance.
(c) The mean and standard deviation of the waveforms at three time points for the auto-regressive
prior on the mean waveform (top) and the static prior (bottom). While the auto-regressive prior admits
adaptation to the time-varying mean, the posterior of the static prior simply increases its variance.
Figure 4: Improving OPASS by incorporating multiple channels. The top 2 most prevalent waveforms from the NeuroNexus dataset with three channels. Note that the left panel has a waveform that appears on both channel 2 and channel 3, whereas the waveform in the right panel only appears in channel 3. If only channel 3 were used, it would be difficult to separate these waveforms.
channel NeuroNexus shank (Supp. Fig. 7a). In Fig. 4 we show the top two most prevalent waveforms from these data across the three electrodes. Had only the third electrode been used, these two
waveforms would not be distinct (as evidenced by their substantial overlap in PC space upon using
only the third channel in Fig. 7b). This suggests that borrowing strength across electrodes improves
detection accuracy. Supplementary Fig. 10 shows a similar plot for the eight-channel data.
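The benefit of borrowing strength across electrodes can be sketched with synthetic data: two units whose features coincide on channel 3 but differ on channel 2 become separable only when the channels are concatenated. All numbers below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Two units: on channel 3 both look alike, but unit A also appears on channel 2.
ch3_a = rng.normal(1.0, 0.1, n); ch3_b = rng.normal(1.0, 0.1, n)
ch2_a = rng.normal(1.0, 0.1, n); ch2_b = rng.normal(0.0, 0.1, n)

single = np.concatenate([ch3_a, ch3_b])            # channel 3 only
multi = np.vstack([np.column_stack([ch2_a, ch3_a]),
                   np.column_stack([ch2_b, ch3_b])])  # channels 2 and 3 jointly

def separability(x_a, x_b):
    """Distance between cluster means in pooled-noise units."""
    gap = np.linalg.norm(np.mean(x_a, axis=0) - np.mean(x_b, axis=0))
    return gap / np.sqrt(np.mean(np.var(x_a, axis=0) + np.var(x_b, axis=0)))

sep_multi = separability(multi[:n], multi[n:])
sep_single = separability(single[:n, None], single[n:, None])
print(sep_multi > sep_single)  # -> True
```

With only channel 3 the two clusters overlap almost completely, while the joint representation cleanly separates them, mirroring the overlap in PC space described above.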
5 Discussion
Our improved sensitivity and specificity seem to arise from multiple sources including (i) improved detection, (ii) accounting for correlated noise, (iii) capturing overlapping spikes, (iv) tracking waveform dynamics, and (v) utilizing multiple channels. While others have developed closely
related Bayesian models for clustering [8, 27], deconvolution-based techniques [1], time-varying
waveforms [26], or online methods [25, 3], we are the first to our knowledge to incorporate all of
these.
An interesting implication of our work is that our errors may be irreconcilable using
merely first-order methods (that only consider the mean waveform to detect and cluster). Supp. Fig.
8a shows that the mean waveforms of the true and false positives are essentially identical, suggesting that
even in the full 30-dimensional space, excluding those waveforms from the intracellular cluster would
be difficult. Projecting each waveform onto the first two PCs is similarly suggestive, as the missed
positives do not seem to be in the cluster of the true positives (Supp. Fig. 8b). Thus, in future work,
we will explore dynamic and multiscale dictionaries [28], as well as incorporate a richer history
and stimulus dependence.
Acknowledgments
This research was supported in part by the Defense Advanced Research Projects Agency (DARPA),
under the HIST program managed by Dr. Jack Judy.
8
References
[1] J. W. Pillow, J. Shlens, E. J. Chichilnisky, and E. P. Simoncelli. A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings. PLoS ONE, 8(5):1-15, 2013.
[2] J. S. Prentice, J. Homann, K. D. Simmons, G. Tkacik, V. Balasubramanian, and P. C. Nelson. Fast, scalable, Bayesian spike identification for multi-electrode arrays. PLoS ONE, 6(7):e19884, January 2011.
[3] F. Franke, M. Natora, C. Boucsein, M. H. J. Munk, and K. Obermayer. An online spike detection and spike classification algorithm capable of instantaneous resolution of overlapping spikes. Journal of Computational Neuroscience, 29(1-2):127-148, August 2010.
[4] W. Gerstner and W. M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 1st edition, August 2002.
[5] M. S. Lewicki. A review of methods for spike sorting: the detection and classification of neural action potentials. Network: Computation in Neural Systems, 1998.
[6] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[7] J. F. C. Kingman. Poisson Processes, volume 3 of Oxford Studies in Probability. The Clarendon Press, Oxford University Press, New York, 1993. Oxford Science Publications.
[8] F. Wood and M. J. Black. A non-parametric Bayesian alternative to spike sorting. Journal of Neuroscience Methods, 173:1-12, 2008.
[9] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59-78, 1967.
[10] L. F. James, A. Lijoi, and I. Pruenster. Posterior analysis for normalized random measures with independent increments. Scand. J. Stat., 36:76-97, 2009.
[11] N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. Annals of Statistics, 18(3):1259-1294, 1990.
[12] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, volume 11, 2007.
[13] K. Sato. Levy Processes and Infinitely Divisible Distributions. Cambridge University Press, 1990.
[14] D. Applebaum. Levy Processes and Stochastic Calculus. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2004.
[15] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209-230, 1973.
[16] A. Y. Lo. On a class of Bayesian nonparametric estimates: I. Density estimates. Annals of Statistics, 12(1):351-357, 1984.
[17] J. Pitman. Combinatorial stochastic processes. Technical Report 621, Department of Statistics, University of California at Berkeley, 2002. Lecture notes for St. Flour Summer School.
[18] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249-265, 2000.
[19] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161-173, 2001.
[20] D. M. Blei and M. I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121-144, 2006.
[21] T. P. Minka and Z. Ghahramani. Expectation propagation for infinite mixtures. Presented at the NIPS 2003 Workshop on Nonparametric Bayesian Methods and Infinite Models, 2003.
[22] L. Wang and D. B. Dunson. Fast Bayesian inference in Dirichlet process mixture models. Journal of Computational & Graphical Statistics, 2009.
[23] D. A. Henze, Z. Borhegyi, J. Csicsvari, A. Mamiya, K. D. Harris, and G. Buzsaki. Intracellular features predicted by extracellular recordings in the hippocampus in vivo. J. Neurophysiology, 2000.
[24] D. E. Carlson, Q. Wu, W. Lian, M. Zhou, C. R. Stoetzner, D. Kipke, D. Weber, J. T. Vogelstein, D. B. Dunson, and L. Carin. Multichannel electrophysiological spike sorting via joint dictionary learning and mixture modeling. IEEE TBME, 2013.
[25] U. Rutishauser, E. M. Schuman, and A. N. Mamelak. Online detection and sorting of extracellularly recorded action potentials in human medial temporal lobe recordings, in vivo. J. Neuro. Methods, 2006.
[26] A. Calabrese and L. Paninski. Kalman filter mixture model for spike sorting of non-stationary data. Journal of Neuroscience Methods, 196(1):159-169, 2011.
[27] J. Gasthaus, F. D. Wood, D. Gorur, and Y. W. Teh. Dependent Dirichlet process spike sorting. Advances in Neural Information Processing Systems, 21:497-504, 2009.
[28] G. Chen, M. Iwen, S. Chin, and M. Maggioni. A fast multiscale framework for data in high dimensions: Measure estimation, anomaly detection, and compressive measurements. In VCIP, 2012 IEEE, 2012.
Transportability from Multiple Environments
with Limited Experiments
Elias Bareinboim*
UCLA
Sanghack Lee*
Penn State University
Vasant Honavar
Penn State University
Judea Pearl
UCLA
Abstract
This paper considers the problem of transferring experimental findings learned
from multiple heterogeneous domains to a target domain, in which only limited
experiments can be performed. We reduce questions of transportability from multiple domains and with limited scope to symbolic derivations in the causal calculus, thus extending the original setting of transportability introduced in [1], which
assumes only one domain with full experimental information available. We further
provide different graphical and algorithmic conditions for computing the transport
formula in this setting, that is, a way of fusing the observational and experimental information scattered throughout different domains to synthesize a consistent
estimate of the desired effects in the target domain. We also consider the issue of
minimizing the variance of the produced estimand in order to increase power.
1 Motivation
Transporting and synthesizing experimental knowledge from heterogeneous settings are central to
scientific discovery. Conclusions that are obtained in a laboratory setting are transported and applied
elsewhere in an environment that differs in many aspects from that of the laboratory. In data-driven
sciences, experiments are conducted on disparate domains, but the intention is almost invariably to
fuse the acquired knowledge, and translate it into some meaningful claim about a target domain,
which is usually different than any of the individual study domains.
However, the conditions under which this extrapolation can be legitimized have not been formally
articulated until very recently. Although the problem has been discussed in many areas of statistics,
economics, and the health sciences, under rubrics such as ?external validity? [2, 3], ?meta-analysis?
[4], ?quasi-experiments? [5], ?heterogeneity? [6], these discussions are limited to verbal narratives
in the form of heuristic guidelines for experimental researchers ? no formal treatment of the problem has been attempted to answer the practical challenge of generalizing causal knowledge across
multiple heterogeneous domains with disparate experimental data posed in this paper.
The fields of artificial intelligence and statistics provide the theoretical underpinnings necessary for
tackling transportability. First, the distinction between statistical and causal knowledge has received
syntactic representation through causal diagrams [7, 8, 9], which became a popular tool for causal
inference in data-driven fields. Second, the inferential machinery provided by the causal calculus
(do-calculus) [7, 9, 10] is particularly suitable for handling knowledge transfer across domains.
Armed with these techniques, [1] introduced a formal language for encoding differences and commonalities between domains accompanied with necessary or sufficient conditions under which transportability of empirical findings is feasible between two domains, a source and a target; then, these
conditions were extended for a complete characterization for transportability in one domain with unrestricted experimental data [11]. Subsequently, these results were generalized for the settings when
* These authors contributed equally to this paper. The authors' addresses are respectively
[email protected], [email protected], [email protected], [email protected].
only limited experiments are available in the source domain [12, 13], and further for when multiple
source domains with unrestricted experimental information are available [14, 15]. This paper broadens these discussions introducing a more general setting in which multiple heterogeneous sources
with limited and distinct experiments are available, a task that we call here "mz-transportability".1
More formally, the mz-transportability problem concerns the transfer of causal knowledge
from a heterogeneous collection of source domains Π = {π1, ..., πn} to a target domain π*. In each
domain πi ∈ Π, experiments over a set of variables Zi can be performed, and causal knowledge
gathered. In π*, potentially different from πi, only passive observations can be collected (this constraint is weakened later on). The problem is to infer a causal relationship R in π* using knowledge
obtained in Π. Clearly, if nothing is known about the relationship between Π and π*, the problem is
trivial; no transfer can be justified. Yet the fact that all scientific experiments are conducted with the
intent of being used elsewhere (e.g., outside the lab) implies that scientific progress relies on the assumption that certain domains share common characteristics and that, owed to these commonalities,
causal claims would be valid in new settings even where experiments cannot be conducted.
The problem stated in this paper generalizes the one-dimensional version of transportability with
limited scope and the multi-dimensional version with unlimited scope. Remarkably, while the effects of
interest might not be individually transportable to the target domain from the experiments in any of
the available sources, combining different pieces from the various sources may enable the estimation
of the desired effects (to be shown later on). The goal of this paper is to formally understand under
which conditions the target quantity is (non-parametrically) estimable from the available data.
2 Previous work and our contributions
Consider Fig. 1(a) in which the node S represents factors that produce differences between source
and target populations. Assume that we conduct a randomized trial in Los Angeles (LA) and estimate the causal effect of treatment X on outcome Y for every age group Z = z, denoted by
P (y|do(x), z). We now wish to generalize the results to the population of the United States (U.S.),
but we find the distribution P(x, y, z) in LA to be different from the one in the U.S. (call the latter
P*(x, y, z)). In particular, the average age in the U.S. is significantly higher than that in LA. How
are we to estimate the causal effect of X on Y in the U.S., denoted R = P*(y | do(x))?2,3
The selection diagram for this example (Fig. 1(a)) conveys the assumption that the only difference
between the two populations are factors determining age distributions, shown as S → Z, while age-specific effects P*(y | do(x), Z = z) are invariant across populations. Difference-generating factors
are represented by a special set of variables called selection variables S (or simply S-variables),
which are graphically depicted as square nodes (■). From this assumption, the overall causal effect
in the U.S. can be derived as follows:
X
P ? (y|do(x), z)P ? (z)
R =
z
=
X
P (y|do(x), z)P ? (z)
(1)
z
The last line is the transport formula for R. It combines experimental results obtained in LA,
P(y | do(x), z), with observational aspects of the U.S. population, P*(z), to obtain an experimental
claim P*(y | do(x)) about the U.S. In this trivial example, the transport formula amounts to a simple
re-calibration (or re-weighting) of the age-specific effects to account for the new age distribution.
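A numerical sketch of the transport formula in Eq. (1): the age-specific effects are assumed invariant, so the target effect is obtained by re-weighting them with the target population's age distribution. All probabilities below are made-up illustrative numbers, not estimates from any study.

```python
import numpy as np

# Hypothetical age-specific causal effects P(y=1 | do(x), z) from the LA trial
# (rows: treatment x in {0, 1}; columns: age group z).
p_y_do_x_z = np.array([[0.20, 0.30, 0.50],
                       [0.10, 0.25, 0.60]])
p_z_la = np.array([0.5, 0.3, 0.2])   # LA age distribution P(z)
p_z_us = np.array([0.2, 0.3, 0.5])   # U.S. age distribution P*(z) (older)

# Transport formula (1): R = sum_z P(y | do(x), z) * P*(z)
r_us = p_y_do_x_z @ p_z_us   # effect re-calibrated to the U.S.
r_la = p_y_do_x_z @ p_z_la   # effect in the source population, for contrast
print(r_us, r_la)
```

Because the U.S. distribution puts more weight on older groups, the transported effect differs from the LA estimate even though the age-specific effects themselves are unchanged.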
In general, however, a more involved mixture of experimental and observational findings would
be necessary to obtain a bias-free estimate of the target relation R. Fig. 1(b) depicts the smallest
example in which transportability is not feasible even when experiments over X in π are available.
In real world applications, it may happen that certain controlled experiments cannot be conducted
in the source environment (for financial, ethical, or technical reasons), so only a limited amount
1 The machine learning literature has been concerned with discrepancies among domains almost exclusively in the context of predictive or classification tasks, as opposed to learning causal or counterfactual measures [16, 17]. Interestingly enough, recent work on anticausal learning moves towards more general modalities of learning and also leverages knowledge about the underlying data-generating structure [18, 19].
2 We will use Px(y | z) interchangeably with P(y | do(x), z).
3 We use the structural interpretation of causal diagrams as described in [9, pp. 205].
Figure 1: The selection variables S are depicted as square nodes (■). (a) Selection diagram illustrating when transportability between two domains is trivially solved through simple recalibration. (b)
The smallest possible selection diagram in which a causal relation is not transportable. (c) Selection
diagram illustrating transportability when only experiments over {Z1} are available in the source.
of experimental information can be gathered. A natural question arises whether the investigator in
possession of a limited set of experiments would still be able to estimate the desired effects at the
target domain. For instance, we assume in Fig. 1(c) that experiments over Z1 are available and the
target quantity is R = P*(y | do(x)), which can be shown to be equivalent to P(y | x, do(Z1)), the
conditional distribution of Y given X in the experimental study when Z1 is randomized.4
One might surmise that multiple pairwise z-transportability would be sufficient to solve the mz-transportability problem, but this is not the case. To witness, consider Fig. 2(a,b), which concerns
the transport of experimental results from two sources ({πa, πb}) to infer the effect of X on Y
in π*, R = P*(y | do(x)). In these diagrams, X may represent the treatment (e.g., cholesterol
level), Z1 represents a pre-treatment variable (e.g., diet), Z2 represents an intermediate variable
(e.g., biomarker), and Y represents the outcome (e.g., heart failure). We assume that experimental
studies randomizing {Z1 , Z2 } can be conducted in both domains. A simple analysis based on [12]
can show that R cannot be z-transported from either source alone, but it turns out that combining in
a special way experiments from both sources allows one to determine the effect in the target.
More interestingly, we consider the more stringent scenario where only certain experiments can
be performed in each of the domains. For instance, assume that it is only possible to conduct
experiments over {Z2 } in ?a and over {Z1 } in ?b . Obviously, R will not be z-transported individually
P from these domains, but it turns out that taking both sets of experiments into account,
R = z2 P (a) (y|do(z2 ))P (b) (z2 |x, do(Z1 )), which fully uses all pieces of experimental data available. In other words, we were able to decompose R into subrelations such that each one is separately
z-transportable from the source domains, and so is the desired target quantity. Interestingly, it is the
case in this example that if the domains in which experiments were conducted were reversed (i.e.,
{Z1 } randomized in ?a , {Z2 } in ?b ), it will not be possible to transport R by any method ? the
target relation is simply not computable from the available data (formally shown later on).
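The combination step in this example is purely mechanical once the two subrelations are available from the sources. The sketch below (with made-up numeric tables for binary variables; none of these numbers come from the paper) evaluates R = Σ_{z2} P^(a)(y|do(z2)) P^(b)(z2|x, do(Z1)):

```python
# Combine experimental pieces from two source domains into the target effect
#   R(y | do(x)) = sum_{z2} P_a(y | do(z2)) * P_b(z2 | x, do(Z1)).
# The numeric tables below are invented binary distributions for illustration.

# P_a(y | do(z2)): outer key z2, inner key y
P_a_y_do_z2 = {0: {0: 0.7, 1: 0.3},
               1: {0: 0.2, 1: 0.8}}

# P_b(z2 | x, do(Z1)): outer key x, inner key z2 (Z1 already randomized over)
P_b_z2_given_x = {0: {0: 0.6, 1: 0.4},
                  1: {0: 0.1, 1: 0.9}}

def target_effect(y, x):
    """R(y | do(x)) in the target domain, fusing both sources."""
    return sum(P_a_y_do_z2[z2][y] * P_b_z2_given_x[x][z2]
               for z2 in (0, 1))

# For each x, the fused result is a proper distribution over y.
for x in (0, 1):
    assert abs(target_effect(0, x) + target_effect(1, x) - 1.0) < 1e-12
```

Note that neither table alone determines R(y | do(x)); only their combination does, which is the point of the example.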
This illustrates some of the subtle issues mz-transportability entails, which cannot be immediately
cast in terms of previous instances of the transportability class. In the sequel, we try to better understand some of these issues, and we develop sufficient or (specific) necessary conditions for deciding
special transportability for arbitrary collections of selection diagrams and sets of experiments. We further construct an algorithm for deciding mz-transportability of joint causal effects and returning the
correct transport formula whenever this is possible. We also consider issues relative to the variance
of the estimand, aiming at improving sample efficiency and increasing statistical power.
3  Graphical conditions for mz-transportability
The basic semantical framework in our analysis rests on structural causal models as defined in [9,
pp. 205], also called data-generating models. In the structural causal framework [9, Ch. 7], actions
are modifications of functional relationships, and each action do(x) on a causal model M produces
⁴ A typical example is whether we can estimate the effect of cholesterol (X) on heart failure (Y) by experiments on diet (Z1) given that cholesterol levels cannot be randomized [20].
Figure 2: Selection diagrams illustrating the impossibility of estimating R = P*(y|do(x)) through
individual transportability from π_a and π_b even when Z = {Z1, Z2} (for (a,b) and (c,d)). If we
assume, more stringently, availability of experiments Za = {Z2}, Zb = {Z1}, Z* = {}, a more
elaborate analysis can show that R can be estimated by combining different pieces from both domains.
a new model M_x = ⟨U, V, F_x, P(U)⟩, where F_x is obtained after replacing f_X ∈ F for every
X ∈ X with a new function that outputs a constant value x given by do(x).⁵
We follow the conventions given in [9]. We denote variables by capital letters and their realized
values by small letters. Similarly, sets of variables will be denoted by bold capital letters, sets of
realized values by bold small letters. We use the typical graph-theoretic terminology with the corresponding abbreviations Pa(Y)_G and An(Y)_G, which will denote respectively the set of observable
parents and ancestors of the node set Y in G. A graph G_Y will denote the induced subgraph of G containing nodes in Y and all arrows between such nodes. Finally, G_{X̄Z̲} stands for the edge subgraph
of G where all incoming arrows into X and all outgoing arrows from Z are removed.
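To make this terminology concrete, the following sketch (illustrative code, not from the paper) implements the two graph operations used throughout, An(Y)_G and the edge subgraph, for DAGs stored as child-adjacency dicts:

```python
# Minimal DAG utilities matching the terminology above: An(Y)_G (ancestors of a
# node set, inclusive) and the edge subgraph where incoming arrows into X and
# outgoing arrows from Z are removed. Graphs are dicts: node -> set of children.

def ancestors(graph, ys):
    """An(Y)_G: Y together with every node that has a directed path into Y."""
    parents = {v: set() for v in graph}
    for u, children in graph.items():
        for c in children:
            parents[c].add(u)
    result, frontier = set(ys), list(ys)
    while frontier:
        v = frontier.pop()
        for p in parents[v]:
            if p not in result:
                result.add(p)
                frontier.append(p)
    return result

def edge_subgraph(graph, cut_into=(), cut_out_of=()):
    """G with incoming arrows into `cut_into` and outgoing arrows from
    `cut_out_of` removed (the G_{bar X, underline Z} of the text)."""
    return {u: set() if u in cut_out_of
            else {c for c in children if c not in cut_into}
            for u, children in graph.items()}

# Toy chain Z1 -> X -> Z2 -> Y
G = {"Z1": {"X"}, "X": {"Z2"}, "Z2": {"Y"}, "Y": set()}
assert ancestors(G, {"Y"}) == {"Z1", "X", "Z2", "Y"}
assert edge_subgraph(G, cut_into={"X"})["Z1"] == set()
```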
Key to the analysis of transportability is the notion of "identifiability," defined below, which expresses the requirement that causal effects are computable from a combination of data P and assumptions embodied in a causal graph G.
Definition 1 (Causal Effects Identifiability (Pearl, 2000, pp. 77)). The causal effect of an action
do(x) on a set of variables Y such that Y ∩ X = ∅ is said to be identifiable from P in G if P_x(y)
is uniquely computable from P(V) in any model that induces G.
Causal models and their induced graphs are usually associated with one particular domain (also
called setting, study, population, or environment). In ordinary transportability, this representation
was extended to capture properties of two domains simultaneously. This is possible if we assume
that the structural equations share the same set of arguments, though the functional forms of the
equations may vary arbitrarily [11].⁶
Definition 2 (Selection Diagram). Let ⟨M, M*⟩ be a pair of structural causal models [9, pp. 205]
relative to domains ⟨π, π*⟩, sharing a causal diagram G. ⟨M, M*⟩ is said to induce a selection
diagram D if D is constructed as follows:
1. Every edge in G is also an edge in D;
2. D contains an extra edge S_i → V_i whenever there might exist a discrepancy f_i ≠ f_i* or
P(U_i) ≠ P*(U_i) between M and M*.
In words, the S-variables locate the mechanisms where structural discrepancies between the two
domains are suspected to take place.⁷ Alternatively, the absence of a selection node pointing to
a variable represents the assumption that the mechanism responsible for assigning value to that
variable is identical in both domains.
⁵ The results presented here are also valid in other formalisms for causality based on potential outcomes.
⁶ As discussed in the reference, the assumption of no structural changes between domains can be relaxed, but some structural assumptions regarding the discrepancies between domains must still hold.
⁷ Transportability assumes that enough structural knowledge about both domains is known in order to substantiate the production of their respective causal diagrams. In the absence of such knowledge, causal discovery algorithms might be used to infer the diagrams from data [8, 9].
Armed with the concepts of identifiability and selection diagrams, mz-transportability of causal effects can be defined as follows:
Definition 3 (mz-Transportability). Let D = {D^(1), ..., D^(n)} be a collection of selection diagrams
relative to source domains Π = {π_1, ..., π_n} and target domain π*, respectively, and let Z_i (and Z*)
be the variables in which experiments can be conducted in domain π_i (and π*). Let ⟨P^i, I_z^i⟩ be
the pair of observational and interventional distributions of π_i, where I_z^i = ∪_{Z' ⊆ Z_i} P^i(v|do(z')),
and, in an analogous manner, let ⟨P*, I_z*⟩ be the observational and interventional distributions of π*.
The causal effect R = P*_x(y|w) is said to be mz-transportable from Π to π* in D if P*_x(y|w) is
uniquely computable from ∪_{i=1,...,n} ⟨P^i, I_z^i⟩ ∪ ⟨P*, I_z*⟩ in any model that induces D.
The requirement that R is uniquely computable from ⟨P*, I_z*⟩ and ⟨P^i, I_z^i⟩ from all sources has a
syntactic image in the causal calculus, which is captured by the following sufficient condition.
Theorem 1. Let D = {D^(1), ..., D^(n)} be a collection of selection diagrams relative to source
domains Π = {π_1, ..., π_n} and target domain π*, respectively, and let S_i represent the collection of
S-variables in the selection diagram D^(i). Let {⟨P^i, I_z^i⟩} and ⟨P*, I_z*⟩ be respectively the pairs of
observational and interventional distributions in the sources Π and target π*. The relation R =
P*(y|do(x), w) is mz-transportable from Π to π* in D if the expression P(y|do(x), w, S_1, ..., S_n)
is reducible, using the rules of the causal calculus, to an expression in which (1) do-operators that
apply to subsets of I_z^i have no S_i-variables or (2) do-operators apply only to subsets of I_z*.
This result provides a powerful way to establish mz-transportability syntactically, but it is not immediately obvious whether a sequence of applications of the rules of the causal calculus that achieves
the reduction required by the theorem exists, and even if such a sequence exists, it is not obvious how
to obtain it. For concreteness, we illustrate this result using the selection diagrams in Fig. 2(a,b).
Corollary 1. P*(y|do(x)) is mz-transportable in Fig. 2(a,b) with Za = {Z2} and Zb = {Z1}.
Proof. The goal is to show that R = P*(y|do(x)) is mz-transportable from {π_a, π_b} to π* using
experiments conducted over {Z2} in π_a and {Z1} in π_b. Note that naively trying to transport R
from each of the domains individually is not possible, but R can be decomposed as follows:
P*(y|do(x)) = P*(y|do(x), do(Z1))   (2)
 = Σ_{z2} P*(y|do(x), do(Z1), z2) P*(z2|do(x), do(Z1))   (3)
 = Σ_{z2} P*(y|do(x), do(Z1), do(z2)) P*(z2|do(x), do(Z1)),   (4)
where Eq. (2) follows by rule 3 of the causal calculus since (Z1 ⊥⊥ Y | X)_{D_{X̄,Z̄1}} holds, we condition on Z2 in Eq. (3), and Eq. (4) follows by rule 2 of the causal calculus since (Z2 ⊥⊥ Y | X, Z1)_{D_{X̄,Z̄1,Z̲2}}, where D is the diagram in π* (despite the location of the S-nodes).
Now we can rewrite the first term of Eq. (4) as indicated by the Theorem (and suggested by Def. 2):
P*(y|do(x), do(Z1), do(z2)) = P(y|do(x), do(Z1), do(z2), Sa, Sb)   (5)
 = P(y|do(x), do(Z1), do(z2), Sb)   (6)
 = P(y|do(z2), Sb)   (7)
 = P^(a)(y|do(z2)),   (8)
where Eq. (5) follows from the theorem (and the definition of selection diagram), Eq. (6) follows
from rule 1 of the causal calculus since (Sa ⊥⊥ Y | Z1, Z2, X)_{D^(a)_{Z̄1,Z̄2,X̄}}, and Eq. (7) follows from rule
3 of the causal calculus since (Z1, X ⊥⊥ Y | Z2)_{D^(a)_{Z̄1,Z̄2,X̄}}. Note that this equation matches the
syntactic goal of Theorem 1, since we have precisely do(z2) separated from Sa (and Z2 ∈ I_z^a); so
we can rewrite the expression, which results in Eq. (8), by the definition of selection diagram.
Finally, we can rewrite the second term of Eq. (4) as follows:
P*(z2|do(x), do(Z1)) = P(z2|do(x), do(Z1), Sa, Sb)   (9)
 = P(z2|do(x), do(Z1), Sa)   (10)
 = P(z2|x, do(Z1), Sa)   (11)
 = P^(b)(z2|x, do(Z1)),   (12)
where Eq. (9) follows from the theorem (and the definition of selection diagram), Eq. (10) follows
from rule 1 of the causal calculus since (Sb ⊥⊥ Z2 | Z1, X)_{D^(b)_{Z̄1,X̄}}, and Eq. (11) follows from rule 2 of
the causal calculus since (X ⊥⊥ Z2 | Z1)_{D^(b)_{Z̄1,X̲}}. Note that this equation matches the condition of the
theorem, separating do(Z1) from Sb (i.e., experiments over Z1 can be used since they are available in
π_b), so we can rewrite Eq. (12) using the definition of selection diagrams; the corollary follows.
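Every step in a derivation like the one above is licensed by a d-separation test in a mutilated graph. A generic check (the standard ancestral-moralization criterion, sketched here on toy graphs independent of the paper's diagrams) can be written as:

```python
# d-separation via the ancestral-moralization criterion: X and Y are
# d-separated by Z in a DAG iff X and Y are disconnected in the moralized
# ancestral graph of X ∪ Y ∪ Z after deleting Z. Graphs: node -> set of children.

def d_separated(graph, xs, ys, zs):
    nodes = set(xs) | set(ys) | set(zs)
    # 1. Restrict to ancestors of the query nodes.
    anc = set(nodes)
    changed = True
    while changed:
        changed = False
        for u, children in graph.items():
            if u not in anc and children & anc:
                anc.add(u)
                changed = True
    # 2. Moralize: keep the skeleton, plus edges between parents of a common child.
    und = {v: set() for v in anc}
    for u in anc:
        for c in graph[u] & anc:
            und[u].add(c); und[c].add(u)
    for c in anc:
        parents = [u for u in anc if c in graph[u]]
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                und[p].add(q); und[q].add(p)
    # 3. Delete Z and test reachability from X to Y.
    frontier, seen = list(set(xs) - set(zs)), set(xs) - set(zs)
    while frontier:
        v = frontier.pop()
        for w in und[v] - set(zs):
            if w in set(ys):
                return False
            if w not in seen:
                seen.add(w); frontier.append(w)
    return True

# Chain A -> B -> C: conditioning on B blocks the path.
chain = {"A": {"B"}, "B": {"C"}, "C": set()}
assert d_separated(chain, {"A"}, {"C"}, {"B"})
assert not d_separated(chain, {"A"}, {"C"}, set())
# Collider A -> C <- B: marginally separated, opened by conditioning on C.
collider = {"A": {"C"}, "B": {"C"}, "C": set()}
assert d_separated(collider, {"A"}, {"B"}, set())
assert not d_separated(collider, {"A"}, {"B"}, {"C"})
```

In practice one would run such a test on the mutilated graphs (arrows into or out of the intervened variables removed) that each calculus rule prescribes.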
The next condition for mz-transportability is more visible than Theorem 1 (albeit weaker), which
also demonstrates the challenge of relating mz-transportability to other types of transportability.
Corollary 2. R = P*(y|do(x)) is mz-transportable in D if there exists Z'_i ⊆ Z_i such that all paths
from Z'_i to Y are blocked by X, (S_i ⊥⊥ Y | X, Z'_i)_{D^(i)_{X̄,Z̲'_i}}, and R is computable from do(Z_i).
Remarkably, randomizing Z2 when applying Corollary 1 was instrumental to yield transportability
in the previous example, despite the fact that the directed paths from Z2 to Y were not blocked by X,
which suggests how different this transportability is from z-identifiability. So, it is not immediately
obvious how to combine the topological relations of the Z_i's with X and Y in order to create a general
condition for mz-transportability; the relationships between the distributions in the different domains
can get relatively intricate, but we defer this discussion for now and consider a simpler case.
It is usually not trivial to pursue a derivation of mz-transportability in the causal calculus, and next we
show an example in which such a derivation does not even exist. Consider again the diagrams in Fig.
2(a,b), and assume that randomized experiments are available over {Z1} in π_a and {Z2} in π_b.
Theorem 2. P*(y|do(x)) is not mz-transportable in Fig. 2(a,b) with Za = {Z1} and Zb = {Z2}.
Proof. Formally, we need to display two models M1, M2 such that the following relations hold (as
implied by Def. 3):
P^(a)_{M1}(Z1, X, Z2, Y) = P^(a)_{M2}(Z1, X, Z2, Y),
P^(b)_{M1}(Z1, X, Z2, Y) = P^(b)_{M2}(Z1, X, Z2, Y),
P^(a)_{M1}(X, Z2, Y|do(Z1)) = P^(a)_{M2}(X, Z2, Y|do(Z1)),
P^(b)_{M1}(Z1, X, Y|do(Z2)) = P^(b)_{M2}(Z1, X, Y|do(Z2)),
P*_{M1}(Z1, X, Z2, Y) = P*_{M2}(Z1, X, Z2, Y),   (13)
for all values of Z1, X, Z2, and Y, and also,
P*_{M1}(Y|do(X)) ≠ P*_{M2}(Y|do(X)),   (14)
for some value of X and Y.
Let V be the set of observable variables and U be the set of unobservable variables in D. Let us
assume that all variables in U ∪ V are binary. Let U1, U2 ∈ U be the common causes of Z1 and
X and Z2, respectively; let U3, U4, U5 ∈ U be a random disturbance exclusive to Z1, Z2, and Y,
respectively, and U6 ∈ U be an extra random disturbance exclusive to Z2, and U7, U8 ∈ U to Y. Let
Sa and Sb index the model in the following way: the tuples ⟨Sa = 1, Sb = 0⟩, ⟨Sa = 0, Sb = 1⟩,
⟨Sa = 0, Sb = 0⟩ represent domains π_a, π_b, and π*, respectively. Define the two models as follows:
M1 = { Z1 = U1 ⊕ U2 ⊕ U3 ⊕ Sa,  X = Z1 ⊕ U1,  Z2 = U2 ⊕ U4 ⊕ Sa ⊕ U6,  Y = (Z2 ∧ U5) ⊕ (U5 ∧ U7) ⊕ (Sb ∧ U8) }
M2 = { Z1 = U1 ⊕ U2 ⊕ U3 ⊕ Sa,  X = U1,  Z2 = (X ∧ U2 ∧ (U4 ∨ Sa)) ⊕ U6,  Y = (Z2 ∧ U5) ⊕ (U5 ∧ U7) ⊕ (Sb ∧ U8) }
where ⊕ represents the exclusive-or function. Both models agree in respect to P(U), which is
defined as P(U_i) = 1/2, i = 1, ..., 8. It is not difficult to evaluate these models and note that the
constraints given in Eqs. (13) and (14) are satisfied (including positivity); the theorem follows.
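Constraints of this kind can be checked mechanically by enumerating the 2^8 configurations of U. The sketch below does so for M1; the boolean equations follow the reconstruction above, which should be treated as an assumption since the source rendering is garbled. `distribution` returns the joint over (Z1, X, Z2, Y) for a given ⟨Sa, Sb⟩, optionally under do(X):

```python
from itertools import product

# Enumerate a binary structural model: each P(U_i) = 1/2, so every one of the
# 2^8 configurations of U = (U1, ..., U8) carries weight 1/256. The equations
# below mirror the reconstruction of M1 in the text (an assumption); swapping
# in M2's equations works identically.
def m1(u, sa, sb, do_x=None):
    u1, u2, u3, u4, u5, u6, u7, u8 = u
    z1 = u1 ^ u2 ^ u3 ^ sa
    x = (z1 ^ u1) if do_x is None else do_x
    z2 = u2 ^ u4 ^ sa ^ u6
    y = (z2 & u5) ^ (u5 & u7) ^ (sb & u8)
    return z1, x, z2, y

def distribution(model, sa, sb, do_x=None):
    """Joint P(Z1, X, Z2, Y) in the domain indexed by <sa, sb>."""
    dist = {}
    for u in product((0, 1), repeat=8):
        v = model(u, sa, sb, do_x)
        dist[v] = dist.get(v, 0.0) + 1.0 / 256
    return dist

# The target domain is <Sa=0, Sb=0>; the result is a proper joint distribution.
target = distribution(m1, 0, 0)
assert abs(sum(target.values()) - 1.0) < 1e-9
```

Comparing `distribution(m1, ...)` against the analogous enumeration of M2, with and without `do_x`, checks each equality in (13) and the inequality in (14) directly.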
4  Algorithm for computing mz-transportability
In this section, we build on previous analyses of identifiability [7, 21, 22, 23] in order to obtain a
mechanical procedure in which a collection of selection diagrams and experimental data is inputted,
and the procedure returns a transport formula whenever it is able to produce one. More specifically,
PROCEDURE TRmz(y, x, P, I, S, W, D)
INPUT: x, y: value assignments; P: local distribution relative to domain S (S = 0 indexes π*) and active
experiments I; W: weighting scheme; D: backbone of selection diagram; S_i: selection nodes in π_i (S_0 = ∅
relative to π*); [the following sets and distributions are globally defined: Z_i, P*, P^(i)_{Z_i}].
OUTPUT: P*_x(y) in terms of P*, P*_{Z*}, P^(i)_{Z_i}, or FAIL(D, C0).
1  if x = ∅, return Σ_{V\Y} P.
2  if V \ An(Y)_D ≠ ∅, return TRmz(y, x ∩ An(Y)_D, Σ_{V\An(Y)_D} P, I, S, W, D_{An(Y)}).
3  set W = (V \ X) \ An(Y)_{D_X̄}.
   if W ≠ ∅, return TRmz(y, x ∪ w, P, I, S, W, D).
4  if C(D \ X) = {C0, C1, ..., Ck}, return Σ_{V\{Y,X}} Π_i TRmz(ci, v \ ci, P, I, S, W, D).
5  if C(D \ X) = {C0},
6    if C(D) ≠ {D},
7      if C0 ∈ C(D), return Π_{i|Vi∈C0} Σ_{V\V_D^(i)} P / Σ_{V\V_D^(i-1)} P.
8      if (∃C')C0 ⊂ C' ∈ C(D),
         for {i|Vi ∈ C'}, set αi = αi ∪ v_D^(i-1) \ C'.
         return TRmz(y, x ∩ C', Π_{i|Vi∈C'} P(Vi|V_D^(i-1) ∩ C', αi), I, S, W, C').
9      else,
10       if I = ∅, for i = 0, ..., |D|,
           if (S_i ⊥⊥ Y | X)_{D^(i)_X̄} ∧ (Z_i ∩ X ≠ ∅), E_i = TRmz(y, x \ z_i, P, Z_i ∩ X, i, W, D \ {Z_i ∩ X}).
11       if |E| > 0, return Σ_{i=1}^{|E|} w_i^(j) E_i.
12       else, FAIL(D, C0).
Figure 3: Modified version of the identification algorithm, capable of recognizing mz-transportability.
our algorithm is called TRmz (see Fig. 3), and is based on the C-component decomposition for
identification of causal effects [22, 23] (and a version of the identification algorithm called ID).
The rationale behind TRmz is to apply Tian's factorization and decompose the target relation into
smaller, more manageable sub-expressions, and then try to evaluate whether each sub-expression
can be computed in the target domain. Whenever this evaluation fails, TRmz tries to use the experiments available from the target and, if possible, from the sources; this essentially implements the
declarative condition delineated in Theorem 1. Next, we consider the soundness of the algorithm.
Theorem 3 (soundness). Whenever TRmz returns an expression for P*_x(y), it is correct.
In the sequel, we demonstrate how the algorithm works through the mz-transportability of Q =
P*(y|do(x)) in Fig. 2(c,d) with Z* = {Z1}, Za = {Z2}, and Zb = {Z1}.
Since (V \ X) \ An(Y)_{D_X̄} = {Z2}, TRmz invokes line 3 with {Z2} ∪ {X} as the interventional
set. The new call triggers line 4 with C(D \ {X, Z2}) = {C0, C1, C2, C3}, where C0 = D_{Z1},
C1 = D_{Z3}, C2 = D_U, and C3 = D_{W,Y}; we invoke line 4 and try to mz-transport individually
Q0 = P*_{x,z2,z3,u,w,y}(z1), Q1 = P*_{x,z1,z2,u,w,y}(z3), Q2 = P*_{x,z1,z2,z3,w,y}(u), and Q3 =
P*_{x,z1,z2,z3,u}(w, y). Thus the original problem reduces to trying to evaluate the equivalent expression
Σ_{z1,z3,u,w} P*_{x,z2,z3,u,w,y}(z1) P*_{x,z1,z2,u,w,y}(z3) P*_{x,z1,z2,z3,w,y}(u) P*_{x,z1,z2,z3,u}(w, y).
First, TRmz evaluates the expression Q0 and triggers line 2, noting that all nodes can be ignored
since they are not ancestors of {Z1}, which implies after line 1 that P*_{x,z2,z3,u,w,y}(z1) = P*(z1).
Second, TRmz evaluates the expression Q1, triggering line 2, which implies that
P*_{x,z1,z2,u,w,y}(z3) = P*_{x,z1,z2}(z3) with induced subgraph D1 = D_{X,Z1,Z2,Z3}. TRmz goes to line 5,
in which in the local call C(D \ {X, Z1, Z2}) = {D_{Z3}}. Thus it proceeds to line 6, testing whether
C(D \ {X, Z1, Z2}) is different from D1, which is false. In this call, ordinary identifiability would
fail, but TRmz proceeds to line 9. The goal of this line is to test whether some experiment can
help in computing Q1. In this case, π_a immediately fails the test in line 10, but π_b and π* succeed,
which means experiments in these domains may eventually help; the new call is P^(i)_{x,z2}(z3)_{D\Z1}, for
i = {b, *}, with induced graph D'1 = D_{X,Z2,Z3}. Finally, TRmz triggers line 8 since X is not part
of Z3's component in D'1 (or, Z3 ∈ C' = {Z2 ↔ Z3}), so line 2 is triggered since Z2 is no
longer an ancestor of Z3 in D'1, and then line 1 is triggered since the interventional set is empty in
this local call, so P^(i)_{x,z2}(z3) = Σ_{Z'2} P^(i)_{z1}(z3|x, Z'2) P^(i)_{z1}(Z'2), for i = {b, *}.
Third, evaluating the expression Q2, TRmz goes to line 2, which implies that P*_{x,z1,z2,z3,w,y}(u) =
P*_{x,z1,z2,z3,w}(u) with induced subgraph D2 = D_{X,Z1,Z2,Z3,W,U}. TRmz goes to line 5, and
in this local call C(D \ {X, Z1, Z2, Z3, W}) = {D_U}, and the test in line 6 succeeds, since there
are more components in D. So, it triggers line 8 since W is not part of U's component
in D2. The algorithm makes P*_{x,z1,z2,z3,w}(u) = P*_{x,z1,z2,z3}(u)_{D2|W} (and updates the working distribution); note that in this call, ordinary identifiability would fail since the nodes are
in the same C-component and the test in line 6 fails. But TRmz proceeds to line 9, trying
to find experiments that can help in Q2's computation. In this case, π_b cannot help but π_a
and π* perhaps can, noting that new calls are launched for computing P^(a)_{x,z1,z3}(u)_{D2\Z2|W} relative to π_a, and P*_{x,z2,z3}(u)_{D2\Z1|W} relative to π*, with the corresponding data structures set.
In π_a, the algorithm triggers line 7, which yields P^(a)_{x,z1,z3}(u)_{D2\Z2|W} = P^(a)_{z2}(u|w, z3, x, z1),
and a bit more involved analysis for π* yields (after simplification) P*_{x,z2,z3}(u)_{D2\Z1|W} =
Σ_{Z'2} P*_{z1}(u|w, z3, x, Z'2) P*_{z1}(z3|x, Z'2) P*_{z1}(Z'2) / Σ_{Z''2} P*_{z1}(z3|x, Z''2) P*_{z1}(Z''2).
Fourth, TRmz evaluates the expression Q3 and triggers line 5, with C(D \ {X, Z1, Z2, Z3, U}) = D_{W,Y}.
In turn, both tests at lines 6 and 7 succeed, which makes the procedure return P*_{x,z1,z2,z3,u}(w, y) =
P*(w|z3, x, z1, z2) P*(y|w, x, z1, z2, z3, u).
The composition of the return of these calls generates the following expression:
P*_x(y) = Σ_{z1,z3,w,u} P*(z1)
  × [ w_1^(1) Σ_{Z'2} P*_{z1}(z3|x, Z'2) P*_{z1}(Z'2) + w_1^(2) Σ_{Z'2} P^(b)_{z1}(z3|x, Z'2) P^(b)_{z1}(Z'2) ]
  × [ w_2^(1) Σ_{Z'2} P*_{z1}(u|w, z3, x, Z'2) P*_{z1}(z3|x, Z'2) P*_{z1}(Z'2) / Σ_{Z''2} P*_{z1}(z3|x, Z''2) P*_{z1}(Z''2)
      + w_2^(2) P^(a)_{z2}(u|w, z3, x, z1) ]
  × P*(w|x, z1, z2, z3) P*(y|x, z1, z2, z3, w, u)   (15)
where w_i^(k) represents the weight for each factor in estimand k (i = 1, ..., n_k), and n_k is the number
of feasible estimands of k. Eq. (15) depicts a powerful way to estimate P*(y|do(x)) in the target
domain, and depending on the weighting choice a different estimand will be entailed. For instance, one
might use an analogue of inverse-variance weighting, which sets the weights to the normalized
inverses of the variances (i.e., w_i^(k) = σ_i^{-2} / Σ_{j=1}^{n_k} σ_j^{-2}, where σ_j^2 is the variance of the jth component of estimand k). Our strategy resembles the approach taken in meta-analysis [4], albeit the latter
usually disregards the intricacies of the relationships between variables, so producing a statistically
less powerful estimand. Our method leverages these non-trivial and highly structured relationships, as
exemplified in Eq. (15), which yields an estimand with less variance that is statistically more powerful.
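As a concrete illustration of that weighting choice, the sketch below (with invented numbers, not from the paper) computes inverse-variance weights and the pooled estimate for one factor with two feasible estimands:

```python
# Inverse-variance weighting over the feasible estimands of a factor:
#   w_i = sigma_i^{-2} / sum_j sigma_j^{-2},
# then pool the individual estimates with those weights.

def inverse_variance_weights(variances):
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [w / total for w in inv]

def pooled_estimate(estimates, variances):
    weights = inverse_variance_weights(variances)
    return sum(w * e for w, e in zip(weights, estimates))

# Two feasible estimands of the same factor, one much noisier than the other.
estimates, variances = [0.42, 0.50], [0.01, 0.04]
weights = inverse_variance_weights(variances)
assert abs(sum(weights) - 1.0) < 1e-12
assert weights[0] > weights[1]          # the low-variance estimand dominates
assert abs(pooled_estimate(estimates, variances) - 0.436) < 1e-12
```

This is the classical meta-analytic pooling rule; the difference in our setting is that the pooled quantities are the structured factors of Eq. (15) rather than whole study-level effects.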
5  Conclusions
In this paper, we treat a special type of transportability in which experiments can be conducted only
over limited sets of variables in the sources and target domains, and the goal is to infer whether a
certain effect can be estimated in the target using the information scattered throughout the domains.
We provide general sufficient graphical conditions for transportability based on the causal calculus,
along with a necessary condition for a specific scenario, which should be generalized for arbitrary
structures. We further provide a procedure for computing transportability, that is, for generating a formula
for fusing the available observational and experimental data to synthesize an estimate of the desired
causal effects. Our algorithm also allows for generic weighting schemes, which generalizes standard
statistical procedures and leads to the construction of statistically more powerful estimands.
Acknowledgment
The work of Judea Pearl and Elias Bareinboim was supported in part by grants from NSF (IIS-1249822, IIS-1302448) and ONR (N00014-13-1-0153, N00014-10-1-0933). The work of Sanghack
Lee and Vasant Honavar was partially completed while they were with the Department of Computer
Science at Iowa State University. The work of Vasant Honavar while working at the National Science
Foundation (NSF) was supported by the NSF. The work of Sanghack Lee was supported in part by
the grant from NSF (IIS-0711356). Any opinions, findings, and conclusions contained in this article
are those of the authors and do not necessarily reflect the views of the sponsors.
References
[1] J. Pearl and E. Bareinboim. Transportability of causal and statistical relations: A formal approach. In W. Burgard and D. Roth, editors, Proceedings of the Twenty-Fifth National Conference on Artificial Intelligence, pages 247–254. AAAI Press, Menlo Park, CA, 2011.
[2] D. Campbell and J. Stanley. Experimental and Quasi-Experimental Designs for Research. Wadsworth Publishing, Chicago, 1963.
[3] C. Manski. Identification for Prediction and Decision. Harvard University Press, Cambridge, Massachusetts, 2007.
[4] L. V. Hedges and I. Olkin. Statistical Methods for Meta-Analysis. Academic Press, January 1985.
[5] W. R. Shadish, T. D. Cook, and D. T. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton-Mifflin, Boston, second edition, 2002.
[6] S. Morgan and C. Winship. Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research). Cambridge University Press, New York, NY, 2007.
[7] J. Pearl. Causal diagrams for empirical research. Biometrika, 82(4):669–710, 1995.
[8] P. Spirtes, C. N. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, Cambridge, MA, 2nd edition, 2000.
[9] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000. 2nd edition, 2009.
[10] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[11] E. Bareinboim and J. Pearl. Transportability of causal effects: Completeness results. In J. Hoffmann and B. Selman, editors, Proceedings of the Twenty-Sixth National Conference on Artificial Intelligence, pages 698–704. AAAI Press, Menlo Park, CA, 2012.
[12] E. Bareinboim and J. Pearl. Causal transportability with limited experiments. In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 95–101, Menlo Park, CA, 2013. AAAI Press.
[13] S. Lee and V. Honavar. Causal transportability of experiments on controllable subsets of variables: z-transportability. In A. Nicholson and P. Smyth, editors, Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), pages 361–370. AUAI Press, 2013.
[14] E. Bareinboim and J. Pearl. Meta-transportability of causal effects: A formal approach. In C. Carvalho and P. Ravikumar, editors, Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 135–143. JMLR W&CP 31, 2013.
[15] S. Lee and V. Honavar. m-transportability: Transportability of a causal effect from multiple environments. In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence, pages 583–590, Menlo Park, CA, 2013. AAAI Press.
[16] H. Daume III and D. Marcu. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126, 2006.
[17] A. J. Storkey. When training and test sets are different: characterising learning transfer. In J. Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, editors, Dataset Shift in Machine Learning, pages 3–28. MIT Press, Cambridge, MA, 2009.
[18] B. Schölkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal learning. In J. Langford and J. Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1255–1262, New York, NY, USA, 2012. Omnipress.
[19] K. Zhang, B. Schölkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In Proceedings of the 30th International Conference on Machine Learning (ICML). JMLR: W&CP volume 28, 2013.
[20] E. Bareinboim and J. Pearl. Causal inference by surrogate experiments: z-identifiability. In N. Freitas and K. Murphy, editors, Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI), pages 113–120. AUAI Press, 2012.
[21] M. Kuroki and M. Miyakawa. Identifiability criteria for causal effects of joint interventions. Journal of the Royal Statistical Society, 29:105–117, 1999.
[22] J. Tian and J. Pearl. A general identification condition for causal effects. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, pages 567–573. AAAI Press/The MIT Press, Menlo Park, CA, 2002.
[23] I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, pages 1219–1226. AAAI Press, Menlo Park, CA, 2006.
Causal Inference on Time Series using Restricted
Structural Equation Models
Jonas Peters*
Seminar for Statistics
ETH Zürich, Switzerland
[email protected]

Dominik Janzing
MPI for Intelligent Systems
Tübingen, Germany
[email protected]

Bernhard Schölkopf
MPI for Intelligent Systems
Tübingen, Germany
[email protected]
Abstract
Causal inference uses observational data to infer the causal structure of the data
generating system. We study a class of restricted Structural Equation Models for
time series that we call Time Series Models with Independent Noise (TiMINo).
These models require independent residual time series, whereas traditional methods like Granger causality exploit the variance of residuals. This work contains
two main contributions: (1) Theoretical: By restricting the model class (e.g. to
additive noise) we provide general identifiability results. They cover lagged and
instantaneous effects that can be nonlinear and unfaithful, and non-instantaneous
feedbacks between the time series. (2) Practical: If there are no feedback loops
between time series, we propose an algorithm based on non-linear independence
tests of time series. We show empirically that when the data are causally insufficient or the model is misspecified, the method avoids incorrect answers. We
extend the theoretical and the algorithmic part to situations in which the time series have been measured with different time delays. TiMINo is applied to artificial
and real data and code is provided.
1 Introduction
We first introduce the problem of causal inference on iid data, that is in the case with no time structure. Let therefore X^i, i ∈ V, be a set of random variables and let G be a directed acyclic graph (DAG) on V describing the causal relationships between the variables. Given iid samples from P(X^i)_{i∈V}, we aim at estimating the underlying causal structure of the variables X^i, i ∈ V. Constraint- or independence-based methods [e.g. Spirtes et al., 2000] assume that the joint distribution is Markov, and faithful with respect to G. The PC algorithm, for example, exploits conditional independences for reconstructing the Markov equivalence class of G (some edges remain undirected). We say P(X^i)_{i∈V} satisfies a Structural Equation Model [Pearl, 2009] w.r.t. DAG G if for all i ∈ V we can write X^i = f_i(PA^i, N^i), where PA^i are the parents of node i in G. Additionally, we require (N^i)_{i∈V} to be jointly independent. By restricting the function class one can identify the bivariate case: Shimizu et al. [2006] show that if P(X,Y) allows for Y = a·X + N_Y with N_Y ⊥⊥ X, then P(X,Y) only allows for X = b·Y + N_X with N_X ⊥⊥ Y if (X, N_Y) are jointly Gaussian (⊥⊥ stands for statistical independence). This idea has led to extensions to nonlinear additive functions f(x, n) = g(x) + n [Hoyer et al., 2009]. Peters et al. [2011b] show how identifiability for two variables generalizes to the multivariate case.
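To see this asymmetry numerically, here is a small self-contained sketch (ours, not from the paper): with uniform, hence non-Gaussian, noise, the residual of the regression in the causal direction is independent of the regressor, while the backward residual is uncorrelated with Y but still dependent on it. The correlation of squares below is a deliberately crude dependence score chosen for brevity; proper methods use independence tests such as HSIC.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Ground truth: Y = X + N_Y with uniform (non-Gaussian) noise, N_Y independent of X.
x = rng.uniform(-1.0, 1.0, n)
n_y = rng.uniform(-0.3, 0.3, n)
y = x + n_y

def residual(target, predictor):
    """OLS residual of target regressed on predictor (with intercept)."""
    A = np.column_stack([np.ones_like(predictor), predictor])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ coef

def sq_corr(a, b):
    """Correlation of squares: a crude nonlinear dependence score
    (near zero for independent variables)."""
    return np.corrcoef(a**2, b**2)[0, 1]

# Correct direction: the residual of Y ~ X recovers N_Y, independent of X.
c_forward = sq_corr(residual(y, x), x)
# Wrong direction: the residual of X ~ Y is uncorrelated with Y but dependent on it.
c_backward = sq_corr(residual(x, y), y)
print(c_forward, c_backward)  # c_forward ~ 0, c_backward clearly nonzero
```

With Gaussian noise instead of uniform, both directions would yield independent residuals and the asymmetry would vanish, matching the identifiability statement above.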
We now turn to the case of time series data. For each i from a finite V, let therefore (X^i_t)_{t∈ℕ} be a time series. X_t denotes the vector of time series values at time t. We call the infinite graph that contains each variable X^i_t as a node the full time graph. The summary time graph contains all #V components of the time series as vertices and an arrow between X^i and X^j, i ≠ j, if there is an arrow from X^i_{t−k} to X^j_t in the full time graph for some k.

* Significant parts of this research were done when Jonas Peters was at the MPI Tübingen.

We are given a sample (X_1, …, X_T)
of a multivariate time series and estimate the true summary time graph. I.i.d. methods are not
directly applicable because a common history might introduce complicated dependencies between
contemporaneous data Xt and Yt . Nevertheless several methods dealing with time series data are
motivated by the iid setting (Section 2). Many of them encounter similar problems: when the model
assumptions are violated (e.g. in the presence of a confounder) the methods draw false causal
conclusions. Furthermore, they do not include nonlinear instantaneous effects. In this work, we
extend the Structural Equation Model framework to time series data and call this approach time
series models with independent noise (TiMINo). These models include nonlinear and instantaneous
effects. They assume Xt to be a function of all direct causes and some noise variable, the collection
of which is supposed to be jointly independent. This model formulation comes with substantial
benefits: In Section 3 we prove that for TiMINo models the full causal structure can be recovered
from the distribution. Section 4 introduces an algorithm (TiMINo causality) that recovers the model
structure from a finite sample. It can be equipped with any algorithm for fitting time series. If
the data do not satisfy the model assumptions, TiMINo causality remains mostly undecided instead
of drawing wrong causal conclusions. Section 5 deals with time series that have been shifted by
different (unknown) time delays. Experiments on simulated and real data sets are shown in Section 6.
2 Existing methods
Granger causality [Granger, 1969] (G-causality for the remainder of the article) is based on the following idea: X^i does not Granger cause X^j if including the past of X^i does not help in predicting X^j_t given the past of all other time series X^k, k ≠ i. In principle, "all other" means all other information in the world. In practice, one is limited to X^k, k ∈ V. The phrase "does not help" is translated into a significance test assuming a multivariate time series model. If the data follow the assumed model, e.g. the VAR model below, G-causality is sometimes interpreted as testing whether X^i_{t−h}, h > 0 is independent of X^j_t given X^k_{t−h}, k ∈ V \ {i}, h > 0 [see Florens and Mouchart, 1982, Eichler, 2011, Chu and Glymour, 2008, Quinn et al., 2011, and ANLTSM below].

Linear G-causality considers a VAR model: X_t = Σ_{τ=1}^p A(τ) X_{t−τ} + N_t, where X_t and N_t are vectors and A(τ) are matrices. For checking whether one component G-causes another, say whether X^j G-causes X^i, one fits a full VAR model M_full to X_t and a VAR model M_restr to X_t that predicts X^i_t without using X^j (using the constraints A_{ij}(τ) = 0 for all 1 ≤ τ ≤ p). One tests whether the reduction of the residual sum of squares (RSS) of X^i_t is significant by using the following test statistic: T := ((RSS_restr − RSS_full)/(p_full − p_restr)) / (RSS_full/(N − p_full)), where p_full and p_restr are the numbers of parameters in the respective models. For the significance test we use T ∼ F_{p_full−p_restr, N−p_full}. G-causality has been extended to nonlinear G-causality [e.g.
Chen et al., 2004, Ancona et al., 2004]. In this paper we focus on an extension for the bivariate case proposed by Bell et al. [1996]. It is based on generalized additive models (gams) [Hastie and Tibshirani, 1990]: X^i_t = Σ_{τ=1}^p Σ_j f_{i,j,τ}(X^j_{t−τ}) + N^i_t, where N_t is a #V-dimensional noise vector. Bell et al. [1996] utilize the same F statistic as above using estimated degrees of freedom.
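For concreteness, the following numpy-only sketch (our illustration, not code from the paper; all parameters are made up) implements the linear G-causality F-type statistic for a bivariate VAR in which X drives Y with lag one. The statistic is large when testing X → Y and small in the reverse direction.

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 500, 2  # series length and VAR order

# Simulate: X drives Y with lag 1; Y does not feed back into X.
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.5)

def rss(target, predictors, p):
    """RSS and parameter count of an OLS regression of target_t
    on p lags of every series in predictors (plus an intercept)."""
    cols = [np.ones(len(target) - p)]
    for s in predictors:
        for tau in range(1, p + 1):
            cols.append(s[p - tau:len(s) - tau])
    A = np.column_stack(cols)
    b = target[p:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(np.sum((b - A @ coef) ** 2)), A.shape[1]

def g_statistic(target, own, other, p):
    """T = ((RSS_restr - RSS_full)/(p_full - p_restr)) / (RSS_full/(N - p_full))."""
    rss_full, p_full = rss(target, [own, other], p)
    rss_restr, p_restr = rss(target, [own], p)
    N = len(target) - p
    return ((rss_restr - rss_full) / (p_full - p_restr)) / (rss_full / (N - p_full))

t_x_to_y = g_statistic(y, y, x, p)  # does X G-cause Y? -> large statistic
t_y_to_x = g_statistic(x, x, y, p)  # does Y G-cause X? -> small statistic
print(t_x_to_y, t_y_to_x)
```

Under the null, T follows an F distribution with (p_full − p_restr, N − p_full) degrees of freedom, from which a p-value can be read off.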
Following Bell et al. [1996], Chu and Glymour [2008] introduce additive nonlinear time series models (ANLTSM for short) for performing relaxed conditional independence tests: If including one variable, e.g. X^1_{t−1}, into a model for X^2_t that already includes X^2_{t−1}, X^2_{t−2}, and X^1_{t−2} does not improve the predictability of X^2_t, then X^1_{t−1} is said to be independent of X^2_t given X^2_{t−1}, X^2_{t−2}, X^1_{t−2} (if the maximal time lag is 2). Chu and Glymour [2008] propose a method based on constraint-based methods like FCI [Spirtes et al., 2000] in order to infer the causal structure exploiting those conditional independence statements. The instantaneous effects are assumed to be linear and the confounders linear and instantaneous.
TS-LiNGAM [Hyvärinen et al., 2008] is based on LiNGAM [Shimizu et al., 2006] from the iid setting. It allows for instantaneous effects and assumes all relationships to be linear.
These approaches encounter some methodological problems. Instantaneous effects: G-causality cannot deal with instantaneous effects. E.g., when X_t is causing Y_t, including any of the two time series helps for predicting the other and G-causality infers X → Y and Y → X. ANLTSM and TS-LiNGAM only allow for linear instantaneous effects. Theorem 1 shows that the summary time graph may still be identifiable when the instantaneous effects are linear and the variables are jointly Gaussian. TS-LiNGAM does not work in these situations. Confounders: G-causality might fail when there is a confounder between X_t and Y_{t+1}, say. The path between X_t and Y_{t+1} cannot be blocked by conditioning on any observed variables; G-causality infers X → Y. We will see empirically that TiMINo remains undecided instead; Entner and Hoyer [2010] and Janzing et al. [2009] provide (partial) results for the iid setting. ANLTSM does not allow for nonlinear confounders or confounders with time structure and TS-LiNGAM may fail, too (Exp. 1). Robustness: Theorem 1 (ii) shows that performing general conditional independence tests suffices. The conditioning sets, however, are too large and the tests are performed under a simple model (e.g. VAR). If the model is misspecified, one may draw wrong conclusions without noticing (e.g. Exp. 3).
For TiMINo (defined below), Lemma 1 shows that after fitting and checking the model by using
unconditional independence tests, the difficult conditional independences have been checked implicitly. A model check is not new [e.g. Hoyer et al., 2009, Entner and Hoyer, 2010] but is thus
an effective tool. We can equip bivariate G-causality with a test for cross-correlations; this is not
straight-forward for multivariate G-causality. Furthermore, using cross-correlation as an independence test does not always suffice (see Section 2).
3 Structural Equation models for time series: TiMINo
Definition 1 Consider a time series X_t = (X^i_t)_{i∈V} whose finite dimensional distributions are absolutely continuous w.r.t. a product measure (e.g. there is a pdf or pmf). The time series satisfies a TiMINo if there is a p > 0 and for all i ∈ V there are sets PA^i_0 ⊆ X^{V\{i}}, PA^i_k ⊆ X^V, s.t. for all t

X^i_t = f_i((PA^i_p)_{t−p}, …, (PA^i_1)_{t−1}, (PA^i_0)_t, N^i_t),    (1)

with N^i_t jointly independent over i and t and, for each i, N^i_t identically distributed in t. The corresponding full time graph is obtained by drawing arrows from any node that appears in the right-hand side of (1) to X^i_t. We require the full time graph to be acyclic. Section 6 shows examples.
Theorem 1 (i) assumes that (1) follows an identifiable functional model class (IFMOC). This means
that (I) causal minimality holds, a weak form of faithfulness that assumes a statistical dependence
between cause and effect given all other parents [Spirtes et al., 2000]. And (II), all fi come from a
function class that is small enough to make the bivariate case identifiable. Peters et al. [2011b] give
a precise definition. Important examples include nonlinear functions with additive Gaussian noise
and linear functions with additive non-Gaussian noise. Due to space constraints, proofs are provided
in the appendix. In the one-dimensional linear case model (1) is time-reversible if and only if the
noise is normally distributed [Peters et al., 2009].
Theorem 1 Suppose that X_t can be represented as a TiMINo (1) with PA(X^i_t) = ∪_{k=0}^p (PA^i_k)_{t−k} being the direct causes of X^i_t and that one of the following holds:

(i) Equations (1) come from an IFMOC (e.g. nonlinear functions f_i with additive Gaussian noise N^i_t or linear functions f_i with additive non-Gaussian noise N^i_t). The summary time graph can contain cycles.

(ii) Each component exhibits a time structure (PA(X^i_t) contains at least one X^i_{t−k}), the joint distribution is faithful w.r.t. the full time graph, and the summary time graph is acyclic.

Then the full time graph can be recovered from the joint distribution of X_t. In particular, the true causal summary time graph is identifiable. (Neither of the conditions (i) and (ii) implies the other.)
Many function classes satisfy (i) [Peters et al., 2013]. To estimate f_i from data (E[X^i_t | X_{t−p}, …, X_{t−1}] for additive noise) we require stationarity and/or α-mixing, or geometric ergodicity [e.g. Chu and Glymour, 2008]. Condition (ii) shows how time structure simplifies the causal inference problem. For iid data the true graph is not identifiable in the linear Gaussian case; with time structure it is. We believe that condition (ii) is more difficult to verify in practice; faithfulness is not required for (i). In (ii), the acyclicity prevents the full time graph from being fully connected up to order p.
4 A practical method: TiMINo causality
The algorithm for TiMINo causality is based on the theoretical finding in Theorem 1. It takes the time series data as input and outputs either a DAG that estimates the summary time graph or remains undecided. It tries to fit a TiMINo model to the data and outputs the corresponding graph. If no model with independent residuals is found, it outputs "I do not know". This becomes intractable for a time series with many components; for time series without feedback loops, we adapt a method for additive noise models without time structure suggested by Mooij et al. [2009] that avoids enumerating all DAGs. Algorithm 1 shows the modified version. As reported by Mooij et al. [2009], the time complexity is O(d² · f(n, d) · t(n, d)), where d is the number of time series, n the sample size and f(n, d) and t(n, d) the complexity of the user-specific regression method and independence test, respectively. Peters et al. [2013] discuss the algorithm's correctness. We present our choices but do not claim their optimality; any other fitting method and independence test can be used, too.
Algorithm 1 TiMINo causality
1: Input: Samples from a d-dimensional time series of length T: (X_1, …, X_T), maximal order p
2: S := (1, …, d)
3: repeat
4:   for k in S do
5:     Fit TiMINo for X^k_t using X^k_{t−p}, …, X^k_{t−1} and X^i_{t−p}, …, X^i_{t−1}, X^i_t for i ∈ S \ {k}
6:     Test if residuals are indep. of X^i, i ∈ S.
7:   end for
8:   Choose k* to be the k with the weakest dependence. (If there is no k with independence, break and output: "I do not know - bad model fit".)
9:   S := S \ {k*}; pa(k*) := S
10: until length(S) = 1
11: For all k remove all parents that are not required to obtain independent residuals.
12: Output: (pa(1), …, pa(d))
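The backward-elimination idea of Algorithm 1 can be mimicked in a deliberately simplified Python sketch (ours, not the authors' code): linear least-squares fitting stands in for the user-specified regression, and a maximal cross-correlation score stands in for HSIC. On a toy X → Y system the series with the most independent-looking residuals, the sink Y, is removed first.

```python
import numpy as np

rng = np.random.default_rng(2)
T, p = 1000, 2

# Toy data whose summary time graph is X -> Y (a lag-1 effect, no feedback).
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + rng.normal(scale=0.5)
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.5)
series = {"X": x, "Y": y}

def fit_residual(k, S, series, p):
    """Linear TiMINo equation for series k: regress X^k_t on p lags of every
    series in S plus instantaneous terms of the others; return the residual."""
    target = series[k][p:]
    cols = [np.ones(T - p)]
    for name in S:
        for tau in range(1, p + 1):
            cols.append(series[name][p - tau:T - tau])
        if name != k:
            cols.append(series[name][p:])  # instantaneous effect
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ coef

def dependence(res, s, p, max_shift=4):
    """Crude dependence score: max |cross-correlation| between the residual
    (res[i] lives at time p+i) and shifted copies of s; the actual method
    would use HSIC here."""
    score, n = 0.0, len(res)
    for h in range(-max_shift, max_shift + 1):
        t_idx = p + np.arange(n) - h
        ok = (t_idx >= 0) & (t_idx < len(s))
        score = max(score, abs(np.corrcoef(res[ok], s[t_idx[ok]])[0, 1]))
    return score

# Backward elimination: repeatedly remove the series whose residuals look
# most independent of the remaining series (the current sink).
S, sinks = list(series), []
while len(S) > 1:
    scores = {k: max(dependence(fit_residual(k, S, series, p), series[i], p)
                     for i in S if i != k) for k in S}
    k_star = min(scores, key=scores.get)
    sinks.append(k_star)
    S.remove(k_star)
print(sinks)  # ['Y']: Y is removed first, consistent with X -> Y
```

The residual of the X-equation still correlates with Y_{t+1} (which inherits X's noise), so X is kept until the end, whereas Y's residual is independent of X at every shift tested.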
Depending on the assumed model class, TiMINo causality has to be provided with a fitting method. Here, we chose the R functions ar for VAR fitting (f_i(p_1, …, p_r, n) = a_{i,1}·p_1 + … + a_{i,r}·p_r + n), gam for generalized additive models (f_i(p_1, …, p_r, n) = f_{i,1}(p_1) + … + f_{i,r}(p_r) + n) [e.g. Bell et al., 1996] and gptk for GP regression (f_i(p_1, …, p_r, n) = f_i(p_1, …, p_r) + n). We call the methods TiMINo-linear, TiMINo-gam and TiMINo-GP, respectively. For the first two, AIC determines the order of the process. All fitting methods are used in a "standard way". For gam we used the built-in nonparametric smoothing splines. For the GP we used zero mean, squared exponential covariance function and Gaussian likelihood. The hyper-parameters are automatically chosen by marginal likelihood optimization. Code is available online.
To test for independence between a residual time series N^k_t and another time series X^i, i ∈ S, we shift the latter time series up to the maximal order −p (but at least up to −4); for each of those combinations we perform HSIC [Gretton et al., 2008], an independence test for iid data. One could also use a test based on cross-correlation that can be derived from Thm 11.2.3 in [Brockwell and Davis, 1991]. This is related to what is done in transfer function modeling [e.g. §13.1 in Brockwell and Davis, 1991], which is restricted to two time series and linear functions. As opposed to the iid setting, testing for cross-correlation is often enough in order to reject a wrong model. Only Experiments 1 and 5 describe situations in which cross-correlations fail. To reduce the running time one can use cross-correlation to determine the graph structure and use HSIC as a final model check. For HSIC we used a Gaussian kernel; as in [Gretton et al., 2008], the bandwidth is chosen such that the median distance of the input data leads to an exponent of one. Testing for non-vanishing autocorrelations in the residuals is not included yet.
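The HSIC statistic itself is short to implement. Below is a minimal sketch (ours, following Gretton et al. [2008] only loosely): the biased estimator trace(KHLH)/n² with Gaussian kernels, a median-heuristic bandwidth, and a permutation test whose sample sizes and permutation count are illustrative choices, not the paper's settings.

```python
import numpy as np

def gram(z):
    """Gaussian kernel Gram matrix with a median-heuristic bandwidth."""
    z = np.asarray(z, dtype=float).reshape(-1, 1)
    d2 = (z - z.T) ** 2
    med = np.median(d2[d2 > 0]) if np.any(d2 > 0) else 1.0
    return np.exp(-d2 / med)

def hsic_from_grams(K, L, H):
    """Biased HSIC estimator trace(K H L H) / n^2 with centering H."""
    n = K.shape[0]
    return np.trace(K @ H @ L @ H) / n**2

def hsic_perm_test(x, y, n_perm=100, seed=0):
    """Permutation p-value: shuffling y simulates the independence null."""
    rng = np.random.default_rng(seed)
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    K = gram(x)
    stat = hsic_from_grams(K, gram(y), H)
    null = [hsic_from_grams(K, gram(rng.permutation(y)), H) for _ in range(n_perm)]
    return stat, (1 + sum(s >= stat for s in null)) / (1 + n_perm)

rng = np.random.default_rng(3)
x = rng.normal(size=100)
y_dep = x**2 + 0.1 * rng.normal(size=100)   # dependent but uncorrelated with x
y_indep = rng.normal(size=100)              # independent of x

_, p_dep = hsic_perm_test(x, y_dep)
_, p_indep = hsic_perm_test(x, y_indep)
print(p_dep, p_indep)
```

The dependent pair (x, x²) has near-zero linear correlation, so this example also shows why a cross-correlation check alone can miss dependence that HSIC detects.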
If the model assumptions only hold in some parts of the summary time graph, we can still try
to discover parts of the causal structure. Our code package contains this option. We obtained
positive results on simulated data but there is no corresponding identifiability statement.
Our method has some potential weaknesses. It can happen that one is able to fit a model only in the
wrong direction. This, however, requires an "unnatural" fine tuning of the functions [Janzing and
Steudel, 2010] and is relevant only when there are time series without time structure or the data are
non-faithful (see Theorem 1). The null hypothesis of the independence test represents independence,
although the scientific discovery of a causal relationship should rather be the alternative hypothesis.
This fact may lead to wrong causal conclusions (instead of "I do not know") on small data sets. The effect is strengthened by the Bonferroni correction of the HSIC-based independence test; one may
require modifications for a high number of time series components. For large sample sizes, even
smallest differences between the true data generating process and the model may lead to rejected
independence tests [discussed by Peters et al., 2011a].
5 TiMINo for Shifted Time Series
In some applications, we observe the components of the time series with varying time delay. Instead of X^i_t we are then working with X̃^i_t = X^i_{t−ℓ}, with 0 ≤ ℓ ≤ k. E.g., in functional magnetic resonance imaging brain activity is measured through an increased blood flow in the corresponding area. It has been reported that these data often suffer from different time delays [e.g. Buxton et al., 1998, Smith et al., 2011]. Given the (shifted) measurements X̃^i_t, we therefore have to cope with causal relationships that go backward in time. This is only resolved when going back to the unobserved true data X^i_t. Measures like Granger causality will fail in these situations. This does not necessarily have to be the case, however. The structure still remains identifiable even if we observe X̃^i_t instead of X^i_t (the following theorem generalizes the second part of Theorem 1 and is proved accordingly)¹:

Theorem 2 Assume condition (ii) from Theorem 1 with X̃^i_t = X^i_{t−ℓ}, where 0 ≤ ℓ ≤ k are unknown time delays. Then, the full time graph of X̃_t is identifiable from the joint distribution of X̃_t. In particular, the summary time graphs of X̃_t and X_t are identical and therefore identifiable.

As opposed to Theorem 1 we cannot identify the full time graph of X_t. It may not be possible, for example, to distinguish between a lag two effect from X^1 to X^2 and a corresponding lag one effect with a shifted time series X^2. The method for recovering the network structure stays almost the same as the one for non-shifted time series; only line 5 of Algorithm 1 has to be updated: we additionally include X^i_{t+ℓ} for 0 ≤ ℓ ≤ k for all i ∈ S \ {k}. While TiMINo exploits an asymmetry between cause and effect emerging from restricted structural equations, G-causality exploits the asymmetry of time. The latter asymmetry is broken when considering shifted time series.
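A small simulation (ours, with made-up parameters) illustrates this failure mode: if the cause is recorded with an unknown delay, the strongest cross-correlation between effect and observed cause occurs at a negative lag, i.e. the effect appears to precede its cause, which breaks any method relying on the asymmetry of time alone.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 2000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.3 * x[t - 1] + rng.normal(scale=0.5)
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.5)

ell = 3                    # unknown measurement delay on the cause
x_obs = np.roll(x, ell)    # we observe X~_t = X_{t-ell}
x_obs[:ell] = 0.0

def best_lag(a, b, max_lag=6):
    """Lag h maximizing |corr(a_t, b_{t-h})|; h > 0 means b leads a."""
    best, arg = 0.0, 0
    for h in range(-max_lag, max_lag + 1):
        if h >= 0:
            c = np.corrcoef(a[h:], b[:T - h])[0, 1]
        else:
            c = np.corrcoef(a[:T + h], b[-h:])[0, 1]
        if abs(c) > abs(best):
            best, arg = c, h
    return arg

print(best_lag(y, x))      # 1: the true lag-1 effect X -> Y
print(best_lag(y, x_obs))  # -2: Y now seems to precede the observed cause
```

Going "back in time" by the extra shifts in the updated line 5 of Algorithm 1 is exactly what compensates for such delays.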
6 Experiments

6.1 Artificial Data
We always included instantaneous effects, fitted models up to order p = 2 or p = 6 and set α = 0.05.

Experiment 1: Confounder with time lag. We simulate 100 data sets (length 1000) from Z_t = a·Z_{t−1} + N_{Z,t}, X_t = 0.6·X_{t−1} + 0.5·Z_{t−1} + N_{X,t}, Y_t = 0.6·Y_{t−1} + 0.5·Z_{t−2} + N_{Y,t}, with a between 0 and 0.95 and N_{·,t} ∼ 0.4·N(0,1). Here, Z is a hidden common cause for X and Y. For all a, X_t contains information about Z_{t−1} and Y_{t+1} (see Figure 1); G-causality and TS-LiNGAM wrongly infer X → Y. For large a, Y_t contains additional information about X_{t+1}, which leads to the wrong arrow Y → X. TiMINo causality does not decide for any a. The nonlinear methods perform very similarly (not shown). For a = 0, a cross-correlation test is not enough to reject X → Y. Further, all methods fail for a = 0 and Gaussian noise. (Similar results for a non-linear confounder.)
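A quick numpy simulation of this setup (our illustration, with a = 0.5) shows why lag-based methods are misled here: X_t and Y_{t+1} are clearly correlated although X never causes Y.

```python
import numpy as np

rng = np.random.default_rng(5)
T, a = 5000, 0.5
z = np.zeros(T)
x = np.zeros(T)
y = np.zeros(T)
nz, nx, ny = (0.4 * rng.normal(size=T) for _ in range(3))
for t in range(2, T):
    z[t] = a * z[t - 1] + nz[t]
    x[t] = 0.6 * x[t - 1] + 0.5 * z[t - 1] + nx[t]
    y[t] = 0.6 * y[t - 1] + 0.5 * z[t - 2] + ny[t]

# X_t carries information about Z_{t-1}, which drives Y_{t+1}: even though
# X never causes Y, their lag-1 cross-correlation is clearly nonzero,
# which is what misleads G-causality and TS-LiNGAM in this experiment.
c = np.corrcoef(x[:-1], y[1:])[0, 1]
print(c)
```

Conditioning on the observed past cannot block the path through the hidden Z, so this spurious dependence survives any G-causality test.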
Experiment 2: Linear, Gaussian with instantaneous effects. We sample 100 data sets (length 2000) from X_t = A_1·X_{t−1} + N_{X,t}, W_t = A_2·W_{t−1} + A_3·X_t + N_{W,t}, Y_t = A_4·Y_{t−1} + A_5·W_{t−1} + N_{Y,t}, Z_t = A_6·Z_{t−1} + A_7·W_t + A_8·Y_{t−1} + N_{Z,t} with N_{·,t} ∼ 0.4·N(0,1) and A_i iid from U([−0.8, −0.2] ∪ [0.2, 0.8]). We regard the graph containing X → W → Y → Z and W → Z as correct. TS-LiNGAM and G-causality are not able to recover the true structure (see Table 1). We obtain similar results for non-linear instantaneous interactions.
Experiment 3: Nonlinear, non-Gaussian without instantaneous effects. We simulate 100 data sets (length 500) from X_t = 0.8X_{t−1} + 0.3N_{X,t}, Y_t = 0.4Y_{t−1} + (X_{t−1} − 1)² + 0.3N_{Y,t}, Z_t = 0.4Z_{t−1} + 0.5 cos(Y_{t−1}) + sin(Y_{t−1}) + 0.3N_{Z,t}, with N_{·,t} ∼ U([−0.5, 0.5]) (similar results for other noise distributions, e.g. exponential). Thus, X → Y → Z is the ground truth. Nonlinear G-causality fails since the implementation is only pairwise and it thus always infers an effect from X to Z. Linear G-causality cannot remove the nonlinear effect from X_{t−2} to Z_t by using Y_{t−1}. Also TiMINo-linear assumes a wrong model but does not make any decision. TiMINo-gam and TiMINo-GP work well on this data set (Table 2). This specific choice of parameters shows that a significant

¹ We believe that a corresponding statement for condition (i) holds, too.
[Figure 1 graphics omitted: a full time graph panel (variables X, Y, Z at times t−2, …, t+1) and decision-rate panels for TS-LiNGAM, TiMINo (linear) and G-causality (linear), each plotted against the confounder parameter a ∈ [0, 0.95] with outcomes "X → Y", "Y → X", "both", "none" / "no decision".]
Figure 1: Exp.1: Part of the causal full time graph with hidden common cause Z (top left). TiMINo
causality does not decide (top right), whereas G-causality and TS-LiNGAM wrongly infer causal
connections between X and Y (bottom).
Table 1: Exp.2: Gaussian data and linear instantaneous effects: only TiMINo mostly discovers the correct DAG.

DAG       G-causal. linear   TiMINo linear   TS-LiNGAM
correct         13%               83%            19%
wrong           87%                7%            81%
no dec.          0%               10%             0%

Figure 2: Exp.4: TiMINo-GP (blue) works reliably for long time series. TiMINo-linear (red) and TiMINo-gam (black) mostly remain undecided. [Plot omitted: proportion of correct (solid) and incorrect (dash-dotted) answers against the length of the time series, 100 to 900.]
difference in performance is possible. For other parameters (e.g. less impact of the nonlinearity),
G-causality and TS-LiNGAM still assume a wrong model but make fewer mistakes.
Table 2: Exp.3: Since the data are nonlinear, linear G-causality and TS-LiNGAM give wrong answers, TiMINo-lin does not decide. Nonlinear G-causality fails because it analyzes the causal structure between pairs of time series.
DAG       Granger-lin   Granger-nonlin   TiMINo-lin   TiMINo-gam   TiMINo-GP   TS-LiNGAM
correct       69%             0%              0%           95%          94%         12%
wrong         31%           100%              0%            1%           1%         88%
no dec.        0%             0%            100%            4%           5%          0%
Experiment 4: Non-additive interaction. We simulate 100 data sets with different lengths from X_t = 0.2·X_{t−1} + 0.9N_{X,t}, Y_t = −0.5 + exp(−(X_{t−1} + X_{t−2})²) + 0.1N_{Y,t}, with N_{·,t} ∼ N(0,1). Figure 2 shows that TiMINo-linear and TiMINo-gam remain mainly undecided, whereas TiMINo-GP performs well. For small sample sizes, one observes two effects: GP regression does not obtain accurate estimates for the residuals, these estimates are not independent and thus TiMINo-GP remains more often undecided. Also, TiMINo-gam makes more correct answers than one would expect due to more type II errors. Linear G-causality and TS-LiNGAM give more than 90% incorrect answers, but non-linear G-causality is most often correct (not shown). Bad model assumptions do not always lead to incorrect causal conclusions.
Experiment 5: Non-linear Dependence of Residuals. In Experiment 1, TiMINo equipped with a cross-correlation test inferred a causal edge although there was none. The opposite is also possible: X_t = −0.5·X_{t−1} + N_{X,t}, Y_t = −0.5·Y_{t−1} + X²_{t−1} + N_{Y,t} with N_{·,t} ∼ 0.4·N(0,1) (length 1000). TiMINo-gam with cross-correlation infers no causal link between X and Y, whereas TiMINo-gam with HSIC correctly identifies X → Y.
Experiment 6: Shifted Time Series. We simulate 100 random DAGs with #V = 3 nodes by choosing a random ordering of the nodes and including edges with a probability of 0.6. The structural equations are additive (gam). Each component is of the form f(x) = a·max(x, −0.1) + b·sign(x)·√|x|, with a, b iid from U([−0.5, −0.2] ∪ [0.2, 0.5]). We sample time series (length 1000) from Gaussian noise and observe the sink node time series with a time delay of three. This makes all traditional methods inapplicable. The performance of linear G-causality, for example, drops from an average Structural Hamming Distance (SHD) of 0.38 without time delay to 1.73 with time delay. TiMINo-gam causality recognizes the wrong model assumption. The SHD increases from 0.13 (17 undecided cases) to 0.71 (79 undecided cases). Adjusting for a time delay (Section 5) yields an SHD of 0.70 but many more decisions (18 undecided cases). Although it is possible to adjust for time delays, the procedure enlarges the model space, which makes rejecting all wrong models more difficult. Already #V = 5 leads to worse average SHD: G-causality: 4.5, TiMINo-gam: 1.5 (92 undecided cases) and TiMINo-gam with time delay adjustment: 2.4 (38 undecided cases).
6.2 Real Data
We fitted up to order 6 and included instantaneous effects. For TiMINo, "correct" means that TiMINo-gam is correct and TiMINo-linear is correct or undecided. TiMINo-GP always remains undecided because there are too few data points to fit such a general model. Again, α is set to 0.05.
Experiment 7: Gas Furnace. [Box et al., 2008, length 296] X_t describes the input gas rate and Y_t the output CO2. We regard X → Y as being true. TS-LiNGAM, G-causality, TiMINo-lin and TiMINo-gam correctly infer X → Y. Disregarding time information leads to a wrong causal conclusion: the method described by Hoyer et al. [2009] leads to a p-value of 4.8% in the correct and 9.1% in the false direction.
Experiment 8: Old Faithful. [Azzalini and Bowman, 1990, length 194] X_t contains the duration of an eruption and Y_t the time interval to the next eruption of the Old Faithful geyser. We regard X → Y as the ground truth. Although the time intervals [t, t + 1] do not have the same length for all t, we model the data as two time series. TS-LiNGAM and TiMINo give correct answers, whereas linear G-causality infers X ← Y, and nonlinear G-causality infers Y → X.
Experiment 9: Abalone (no time structure). The abalone data set [Asuncion and Newman, 2007] contains (among others that lead to similar results) age X_t and diameter Y_t of a certain shell fish. If we model 1000 randomly chosen samples as time series, G-causality (both linear and nonlinear) infers no causal relation as expected. TS-LiNGAM wrongly infers Y → X, which is probably due to the nonlinear relationship. TiMINo gives the correct result.
Experiment 10: Dairy (confounder). We consider 10 years of weekly prices for butter X_t and cheddar cheese Y_t (length 522, http://future.aae.wisc.edu/tab/prices.html). We expect their strong correlation to be due to the (hidden) milk price M_t: X ← M → Y. TiMINo does not decide, whereas TS-LiNGAM and G-causality wrongly infer X → Y. This may be due to different time lags of the confounder (cheese has longer storing and maturing times than butter).
Experiment 11: Temperature in House. We placed temperature sensors in six rooms (1 - Shed, 2 - Outside, 3 - Kitchen Boiler, 4 - Living Room, 5 - WC, 6 - Bathroom) of a house in the Black Forest and recorded the temperature on an hourly basis (length 7265). This house is not inhabited for most of the year and lacks central heating; the few electric radiators start if the temperature drops close to freezing. TiMINo does not decide since the model leads to dependent residuals. Although we do not provide any theory for the following steps, we analyze the model leading to the "least dependent" residuals by setting the test level α to zero. TiMINo causality then outputs
a causal ordering of the variables. We applied TiMINo-lin and TiMINo-gam to the data sets using lags up to twelve (half a day) and report the proportion in which node i precedes node j (see matrix, rows and columns ordered 1-6). This procedure reveals a sensible causal structure (we arbitrarily refer to entries larger than 0.5 as causation):

    0     0.25  0.83  1     1     1
    0.75  0     0.83  1     1     1
    0.17  0.17  0     0.75  0.33  0.33
    0     0.25  0     0     0     0
    0     0     0.67  1     0     0
    0     0     0.67  1     1     0

2 (outside) causes all other readings, and none of the other temperatures causes 2. 1 (shed) causes all other readings except for 2. This is wrong, but not surprising since the shed's temperature is rather close to the outside temperature. 4 (living room) does not cause any other reading, but every other reading does cause it (the living room is the only room without any heating). The links 5 → 3 and 6 → 3 appear spurious, and come with
numbers close to 0.5. These might be erroneous; however, they might also be due to the fact that sensor 3 is sitting on top of the kitchen boiler, which acts as a heat reservoir that delays temperature changes. The link 6 → 5 comes with a large number, but it is plausible since unlike the WC, the bathroom has a window and is thus affected directly by outside temperature, causing fast regulation
of its radiator, which is placed on a thin wooden wall facing the WC.
The phase slope index [Nolte et al., 2008] performed well in Exp. 7; in all other experiments it either
gave wrong results or did not decide. Due to space constraints we omit details about this method.
We did not find any code for ANLTSM.
7 Conclusions and Future Work
This paper shows how causal inference on time series benefits from the framework of Structural
Equation Models. The identifiability statement is more general than existing results. The algorithm
is based on unconditional independence tests and is applicable to multivariate, linear, nonlinear
and instantaneous interactions. It contains the option of remaining undecided. While methods like
Granger causality are built on the asymmetry of time direction, TiMINo additionally takes into account identifiability emerging from restricted structural equation models. This leads to a straightforward way of dealing with (unknown) time delays in the different time series. Although an extensive
evaluation on real data sets is still required, we believe that our results emphasize the potential use
of causal inference methods. They may provide guidance for future interventional experiments.
As future work one may use heteroscedastic models [Chen et al., 2012] and systematically preprocess the data (removing trends, periodicities, etc.). This may reduce the number of cases where
TiMINo causality is undecided. TiMINo causality evaluates a model fit by checking independence
of the residuals. As suggested in Mooij et al. [2009], Yamada and Sugiyama [2010], one may make
the independence of the residuals as a criterion for the fitting process or at least for order selection.
8 Appendix

Lemma 1 (Markov Condition for TiMINo). If Xt = (X_t^i)_{i∈V} satisfy a TiMINo model, each
variable X_t^i is conditionally independent of each of its non-descendants given its parents.

Proof. With S := PA(X_t^i) = ∪_{k=0}^{p} (PA_k^i)_{t−k} and Eq. (1) we get X_t^i |_{S=s} = f_i(s, N_t^i) for an s
with p(s) > 0. Any non-descendant of X_t^i is a function of all noise variables from its ancestors and
is thus independent of X_t^i given S = s. This is the only time we assume t ∈ ℕ in this paper.

Proof of Theorem 1. Suppose that Xt allows for two TiMINo representations that lead to different
full time graphs G and G′. (i) Assume that G and G′ do not differ in the instantaneous effects:
PA_0^i (in G) = PA_0^i (in G′) for all i. Wlog, there is some k > 0 and an edge X^1_{t−k} → X^2_t, say, that is in
G but not in G′. From G′ and Lemma 1 we have that X^1_{t−k} ⊥⊥ X^2_t | S, where S = ({X^i_{t−l}, 1 ≤ l ≤
p, i ∈ V} ∪ ND^2_t) \ {X^1_{t−k}, X^2_t}, and ND^2_t are all X_t that are non-descendants (wrt instantaneous
effects) of X^2_t. Applied to G, causal minimality leads to a contradiction: X^1_{t−k} is not independent of X^2_t given S. Now,
let G and G′ differ in the instantaneous effects and choose S = {X^i_{t−l}, 1 ≤ l ≤ p, i ∈ V}. For
each s and i we have: X^i_t |_{S=s} = f_i(s, (P̃A^i_0)_t), where P̃A^i_0 are all instantaneous parents of X^i_t
conditioned on S = s. All X^i_t |_{S=s} with the instantaneous effects describe two different structures
of an IFMOC. This contradicts the identifiability results by Peters et al. [2011b]. (ii) Because of
Lemma 1 and faithfulness, G and G′ only differ in the instantaneous effects. But each instantaneous
arrow X^i_t → X^j_t forms a v-structure together with X^j_{t−k} → X^j_t; X^j_{t−k} cannot be connected with
X^i_t since this introduces a cycle in the summary time graph.

Proof of Theorem 2. Two full time graphs G and G′ for X̃_t can differ only in the directions of edges
between time series. Assume X^i_t → X^j_{t+k} in G and X^i_t ← X^j_{t+k} in G′. Choose the largest k
possible. Then there is a v-structure X^i_{t−ℓ} → X^i_t → X^j_{t+k} for some ℓ. A connection between X^i_{t−ℓ}
and X^j_{t+k} would lead to a pair as above with a larger k.
References
N. Ancona, D. Marinazzo, and S. Stramaglia. Radial basis function approach to nonlinear Granger causality of
time series. Phys. Rev. E, 70(5):056221, 2004.
A. Asuncion and D. J. Newman. UCI repository. http://archive.ics.uci.edu/ml/, 2007.
A. Azzalini and A. W. Bowman. A look at some data on the Old Faithful Geyser. Applied Statistics, 39(3):
357–365, 1990.
D. Bell, J. Kay, and J. Malley. A non-parametric approach to non-linear causality testing. Economics Letters,
51(1):7–18, 1996.
G. E. P. Box, G. M. Jenkins, and G. C. Reinsel. Time series analysis: forecasting and control. Wiley series in
probability and statistics. John Wiley, 2008.
P. J. Brockwell and R. A. Davis. Time Series: Theory and Methods. Springer, 2nd edition, 1991.
R. B. Buxton, E. C. Wong, and L. R. Frank. Dynamics of blood flow and oxygenation changes during brain
activation: The balloon model. Magnetic Resonance in Medicine, 39(6):855–864, 1998.
Y. Chen, G. Rangarajan, J. Feng, and M. Ding. Analyzing multiple nonlinear time series with extended Granger
causality. Physics Letters A, 324, 2004.
Z. Chen, K. Zhang, and L. Chan. Causal discovery with scale-mixture model for spatiotemporal variance
dependencies. In NIPS 25, 2012.
T. Chu and C. Glymour. Search for additive nonlinear time series causal models. Journal of Machine Learning
Research, 9:967–991, 2008.
M. Eichler. Graphical modelling of multivariate time series. Probability Theory and Related Fields, 2011.
D. Entner and P. Hoyer. Discovering unconfounded causal relationships using linear non-Gaussian models. In
JSAI-isAI Workshops, 2010.
J. P. Florens and M. Mouchart. A note on noncausality. Econometrica, 50(3):583–591, 1982.
C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3):424–438, July 1969.
A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In NIPS 20, Canada, 2008.
T. J. Hastie and R. J. Tibshirani. Generalized additive models. London: Chapman & Hall, 1990.
P. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise
models. In NIPS 21, Canada, 2009.
A. Hyvärinen, S. Shimizu, and P. Hoyer. Causal modelling combining instantaneous and lagged effects: an
identifiable model based on non-gaussianity. In ICML 25, 2008.
D. Janzing and B. Steudel. Justifying additive-noise-model based causal discovery via algorithmic information
theory. Open Systems and Information Dynamics, 17:189–212, 2010.
D. Janzing, J. Peters, J. M. Mooij, and B. Schölkopf. Identifying confounders using additive noise models. In
UAI 25, 2009.
J. Mooij, D. Janzing, J. Peters, and B. Schölkopf. Regression by dependence minimization and its application
to causal inference. In ICML 26, 2009.
G. Nolte, A. Ziehe, V. Nikulin, A. Schlögl, N. Krämer, T. Brismar, and K.-R. Müller. Robustly Estimating the
Flow Direction of Information in Complex Physical Systems. Physical Review Letters, 100, 2008.
J. Pearl. Causality: Models, reasoning, and inference. Cambridge Univ. Press, 2nd edition, 2009.
J. Peters, D. Janzing, A. Gretton, and B. Schölkopf. Detecting the direction of causal time series. In ICML 26, 2009.
J. Peters, D. Janzing, and B. Schölkopf. Causal inference on discrete data using additive noise models. IEEE
Trans. Pattern Analysis and Machine Intelligence, 33(12):2436–2450, 2011a.
J. Peters, J. Mooij, D. Janzing, and B. Schölkopf. Identifiability of causal graphs using functional models. In
UAI 27, 2011b.
J. Peters, J. Mooij, D. Janzing, and B. Schölkopf. Causal discovery with continuous additive noise models,
2013. arXiv:1309.6779.
C. Quinn, T. Coleman, N. Kiyavash, and N. Hatsopoulos. Estimating the directed information to infer causal
relationships in ensemble neural spike train recordings. Journal of Comp. Neuroscience, 30(1):17–44, 2011.
S. Shimizu, P. Hoyer, A. Hyvärinen, and A. J. Kerminen. A linear non-Gaussian acyclic model for causal
discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, and
M. W. Woolrich. Network modelling methods for FMRI. NeuroImage, 54(2):875–891, 2011.
P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd edition, 2000.
M. Yamada and M. Sugiyama. Dependence minimizing regression with model selection for non-linear causal
inference under non-Gaussian noise. In AAAI. AAAI Press, 2010.
Discovering Hidden Variables in Noisy-Or Networks
using Quartet Tests
Yacine Jernite, Yoni Halpern, David Sontag
Courant Institute of Mathematical Sciences
New York University
{halpern, jernite, dsontag}@cs.nyu.edu
Abstract
We give a polynomial-time algorithm for provably learning the structure and parameters of bipartite noisy-or Bayesian networks of binary variables where the
top layer is completely hidden. Unsupervised learning of these models is a form
of discrete factor analysis, enabling the discovery of hidden variables and their
causal relationships with observed data. We obtain an efficient learning algorithm
for a family of Bayesian networks that we call quartet-learnable. For each latent
variable, the existence of a singly-coupled quartet allows us to uniquely identify
and learn all parameters involving that latent variable. We give a proof of the polynomial sample complexity of our learning algorithm, and experimentally compare
it to variational EM.
1
Introduction
We study the problem of discovering the presence of latent variables in data and learning models
involving them. The particular family of probabilistic models that we consider are bipartite noisy-or
Bayesian networks where the top layer is completely hidden. Unsupervised learning of these models
is a form of discrete factor analysis and has applications in sociology, psychology, epidemiology,
economics, and other areas of scientific inquiry that need to identify the causal relationships of
hidden or latent variables with observed data (Saund, 1995; Martin & VanLehn, 1995). Furthermore,
these models are widely used in expert systems, such as the QMR-DT network for medical diagnosis
(Shwe et al. , 1991). The ability to learn the structure and parameters of these models from partially
labeled data could dramatically increase their adoption.
We obtain an efficient learning algorithm for a family of Bayesian networks that we call quartetlearnable, meaning that every latent variable has a singly-coupled quartet (i.e. four children of
a latent variable for which there is no other latent variable that is shared by at least two of the
children). We show that the existence of such a quartet allows us to uniquely identify each latent
variable and to learn all parameters involving that latent variable. Furthermore, using a technique
introduced by Halpern & Sontag (2013), we show how to subtract already learned latent variables
to create new singly-coupled quartets, substantially expanding the class of structures that we can
learn. Importantly, even if we cannot discover every latent variable, our algorithm guarantees the
correctness of any latent variable that was discovered. We show in Sec. 4 that our algorithm can
learn nearly all of the structure of the QMR-DT network for medical diagnosis (i.e., discovering the
existence of hundreds of diseases) simply from data recording the symptoms of each patient.
Underlying our algorithm are two new techniques for structure learning. First, we introduce a quartet
test to determine whether a set of binary variables is singly-coupled. When singly-coupled variables
are found, we use previous results in mixture model learning to identify the coupling latent variable.
Second, we develop a conditional point-wise mutual information test to learn parameters of other
children of identified latent variables. We give a self-contained proof of the polynomial sample
[Figure 1: diagrams of the two example networks described in the caption below]
Figure 1: Left: Example of a quartet-learnable network. For this network, the order (X, Y, Z) satisfies the definition: {a, b, c, d} is singly coupled by X, {c, e, f, g} is singly coupled by Y given X and {d, g, h, i} is singly
coupled by Z given X, Y . Right: Example of two different networks that have the same observable moments
(i.e., distribution on a, b, c). pX = 0.2, pY = 0.3, pZ = 0.37. fX = (0.1, 0.2, 0.3), fY = (0.6, 0.4, 0.5),
fZ = (0.28, 0.23, 0.33). The noise probabilities and full moments are given in the supplementary material.
complexity of our structure and parameter learning algorithms, by bounding the error propagation
due to finding roots of polynomials. Finally, we present an experimental comparison of our structure
learning algorithm to the variational expectation maximization algorithm of Šingliar & Hauskrecht
(2006) on a synthetic image-decomposition problem and show competitive results.
Related work. Martin & VanLehn (1995) study structure learning for noisy-or Bayesian networks,
observing that any two observed variables that share a hidden parent must be correlated. Their algorithm greedily attempts to find a small set of cliques that cover the dependencies of which it is
most certain. Kearns & Mansour (1998) give a polynomial-time algorithm with provable guarantees
for structure learning of noisy-or bipartite networks with bounded in-degree. Their algorithm incrementally constructs the network, in each step adding a new observed variable, introducing edges
from the existing latent variables to the observed variable, and then seeing if new latent variables
should be created. This approach requires strong assumptions, such as identical priors for the hidden
variables and all incoming edges for an observed variable having the same failure probabilities.
Silva et al. (2006) study structure learning in linear models with continuous latent variables, giving
an algorithm for discovering disjoint subsets of observed variables that have a single hidden variable
as its parent. Recent work has used tensor methods and sparse recovery to learn linear latent variable
models with graph expansion (Anandkumar et al. , 2013), and also continuous admixture models
such as latent Dirichlet allocation (Anandkumar et al. , 2012a). The discrete variable setting is
not linear, making it non-trivial to apply these methods that rely on linearity of expectation. An
alternative approach is to perform gradient ascent on the likelihood or use expectation maximization
(EM). Although more robust to model error, the likelihood is nonconvex and these methods do not
have consistency guarantees. Elidan et al. (2001) seek "structural signatures", in their case semi-cliques, to use as structure candidates within structural EM (Elidan & Friedman, 2006; Friedman,
1997; Lazic et al. , 2013). Our algorithm could be used in the same way.
Exact inference is intractable in noisy-or networks (Cooper, 1987), so Šingliar & Hauskrecht (2006)
give a variational EM algorithm for unsupervised learning of the parameters of a bipartite noisy-or
network. We will use this as a baseline in our experimental results.
Spectral approaches to learning mixture models originated with Chang?s spectral method (Chang
1996; analyzed in Mossel & Roch 2005, see also Anandkumar et al. (2012b)). The binary variable
setting is a special case and is discussed in Lazarsfeld (1950) and Pearl & Tarsi (1986). In Halpern
& Sontag (2013) the parameters of singly-coupled variables in bipartite networks of known structure
are learned using mixture model learning.
Quartet tests have been previously used for learning latent tree models (Anandkumar et al. , 2011;
Pearl & Tarsi, 1986). Our quartet test, like that of Ishteva et al. (2013) and Eriksson (2005), uses
the full fourth-order moment and a similar unfolding of the fourth-order moment matrix.
Background. We consider bipartite noisy-or Bayesian networks (G, Θ) with n binary latent variables U, which we denote with capital letters (e.g. X), and m observed binary variables O, which
we denote with lower case letters (e.g. a). The edges in the model are directed from the latent variables to the observed variables, as shown in Fig. 1. In the noisy-or framework, an observed variable
is on if at least one of its parents is on and does not fail to activate it.
The entire Bayesian network is parametrized by n·m + n + m parameters. These parameters consist
of prior probabilities on the latent variables, p_X for X ∈ U, failure probabilities between latent and
observed variables, f_X (a vector of size m), and noise or leak probabilities ν = {ν_1, ..., ν_m}. An
equivalent formulation includes the noise in the model by introducing a single "noise" latent variable,
X_0, which is present with probability p_0 = 1 and has failure probabilities f_0 = 1 − ν. The Bayesian
network only has an edge between latent variable X and observed variable a if f_{X,a} < 1. The
generative process for the model is then:
• The states of the latent variables are drawn independently: X ∼ Bernoulli(p_X) for X ∈ U.
• Each X ∈ U with X = 1 activates observed variable a with probability 1 − f_{X,a}.
• An observed variable a ∈ O is "on" (a = 1) if it is activated by at least one of its parents.
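As a concrete illustration of this generative process, here is a small sampling routine. This is our own sketch, not code from the paper; it assumes numpy, and all names are ours. A variable stays off exactly when every active parent fails and the leak does not fire, which is the quantity the code computes in log-space:

```python
import numpy as np

def sample_noisy_or(p, F, leak, n_samples, seed=None):
    """Sample latent and observed states from a bipartite noisy-or network.

    p    : (n,) priors of the latent variables
    F    : (n, m) failure probabilities f_{X,a}
    leak : (m,) leak probabilities (spontaneous activation of each observed)
    """
    rng = np.random.default_rng(seed)
    n, m = F.shape
    U = (rng.random((n_samples, n)) < p).astype(float)   # latent states
    # P(o off | u) = (1 - leak_o) * prod of f_{X,o} over active parents X
    log_fail = np.log(np.clip(F, 1e-300, 1.0))
    off_prob = (1.0 - leak) * np.exp(U @ log_fail)
    O = rng.random((n_samples, m)) >= off_prob
    return U.astype(bool), O
```

With many samples, the empirical probability that an observed variable is off should match the factorized expression (1 − leak_a)(1 − p_X + p_X f_{X,a}) used throughout the paper.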
The algorithms described in this paper make substantial use of sets of moments of the observed
variables, particularly the negative moments. Let S ⊆ O be a set of observed variables, and 𝒳 ⊆ U
be the set of parents of S. The joint distribution of a bipartite noisy-or network can be shown to have
the following factorization, where S = {o_1, ..., o_{|S|}}:

    N_{G,S} = P(o_1 = 0, o_2 = 0, ..., o_{|S|} = 0) = ∏_{U ∈ 𝒳} (1 − p_U + p_U ∏_{i=1}^{|S|} f_{U,o_i}).    (1)
The full joint distribution can be obtained from the negative moments via inclusion-exclusion formulas. We denote N_G to be the set of negative moments of the observed variables under (G, Θ). In
the remainder of this section we will review two results described in Halpern & Sontag (2013).
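The factorization in Eq. (1) is cheap to evaluate. The following is our own illustrative sketch (assuming numpy; the leak can be folded in as a noise parent with prior 1, as in the text):

```python
import numpy as np

def negative_moment(p, F, obs):
    """N_{G,S}: probability that every observed variable in `obs` is off,
    computed via the factorization of Eq. (1).

    p   : iterable of latent priors p_U
    F   : per-latent iterables of failure probabilities f_{U,o}
    obs : indices of the observed variables in S
    """
    val = 1.0
    for p_U, f_U in zip(p, F):
        val *= 1.0 - p_U + p_U * np.prod([f_U[o] for o in obs])
    return val
```

Because the latent variables are independent, this product agrees with a brute-force sum over all latent configurations.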
Parameter learning of singly-coupled triplets. We say that a set O of observed variables is singly-coupled by a parent X if X is a parent of every member of O and there is no other parent Y that is
shared by at least two members of O. A singly coupled set of observations is a binary mixture model,
which gives rise to the next result based on a rank-2 tensor decomposition of the joint distribution.
If (a, b, c) are singly-coupled by X, we can learn p_X and f_{X,a} as follows. Let M_1 = P(b, c, a = 0),
M_2 = P(b, c, a = 1), and M_3 = M_2 M_1^{-1}. Solving for (λ_1, λ_2) = eigenvalues(M_3), we then have:

    p_X = [(1 + λ_2) / (λ_2 − λ_1)] · 1ᵀ(M_2 − λ_1 M_1)1    and    f_{X,a} = (1 + λ_1) / (1 + λ_2).    (2)
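Eq. (2) amounts to a few lines of linear algebra. Below is our own hedged sketch (assuming numpy and an exact joint of a singly-coupled triplet; with empirical moments the recovered values are only approximate):

```python
import numpy as np

def learn_coupled_triplet(joint):
    """Recover (p_X, f_{X,a}) from the joint of a triplet (a, b, c) that is
    singly coupled by X, following Eq. (2).  joint[i, j, k] = P(a=i, b=j, c=k).
    """
    M1 = joint[0]                      # 2x2 matrix P(b, c, a = 0)
    M2 = joint[1]                      # 2x2 matrix P(b, c, a = 1)
    M3 = M2 @ np.linalg.inv(M1)
    l1, l2 = np.sort(np.linalg.eigvals(M3).real)   # lambda_1 <= lambda_2
    one = np.ones(2)
    p_X = (1 + l2) / (l2 - l1) * (one @ (M2 - l1 * M1) @ one)
    f_Xa = (1 + l1) / (1 + l2)
    return p_X, f_Xa
```

The eigenvalues of M_3 are the odds P(a = 1 | X = x)/P(a = 0 | X = x) for the two latent states, which is what makes both formulas work out.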
Subtracting off. Because of the factored form of Equation 1, we can remove the influence of a
latent variable from the negative moments. Let X be a latent variable of G. Let S ⊆ O be a
set of observations and 𝒳 be the parents of S. If we know N_{G,S}, the prior of X, and the failure
probabilities f_{X,S}, we can obtain the negative moments of S under (G \ {X}, Θ). When S includes
all of the children of X, this operation "subtracts off" or removes X from the network:

    N_{G\X,S} = ∏_{U ∈ 𝒳\{X}} (1 − p_U + p_U ∏_{i=1}^{|S|} f_{U,o_i}) = N_{G,S} / (1 − p_X + p_X ∏_{i=1}^{|S|} f_{X,o_i}).    (3)
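The subtract-off operation of Eq. (3) is a single division by the factor contributed by X. A minimal sketch (our own code, illustrative names):

```python
def subtract_off(neg_moment, p_X, f_X, obs):
    """Divide out latent variable X's factor from N_{G,S} (Eq. 3).

    neg_moment : N_{G,S}, the probability that all variables in `obs` are off
    p_X, f_X   : prior of X and its failure probabilities (indexable by obs)
    """
    prod = 1.0
    for o in obs:
        prod *= f_X[o]
    return neg_moment / (1.0 - p_X + p_X * prod)
```

After the division, what remains is exactly the negative moment of the network with X deleted, which is what lets already-learned latent variables be peeled away to expose new singly-coupled quartets.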
2 Structure learning
Our paper focuses on learning the structure of these bipartite networks, including the number of
latent variables. We begin with the observation that not all structures are identifiable, even if given
infinite data. Suppose we applied the tensor decomposition method to the marginal distribution
(moments) of three observed variables that share two parents. Often we can learn a network with
the same marginal distribution, but where these three variables have just one parent. Figure 1 gives
an example of such a network. As a result, if we hope to be able to learn structure, we need to make
additional assumptions (e.g., every latent variable has at least four children).
We give two variants of an algorithm based on quartet tests, and prove its correctness in Section 3.
Our approach is based on decomposing the structure learning problem into two tasks: (1) identifying
the latent variables, and (2) determining to which observed variables they are connected.
2.1 Finding singly coupled quartets
Since triplets are not sufficient to identify a latent variable (Figure 1), we propose a new approach
based on identifying singly-coupled quartets. We present two methods to find such quartets. The
Algorithm 1 STRUCTURE-LEARN
Input: Observations S, thresholds τ_q, τ_q', τ_e
Output: Latent structure Latent
1: Latent = {}
2: while Not Converged do
3:   for all quartets (a, b, c, d) in S do
4:     T ← JOINT(a, b, c, d)
5:     T ← ADJUST(T, Latent)
6:     if PRETEST(T, τ_e) and 4TEST(T, τ_q, τ_q') then
7:       // (a, b, c, d) are singly-coupled.
8:       L ← MIXTURE(a, b, c, d)
9:       children ← EXTEND(L, Latent, τ_e)
10:      Latent ← Latent ∪ {(L, children)}
11:     end if
12:   end for
13: end while

Algorithm 2 EXTEND
Input: Latent variable L with singly-coupled children (a, b, c, d), currently known latent structure Latent, threshold τ
Output: children, all the children of L
1: children = {(a, f_{L,a}), (b, f_{L,b}), (c, f_{L,c}), (d, f_{L,d})}
2: for all observable x ∉ {a, b, c, d} do
3:   Subtract off coupling parents in Latent from the moments
4:   if P(ā, b̄) / (P(ā) P(b̄)) > P(ā, b̄ | x̄) / (P(ā | x̄) P(b̄ | x̄)) + τ then
5:     f_{L,x} = FAILURE(a, b, x, L)
6:     children ← children ∪ {(x, f_{L,x})}
7:   end if
8: end for
9: Return children

Figure 2: Structure learning. Left: Main routine of the algorithm. JOINT gives the joint distribution
and ADJUST subtracts off the influence of the latent variables (Eq. 3). PRETEST filters the set
of candidate quartets by determining whether every triplet in a quartet has a shared parent, using
Lemma 2. 4TEST refers to either of the quartet tests described in Section 2.1. τ_q' is only used in the
coherence quartet test. MIXTURE refers to using Eq. 2 to learn the parameters for all triplets in a
singly-coupled quartet. This yields multiple estimates for each parameter and we take the median.
Right: Algorithm to identify all of the children of a latent variable. FAILURE uses the method
outlined in Section 2.2 (see Eq. 6) to find the failure probability f_{L,x}.
first is based on a rank test on a matrix formed from the fourth order moments and the second uses
variance of parameters learned from third order moments. We then present a method that uses the
point-wise mutual information of a triplet to identify all the other children of the new latent variable.
The outline of the learning algorithm is presented in Algorithm 1.
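The conditional point-wise mutual information test used by EXTEND can be sketched as follows (our own illustrative code, assuming numpy; the paper's exact thresholded form appears in Algorithm 2). If x shares the coupling parent of (a, b), conditioning on x̄ shifts the posterior toward the parent being off, so the conditional PMI of (ā, b̄) shrinks and the gap is positive:

```python
import numpy as np

def pmi_gap(joint_abx):
    """PMI of (a=0, b=0) minus the same PMI conditioned on x=0.

    joint_abx[i, j, k] = P(a=i, b=j, x=k).  A gap clearly above zero
    suggests x is a child of the parent that couples (a, b).
    """
    p_ab = joint_abx.sum(axis=2)
    pmi = p_ab[0, 0] / (p_ab[0, :].sum() * p_ab[:, 0].sum())
    cond = joint_abx[:, :, 0] / joint_abx[:, :, 0].sum()  # P(a, b | x = 0)
    pmi_x = cond[0, 0] / (cond[0, :].sum() * cond[:, 0].sum())
    return pmi - pmi_x
```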
While not all networks can be learned, this method allows us to define a class of noisy-or networks
on which we can perform structure learning.
Definition 1. A noisy-or network is quartet-learnable if there exists an ordering of its latent variables such that each one has a quartet of children which are singly coupled once the previous latent
variables are removed from the model. A noisy-or network is strongly quartet-learnable if all of its
latent variables have a singly coupled quartet of children.
An example of a quartet-learnable network is given in Figure 1.
Rank test. A candidate quartet for the rank test is a quartet where all nodes have at least one common
parent. One way to find whether a candidate quartet is singly coupled is by looking directly at the
rank of its fourth-order moments matrix. We have three ways to unfold the 2 × 2 × 2 × 2 tensor
defined by these moments into a 4 × 4 matrix: we can consider the joint probability matrix of the
aggregated variables (a, b) and (c, d), of (a, c) and (b, d), or of (a, d) and (b, c). We discuss the rank
property for the first unfolding, but note that it holds for all three.
Let M be the 4 × 4 matrix obtained this way, and $\mathcal{S}$ be the set of parents that are parents of both
(a, b) and (c, d). For all $S \subseteq \mathcal{S}$ let $q_S$ and $r_S$ be the vectors of the probabilities of (a, b) and (c, d)
respectively given that S is the set of parents that are active. Then:
$$M = \sum_{S \subseteq \mathcal{S}} \Big( \prod_{X \in S} p_X \prod_{Y \in \mathcal{S} \setminus S} (1 - p_Y) \Big)\, q_S\, r_S^{T}.$$
In particular, this means that if there is only one parent shared between (a, b) and (c, d), M is the
sum of two rank 1 matrices, and thus is at most rank 2.
Conversely, if $|\mathcal{S}| > 1$, M is the sum of at least 4 rank 1 matrices, and its elements are polynomial
expressions of the parameters of the model. The determinant itself is then a polynomial function
of the parameters of the model, i.e. $P(p_X, f_{X,u}\ \forall X \in \mathcal{S},\ u \in \{a, b, c, d\})$. We give examples in
the supplementary material of parameter settings showing that $P \not\equiv 0$, hence the set of its roots has
measure 0, which means that the third largest eigenvalue (using the eigenvalues' absolute values) of
M is non-zero with probability one.
This will allow us to determine whether a candidate quartet is singly coupled by looking at the third
eigenvalues of the three unfoldings of its joint distribution tensor. However, for the algorithm to be
practical, we need a slightly stronger formalization of the property:
Definition 2. We say that a model is λ-rank-testable if for any quartet {a, b, c, d} that share a parent
U and any non-empty set of latent variables $\mathcal{V}$ such that $U \notin \mathcal{V}$ and $\forall V \in \mathcal{V}$, $(f_{V,b} \ne 1 \wedge f_{V,c} \ne 1)$,
the third eigenvalue of the moments matrix M corresponding to the sub-network $\{U, a, b, c, d\} \cup \mathcal{V}$
is at least λ.
Any (finite) noisy-or network whose parameters were drawn at random is λ-rank-testable for some
λ > 0 with probability 1. The special case where all failure probabilities are equal also falls within this
framework, provided they are not too close to 0 or 1. We can then determine whether a quartet is
singly coupled by testing whether the third eigenvalues of all three unfoldings of the joint
distributions are below a threshold τ_q. If this test succeeds, we learn its parameters using Eq. 2.
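As a concrete sanity check of this rank property, the following sketch (not from the paper; parameters are hypothetical and the network is assumed leak-free) computes the exact joint distribution of a quartet by enumerating the latent states and compares the third singular values of the three unfoldings for a singly coupled versus a doubly coupled quartet:

```python
import itertools

import numpy as np

def noisy_or_joint(priors, fail):
    """Exact joint distribution of the children of a leak-free bipartite
    noisy-or network, as a 2x2x2x2 array. fail[k, j] is the failure
    probability of child j given that parent k is active."""
    K, m = fail.shape
    joint = np.zeros((2,) * m)
    for z in itertools.product([0, 1], repeat=K):
        pz = np.prod([p if zk else 1 - p for p, zk in zip(priors, z)])
        # P(child j stays off) = product of failure probs of its active parents
        off = np.prod([fail[k] for k in range(K) if z[k]], axis=0) if any(z) else np.ones(m)
        for xs in itertools.product([0, 1], repeat=m):
            joint[xs] += pz * np.prod([1 - off[j] if x else off[j] for j, x in enumerate(xs)])
    return joint

def third_singular_values(joint):
    """Third singular value of each of the three 4x4 unfoldings."""
    svs = []
    for perm in [(0, 1, 2, 3), (0, 2, 1, 3), (0, 3, 1, 2)]:
        M = joint.transpose(perm).reshape(4, 4)
        svs.append(np.linalg.svd(M, compute_uv=False)[2])
    return svs

# One shared parent: every unfolding is a sum of two rank-1 matrices.
single = noisy_or_joint([0.3], np.array([[0.2, 0.2, 0.2, 0.2]]))
# Two shared parents, each coupling all four children.
double = noisy_or_joint([0.3, 0.4],
                        np.array([[0.2, 0.2, 0.2, 0.2],
                                  [0.3, 0.1, 0.25, 0.15]]))
print(max(third_singular_values(single)))  # numerically zero
print(min(third_singular_values(double)))  # clearly positive
```

In the singly coupled case the third singular value vanishes up to floating-point error, while in the doubly coupled case it stays bounded away from zero, so a threshold set between the two regimes separates them.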
Coherence test. Let {a, b, c, d} be a quartet of observed variables. To determine whether it is singly
coupled, we can also apply Eq. 2 to learn the parameters of triplets (a, b, c), (a, b, d), (a, c, d) and
(b, c, d) as if they were singly coupled. This gives us four overlapping sets of parameters. If the
variance of parameter estimates exceeds a threshold we know that the quartet is not singly coupled.
Note that agreement between the parameters learned is necessary but not sufficient to determine
that (a, b, c, d) are singly coupled. For example, in the case of a fully connected graph with two
parents, four children and identical failure probabilities, the third-order moments of any triplet are
identical, hence the parameters learned will be the same. Lemma 1, however, states that the moments
generated from the estimated parameters can only be equal to the true moments if the quartet is
actually singly coupled.
Lemma 1. If the model is λ-rank-testable and (a, b, c, d) are not singly coupled, then if $M_R$ represents the reconstructed moments and M the true moments, we have:
$$\|M_R - M\|_{\infty} > \frac{\lambda^{4}}{8}.$$
This can be proved using a result on eigenvalue perturbation from Elsner (1985) for an unfolding
of the moments' tensor. These two properties lead to the following algorithm: First try to learn the
parameters as if the quartet were singly coupled. If the variance of the parameter estimates exceeds
a threshold, then reject the quartet. Next, check whether we can reconstruct the moments using the
mean of the parameter estimates. Accept the quartet as singly-coupled if the reconstruction error is
below a second threshold.
2.2 Extending Latent Variables
Once we have found a singly coupled quartet (a, b, c, d), the second step is to find all other children of the coupling parent A. To that end, we can use a property of the conditional point-wise mutual information (CPMI) that we introduce in this section. In this section, we use the
notation $\bar a$ to denote the event a = 0. The CPMI between a and b given x is defined as
$\mathrm{CPMI}(a, b \mid x) \equiv P(\bar a, \bar b \mid \bar x) / (P(\bar a \mid \bar x)\, P(\bar b \mid \bar x))$. We will compare it to the point-wise mutual information (PMI) between a and b, defined as $\mathrm{PMI}(a, b) \equiv P(\bar a, \bar b) / (P(\bar a)\, P(\bar b))$.
Let (a, b) be two observed variables that we know share only one parent A, and let x be any other
observed variable. We show how the CPMI between a and b given x can be used to find $f_{A,x}$, the
failure probability of x given A. Our algorithm requires that the priors of all of the hidden variables
be less than 1/2.
For any observed variable x, the following lemma states that $\mathrm{CPMI}(a, b \mid x) \ne \mathrm{PMI}(a, b)$ if and only
if a, b and x share a parent. Since the only latent variable that has both a and b as children is A, this
is equivalent to saying that x is a child of A.
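This criterion can be checked numerically. The sketch below (hypothetical parameters, leak-free network) computes exact negative moments by enumerating the latent states, and compares CPMI to PMI for a variable that shares a parent with (a, b) and for one that does not:

```python
import itertools

import numpy as np

def neg_moment(priors, fail, idx):
    """P(all observed variables in idx are off) for a leak-free bipartite
    noisy-or network, computed by summing over the latent states."""
    total = 0.0
    for z in itertools.product([0, 1], repeat=len(priors)):
        pz = np.prod([p if zk else 1 - p for p, zk in zip(priors, z)])
        total += pz * np.prod(
            [fail[k, j] for k in range(len(priors)) if z[k] for j in idx] or [1.0])
    return total

def pmi(priors, fail, a, b):
    return neg_moment(priors, fail, [a, b]) / (
        neg_moment(priors, fail, [a]) * neg_moment(priors, fail, [b]))

def cpmi(priors, fail, a, b, x):
    px = neg_moment(priors, fail, [x])
    return (neg_moment(priors, fail, [a, b, x]) / px) / (
        (neg_moment(priors, fail, [a, x]) / px) * (neg_moment(priors, fail, [b, x]) / px))

# Latent A is a parent of children 0, 1, 2; latent B of children 2, 3.
# A failure probability of 1.0 encodes the absence of an edge.
priors = [0.3, 0.4]
fail = np.array([[0.2, 0.3, 0.5, 1.0],
                 [1.0, 1.0, 0.6, 0.4]])
print(cpmi(priors, fail, 0, 1, 2) - pmi(priors, fail, 0, 1))  # nonzero: 2 shares parent A with (0, 1)
print(cpmi(priors, fail, 0, 1, 3) - pmi(priors, fail, 0, 1))  # zero: 3 shares no parent with both 0 and 1
```

Variable 3 is a child of B only, so conditioning on it leaves the posterior of A untouched and CPMI equals PMI exactly; variable 2 is a child of the common parent A, and the two quantities differ.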
Lemma 2. Let (a, b, x) be three observed variables in a noisy-or network, and let $\mathcal{U}_{a,b}$ be the set of
common parents of a and b. For $U \in \mathcal{U}_{a,b}$, defining
$$p_{U|\bar x} = \frac{P(U, \bar x)}{P(\bar x)} = \frac{p_U f_{U,x}}{1 - p_U + p_U f_{U,x}}, \qquad (4)$$
we have $p_{U|\bar x} \le p_U$. Furthermore,
$$\frac{P(\bar a, \bar b \mid \bar x)}{P(\bar a \mid \bar x)\, P(\bar b \mid \bar x)} \;=\; \prod_{U \in \mathcal{U}_{a,b}} \frac{1 - p_{U|\bar x} + p_{U|\bar x} f_{U,a} f_{U,b}}{(1 - p_{U|\bar x} + p_{U|\bar x} f_{U,a})(1 - p_{U|\bar x} + p_{U|\bar x} f_{U,b})} \;\le\; \frac{P(\bar a, \bar b)}{P(\bar a)\, P(\bar b)},$$
with equality if and only if (a, b, x) do not share a parent.
The proof for Lemma 2 is given in the supplementary material. As a result, if a and b have only
parent A in common, we can write:
$$R \equiv \mathrm{CPMI}(a, b \mid x) = \frac{P(\bar a, \bar b \mid \bar x)}{P(\bar a \mid \bar x)\, P(\bar b \mid \bar x)} = \frac{1 - p_{A|\bar x} + p_{A|\bar x} f_{A,a} f_{A,b}}{(1 - p_{A|\bar x} + p_{A|\bar x} f_{A,a})(1 - p_{A|\bar x} + p_{A|\bar x} f_{A,b})}.$$
We can equivalently write this equation as $Q(p_{A|\bar x}) = 0$ for the quadratic function Q(x) given by:
$$Q(x) = R(f_{A,a} - 1)(f_{A,b} - 1)x^2 + \big[R(f_{A,a} + f_{A,b} - 2) - (f_{A,a} f_{A,b} - 1)\big]x + R - 1. \qquad (5)$$
Moreover, we can show that $Q'(x) = 0$ for some x > 1/2, hence one of the roots of Q is always
greater than 1/2. In our framework, we know that $p_{A|\bar x} \le p_A \le \frac{1}{2}$, hence $p_{A|\bar x}$ is simply the smaller
root of Q. After solving for $p_{A|\bar x}$, we can obtain $f_{A,x}$ using Eq. 4:
$$f_{A,x} = \frac{p_{A|\bar x}(1 - p_A)}{p_A(1 - p_{A|\bar x})}. \qquad (6)$$
Extending step. Once we find a singly-coupled quartet (a, b, c, d) with common parent A, Lemma 2
allows us to determine whether a new variable x is also a child of A. Notice that for this step we
only need to use two of the children in {a, b, c, d}, which we arbitrarily choose to be a and b. If x is
found to be a child of A, we can solve for $f_{A,x}$ using Eqs. 5 and 6. Algorithm 2 combines these two
steps to find the parameters of all the children of A after a singly-coupled quartet has been found.
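A small numerical illustration of the extending step (hypothetical parameters, one latent parent, no leak term): compute R from exact moments, take the smaller root of the quadratic in Eq. 5, and invert Eq. 4 via Eq. 6 to recover the failure probability:

```python
import numpy as np

# Planted model: one latent A with children a, b, x and no leak term.
pA, fa, fb, fx = 0.3, 0.2, 0.4, 0.5

def p_off(*fails):
    """P(the listed children are all off): (1 - pA) + pA * prod(failure probs)."""
    return (1 - pA) + pA * np.prod(fails)

# R = CPMI(a, b | x), computed from exact negative moments.
px = p_off(fx)
R = (p_off(fa, fb, fx) / px) / ((p_off(fa, fx) / px) * (p_off(fb, fx) / px))

# Quadratic from Eq. 5; p_{A|x-bar} is its smaller root.
coeffs = [R * (fa - 1) * (fb - 1),
          R * (fa + fb - 2) - (fa * fb - 1),
          R - 1]
pA_given_xbar = min(np.roots(coeffs).real)

# Eq. 6 recovers the failure probability of x given A.
f_Ax = pA_given_xbar * (1 - pA) / (pA * (1 - pA_given_xbar))
print(pA_given_xbar, f_Ax)  # f_Ax recovers the planted fx = 0.5
```

The smaller root equals $p_{A|\bar x} = p_A f_{A,x} / (1 - p_A + p_A f_{A,x})$, and the larger root lies above 1/2 as stated in the text.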
Parameter learning with known structure. When the structure of the network is known, singly-coupled triplets are sufficient for identifiability without resorting to the quartet tests in Section 2.1.
That setting was previously studied in Halpern & Sontag (2013), which required every edge to be
part of a singly coupled triplet or pair for its parameters to be learnable (possibly after subtracting
off latent variables). Our new CPMI technique improves this result by enabling us to learn all failure
probabilities for a latent variable's children even if the variable has only one singly coupled triplet.
3 Sample complexity analysis
In Section 2, we gave two variants of an algorithm to learn the structure of a class of noisy-or
networks. We now want to upper bound the number of samples it requires to learn the structure of
the network correctly with high probability, as a function of the ranges in which the parameters are
found. All priors are in $[p_{\min}, 1/2]$, all failure probabilities are in $[f_{\min}, f_{\max}]$, and the marginal
probability of an observed variable x being off is lower bounded by $n_{\min} \le P(\bar x)$. The full proofs
for these results are given in the supplementary materials.
Theorem 1. If a network with m observed variables is strongly quartet-learnable and λ-rank-testable, then its structure can be learned in polynomial time with probability (1 − δ) and with
a polynomial number of samples equal to:
$$O\left(\max\left(\frac{1}{\lambda^{8}},\ \frac{1}{\lambda^{8}\, n_{\min}\, p_{\min}^{2}\, (1 - f_{\max})^{8}}\right) \ln\frac{2m}{\delta}\right).$$
After N samples, the additive error $\epsilon(N)$ on any of the parameters is bounded with probability $1 - \delta$
by:
$$\epsilon(N) \le O\left(\frac{1}{f_{\min}^{18}\,(1 - f_{\max})^{6}\, n_{\min}^{28}\, p_{\min}^{13}} \sqrt{\frac{\ln\frac{2m}{\delta}}{N}}\right).$$
We obtain this result by determining the accuracy we need for our tests to be provably correct,
and bounding how much the error in the output of the parameter learning algorithms depends on
the input. This proves that we can learn a class of strongly quartet-learnable noisy-or networks
in polynomial time and sample complexity. Next, we show how to extend the analysis to quartet-learnable networks as defined in Section 2 by subtracting off latent variables that we have previously
learned. If some of the removed latent variables were coupling for an otherwise singly coupled
quartet, we then discover new latent variables, and repeat the operation. If a network is quartet-learnable, we can find all of the latent variables in a finite number of subtracting-off steps, which
we call the depth of the network (thus, a strongly quartet-learnable network has depth 0). To prove
that the structure learning algorithm remains correct, we simply need to show that the estimated
subtracted off moments remain close to the true ones.
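The subtracting-off operation itself is inherited from Halpern & Sontag (2013); the sketch below (hypothetical parameters, leak-free network) illustrates the idea on negative moments, which factor into one multiplicative term per latent variable, so a latent whose parameters are already learned can be removed by dividing out its factor:

```python
import itertools

import numpy as np

def neg_moments(priors, fail, n_children):
    """All negative moments P(subset of children off) of a leak-free noisy-or
    network. Each latent U contributes one factor (1 - p_U + p_U * prod f_{U,j})."""
    moments = {}
    for r in range(n_children + 1):
        for sub in itertools.combinations(range(n_children), r):
            m = 1.0
            for p, f in zip(priors, fail):
                m *= 1 - p + p * np.prod([f[j] for j in sub])
            moments[sub] = m
    return moments

priors = [0.3, 0.4]
fail = np.array([[0.2, 0.3, 0.5, 0.7],
                 [0.6, 1.0, 0.8, 0.4]])
full = neg_moments(priors, fail, 4)

# Subtract off latent 0, assumed already learned: divide out its factor.
subtracted = {
    sub: m / (1 - priors[0] + priors[0] * np.prod([fail[0][j] for j in sub]))
    for sub, m in full.items()
}

# The result matches the moments of the network with latent 0 removed.
remaining = neg_moments(priors[1:], fail[1:], 4)
print(max(abs(subtracted[s] - remaining[s]) for s in full))  # numerically zero
```

With estimated rather than exact moments and parameters, each such division amplifies the error, which is what Lemma 3 quantifies.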
Lemma 3. If the additive error on the estimated negative moments of an observed quartet C and on
the parameters for W latent variables X1 , . . . , XW whose influence we want to remove from C is at
most ε, then the error on the subtracted-off moments for C is $O(\epsilon\, W 4^{W})$.
We define the width of the network to be the maximum number of parents that need to be subtracted
off to be able to learn the parameters for a new singly-coupled quartet (this is typically a small
constant). This leads to the following result:
Theorem 2. If a network with m observed variables is quartet-learnable at depth d, is λ-rank-testable, and has width W, then its structure can be learned with probability (1 − δ) with $N_S$
samples, where:
$$N_S = O\left(\left(\frac{W\, 4^{W}}{f_{\min}^{18}\,(1 - f_{\max})^{6}\, n_{\min}^{28}\, p_{\min}^{13}}\right)^{2d} \times \max\left(\frac{1}{\lambda^{8}},\ \frac{1}{\lambda^{8}\, n_{\min}\, p_{\min}^{2}\, (1 - f_{\max})^{8}}\right) \ln\frac{2m}{\delta}\right).$$
The left-hand factor of this expression accounts for the error introduced in the estimate of the
parameters each time we perform a subtracting-off step, which by definition occurs at most d times,
hence the exponent. We notice that the bounds do not depend directly on the number of latent
variables, indicating that we can learn networks with many latent variables, as long as the number
of subtraction steps is small. While this bound is useful for proving that the sample complexity
is indeed polynomial, in the experiments section we show that in practice our algorithm obtains
reasonable results on sample sizes well below the theoretical bound.
4 Experiments
Depth of aQMR-DT. Halpern & Sontag (2013) previously showed that the parameters of the
anonymized QMR-DT network for medical diagnosis (provided by the University of Pittsburgh
through the efforts of Frances Connell, Randolph A. Miller, and Gregory F. Cooper) could be learned
from data recording only symptoms if the structure is known. We now show that the structure can
also be learned. Here we assume that the quartet tests are perfect (i.e. infinite data setting). Table 1
compares the depth of the aQMR-DT network using triplets and quartets. Structure learning discovers all but four of the diseases, two of which would not be learnable even if the structure were
known. These two diseases are discussed in Halpern & Sontag (2013) and share all of their children
except for one symptom each, resulting in a situation where no singly-coupled triplets can be found.
The additional two diseases that cannot be learned share all but two children with each other. Thus,
for these two latent variables, singly-coupled triplets exist but singly-coupled quartets do not.
Implementation. We test the performance of our algorithm on the synthetic image dataset used in
Šingliar & Hauskrecht (2006). The Bayesian network consists of 8 latent variables and 64 observed
variables, arranged in an 8x8 grid of pixels. Each of the latent variables connects to a subset of the
observed pixels (see Figure 3). The latent variable priors are set to 0.25, the failure probabilities
for all edges are set to 0.1, and leak probabilities are set to 0.001. We generate samples from the
network and use them to test the ability of our algorithm to discover the latent variables and network
structure from the samples. The network is quartet learnable, but the first and last of the ground truth
sources shown in Figure 3 can only be learned after a subtraction step.
We use variational EM (Šingliar & Hauskrecht, 2006) as a baseline, using 16 random initializations
and choosing the run with the highest lower bound on likelihood. We found that multiple initializations substantially improved the quality of its result. The variational algorithm is given the correct
Triplets (known structure)                 Quartets (unknown structure)
depth   priors learned   edges learned     depth   diseases discovered   edges learned
0       527              43,139            0       469                   39,522
1       39               2,109             1       82                    4,875
2       2                100               2       13                    789
3       0                0                 3       2                     86
inf     2                122               inf     4                     198
Table 1: Right: The depth at which latent variables (i.e., diseases) are discovered and parameters learned in
the aQMR-DT network for medical diagnosis (Shwe et al., 1991) using the quartet-based structure learning
algorithm, assuming infinite data. Left: Comparison to parameter learning with known structure, using one
singly-coupled triplet to learn the failure probabilities for all of a disease's symptoms. The parameters learned
at level 0 can be learned without any subtracting-off step. Those marked depth inf cannot be learned.
number of sources as input. For our algorithm, we use the rank-based quartet test, which has the
advantage of requiring only one threshold, τ_q, compared to the two needed by the coherence test. In
our algorithm, the thresholds determine the number of discovered latent variables (sources).
Quartets are pre-filtered using pointwise mutual information to reject quartets that have non-siblings
(i.e. (a, b, c, d) where a and b are likely not siblings). All quartets that fail the pretest or the rank
test are discarded. We sort the remaining quartets by third singular value and proceed from lowest
to highest. For each quartet in sorted order we check if it overlaps with a latent variable previously
learned in this round. If it does not, we create a new latent variable and use the EXTEND step to
find all of its children. The algorithm converges when no quartets pass the threshold.
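The greedy discovery loop described above can be sketched as follows. Everything here is hypothetical scaffolding: in the real algorithm the quartet records would come from the rank test on estimated moments, and the child sets from the EXTEND step (Algorithm 2); the threshold value is assumed.

```python
# A control-flow sketch of the per-round discovery loop.
TAU_Q = 0.01  # rank-test threshold (assumed value)

# (quartet, third singular value, full child set of its coupling parent)
quartets = [
    ((0, 1, 2, 3), 0.0002, {0, 1, 2, 3, 4}),
    ((1, 2, 3, 4), 0.0004, {0, 1, 2, 3, 4}),   # overlaps the first latent
    ((5, 6, 7, 8), 0.0010, {5, 6, 7, 8}),
    ((2, 3, 5, 6), 0.0500, set()),             # fails the rank test
]

def discover_latents(quartets, tau_q):
    latents = []
    # Discard quartets failing the rank test, then process from lowest
    # third singular value (most confidently singly coupled) to highest.
    passing = [q for q in quartets if q[1] <= tau_q]
    for quartet, _, children in sorted(passing, key=lambda q: q[1]):
        # Skip quartets that overlap a latent variable found in this round.
        if any(set(quartet) & kids for kids in latents):
            continue
        latents.append(children)  # the EXTEND step would find these children
    return latents

print(discover_latents(quartets, TAU_Q))  # two latent variables discovered
```

The toy run creates one latent for the lowest-scoring quartet, skips the overlapping quartet, creates a second latent for the disjoint one, and ignores the quartet that failed the rank test.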
Figure 3 shows how the algorithms perform on the synthetic dataset with varying numbers of samples. Unless otherwise specified, our experiments use threshold values τ_q = 0.01 and τ_e = 0.1.
Experiments exploring the sensitivity of the algorithm to these thresholds can be found in the supplementary material. The running time of the quartet algorithm is under 6 minutes for 10,000 samples using a parallel implementation with 16 cores. For comparison, the variational algorithm on
the same samples takes 4 hours using 16 cores simultaneously (one random initialization per core)
on the same machine. The variational run-time scales linearly with sample size while the quartet
algorithm is independent of sample size once the quartet marginals are computed.
[Figure 3 image grid: columns of learned sources for Variational EM and Quartet Structure Learning (depths d=0 and d=1) at sample sizes 100, 500, 1000, 2000, 10000, and 10000*, shown next to the ground truth sources.]
Figure 3: A comparison between the variational algorithm of Šingliar & Hauskrecht (2006) and the quartet
algorithm as the number of samples increases. The true network structure is shown on the right, with one image
for each of the eight latent variables (sources). For each edge from a latent variable to an observed variable, the
corresponding pixel intensity specifies $1 - f_{X,a}$ (black means no edge). The results of the quartet algorithm are
divided by depth. Column d=0 shows the sources learned without any subtraction and d=1 shows the sources
learned after a single subtraction step. Nothing was learned at d > 1. The sample size of 10,000* refers to
10,000 samples using an optimized value for the threshold of the rank-based quartet test (τ_q = 0.003).
5 Conclusion
We presented a novel algorithm for learning the structure and parameters of bipartite noisy-or
Bayesian networks where the top layer consists completely of latent variables. Our algorithm can
learn a broad class of models that may be useful for factor analysis and unsupervised learning. The
structure learning algorithm does not depend on an ability to estimate the parameters in strongly
quartet-learnable networks. As a result, it may be possible to generalize the approach beyond the
noisy-or setting to other bipartite Bayesian networks, including those with continuous variables and
discrete variables of more than two states.
References
Anandkumar, Anima, Chaudhuri, Kamalika, Hsu, Daniel, Kakade, Sham, Song, Le, & Zhang, Tong. 2011. Spectral Methods for Learning Multivariate Latent Tree Structure. Proceedings of NIPS 24, 2025–2033.
Anandkumar, Anima, Foster, Dean, Hsu, Daniel, Kakade, Sham, & Liu, Yi-Kai. 2012a. A spectral algorithm for latent Dirichlet allocation. Proceedings of NIPS 25, 926–934.
Anandkumar, Animashree, Hsu, Daniel, & Kakade, Sham M. 2012b. A method of moments for mixture models and hidden Markov models. In: Proceedings of COLT 2012.
Anandkumar, Animashree, Javanmard, Adel, Hsu, Daniel J, & Kakade, Sham M. 2013. Learning Linear Bayesian Networks with Latent Variables. Pages 249–257 of: Proceedings of ICML.
Chang, Joseph T. 1996. Full reconstruction of Markov models on evolutionary trees: identifiability and consistency. Mathematical Biosciences, 137(1), 51–73.
Cooper, Gregory F. 1987. Probabilistic Inference Using Belief Networks Is NP-Hard. Technical Report BMIR-1987-0195. Medical Computer Science Group, Stanford University.
Elidan, Gal, & Friedman, Nir. 2006. Learning hidden variable networks: The information bottleneck approach. Journal of Machine Learning Research, 6(1), 81.
Elidan, Gal, Lotner, Noam, Friedman, Nir, & Koller, Daphne. 2001. Discovering hidden variables: A structure-based approach. Advances in Neural Information Processing Systems, 479–485.
Elsner, Ludwig. 1985. An optimal bound for the spectral variation of two matrices. Linear Algebra and its Applications, 71, 77–80.
Eriksson, Nicholas. 2005. Tree construction using singular value decomposition. Algebraic Statistics for Computational Biology, 347–358.
Friedman, Nir. 1997. Learning Belief Networks in the Presence of Missing Values and Hidden Variables. Pages 125–133 of: ICML '97.
Halpern, Yoni, & Sontag, David. 2013. Unsupervised Learning of Noisy-Or Bayesian Networks. In: Conference on Uncertainty in Artificial Intelligence (UAI-13).
Ishteva, Mariya, Park, Haesun, & Song, Le. 2013. Unfolding Latent Tree Structures using 4th Order Tensors. In: ICML '13.
Kearns, Michael, & Mansour, Yishay. 1998. Exact inference of hidden structure from sample data in noisy-OR networks. Pages 304–310 of: Proceedings of UAI 14.
Lazarsfeld, Paul. 1950. Latent Structure Analysis. In: Stouffer, Samuel, Guttman, Louis, Suchman, Edward, Lazarsfeld, Paul, Star, Shirley, & Clausen, John (eds), Measurement and Prediction. Princeton, New Jersey: Princeton University Press.
Lazic, Nevena, Bishop, Christopher M, & Winn, John. 2013. Structural Expectation Propagation: Bayesian structure learning for networks with latent variables. In: Proceedings of AISTATS 16.
Martin, J, & VanLehn, Kurt. 1995. Discrete factor analysis: Learning hidden variables in Bayesian networks. Tech. rept. Department of Computer Science, University of Pittsburgh.
Mossel, Elchanan, & Roch, Sébastien. 2005. Learning nonsingular phylogenies and hidden Markov models. Pages 366–375 of: Proceedings of 37th STOC. ACM.
Pearl, Judea, & Tarsi, Michael. 1986. Structuring causal trees. Journal of Complexity, 2(1), 60–77.
Saund, Eric. 1995. A multiple cause mixture model for unsupervised learning. Neural Computation, 7(1), 51–71.
Shwe, Michael A, Middleton, B, Heckerman, DE, Henrion, M, Horvitz, EJ, Lehmann, HP, & Cooper, GF. 1991. Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base. Meth. Inform. Med, 30, 241–255.
Silva, Ricardo, Scheines, Richard, Glymour, Clark, & Spirtes, Peter. 2006. Learning the structure of linear latent variable models. The Journal of Machine Learning Research, 7, 191–246.
Šingliar, Tomáš, & Hauskrecht, Miloš. 2006. Noisy-or component analysis and its application to link analysis. The Journal of Machine Learning Research, 7, 2189–2213.
Learning Hidden Markov Models from Non-sequence
Data via Tensor Decomposition
Jeff Schneider
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Tzu-Kuo Huang
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Learning dynamic models from observed data has been a central issue in many
scientific studies or engineering tasks. The usual setting is that data are collected
sequentially from trajectories of some dynamical system operation. In quite a few
modern scientific modeling tasks, however, it turns out that reliable sequential
data are rather difficult to gather, whereas out-of-order snapshots are much easier to obtain. Examples include the modeling of galaxies, chronic diseases such
as Alzheimer's, or certain biological processes.
Existing methods for learning dynamic model from non-sequence data are mostly
based on Expectation-Maximization, which involves non-convex optimization and
is thus hard to analyze. Inspired by recent advances in spectral learning methods,
we propose to study this problem from a different perspective: moment matching
and spectral decomposition. Under that framework, we identify reasonable assumptions on the generative process of non-sequence data, and propose learning
algorithms based on the tensor decomposition method [2] to provably recover first-order Markov models and hidden Markov models. To the best of our knowledge,
this is the first formal guarantee on learning from non-sequence data. Preliminary
simulation results confirm our theoretical findings.
1 Introduction
Learning dynamic models from observed data has been a central issue in many fields of scientific study and engineering. The usual setting is that data are collected sequentially from trajectories of
some dynamical system operation, and the goal is to recover parameters of the underlying dynamic
model. Although many research and engineering efforts have been devoted to that setting, it turns
out that in quite a few modern scientific modeling problems, another situation is more frequently encountered: observed data are out-of-order (or partially-ordered) snapshots rather than full sequential
samples of the system operation. As pointed out in [7, 8], this situation may appear in the modeling
of celestial objects such as galaxies or chronic diseases such as Alzheimer's, because observations
are usually taken from different trajectories (galaxies or patients) at unknown, arbitrary times. Or it
may also appear in the study of biological processes, such as cell metabolism under external stimuli,
where most measurement techniques are destructive, making it very difficult to repetitively collect
observations from the same individual living organisms as they change over time. However, it is
much easier to take single snapshots of multiple organisms undergoing the same biological process
in a fully asynchronous fashion, hence the lack of timing information. Rabbat et al. [9] noted that in
certain network inference problems, the only available data are sets of nodes co-occurring in random
walks on the network without the order in which they were visited, and the goal is to reconstruct
the network structure from such co-occurrence data. This problem is essentially about learning a
first-order Markov chain from data lacking sequence information.
1
As one can imagine, dynamic model learning in a non-sequential setting is much more difficult
than in the sequential setting and has not been thoroughly studied. One issue is that the notion
of non-sequence data is vague because there can be many different generative processes resulting
in non-sequence data. Without any restrictions, one can easily find a case where no meaningful
dynamic model can be learnt. It is therefore important to figure out what assumptions on the data
and the model would lead to successful learning. However, existing methods for non-sequential
settings, e.g., [9, 11, 6, 8], do not shed much light on this issue because they are mostly based
on Expectation-Maximization (EM), which requires non-convex optimization. Regardless of the
assumptions we make, as long as the resulting optimization problem remains non-convex, formal
analysis of learning guarantees is still formidable.
We thus propose to take a different approach, based on another long-standing estimation principle:
the method of moments (MoM). The basic idea of MoM is to find model parameters such that the
resulting moments match or resemble the empirical moments. For some estimation problems, this
approach is able to give unique and consistent estimates while the maximum-likelihood method gets
entangled in multiple and potentially undesirable local maxima. Taking advantage of this property,
an emerging area of research in machine learning has recently developed MoM-based learning algorithms with formal guarantees for some widely used latent variable models, such as Gaussian
mixture models [5], Hidden Markov models [3], Latent Dirichlet Allocation [1, 4], etc. Although
many learning algorithms for these models exist, some having been very successful in practice,
barely any formal learning guarantee was given until the MoM-based methods were proposed. Such
breakthroughs seem surprising, but it turns out that they are mostly based on one crucial property:
for quite a few latent variable models, the model parameters can be uniquely determined from spectral decompositions of certain low-order moments of observable quantities.
In this work we demonstrate that under the MoM and spectral learning framework, there are reasonable assumptions on the generative process of non-sequence data, under which the tensor decomposition method [2], a recent advancement in spectral learning, can provably recover the parameters
of first-order Markov models and hidden Markov models. To the best of our knowledge, ours is the
first work that provides formal guarantees for learning from non-sequence data. Interestingly, these
assumptions bear much similarity to the usual idea behind topic modeling: with the bag-of-words
representation which is invariant to word orderings, the task of inferring topics is almost impossible given one single document (no matter how long it is!), but becomes easier as more documents
touching upon various topics become available. For learning dynamic models, what we need in the
non-sequence data are multiple sets of observations, where each set contains independent samples
generated from its own initial distribution, and the many different initial distributions together cover
the entire (hidden) state space. In some of the aforementioned scientific applications, such as biological studies, this type of assumption might be realized by running multiple experiments with
different initial configurations or amounts of stimuli.
The main body of the paper consists of four sections. Section 2 briefly reviews the essentials of
the tensor decomposition framework [2]; Section 3 details our assumptions on non-sequence data,
tensor-decomposition based learning algorithms, and theoretical guarantees; Section 4 reports some
simulation results confirming our theoretical findings, followed by conclusions in Section 5. Proofs
of theoretical results are given in the appendices in the supplementary material.
2 Tensor Decomposition
We mainly follow the exposition in [2], starting with some preliminaries and notations. A real p-th order tensor A is a member of the tensor product space ⊗_{i=1}^p R^{m_i} of p Euclidean spaces. For a vector x ∈ R^m, we denote by x^{⊗p} := x ⊗ x ⊗ ··· ⊗ x ∈ ⊗_{i=1}^p R^m its p-th tensor power. A convenient way to represent A ∈ ⊗_{i=1}^p R^m is through a p-way array of real numbers [A_{i_1 i_2 ··· i_p}]_{1 ≤ i_1, i_2, ..., i_p ≤ m}, where A_{i_1 i_2 ··· i_p} denotes the (i_1, i_2, ..., i_p)-th coordinate of A with respect to a canonical basis. With this representation, we can view A as a multi-linear map that, given a set of p matrices {X_i ∈ R^{m×m_i}}_{i=1}^p, produces another p-th order tensor A(X_1, X_2, ..., X_p) ∈ ⊗_{i=1}^p R^{m_i} with the following p-way array representation:

    A(X_1, X_2, ..., X_p)_{i_1 i_2 ··· i_p} := Σ_{1 ≤ j_1, j_2, ..., j_p ≤ m} A_{j_1 j_2 ··· j_p} (X_1)_{j_1 i_1} (X_2)_{j_2 i_2} ··· (X_p)_{j_p i_p}.   (1)
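As a concrete illustration of the multi-linear map in (1) for the case p = 3 (our own NumPy sketch, not part of the paper): contracting all three modes with identity matrices leaves the tensor unchanged, while contracting with vectors collapses it to a scalar.

```python
import numpy as np

def multilinear(A, X1, X2, X3):
    # A(X1, X2, X3)_{i1 i2 i3} = sum_{j1 j2 j3} A_{j1 j2 j3} (X1)_{j1 i1} (X2)_{j2 i2} (X3)_{j3 i3}
    return np.einsum('abc,ai,bj,ck->ijk', A, X1, X2, X3)

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m, m))

I = np.eye(m)
assert np.allclose(multilinear(A, I, I, I), A)      # identity maps leave A unchanged

v = rng.standard_normal((m, 1))
assert multilinear(A, v, v, v).shape == (1, 1, 1)   # vectors contract A to a scalar
```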
Figure 1: Running example of Markov chain with three states
In this work we consider tensors that are up to the third order (p ≤ 3) and, for most of the time, also symmetric, meaning that their p-way array representations are invariant under permutations of array indices. More specifically, we focus on second and third-order symmetric tensors in, or slightly perturbed from, the following form:

    M_2 := Σ_{i=1}^k ω_i μ_i ⊗ μ_i,   M_3 := Σ_{i=1}^k ω_i μ_i ⊗ μ_i ⊗ μ_i,   (2)

satisfying the following non-degeneracy conditions:

Condition 1. ω_i > 0 for all 1 ≤ i ≤ k, the vectors {μ_i ∈ R^m}_{i=1}^k are linearly independent, and k ≤ m.

As described in later sections, the core of our learning task involves estimating {ω_i}_{i=1}^k and {μ_i}_{i=1}^k from perturbed or noisy versions of M_2 and M_3. We solve this estimation problem with the tensor decomposition method recently proposed by Anandkumar et al. [2]. The algorithm and its theoretical guarantee are summarized in Appendix A. The key component of this method is a novel tensor power iteration procedure for factorizing a symmetric orthogonal tensor, which is robust against input perturbation.
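To make the power iteration idea concrete, here is a minimal sketch (ours, not Algorithm A.2 from the appendix) for a symmetric tensor with an orthogonal decomposition T = Σ_i λ_i v_i^{⊗3}: iterating θ ← T(I, θ, θ)/‖T(I, θ, θ)‖ from a generic starting point converges to one of the v_i, and T(θ, θ, θ) recovers the corresponding λ_i.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
lam = np.array([2.0, 1.5, 1.0])
V, _ = np.linalg.qr(rng.standard_normal((k, k)))   # orthonormal v_i as columns

# T = sum_i lam_i v_i (x) v_i (x) v_i
T = np.einsum('i,ai,bi,ci->abc', lam, V, V, V)

theta = rng.standard_normal(k)
for _ in range(50):
    theta = np.einsum('abc,b,c->a', T, theta, theta)   # T(I, theta, theta)
    theta /= np.linalg.norm(theta)

lam_hat = np.einsum('abc,a,b,c->', T, theta, theta, theta)

# theta converges to one of the v_i, with matching eigenvalue
i = np.argmax(np.abs(V.T @ theta))
assert np.allclose(np.abs(V[:, i] @ theta), 1.0, atol=1e-5)
assert np.isclose(lam_hat, lam[i], atol=1e-5)
```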
3 Learning from Non-sequence Data
We first describe a generative process of non-sequence data for first-order Markov models and
demonstrate how to apply tensor decomposition methods to perform consistent learning. Then
we extend these ideas to hidden Markov models and provide theoretical guarantees on the sample complexity of the proposed learning algorithm. For notational convenience we define the following vector-matrix cross products ⊗_d for d ∈ {1, 2, 3}: (v ⊗_1 M)_{ijk} := v_i M_{jk}, (v ⊗_2 M)_{ijk} := v_j M_{ik}, (v ⊗_3 M)_{ijk} := v_k M_{ij}. For a matrix M we denote by M_i its i-th column.
3.1 First-order Markov Models
Let P ∈ [0, 1]^{m×m} be the transition probability matrix of a discrete, first-order, ergodic Markov chain with m states and a unique stationary distribution π. Let P be of full rank and 1^⊤ P = 1^⊤.
To give a high-level idea of what makes it possible to learn P from non-sequence data, we use the
simple Markov chain with three states shown in Figure 1 as our running example, demonstrating
step by step how to extend from a very restrictive generative setting of the data to a reasonably
general setting, along with the assumptions made to allow consistent parameter estimation. In the
usual setting where we have sequences of observations, say {x^{(1)}, x^{(2)}, ...} with parenthesized superscripts denoting time, it is straightforward to consistently estimate P. We simply calculate the empirical frequency of consecutive pairs of states:

    P̂_{ij} := Σ_t 1(x^{(t+1)} = i, x^{(t)} = j) / Σ_t 1(x^{(t)} = j).

Alternatively, suppose for each state j, we have an i.i.d. sample of its immediate next state D_j := {x_1^{(1)}, x_2^{(1)}, ... | x^{(0)} = j}, where subscripts are data indices. Consistent estimation in this case is also easy: the empirical distribution of D_j consistently estimates P_j, the j-th column of P. For
example, the Markov chain in Figure 1 may produce the following three samples, whose empirical distributions estimate the three columns of P respectively:

    D1 = {2, 1, 2, 2, 2, 2, 2, 2, 2, 2}  ⇒  P̂_1 = [0.1 0.9 0.0]^⊤,
    D2 = {3, 3, 2, 3, 2, 3, 3, 2, 3, 3}  ⇒  P̂_2 = [0.0 0.3 0.7]^⊤,
    D3 = {1, 1, 3, 1, 3, 3, 1, 3, 3, 1}  ⇒  P̂_3 = [0.5 0.0 0.5]^⊤.

A nice property of these estimates is that, unlike in the sequential setting, they do not depend on any particular ordering of the observations in each set. Nevertheless, such data are not quite non-sequenced because all observations are made at exactly the next time step. We thus consider the following generalization: for each state j, we have D_j := {x_1^{(t_1)}, x_2^{(t_2)}, ... | x^{(0)} = j}, i.e., independent samples of states drawn at unknown future times {t_1, t_2, ...}. For example, our data in this setting might be

    D1 = {2, 1, 2, 3, 2, 3, 3, 2, 2, 3},
    D2 = {3, 3, 2, 3, 2, 1, 3, 2, 3, 1},
    D3 = {1, 1, 3, 1, 2, 3, 2, 3, 3, 2}.   (3)

Obviously it is hard to extract information about P from such data. However, if we assume that the unknown times {t_i} are i.i.d. random variables following some distribution independent of the initial state j, it can then be easily shown that D_j's empirical distribution consistently estimates T_j, the j-th column of the expected transition probability matrix T := E_t[P^t]:

    D1 = {2, 1, 2, 3, 2, 3, 3, 2, 2, 3}  ⇒  T̂_1 = [0.1 0.5 0.4]^⊤,
    D2 = {3, 3, 2, 3, 2, 1, 3, 2, 3, 1}  ⇒  T̂_2 = [0.2 0.3 0.5]^⊤,
    D3 = {1, 1, 3, 1, 2, 3, 2, 3, 3, 2}  ⇒  T̂_3 = [0.3 0.3 0.4]^⊤.
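To make T concrete, here is a quick numerical check of our own (not from the paper): with t ~ Geometric(r), the expected transition matrix is T = E_t[P^t] = Σ_{t≥1} r(1−r)^{t−1} P^t, which matches the closed form rP(I − (1−r)P)^{−1} used in Theorem 1. We use a column-stochastic P whose columns equal the estimates in the example above.

```python
import numpy as np

# A 3-state column-stochastic transition matrix (columns sum to one).
P = np.array([[0.1, 0.0, 0.5],
              [0.9, 0.3, 0.0],
              [0.0, 0.7, 0.5]])
r = 0.3

# Closed form: T = r P (I - (1-r) P)^{-1}
T_closed = r * P @ np.linalg.inv(np.eye(3) - (1 - r) * P)

# Truncated series: T = sum_{t>=1} r (1-r)^{t-1} P^t
T_series = np.zeros((3, 3))
Pt = np.eye(3)
for t in range(1, 200):
    Pt = Pt @ P
    T_series += r * (1 - r) ** (t - 1) * Pt

assert np.allclose(T_closed, T_series, atol=1e-10)
assert np.allclose(T_closed.sum(axis=0), 1.0)   # T is also column-stochastic
```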
In general there exist many P's that result in the same T. Therefore, as detailed later, we make a specific distributional assumption on {t_i} to enable unique recovery of the transition matrix P from T (Assumption A.1). Next we consider a further generalization, where the unknowns are not only the time stamps of the observations, but also the initial state j. In other words, we only know each set was generated from the same initial state, but do not know the actual initial state. In this case, the empirical distributions of the sets consistently estimate the columns of T in some unknown permutation σ:

    D_{σ(3)} = {1, 1, 3, 1, 2, 3, 2, 3, 3, 2}  ⇒  T̂_{σ(3)} = [0.3 0.3 0.4]^⊤,
    D_{σ(2)} = {3, 3, 2, 3, 2, 1, 3, 2, 3, 1}  ⇒  T̂_{σ(2)} = [0.2 0.3 0.5]^⊤,
    D_{σ(1)} = {2, 1, 2, 3, 2, 3, 3, 2, 2, 3}  ⇒  T̂_{σ(1)} = [0.1 0.5 0.4]^⊤.
In order to be able to identify σ, we will again resort to randomness and assume the unknown initial states are random variables following a certain distribution (Assumption A.2) so that the data carry information about σ. Finally, we generalize from a single unknown initial state to an unknown initial state distribution, where each set of observations D := {x_1^{(t_1)}, x_2^{(t_2)}, ... | π^{(0)}} consists of independent samples of states drawn at random times from some unknown initial state distribution π^{(0)}. For example, the data may look like:

    D_{π_1^{(0)}} = {1, 3, 3, 1, 2, 3, 2, 3, 3, 2},
    D_{π_2^{(0)}} = {3, 1, 2, 3, 2, 1, 3, 2, 3, 1},
    D_{π_3^{(0)}} = {2, 1, 2, 3, 3, 3, 3, 1, 2, 3},
    ...

With this final generalization, most would agree that the generated data are non-sequenced and that the generative process is flexible enough to model the real-world situations described in Section 1. However, simple estimation with empirical distributions no longer works because each set may now contain observations from multiple initial states. This is where we take advantage of the tensor
decomposition framework outlined in Section 2, which requires proper assumptions on the initial state distribution π^{(0)} (Assumption A.3).

Now we are ready to give the definition of our entire generative process. Assume we have N sets of non-sequence data each containing n observations, and each set of observations {x_i}_{i=1}^n was independently generated by the following:

    • Draw an initial distribution π^{(0)} ~ Dirichlet(α), with E[π^{(0)}] = α/(Σ_{i=1}^m α_i) = π and π_i ≠ π_j ∀ i ≠ j.  (Assumption A.3)
    • For i = 1, ..., n,
        – Draw a discrete time t_i ~ Geometric(r), t_i ∈ {1, 2, 3, ...}.  (Assumption A.1)
        – Draw an initial state s_i ~ Multinomial(π^{(0)}), s_i ∈ {0, 1}^m.  (Assumption A.2)
        – Draw an observation x_i ~ Multinomial(P^{t_i} s_i), x_i ∈ {0, 1}^m.
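The sampling procedure above can be sketched directly in NumPy (our own sketch; states are 0-based integers rather than indicator vectors, and P is column-stochastic as in the text):

```python
import numpy as np

def sample_nonsequence_sets(P, alpha, r, N, n, rng):
    """Sample N sets of n non-sequence observations from the generative process.

    P     : (m, m) column-stochastic transition matrix
    alpha : (m,) Dirichlet parameter (alpha / alpha.sum() = stationary distribution)
    r     : success probability of the geometric time distribution
    """
    m = P.shape[0]
    sets = []
    for _ in range(N):
        pi0 = rng.dirichlet(alpha)                  # initial state distribution of this set
        xs = np.empty(n, dtype=int)
        for i in range(n):
            t = rng.geometric(r)                    # t_i in {1, 2, 3, ...}
            s = rng.choice(m, p=pi0)                # initial state s_i
            dist = np.linalg.matrix_power(P, t)[:, s]
            xs[i] = rng.choice(m, p=dist)           # observed state x_i
        sets.append(xs)
    return sets

rng = np.random.default_rng(0)
P = np.array([[0.1, 0.0, 0.5],
              [0.9, 0.3, 0.0],
              [0.0, 0.7, 0.5]])
alpha = np.array([1.0, 2.0, 3.0])
data = sample_nonsequence_sets(P, alpha, r=0.3, N=5, n=10, rng=rng)
```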
The above generative process has several properties. First, all the data points in the same set share
the same initial state distribution but can have different initial states; the initial state distribution
varies across different sets and yet centers at the stationary distribution of the Markov chain. As
mentioned in Section 1, this may be achieved in biological studies by running multiple experiments
with different input stimuli, so the data collected in the same experiment can be assumed to have the
same initial state distribution. Second, each data point is drawn from an independent trajectory of the Markov chain, a similar situation to the modeling of galaxies or Alzheimer's, and random time steps could be used to compensate for individual variations in speed: a small/large t_i corresponds to a slowly/fast evolving individual object. Finally, the geometric distribution can be interpreted as an overall measure of the magnitude of speed variation: a large success probability r would result in many small t_i's, meaning that most objects evolve at similar speeds, while a small r would lead to t_i's taking a wide range of values, indicating a large speed variation.
To use the tensor decomposition method in Appendix A, we need the tensor structure (2) in certain
low-order moments of observed quantities. The following theorem identifies such quantities:
Theorem 1. Define the expected transition probability matrix T := E_t[P^t] = rP(I − (1−r)P)^{−1} and let α_0 := Σ_i α_i, C_2 := E[x_1 x_2^⊤] and C_3 := E[x_1 ⊗ x_2 ⊗ x_3]. Then the following holds:

    E[x_1] = π,   C_2 = (1/(α_0+1)) T diag(π) T^⊤ + (α_0/(α_0+1)) π π^⊤,   (4)

    C_3 = (2/((α_0+2)(α_0+1))) Σ_i π_i T_i^{⊗3} + (α_0/(α_0+2)) Σ_{d=1}^3 π ⊗_d C_2 − (2α_0^2/((α_0+2)(α_0+1))) π^{⊗3},   (5)

    M_2 := (α_0+1) C_2 − α_0 π π^⊤ = T diag(π) T^⊤,   (6)

    M_3 := ((α_0+2)(α_0+1)/2) C_3 − (α_0(α_0+1)/2) Σ_{d=1}^3 π ⊗_d C_2 + α_0^2 π^{⊗3} = Σ_i π_i T_i^{⊗3}.   (7)
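As a sanity check of our own, equations (6)-(7) are exact algebraic inversions of (4)-(5): if C_2 and C_3 are built from the right-hand sides of (4)-(5) for any simplex vector π and column-stochastic T, the combinations in (6)-(7) recover T diag(π) T^⊤ and Σ_i π_i T_i^{⊗3} exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
m, a0 = 4, 1.5
pi = rng.dirichlet(np.ones(m))                   # a point on the simplex
T = rng.dirichlet(np.ones(m), size=m).T          # an arbitrary column-stochastic matrix

def outer3(v):
    return np.einsum('a,b,c->abc', v, v, v)

def cross(d, v, M):                              # v (x)_d M, the cross product from the text
    if d == 1:
        return np.einsum('i,jk->ijk', v, M)
    if d == 2:
        return np.einsum('j,ik->ijk', v, M)
    return np.einsum('k,ij->ijk', v, M)

# Build C2, C3 from the right-hand sides of (4) and (5).
TDT = T @ np.diag(pi) @ T.T
S3 = np.einsum('i,ai,bi,ci->abc', pi, T, T, T)   # sum_i pi_i T_i^{(x)3}
C2 = TDT / (a0 + 1) + a0 / (a0 + 1) * np.outer(pi, pi)
C3 = (2 * S3 / ((a0 + 2) * (a0 + 1))
      + a0 / (a0 + 2) * sum(cross(d, pi, C2) for d in (1, 2, 3))
      - 2 * a0**2 / ((a0 + 2) * (a0 + 1)) * outer3(pi))

# (6) and (7) must then invert them exactly.
M2 = (a0 + 1) * C2 - a0 * np.outer(pi, pi)
M3 = ((a0 + 2) * (a0 + 1) / 2 * C3
      - a0 * (a0 + 1) / 2 * sum(cross(d, pi, C2) for d in (1, 2, 3))
      + a0**2 * outer3(pi))

assert np.allclose(M2, TDT)
assert np.allclose(M3, S3)
```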
The proof is in Appendix B.1, which relies on the special structure in the moments of the Dirichlet distribution (Assumption A.3). It is clear that M_2 and M_3 have the desired tensor structure. Assuming α_0 is known, we can form estimates M̂_2 and M̂_3 by computing empirical moments from the data. Note that the x_i's are exchangeable, so we can use all pairs and triples of data points to compute the estimates. Interestingly, these low-order moments have a very similar structure to those in Latent Dirichlet Allocation [1]. Indeed, according to our generative process, we can view a set of non-sequence data points as a document generated by an LDA model with the expected transition matrix T as the topic matrix, the stationary distribution π as the topic proportions, and most importantly, the states as both the words and the topics. The last property is what distinguishes our generative process from a general LDA model: because both the words and the topics correspond to the states, the topic matrix is no longer invariant to column permutations. Since the tensor decomposition method may return T̂ under any column permutation, we need to recover the correct matching between its rows and columns. Note that the π̂ returned by the tensor decomposition method undergoes the same permutation as T̂'s columns. Because all π_i's have different values by Assumption A.2, we may recover the correct matching by sorting both the returned π̂ and the mean x̄ of all data.

A final issue is estimating P and r from T̂. This is in general difficult even when the exact T is available, because multiple choices of P and r may result in the same T. However, if the true transition matrix P has at least one zero entry, then unique recovery is possible:
Theorem 2. Let P*, r*, T* and π* denote the true values of the transition probability matrix, the success probability, the expected transition matrix, and the stationary distribution, respectively. Assume that P* is ergodic and of full rank, and P*_{ij} = 0 for some i and j. Let S := {λ/(λ−1) | λ is a real negative eigenvalue of T*} ∪ {0}. Then the following holds:

    • 0 ≤ max(S) < r* ≤ 1.
    • For all r ∈ (0, 1] \ S, P(r) := (rI + (1−r)T*)^{−1} T* is well-defined and
          1^⊤ P(r) = 1^⊤,   P(r) π* = π*,   P* = P(r*),
          P(r)_{ij} ≥ 0 ∀ i, j  ⟺  r ≥ r*.

That is, P(r) is a stochastic matrix if and only if r ≥ r*.

The proof is in Appendix C. This theorem indicates that we can determine r* from T* by doing bisection on (0, 1]. But this approach fails when we replace T* by an estimate T̂, because even P̂(r*) might contain negative values. A more practical estimation procedure is the following: for each value of r in a decreasing sequence starting from 1, project P̂(r) := (rI + (1−r)T̂)^{−1} T̂ onto the space of stochastic matrices and record the projection distance. Then search the sequence of projection distances for the first sudden increase¹ starting from 1, and take the corresponding value of r and the projected P̂(r) as our estimates.
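A small numerical illustration of Theorem 2 (our own check, with an arbitrary 3-state chain containing zero entries and true r* = 0.3): P(r) reproduces P* at r = r*, stays stochastic for r above r*, and develops negative entries below it.

```python
import numpy as np

# A P* with zero entries (column-stochastic) and true r* = 0.3.
P_true = np.array([[0.1, 0.0, 0.5],
                   [0.9, 0.3, 0.0],
                   [0.0, 0.7, 0.5]])
r_true = 0.3
T = r_true * P_true @ np.linalg.inv(np.eye(3) - (1 - r_true) * P_true)

def P_of(r):
    return np.linalg.inv(r * np.eye(3) + (1 - r) * T) @ T

# Exact recovery at r = r*, still stochastic above it, negative entries below it.
assert np.allclose(P_of(r_true), P_true)
assert P_of(0.4).min() >= -1e-12    # r > r*: P(r) remains a stochastic matrix
assert P_of(0.2).min() < 0          # r < r*: a negative entry appears
```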
Assuming the true r and α_0 are known, with the empirical moments being consistent estimators for the true moments and the tensor decomposition method guaranteed to return accurate estimates under small input perturbation, we can conclude that the estimates described above will converge (with high probability) to the true quantities as the sample size N increases. We give a sample complexity bound on the estimation error in the next section for hidden Markov models.
3.2 Hidden Markov Models
Let P and π now be defined over the hidden discrete state space of size k and have the same properties as in the first-order Markov model. The generative process here is almost identical to (and therefore shares the same interpretation with) the one in Section 3.1, except for an extra mapping from the discrete hidden state to a continuous observation space:

    • Draw a state indicator vector h_i ~ Multinomial(P^{t_i} s_i), h_i ∈ {0, 1}^k.
    • Draw an observation: x_i = U h_i + ε_i, where U ∈ R^{m×k} denotes a rank-k matrix of mean observation vectors for the k hidden states, and the random noise vectors ε_i are i.i.d., satisfying E[ε_i] = 0 and Var[ε_i] = σ² I.

Note that a spherical covariance² is required for the tensor decomposition method to be applicable.
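The emission step can be sketched as follows (our own illustration; the values of U, the noise level, and the state index are arbitrary): for a fixed hidden state, observations scatter around the corresponding column of U with spherical noise.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, sigma = 6, 3, 0.5                   # sigma is the noise std, so sigma^2 = 0.25
U = rng.standard_normal((m, k))           # mean observation vectors as columns

def emit(state, n, rng):
    """Draw n observations x = U h + eps for a fixed hidden state."""
    h = np.zeros(k)
    h[state] = 1.0
    return U @ h + sigma * rng.standard_normal((n, m))   # broadcast over rows

X = emit(1, 500, rng)
assert X.shape == (500, m)
# The sample mean approaches that state's column of U.
assert np.abs(X.mean(axis=0) - U[:, 1]).max() < 0.15
```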
The low-order moments that lead to the desired tensor structure are given in the following:
Theorem 3. Define the expected hidden state transition matrix T := E_t[P^t] = rP(I − (1−r)P)^{−1} and let α_0 := Σ_i α_i, V_1 := E[x_1], V_2 := E[x_1 x_1^⊤], V_3 := E[x_1^{⊗3}], C_2 := E[x_1 x_2^⊤] and C_3 := E[x_1 ⊗ x_2 ⊗ x_3]. Then the following holds:

    V_1 = U π,   V_2 = U diag(π) U^⊤ + σ² I,   V_3 = Σ_i π_i U_i^{⊗3} + Σ_{d=1}^3 V_1 ⊗_d (σ² I),

    M_2 := V_2 − σ² I = U diag(π) U^⊤,   M_3 := V_3 − Σ_{d=1}^3 V_1 ⊗_d (σ² I) = Σ_i π_i U_i^{⊗3},

    C_2 = (1/(α_0+1)) U T diag(π)(U T)^⊤ + (α_0/(α_0+1)) V_1 V_1^⊤,

    C_3 = (2/((α_0+2)(α_0+1))) Σ_i π_i (U T)_i^{⊗3} + (α_0/(α_0+2)) Σ_{d=1}^3 V_1 ⊗_d C_2 − (2α_0^2/((α_0+2)(α_0+1))) V_1^{⊗3},

    M_2' := (α_0+1) C_2 − α_0 V_1 V_1^⊤ = U T diag(π)(U T)^⊤,

    M_3' := ((α_0+2)(α_0+1)/2) C_3 − (α_0(α_0+1)/2) Σ_{d=1}^3 V_1 ⊗_d C_2 + α_0^2 V_1^{⊗3} = Σ_i π_i (U T)_i^{⊗3}.
¹ Intuitively the jump should be easier to locate as P gets sparser, but we do not have a formal result.
² We may allow different covariances σ_j² I for different hidden states. See Section 3.2 of [2] for details.
Algorithm 1 Tensor decomposition method for learning HMM from non-sequence data
input: N sets of non-sequence data points, the success probability r, the Dirichlet parameter α_0, the number of hidden states k, and numbers of iterations L and N.
output: Estimates π̂, P̂ and Û, possibly under permutation of state labels.
1: Compute empirical averages V̂_1, V̂_2, V̂_3, Ĉ_2, Ĉ_3, and σ̂² := σ_min(V̂_2 − V̂_1 V̂_1^⊤).
2: Compute M̂_2, M̂_3, M̂_2', M̂_3'.
3: Run Algorithm A.1 (Appendix A) on M̂_2 and M̂_3 with the number of hidden states k to obtain a symmetric tensor T̂ ∈ R^{k×k×k} and a whitening transformation Ŵ ∈ R^{m×k}.
4: Run Algorithm A.2 (Appendix A) k times, each with numbers of iterations L and N, the input tensor in the first run set to T̂ and in each subsequent run set to the deflated tensor returned by the previous run, resulting in k pairs of eigenvalue/eigenvector {(λ̂_i, v̂_i)}_{i=1}^k.
5: Repeat Steps 3 and 4 on M̂_2' and M̂_3' to obtain T̂', Ŵ' and {(λ̂'_i, v̂'_i)}_{i=1}^k.
6: Match {(λ̂_i, v̂_i)}_{i=1}^k with {(λ̂'_i, v̂'_i)}_{i=1}^k by sorting {λ̂_i}_{i=1}^k and {λ̂'_i}_{i=1}^k.
7: Obtain estimates of the HMM parameters:

    Û := (Ŵ^⊤)† V̂ Λ̂,   ÛT := (Ŵ'^⊤)† V̂' Λ̂',   P̂ := (r Û + (1 − r) ÛT)† ÛT,   π̂ := [λ̂_1^{−2} ··· λ̂_k^{−2}]^⊤,

    where ÛT denotes the estimate of the product U T, V̂ := [v̂_1 ··· v̂_k] and Λ̂ := diag([λ̂_1 ··· λ̂_k]^⊤); V̂' and Λ̂' are defined in the same way.
8: (Optional) Project π̂ onto the simplex and P̂ onto the space of stochastic matrices.
The proof is in Appendix B.2. This theorem suggests that, unlike first-order Markov models, HMMs require two applications of the tensor decomposition method: one on M_2 and M_3 for extracting the mean observation vectors U, and the other on M_2' and M_3' for extracting the matrix product U T. Another issue is that the estimates for M_2 and M_3 require an estimate of the noise variance σ², which is not directly observable. Nevertheless, since M_2 and M_3 are in the form of low-order moments of spherical Gaussian mixtures, we may use the existing result (Theorem 3.2, [2]) to obtain an estimate σ̂² = σ_min(V̂_2 − V̂_1 V̂_1^⊤). The situation regarding permutations of the estimates is also different here. First note that P = (rU + (1−r)U T)† U T, which implies that permuting the columns of U and the columns of U T in the same manner has the effect of permuting both the rows and the columns of P, essentially re-labeling the hidden states. Hence we can only expect to recover P up to some simultaneous row and column permutation. By the assumption that the π_i's are all different, we can sort the two estimates π̂ and π̂' to match the columns of Û and ÛT, and obtain P̂ if r is known. When r is unknown, a heuristic similar to the one for first-order Markov models can be used to estimate r, based on the fact that P = (rU + (1−r)U T)† U T = (rI + (1−r)T)^{−1} T, suggesting that Theorem 2 remains true when expressing P by U and U T.
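Why σ_min(V̂_2 − V̂_1 V̂_1^⊤) targets the noise variance can be seen at the population level (our own sanity check): V_2 − V_1 V_1^⊤ = U(diag(π) − π π^⊤) U^⊤ + σ² I, where the first term is positive semi-definite with rank at most k − 1 < m, so the smallest eigenvalue equals σ² exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, sigma2 = 8, 3, 0.5
U = rng.standard_normal((m, k))
pi = rng.dirichlet(np.ones(k))

V1 = U @ pi
V2 = U @ np.diag(pi) @ U.T + sigma2 * np.eye(m)

# V2 - V1 V1^T = U (diag(pi) - pi pi^T) U^T + sigma2 I; the first term is PSD
# with rank at most k-1 < m, so the smallest eigenvalue is exactly sigma2.
eigs = np.linalg.eigvalsh(V2 - np.outer(V1, V1))
assert np.isclose(eigs.min(), sigma2, atol=1e-10)
```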
Algorithm 1 gives the complete procedure for learning an HMM from non-sequence data. Combining the perturbation bounds of the tensor decomposition method (Appendix A), the whitening procedure (Appendix D.1) and the matrix pseudoinverse [10], and concentration bounds on empirical moments (Appendix D.3), we provide a sample complexity analysis:
Theorem 4. Suppose the numbers of iterations N and L for Algorithm A.2 satisfy the conditions in Theorem A.1 (Appendix A), and the number of hidden states k, the success probability r, and the Dirichlet parameter α_0 are all given. For any δ ∈ (0, 1) and ε > 0, if the number of sets N satisfies

    N ≥ (12 max(k², m) m³ η³ (α_0+2)² (α_0+1)² / δ) ×
        max{ 4600/λ_min²,  225000/min(σ_k(M_2'), σ_k(M_2))²,
             42000 c² σ_1(U T)² max(σ_1(U T), σ_1(U), 1)² / (ε² σ_k(rU + (1−r)U T)⁴ min(σ_k(U T), σ_k(U), 1)⁴) },

where c is some constant, η := max(σ² + max_{i,k} |U_{ik}|², 1), λ_min := min_{i≠j} |1/√π_i − 1/√π_j|, and σ_i(·) denotes the i-th largest singular value, then the P̂ and Û returned by Algorithm 1 satisfy

    Prob(‖P̂ − P‖ ≤ ε) ≥ 1 − δ   and   Prob(‖Û − U‖ ≤ ε σ_k(rU + (1−r)U T)² / (6 σ_1(U T))) ≥ 1 − δ,
Figure 2: Simulation results. (a) Matrix estimation error; (b) projection distance.
where P and U may undergo label permutation.
The proof is in Appendix E. In this result, the sample size N exhibits a fairly high-order polynomial dependency on m, k, and ε^{−1}, and scales linearly with 1/δ instead of logarithmically, as is common in sample complexity results on spectral learning. This is because we do not impose any constraints on the observation model and simply use the Markov inequality for bounding the deviation in the empirical moments. If we make stronger assumptions such as boundedness or sub-Gaussianity, it is possible to use stronger, exponential tail bounds to obtain a better sample complexity. Also worth noting is that λ_min^{−2} acts as a threshold: as shown in our proof, as long as the operator norm of the tensor perturbation is sufficiently smaller than λ_min, which measures the gaps between the different π_i's, we can correctly match the two sets of estimated tensor eigenvalues. Lastly, the lower bound on N, as one would expect, depends on the conditioning of the matrices being estimated, as reflected in the various ratios of singular values. An interesting quantity missing from the sample complexity analysis is the size of each set n. To simplify the analysis we essentially assume n = 3, but understanding how n might affect the sample complexity may have a critical impact in practice: when collecting more data, should we collect more sets or larger sets? What is the trade-off between them? This is an interesting direction for future work.
4 Simulation
Our HMM has m = 40 and k = 5 with Gaussian noise σ² = 2. The mean vectors U were sampled from independent univariate standard normals and then normalized to lie on the unit sphere. The transition matrix P contains one zero entry. For the generative process, we set α_0 = 1, r = 0.3, n = 1000, and N ∈ 1000 · {2⁰, 2¹, ..., 2¹⁰}. The numbers of iterations for Algorithm A.2 were N = 200 and L = 1000. Figure 2(a) plots the relative matrix estimation error (in spectral norm) against the sample size N for P, U, and U T obtained by Algorithm 1 given the true r. It is clear that U is the easiest to learn, followed by U T, while P is the most difficult, and that all three errors converge to a very small value for sufficiently large N. Note that in Theorem 4 the bounds for P and U are different. With the model used here, the extra multiplicative factor in the bound for U is less than 0.007, suggesting that U is indeed easier to estimate than P. Figure 2(b) demonstrates the heuristic for determining r, showing projection distances (in logarithm) versus r. As N increases, the take-off point gets closer to the true r = 0.3. The large peak indicates a pole (the set S in Theorem 2).
5 Conclusions
We have demonstrated that under reasonable assumptions, tensor decomposition methods can provably learn first-order Markov models and hidden Markov models from non-sequence data. We
believe this is the first formal guarantee on learning dynamic models in a non-sequential setting.
There are several ways to extend our results. No matter what distribution generates the random time
steps, tensor decomposition methods can always learn the expected transition probability matrix T .
Depending on the application, it might be better to use some other distribution for the missing time.
The proposed algorithm can be modified to learn discrete HMMs under a similar generative process.
Finally, applying the proposed methods to real data should be the most interesting future direction.
References
[1] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y.-K. Liu. A spectral algorithm for latent Dirichlet allocation. arXiv preprint arXiv:1204.6703v4, 2013.
[2] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv preprint arXiv:1210.7559v2, 2012.
[3] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. arXiv preprint arXiv:1203.0683, 2012.
[4] S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. A practical algorithm for topic modeling with provable guarantees. arXiv preprint arXiv:1212.4777, 2012.
[5] D. Hsu and S. M. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, pages 11-20. ACM, 2013.
[6] T.-K. Huang and J. Schneider. Learning linear dynamical systems without sequence information. In Proceedings of the 26th International Conference on Machine Learning, pages 425-432, 2009.
[7] T.-K. Huang and J. Schneider. Learning auto-regressive models from sequence and non-sequence data. In Advances in Neural Information Processing Systems 24, pages 1548-1556. 2011.
[8] T.-K. Huang, L. Song, and J. Schneider. Learning nonlinear dynamic models from non-sequenced data. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, 2010.
[9] M. G. Rabbat, M. A. Figueiredo, and R. D. Nowak. Network inference from co-occurrences. IEEE Transactions on Information Theory, 54(9):4053-4068, 2008.
[10] G. Stewart. On the perturbation of pseudo-inverses, projections and linear least squares problems. SIAM Review, 19(4):634-662, 1977.
[11] X. Zhu, A. B. Goldberg, M. Rabbat, and R. Nowak. Learning bigrams from unigrams. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technology, Columbus, OH, 2008.
| 5065 |@word version:1 briefly:1 polynomial:1 proportion:1 stronger:2 norm:2 bigram:1 d2:3 simulation:4 decomposition:23 covariance:1 boundedness:1 carry:1 moment:17 initial:19 configuration:1 contains:2 liu:1 denoting:1 ours:1 interestingly:2 document:3 existing:3 surprising:1 si:4 yet:1 subsequent:1 j1:2 confirming:1 plot:1 stationary:4 generative:13 intelligence:1 advancement:1 metabolism:1 core:1 record:1 sudden:1 regressive:1 provides:1 node:1 along:1 c2:20 become:1 ik:1 consists:2 manner:1 indeed:2 expected:6 frequently:1 multi:1 inspired:1 decreasing:1 spherical:3 actual:1 becomes:1 project:2 estimating:2 underlying:1 notation:1 formidable:1 what:6 easiest:1 interpreted:1 emerging:1 eigenvector:1 developed:1 finding:2 transformation:1 guarantee:10 pseudo:1 collecting:1 firstorder:1 ti:10 act:1 shed:1 exactly:1 rm:7 demonstrates:1 exchangeable:1 unit:1 appear:2 t1:1 engineering:3 timing:1 local:1 subscript:1 might:5 studied:1 collect:2 suggests:1 co:3 hmms:2 range:1 bi:1 unique:4 practical:2 practice:2 x3:1 procedure:4 area:1 empirical:12 evolving:1 matching:3 convenient:1 word:5 projection:5 get:3 convenience:1 undesirable:1 onto:3 operator:1 impossible:1 applying:1 restriction:1 map:1 demonstrated:1 chronic:2 center:1 missing:2 straightforward:1 regardless:1 starting:3 independently:1 convex:3 ergodic:2 go:1 recovery:2 m2:12 estimator:1 array:4 importantly:1 oh:1 notion:1 coordinate:1 variation:3 imagine:1 suppose:2 exact:1 goldberg:1 pa:2 logarithmically:1 satisfying:2 jk:1 distributional:1 observed:4 preprint:4 calculate:1 ordering:2 trade:1 disease:2 mentioned:1 complexity:6 ui:2 dynamic:9 halpern:1 depend:1 celestial:1 upon:1 basis:1 vague:1 easily:2 various:2 fast:1 describe:1 kp:1 artificial:1 labeling:1 quite:4 whose:1 widely:1 supplementary:1 solve:1 say:1 heuristic:2 reconstruct:1 vecp:1 larger:1 statistic:1 noisy:1 ip:6 superscript:1 obviously:1 final:2 sequence:25 advantage:2 eigenvalue:3 propose:3 product:3 j2:4 combining:1 produce:2 
Learning Efficient Random Maximum A-Posteriori
Predictors with Non-Decomposable Loss Functions
Tamir Hazan
University of Haifa
Subhransu Maji
TTI Chicago
Joseph Keshet
Bar-Ilan University
Tommi Jaakkola
CSAIL, MIT
Abstract
In this work we develop efficient methods for learning random MAP predictors for
structured label problems. In particular, we construct posterior distributions over
perturbations that can be adjusted via stochastic gradient methods. We show that
any smooth posterior distribution would suffice to define a smooth PAC-Bayesian
risk bound suitable for gradient methods. In addition, we relate the posterior distributions to computational properties of the MAP predictors. We suggest multiplicative posteriors to learn super-modular potential functions that accompany
specialized MAP predictors such as graph-cuts. We also describe label-augmented
posterior models that can use efficient MAP approximations, such as those arising
from linear program relaxations.
1 Introduction
Learning and inference in complex models drives much of the research in machine learning
applications, ranging from computer vision and natural language processing to computational biology [1, 18, 21]. The inference problem in such cases involves assessing the likelihood of possible
structured-labels, whether they be objects, parsers, or molecular structures. Given a training dataset
of instances and labels, the learning problem amounts to estimation of the parameters of the inference engine, so as to best describe the labels of observed instances. The goodness of fit is usually
measured by a loss function.
The structures of labels are specified by assignments of random variables, and the likelihood of the
assignments are described by a potential function. Usually, it is feasible to only find the most likely
or maximum a-posteriori (MAP) assignment, rather than sampling according to their likelihood. Indeed, substantial effort has gone into developing algorithms for recovering MAP assignments, either
based on specific parametrized restrictions such as super-modularity [2] or by devising approximate
methods based on linear programming relaxations [21]. Learning MAP predictors is usually done
by structured-SVMs that compare a 'loss-adjusted' MAP prediction to its training label [25]. In
practice, most loss functions used decompose in the same way as the potential function, so as to not
increase the complexity of the MAP prediction task. Nevertheless, non-decomposable loss functions
capture the structures in the data that we would like to learn.
Bayesian approaches for expected loss minimization, or risk, effortlessly deal with nondecomposable loss functions. The inference procedure samples a structure according to its likelihood, and computes its loss given a training label. Recently [17, 23] constructed probability
models through MAP predictions. These 'perturb-max' models describe the robustness of the
MAP prediction to random changes of its parameters. Therefore, one can draw unbiased samples from these distributions using MAP predictions. Interestingly, when incorporating perturb-max models into Bayesian loss minimization, one would ultimately like to use the PAC-Bayesian risk
[11, 19, 3, 20, 5, 10].
Our work explores the Bayesian aspects that emerge from PAC-Bayesian risk minimization. We
focus on computational aspects when constructing posterior distributions, so that they could be used
to minimize the risk bound efficiently. We show that any smooth posterior distribution would suffice
to define a smooth risk bound which can be minimized through gradient descent. In addition, we
relate the posterior distributions to the computational properties of MAP predictors. We suggest
multiplicative posterior models to learn super-modular potential functions, that come with specialized MAP predictors such as graph-cuts [2]. We also describe label-augmented posterior models
that can use MAP approximations, such as those arising from linear program relaxations [21].
2 Background
Learning complex models typically involves reasoning about the states of discrete variables whose
labels (assignments of values) specify the discrete structures of interest. The learning task which
we consider in this work is to fit parameters w that produce the most accurate prediction y ∈ Y for a given object x. Structures of labels are conveniently described by a discrete product space Y = Y_1 × ⋯ × Y_n. We describe the potential of relating a label y to an object x with respect to the parameters w by real-valued functions θ(y; x, w). Our goal is to learn the parameters w that best describe the training data (x, y) ∈ S. Within Bayesian perspectives, the distribution that one learns given the training data is composed of a distribution q_w(γ) over the parameter space and a distribution P[y | w, x] ∝ exp θ(y; x, w) over the label space. Using the Bayes rule we derive the predictive distribution over the structures:
    P[y|x] = ∫ P[y | γ, x] q_w(γ) dγ        (1)
Unfortunately, sampling algorithms over complex models are provably hard in theory and tend to
be slow in many cases of practical interest [7]. This is in contrast to the maximum a-posteriori
(MAP) prediction, which can be computed efficiently for many practical cases, even when sampling
is provably hard.
    (MAP predictor)        y_w(x) = argmax_{y_1,…,y_n} θ(y; x, w)        (2)
Recently, [17, 23] suggested changing the Bayesian posterior probability models to utilize the MAP prediction in a deterministic manner. These perturb-max models allow sampling from the predictive distribution with a single MAP prediction:

    (Perturb-max models)        P[y|x] := P_{γ∼q_w}[y = y_γ(x)]        (3)
A potential function is decomposed along a graphical model if it has the form θ(y; x, w) = Σ_{i∈V} θ_i(y_i; x, w) + Σ_{i,j∈E} θ_{i,j}(y_i, y_j; x, w). If the graph has no cycles, MAP prediction can be computed efficiently using the belief propagation algorithm. Nevertheless, there are cases where MAP prediction can be computed efficiently for graphs with cycles. A potential function is called supermodular if it is defined over Y = {−1, 1}^n and its pairwise interactions favor adjacent states to have the same label, i.e., θ_{i,j}(−1, −1; x, w) + θ_{i,j}(1, 1; x, w) ≥ θ_{i,j}(−1, 1; x, w) + θ_{i,j}(1, −1; x, w). In such cases MAP prediction reduces to computing the min-cut (graph-cuts) algorithm.
Recently, a sequence of works attempts to solve the MAP prediction task for non-supermodular potential functions as well as general regions. These cases usually involve potential functions that are described by a family R of subsets of variables r ⊆ {1, ..., n}, called regions. We denote by y_r the set of labels that corresponds to the region r, namely (y_i)_{i∈r}, and consider potential functions of the form θ(y; x, w) = Σ_{r∈R} θ_r(y_r; x, w). Thus, MAP prediction can be formulated as an integer linear program:
    b* ∈ argmax_{b_r(y_r)}  Σ_{r,y_r} b_r(y_r) θ_r(y_r; x, w)        (4)
    s.t.  b_r(y_r) ∈ {0, 1},   Σ_{y_r} b_r(y_r) = 1,   Σ_{y_s∖y_r} b_s(y_s) = b_r(y_r)  ∀ r ⊆ s
The correspondence between MAP prediction and integer linear program solutions is (y_w(x))_i = argmax_{y_i} b*_i(y_i). Although integer linear program solvers provide an alternative to MAP prediction, they may be restricted to problems of small size. This restriction can be relaxed when one replaces the integral constraints b_r(y_r) ∈ {0, 1} with nonnegative constraints b_r(y_r) ≥ 0. These linear program relaxations can be solved efficiently using different convex max-product solvers, and whenever these solvers produce an integral solution it is guaranteed to be the MAP prediction [21].
Given training data of object-label pairs, the learning objective is to estimate a predictive distribution over the structured labels. The goodness of fit is measured by a loss function L(ŷ, y). As we focus on randomized MAP predictors, our goal is to learn the parameters w that minimize the expected perturb-max prediction loss, or randomized risk. We define the randomized risk at a single instance-label pair as

    R(w, x, y) = Σ_{ŷ∈Y} P_{γ∼q_w}[ŷ = y_γ(x)] · L(ŷ, y).

Alternatively, the randomized risk takes the form R(w, x, y) = E_{γ∼q_w}[L(y_γ(x), y)]. The randomized risk originates within the PAC-Bayesian generalization bounds. Intuitively, if the training set is an independent sample, one would expect the best predictor on the training set to perform well on unlabeled objects at test time.
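The expectation E_{γ∼q_w}[L(y_γ(x), y)] can be estimated by repeatedly drawing γ from the posterior and running the MAP predictor. A minimal Monte Carlo sketch, with an illustrative enumerable label space, linear potential θ(y; γ) = ⟨γ, y⟩, Gaussian posterior q_w = N(w, I), and Hamming loss (all assumptions for the toy example, not choices from the paper):

```python
import random

random.seed(0)

# Toy enumerable label space Y = {0,1}^3 with a linear potential.
labels = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def map_predict(gamma):
    """y_gamma(x): the MAP label under the perturbed parameters gamma."""
    return max(labels, key=lambda y: sum(g * yi for g, yi in zip(gamma, y)))

def hamming_loss(y_hat, y):
    return sum(a != b for a, b in zip(y_hat, y)) / len(y)

def randomized_risk(w, y_true, num_samples=2000):
    """Monte Carlo estimate of R(w, x, y) = E_{gamma ~ q_w}[L(y_gamma(x), y)]
    for the additive Gaussian posterior q_w = N(w, I)."""
    total = 0.0
    for _ in range(num_samples):
        gamma = [wi + random.gauss(0.0, 1.0) for wi in w]   # gamma ~ q_w
        total += hamming_loss(map_predict(gamma), y_true)
    return total / num_samples

risk = randomized_risk(w=[2.0, 2.0, -2.0], y_true=(1, 1, 0))
```

With a comfortable margin in w, most perturbed MAP predictions agree with the target label, so the estimated risk is small; shrinking the margin toward zero drives the risk toward that of random guessing.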
3 Minimizing PAC-Bayesian generalization bounds
Our approach is based on the PAC-Bayesian risk analysis of random MAP predictors. In the following we state the PAC-Bayesian generalization bound for structured predictors and describe the gradients of these bounds for any smooth posterior distribution.

The PAC-Bayesian generalization bound describes the expected loss, or randomized risk, with respect to the true distribution D over object-label pairs in the world, R(w) = E_{(x,y)∼D}[R(w, x, y)]. It upper bounds the randomized risk by the empirical randomized risk R_S(w) = (1/|S|) Σ_{(x,y)∈S} R(w, x, y) and a penalty term which decreases proportionally to the training set size. Here we state the PAC-Bayesian theorem, which holds uniformly for all posterior distributions over the predictions.
Theorem 1 (Catoni [3], see also [5]). Let L(ŷ, y) ∈ [0, 1] be a bounded loss function. Let p(γ) be any probability density function and let q_w(γ) be a family of probability density functions parameterized by w. Let KL(q_w‖p) = ∫ q_w(γ) log(q_w(γ)/p(γ)) dγ. Then, for any δ ∈ (0, 1] and for any real number λ > 0, with probability at least 1 − δ over the draw of the training set the following holds simultaneously for all w:

    R(w) ≤ (1 / (1 − exp(−λ))) · [ λ R_S(w) + (KL(q_w‖p) + log(1/δ)) / |S| ]
For completeness we present a proof sketch for the theorem in the appendix. This proof follows Seeger's PAC-Bayesian approach [19], extended to the structured-label case [13]. The proof technique replaces the prior randomized risk with the posterior randomized risk, which holds uniformly for every w, while penalizing this change by their KL-divergence. This change-of-measure step is close in spirit to the one that is performed in importance sampling. The proof is then concluded by a simple convex bound on the moment generating function of the empirical risk.
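To make the trade-off concrete, the sketch below evaluates the right-hand side of Theorem 1 for a few values of λ, with illustrative (made-up) values of the empirical risk, KL term, sample size, and confidence level. Note the bound holds for a λ fixed in advance; selecting λ over a grid formally requires a union bound over the grid:

```python
import math

def catoni_bound(emp_risk, kl, m, delta, lam):
    """Right-hand side of the bound in Theorem 1 for a fixed lambda > 0:
    (lam * R_S + (KL + log(1/delta)) / m) / (1 - exp(-lam))."""
    return (lam * emp_risk + (kl + math.log(1.0 / delta)) / m) / (1.0 - math.exp(-lam))

# Illustrative numbers (not from the paper): lambda trades off the weight of
# the empirical risk against the complexity penalty.
emp_risk, kl, m, delta = 0.10, 5.0, 1000, 0.05
vals = {lam: catoni_bound(emp_risk, kl, m, delta, lam) for lam in (0.1, 0.5, 1.0, 2.0)}
best_lam = min(vals, key=vals.get)
```

Small λ inflates the penalty term (the 1 − e^{−λ} denominator shrinks), while large λ overweights the empirical risk; an intermediate value gives the tightest bound on these numbers.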
To find the best posterior distribution that minimizes the randomized risk, one can minimize its
empirical upper bound. We show that whenever the posterior distributions have smooth probability
density functions q_w(γ), the perturb-max probability model is smooth as a function of w. Thus the
randomized risk bound can be minimized with gradient methods.
Theorem 2. Assume q_w(γ) is smooth as a function of its parameters; then the PAC-Bayesian bound is smooth as a function of w:

    ∇_w R_S(w) = (1/|S|) Σ_{(x,y)∈S} E_{γ∼q_w}[ ∇_w[log q_w(γ)] · L(y_γ(x), y) ]

Moreover, the KL-divergence is a smooth function of w and its gradient takes the form:

    ∇_w KL(q_w‖p) = E_{γ∼q_w}[ ∇_w[log q_w(γ)] · (log(q_w(γ)/p(γ)) + 1) ]
Proof: First we note that R(w, x, y) = ∫ q_w(γ) L(y_γ(x), y) dγ. Since q_w(γ) is a probability density function and L(ŷ, y) ∈ [0, 1], we can differentiate under the integral (cf. [4], Theorem 2.27):

    ∇_w R(w, x, y) = ∫ ∇_w q_w(γ) L(y_γ(x), y) dγ

Using the identity ∇_w q_w(γ) = q_w(γ) ∇_w log(q_w(γ)), the first part of the proof follows. The second part of the proof follows in the same manner, noting that ∇_w(q_w(γ) log q_w(γ)) = (∇_w q_w(γ))(log q_w(γ) + 1).
The gradient of the randomized empirical risk is governed by the gradient of the log-probability density function of its corresponding posterior model. For example, a Gaussian model with mean w and identity covariance matrix has the probability density function q_w(γ) ∝ exp(−‖γ − w‖²/2); thus the gradient of its log-density is the linear moment γ, i.e., ∇_w[log q_w] = γ − w.
Taking any smooth distribution q_w(γ), we can find the parameters w by descending along the stochastic gradient of the PAC-Bayesian generalization bound. The gradient of the randomized empirical risk is formed by two expectations, over the sample points and over the posterior distribution. Computing these expectations is time consuming; thus we use a single sample ∇_w[log q_w(γ)] L(y_γ(x), y) as an unbiased estimator for the gradient. Similarly, we estimate the gradient of the KL-divergence with an unbiased estimator which requires a single sample of ∇_w[log q_w(γ)](log(q_w(γ)/p(γ)) + 1). This approach, called stochastic approximation or online gradient descent, amounts to the stochastic gradient update rule

    w ← w − η · ∇_w[log q_w(γ)] · ( L(y_γ(x), y) + log(q_w(γ)/p(γ)) + 1 )

where η is the learning rate. Next, we explore different posterior distributions from computational perspectives. Specifically, we show how to learn the posterior model so as to ensure the computational efficiency of its MAP predictor.
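For the additive Gaussian posterior q_w = N(w, I) with prior p = N(0, I), the quantities in the update rule have closed forms: ∇_w log q_w(γ) = γ − w and log(q_w(γ)/p(γ)) = ⟨γ, w⟩ − ‖w‖²/2. A sketch of the resulting stochastic update on a toy model (the label space, loss, step size, and iteration count are illustrative assumptions):

```python
import random

random.seed(1)

# Toy enumerable label space with a linear potential theta(y; gamma) = <gamma, y>.
labels = [(0, 0), (0, 1), (1, 0), (1, 1)]

def map_predict(gamma):
    return max(labels, key=lambda y: sum(g * yi for g, yi in zip(gamma, y)))

def loss(y_hat, y):
    return sum(a != b for a, b in zip(y_hat, y)) / len(y)

def sgd_step(w, y_true, eta):
    """One stochastic-gradient step on the risk bound for q_w = N(w, I), p = N(0, I)."""
    gamma = [wi + random.gauss(0.0, 1.0) for wi in w]      # gamma ~ q_w
    score = [g - wi for g, wi in zip(gamma, w)]            # grad_w log q_w(gamma)
    log_ratio = (sum(g * wi for g, wi in zip(gamma, w))    # log(q_w(gamma)/p(gamma))
                 - 0.5 * sum(wi * wi for wi in w))
    scale = loss(map_predict(gamma), y_true) + log_ratio + 1.0
    return [wi - eta * s * scale for wi, s in zip(w, score)]

w = [0.0, 0.0]
for _ in range(100):
    w = sgd_step(w, y_true=(1, 0), eta=0.02)
```

Each iteration needs only a single perturbed MAP prediction, which is the computational point of the perturb-max formulation: the loss never enters the maximization itself, so non-decomposable losses cost nothing extra.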
4 Learning posterior distributions efficiently
The ability to efficiently apply MAP predictors is key to the success of the learning process. Although MAP predictions are NP-hard in general, there are posterior models for which they can
be computed efficiently. For example, whenever the potential function corresponds to a graphical
model with no cycles, MAP prediction can be efficiently computed for any learned parameters w.
Learning unconstrained parameters with random MAP predictors provides some freedom in choosing the posterior distribution. In fact, Theorem 2 suggests that one can learn any posterior distribution by performing gradient descent on its risk bound, as long as its probability density function
is smooth. We show that for unconstrained parameters, additive posterior distributions simplify the
learning problem, and the complexity of the bound (i.e., its KL-divergence) mostly depends on its
prior distribution.
Corollary 1. Let q_0(γ) be a smooth probability density function with zero mean, and set the posterior distribution using additive shifts, q_w(γ) = q_0(γ − w). Let H(q) = −E_{γ∼q}[log q(γ)] be the entropy function. Then

    KL(q_w‖p) = −H(q_0) − E_{γ∼q_0}[log p(γ + w)]

In particular, if p(γ) ∝ exp(−‖γ‖²/2) is Gaussian, then ∇_w KL(q_w‖p) = w.

Proof: KL(q_w‖p) = −H(q_w) − E_{γ∼q_w}[log p(γ)]. By a linear change of variable, γ̂ = γ − w, it follows that H(q_w) = H(q_0), thus ∇_w H(q_w) = 0. Similarly, E_{γ∼q_w}[log p(γ)] = E_{γ∼q_0}[log p(γ + w)]. Finally, if p(γ) is Gaussian, then E_{γ∼q_0}[log p(γ + w)] = −(‖w‖² + E_{γ∼q_0}[‖γ‖²])/2 + const.

This result implies that every additively shifted smooth posterior distribution may treat the KL-divergence penalty as square regularization when using a Gaussian prior p(γ) ∝ exp(−‖γ‖²/2). This generalizes the standard claim on Gaussian posterior distributions [11], for which q_0(γ) is Gaussian. Thus one can use different posterior distributions to better fit the randomized empirical risk, without increasing the computational complexity over Gaussian processes.
Learning unconstrained parameters can be efficiently applied to tree-structured graphical models.
This, however, is restrictive. Many practical problems require more complex models, with many
cycles. For some of these models linear program solvers give efficient, although sometimes approximate, MAP predictions. For supermodular models there are specific solvers, such as graph-cuts,
that produce fast and accurate MAP predictions. In the following we show how to define posterior
distributions that guarantee efficient predictions, thus allowing efficient sampling and learning.
4.1 Learning constrained posterior models
MAP predictions can be computed efficiently in important practical cases, e.g., supermodular potential functions satisfying θ_{i,j}(−1, −1; x, w) + θ_{i,j}(1, 1; x, w) ≥ θ_{i,j}(−1, 1; x, w) + θ_{i,j}(1, −1; x, w). Whenever we restrict ourselves to symmetric potential functions θ_{i,j}(y_i, y_j; x, w) = w_{i,j} y_i y_j, supermodularity translates to a nonnegativity constraint on the parameters, w_{i,j} ≥ 0. In order to model posterior distributions that allow efficient sampling, we define models over the constrained parameter space. Unfortunately, the additive posterior models q_w(γ) = q_0(γ − w) are inappropriate for this purpose, as they assign positive probability to negative γ values and would generate non-supermodular models.

To learn constrained parameters one requires posterior distributions that respect these constraints. For nonnegative parameters we apply posterior distributions that are defined on the nonnegative real numbers. We suggest incorporating the parameters of the posterior distribution in a multiplicative manner into a distribution over the nonnegative real numbers. For any distribution q_α(γ) we determine a posterior distribution with parameters w as q_w(γ) = q_α(γ/w)/w. We show that multiplicative posterior models naturally provide log-barrier functions over the constrained set of nonnegative numbers. This property is important to the computational efficiency of the bound minimization algorithm.
Corollary 2. For any probability distribution q_α(γ), let q_{α,w}(γ) = q_α(γ/w)/w be the parametrized posterior distribution. Then

    KL(q_{α,w}‖p) = −H(q_α) − log w − E_{γ∼q_α}[log p(wγ)]

Define the Gamma function Γ(α) = ∫₀^∞ γ^{α−1} exp(−γ) dγ. If p(γ) = q_α(γ) = γ^{α−1} exp(−γ)/Γ(α) has the Gamma distribution with parameter α, then E_{γ∼q_α}[log p(wγ)] = (α − 1) log w − αw + const. Alternatively, if p(γ) is a truncated Gaussian, p(γ) ∝ exp(−γ²/2) on γ ≥ 0, then E_{γ∼q_α}[log p(wγ)] = −(E_{γ∼q_α}[γ²]/2) w² + const.

Proof: The entropy of multiplicative posterior models naturally implies the log-barrier function: by the change of variable γ̂ = γ/w,

    −H(q_{α,w}) = ∫ q_α(γ̂) (log q_α(γ̂) − log w) dγ̂ = −H(q_α) − log w.

Similarly, E_{γ∼q_{α,w}}[log p(γ)] = E_{γ∼q_α}[log p(wγ)]. The special cases for the Gamma and the truncated normal distributions follow by direct computation.
The multiplicative posterior distribution provides the barrier function −log w as part of its KL-divergence. Thus the multiplicative posterior effortlessly enforces the constraints on its parameters. This property suggests that using multiplicative rules is computationally favorable. Interestingly, using a prior model with a Gamma distribution adds to the barrier function a linear regularization term ‖w‖₁ that encourages sparsity. On the other hand, a prior model with a truncated Gaussian adds a square regularization term which drifts the nonnegative parameters away from zero. A computational disadvantage of the Gaussian prior is that its barrier function cannot be controlled by a parameter α.
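A sketch of the multiplicative Gamma posterior: samples are drawn by scaling a Gamma(α, 1) draw by w, so they are always nonnegative and the perturbed model stays supermodular; the score (γ/w − α)/w used in gradient updates follows by differentiating log q_{α,w}. The shape α = 2 and the parameter value are illustrative assumptions:

```python
import random

random.seed(2)
alpha = 2.0   # shape of the base Gamma distribution q_alpha (illustrative choice)
w = 1.5       # a nonnegative parameter, e.g. a supermodular edge weight

def sample_multiplicative(w):
    """Draw gamma ~ q_{alpha,w}(gamma) = q_alpha(gamma/w)/w by scaling a
    Gamma(alpha, 1) draw by w; the sample is always nonnegative, so the
    perturbed potentials stay supermodular and graph-cuts still applies."""
    return w * random.gammavariate(alpha, 1.0)

def score(gamma, w):
    """d/dw log q_{alpha,w}(gamma).  Since
    log q = (alpha - 1) log(gamma/w) - gamma/w - log Gamma(alpha) - log w,
    the derivative is (gamma/w - alpha)/w; the -log w barrier term keeps
    w away from zero during gradient descent."""
    return (gamma / w - alpha) / w

draws = [sample_multiplicative(w) for _ in range(5000)]
mean = sum(draws) / len(draws)                              # E[gamma] = alpha * w
mean_score = sum(score(g, w) for g in draws) / len(draws)   # approximately 0
```

The score has zero mean under the posterior (a standard property of ∇_w log q_w), which is what makes single-sample gradient estimates of the risk bound unbiased.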
4.2 Learning posterior models with approximate MAP predictions
MAP prediction can be phrased as an integer linear program, stated in Equation (4). The computational burden of integer linear programs can be relaxed when one replaces the integral constraints
with nonnegative constraints. This approach produces approximate MAP predictions. An important
learning challenge is to extend the predictive distribution of perturb-max models to incorporate approximate MAP solutions. Approximate MAP predictions are described by the feasible set of
their linear program relaxations, which is usually called the local polytope:

    L(R) = { b_r(y_r) :  b_r(y_r) ≥ 0,   Σ_{y_r} b_r(y_r) = 1,   ∀ r ⊆ s:  Σ_{y_s∖y_r} b_s(y_s) = b_r(y_r) }
Linear program solutions are usually the extreme points of their feasible polytope. The local polytope is defined by a finite set of equalities and inequalities, thus it has a finite number of extreme points. The perturb-max model that is defined in Equation (3) can be effortlessly extended to the finite set of the local polytope extreme points [15]. This approach has two flaws. First, linear program solutions might not be extreme points, and decoding such a point usually requires additional computational effort. Second, without describing the linear program solutions one cannot incorporate loss functions that take the structural properties of approximate MAP predictions into account when computing the randomized risk.
Theorem 3. Consider approximate MAP predictions that arise from the relaxation of the MAP prediction problem in Equation (4):

    argmax_{b_r(y_r)}  Σ_{r,y_r} b_r(y_r) θ_r(y_r; x, w)    s.t.  b ∈ L(R)

Then any optimal solution b* is described by a vector ŷ_w(x) in the finite power sets over the regions, Ŷ ⊆ ×_r 2^{Y_r}:

    ŷ_w(x) = (ŷ_{w,r}(x))_{r∈R}    where    ŷ_{w,r}(x) = {y_r : b*_r(y_r) > 0}

Moreover, if there is a unique optimal solution b*, then it corresponds to an extreme point of the local polytope.
Proof: The program is convex over a compact set, thus strong duality holds. Fixing the Lagrange multipliers λ_{r→s}(y_r) that correspond to the marginal constraints Σ_{y_s∖y_r} b_s(y_s) = b_r(y_r), and considering the probability constraints as the domain of the primal program, we derive the dual program

    Σ_r max_{y_r} { θ_r(y_r; x, w) + Σ_{c:c⊆r} λ_{c→r}(y_c) − Σ_{p:p⊇r} λ_{r→p}(y_r) }

Lagrange optimality constraints (or equivalently, Danskin's theorem) determine the primal optimal solutions b*_r(y_r) to be probability distributions over the sets argmax_{y_r} { θ_r(y_r; x, w) + Σ_{c:c⊆r} λ*_{c→r}(y_c) − Σ_{p:p⊇r} λ*_{r→p}(y_r) } that satisfy the marginalization constraints. Thus ŷ_{w,r}(x) is the information that identifies the primal optimal solutions, i.e., any other primal feasible solution that has the same ŷ_{w,r}(x) is also a primal optimal solution.
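The decoding in Theorem 3 amounts to reading off, for every region, the set of labels with strictly positive belief. A minimal sketch with hand-constructed beliefs (the fractional values imitate what an LP relaxation can return on a frustrated model; no LP is actually solved here):

```python
def support_sets(beliefs, tol=1e-9):
    """Decode Theorem 3's representation of an approximate MAP solution:
    for each region r, keep y_hat_{w,r} = {y_r : b_r(y_r) > 0}."""
    return {r: frozenset(y for y, b in br.items() if b > tol)
            for r, br in beliefs.items()}

# Hand-constructed fractional beliefs over two +/-1 variables and their pair
# region (illustrative values of the kind a relaxed LP can produce).
beliefs = {
    "x1":   {(-1,): 0.5, (1,): 0.5},
    "x2":   {(-1,): 0.5, (1,): 0.5},
    "x1x2": {(-1, 1): 0.5, (1, -1): 0.5, (-1, -1): 0.0, (1, 1): 0.0},
}
decoded = support_sets(beliefs)

# An integral belief vector decodes to singleton support sets, recovering the
# usual MAP assignment.
integral_decoded = support_sets({"x1": {(-1,): 0.0, (1,): 1.0}})
```

Because the decoded object is a tuple of sets rather than a single labeling, a loss function defined on Ŷ × Ŷ can penalize fractional (ambiguous) regions directly, which is exactly what the label-augmented posterior models exploit.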
This theorem extends Proposition 3 in [6] to non-binary and non-pairwise graphical models. The theorem describes the discrete structures of approximate MAP predictions. Thus we are able to define posterior distributions that use efficient, although approximate, predictions while taking into account their structures. To integrate these posterior distributions into the randomized risk we extend the loss function to L(ŷ_w(x), y). One can verify that the results in Section 3 follow through, e.g., by considering loss functions L : Ŷ × Ŷ → [0, 1] while the training example labels belong to the subset Y ⊆ Ŷ.
5 Empirical evaluation
We perform experiments on an interactive image segmentation task. We use the GrabCut dataset proposed by Blake et al. [1], which consists of 50 images of objects on cluttered backgrounds; the goal is to obtain pixel-accurate segmentations of the object given an initial 'trimap' (see Figure 1). A trimap is an approximate segmentation of the image into regions that are well inside, well outside, and on the boundary of the object, something a user can easily specify in an interactive application.
A popular approach for segmentation is the GrabCut approach [2, 1]. We learn parameters for the 'Gaussian Mixture Markov Random Field' (GMMRF) formulation of [1] using a potential function over foreground/background segmentations Y = {−1, 1}^n:

    θ(y; x, w) = Σ_{i∈V} θ_i(y_i; x, w) + Σ_{i,j∈E} θ_{i,j}(y_i, y_j; x, w).

The local potentials are θ_i(y_i; x, w) = w_{y_i} log P(y_i | x), where the w_{y_i} are parameters to be learned, while P(y_i | x) is obtained from a Gaussian mixture model learned on the background and foreground pixels of image x in the initial trimap. The pairwise potentials are θ_{i,j}(y_i, y_j; x, w) = w_a exp(−(x_i − x_j)²) y_i y_j, where x_i denotes the intensity of image x at pixel i, and the w_a are parameters to be learned for the angles a ∈ {0, 90, 45, −45}°. These potential functions are supermodular as long as the parameters w_a are nonnegative, thus MAP prediction can be computed efficiently with the graph-cuts algorithm. For these parameters we use a multiplicative posterior model with the Gamma distribution. The dataset does not come with a standard training/test split, so we use the odd-numbered images for training and the even-numbered images for testing. We use stochastic gradient descent with the step parameter decaying as η_t ∝ 1/(t₀ + t) for 250 iterations.
Method                          | GrabCut loss | PASCAL loss
Our method                      | 7.77%        | 5.29%
Structured SVM (Hamming loss)   | 9.74%        | 6.66%
Structured SVM (all-zero loss)  | 7.87%        | 5.63%
GMMRF (Blake et al. [1])        | 7.88%        | 5.85%
Perturb-and-MAP ([17])          | 8.19%        | 5.76%

Table 1: Learning the GrabCut segmentations using two different loss functions. Our learned parameters outperform the structured SVM approaches and Perturb-and-MAP moment matching.
Figure 1: Two examples of an image (left), the input 'trimap' (middle), and the final segmentation (right) produced using our learned parameters.
We use two different loss functions for training and testing to illustrate the flexibility of our approach for learning with task-specific loss functions. The 'GrabCut loss' measures the fraction of incorrectly labeled pixels in the region specified as the boundary in the trimap. The 'PASCAL loss', which is commonly used in several image segmentation benchmarks, measures the ratio of the intersection and union of the foregrounds of the ground truth segmentation and the solution.
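A minimal sketch of the two losses on a toy 1-D 'image'; the exact conventions (the boundary band represented as an index set, and the PASCAL measure taken as the loss 1 − IoU so that 0 is perfect) are assumptions for illustration:

```python
def grabcut_loss(y_hat, y, boundary):
    """Fraction of incorrectly labeled pixels inside the trimap's boundary band."""
    return sum(1 for i in boundary if y_hat[i] != y[i]) / len(boundary)

def pascal_loss(y_hat, y):
    """1 - intersection-over-union of predicted and true foregrounds
    (foreground pixels are labeled 1); written as a loss so 0 is perfect."""
    inter = sum(1 for a, b in zip(y_hat, y) if a == 1 and b == 1)
    union = sum(1 for a, b in zip(y_hat, y) if a == 1 or b == 1)
    return 1.0 - (inter / union if union else 1.0)

# Toy 1-D "image" of 8 pixels (illustrative, not the GrabCut data).
y        = [0, 0, 1, 1, 1, 1, 0, 0]   # ground truth
y_hat    = [0, 0, 0, 1, 1, 1, 1, 0]   # prediction
boundary = [2, 3, 5, 6]               # uncertain band from the trimap
g = grabcut_loss(y_hat, y, boundary)  # 2 of 4 boundary pixels are wrong
p = pascal_loss(y_hat, y)             # intersection 3, union 5
```

Neither loss decomposes over the edges of the graphical model (the IoU in particular couples all pixels through its denominator), which is why the randomized-risk formulation is convenient here.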
As a comparison, we also trained parameters using moment matching of MAP perturbations [17] and structured SVM. We use a stochastic gradient approach with a decaying step size for 1000 iterations. For the structured SVM, solving the loss-augmented inference max_{ŷ∈Y} {L(y, ŷ) + θ(ŷ; x, w)} with the Hamming loss can be done efficiently using graph-cuts. We also consider learning parameters with the all-zero loss function, i.e., L(y, ŷ) ≡ 0. To ensure that the weights remain nonnegative, we project the weights onto the nonnegative side after each iteration.
Table 1 shows the results of learning using the various methods. For the GrabCut loss, our method obtains results comparable to the GMMRF framework of [1], which used hand-tuned parameters. Our results are significantly better when the PASCAL loss is used. Our method also outperforms the parameters learned using the structured SVM and Perturb-and-MAP approaches. In our experiments the structured SVM with the Hamming loss did not perform well: the loss-augmented inference tended to focus on maximum violations instead of good solutions, which causes the parameters to change even though the MAP solution has a low loss (a similar phenomenon was observed in [22]). Using the all-zero loss tends to produce better results in practice, as seen in Table 1. Figure 1 shows some example images, the input trimaps, and the segmentations obtained using our approach.
6 Related work
Recent years have introduced many optimization techniques that provide efficient MAP predictors for complex models. These MAP predictors can be integrated to learn complex models using structured-SVM [25]. Structured-SVM has a drawback, as its MAP prediction is adjusted by the loss function and therefore has an augmented complexity. In practice, most loss functions used decompose in the same way as the potential function. Recently, there has been an effort to efficiently integrate non-decomposable loss functions into structured-SVMs [24]. However, this approach does not hold for any loss function.
Bayesian approaches to loss minimization treat separately the prediction process and the loss incurred [12]. However, the Bayesian approach depends on the efficiency of its sampling procedure; unfortunately, sampling in complex models is harder than the MAP prediction task [7].
The recent works [17, 23, 8, 9, 16] integrate efficient MAP predictors into Bayesian modeling. [23]
describes the Bayesian perspectives, while [17, 8] describe their relations to the Gibbs distribution and moment matching. [9] provide unbiased samples from the Gibbs distribution using MAP predictors, and [16] present their measure concentration properties. Other strategies for producing (pseudo) samples efficiently include herding [26]. However, these approaches do not consider risk minimization.
The perturb-max models in Equation (3) play a key role in PAC-Bayesian theory [14, 11, 19, 3, 20, 5, 10]. The PAC-Bayesian approaches focus on generalization bounds with respect to the object-label distribution. However, the posterior models in the PAC-Bayesian approaches were not extensively studied in the past. In most cases the posterior model remained undefined. [10] investigate linear predictors with Gaussian posterior models to obtain a structured-SVM-like bound. This bound holds uniformly for every λ and its derivation is quite involved. In contrast, we use Catoni's PAC-Bayesian bound, which is not uniform over λ but does not require the log |S| term [3, 5]. The simplicity of Catoni's bound (see Appendix) makes it amenable to different extensions. In our work, we extend these results to smooth posterior distributions, while maintaining the quadratic regularization form. We also describe posterior distributions for non-linear models. From a different perspective, [3, 5] describe the optimal posterior, but unfortunately there is no efficient sampling procedure for this posterior model. In contrast, our work explores posterior models which allow efficient sampling. We investigate two posterior models: multiplicative models, for constrained MAP solvers such as graph-cuts, and posterior models for approximate MAP solutions.
7 Discussion
Learning complex models requires one to consider non-decomposable loss functions that take into account the desirable structures. We suggest using Bayesian perspectives to efficiently sample and learn such models using random MAP predictions. We show that any smooth posterior distribution suffices to define a smooth PAC-Bayesian risk bound which can be minimized using gradient descent. In addition, we relate the posterior distributions to the computational properties of the MAP predictors. We suggest multiplicative posterior models to learn supermodular potential functions that come with specialized MAP predictors such as the graph-cuts algorithm. We also describe label-augmented posterior models that can use efficient MAP approximations, such as those arising from linear program relaxations. We did not evaluate the performance of these posterior models, and further exploration of such models is required.

The results here focus on posterior models that allow for efficient sampling using MAP predictions. There are other cases for which specific posterior distributions might be handy, e.g., learning posterior distributions of Gaussian mixture models. In these cases the parameters include the covariance matrix, and thus one would have to sample over the family of positive definite matrices.
A Proof sketch for Theorem 1
Theorem 2.1 in [5]: For any distribution D over object-label pairs, for any w-parametrized distribution q_w, for any prior distribution p, for any δ ∈ (0, 1], and for any convex function D : [0, 1] × [0, 1] → R, with probability at least 1 − δ over the draw of the training set, the divergence D(E_{θ∼q_w} R_S(θ), E_{θ∼q_w} R(θ)) is upper bounded simultaneously for all w by

  (1/|S|) [ KL(q_w || p) + log (1/δ) E_{θ∼p} E_{S∼D^m} exp(m D(R_S(θ), R(θ))) ].

For D(R_S(θ), R(θ)) = F(R(θ)) − λ R_S(θ), the bound reduces to a simple convex bound on the moment generating function of the empirical risk: E_{S∼D^m} exp(m D(R_S(θ, x, y), R(θ, x, y))) = exp(m F(R(θ))) E_{S∼D^m} exp(−mλ R_S(θ)). Since the exponent function is a convex function of R_S(θ) = R_S(θ) · 1 + (1 − R_S(θ)) · 0, the moment generating function bound is exp(−λ R_S(θ)) ≤ R_S(θ) exp(−λ) + (1 − R_S(θ)). Since E_S R_S(θ) = R(θ), the right term in the risk bound can be made 1 when choosing F(R(θ)) to be the inverse of the moment generating function bound. This is Catoni's bound [3, 5] for the structured labels case. To derive Theorem 1 we apply exp(−x) ≥ 1 − x to derive the lower bound (1 − exp(−λ)) E_{θ∼q_w} R(θ) − λ E_{θ∼q_w} R_S(θ) ≤ D(E_{θ∼q_w} R_S(θ), E_{θ∼q_w} R(θ)).
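The two elementary inequalities used in this derivation are easy to sanity-check numerically; a standalone sketch (the values of λ are arbitrary):

```python
import numpy as np

lam = 1.7  # any lambda > 0
r = np.linspace(0.0, 1.0, 101)  # empirical risk values in [0, 1]

# Convexity bound on the moment generating function: exp(-lam * r) is
# convex in r and r = r*1 + (1-r)*0, hence
#   exp(-lam * r) <= r * exp(-lam) + (1 - r).
mgf_bound = r * np.exp(-lam) + (1.0 - r)
assert np.all(np.exp(-lam * r) <= mgf_bound + 1e-12)

# The linearization exp(-x) >= 1 - x used to derive the lower bound.
x = np.linspace(-3.0, 3.0, 201)
assert np.all(np.exp(-x) >= 1.0 - x - 1e-12)
```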
References
[1] Andrew Blake, Carsten Rother, Matthew Brown, Patrick Perez, and Philip Torr. Interactive image segmentation using an adaptive GMMRF model. In ECCV 2004, pages 428–441. 2004.
[2] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 2001.
[3] O. Catoni. PAC-Bayesian supervised classification: the thermodynamics of statistical learning. arXiv preprint arXiv:0712.0248, 2007.
[4] G.B. Folland. Real Analysis: Modern Techniques and Their Applications. John Wiley & Sons, New York, 1999.
[5] P. Germain, A. Lacasse, F. Laviolette, and M. Marchand. PAC-Bayesian learning of linear classifiers. In ICML, pages 353–360. ACM, 2009.
[6] A. Globerson and T. S. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. Advances in Neural Information Processing Systems, 21, 2007.
[7] L.A. Goldberg and M. Jerrum. The complexity of ferromagnetic Ising with local fields. Combinatorics, Probability and Computing, 16(1):43, 2007.
[8] T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[9] T. Hazan, S. Maji, and T. Jaakkola. On sampling from the Gibbs distribution with random maximum a-posteriori perturbations. Advances in Neural Information Processing Systems, 2013.
[10] J. Keshet, D. McAllester, and T. Hazan. PAC-Bayesian approach for minimization of phoneme error rate. In ICASSP, 2011.
[11] John Langford and John Shawe-Taylor. PAC-Bayes & margins. Advances in Neural Information Processing Systems, 15:423–430, 2002.
[12] Erich Leo Lehmann and George Casella. Theory of Point Estimation, volume 31. 1998.
[13] Andreas Maurer. A note on the PAC-Bayesian theorem. arXiv preprint cs/0411099, 2004.
[14] D. McAllester. Simplified PAC-Bayesian margin bounds. Learning Theory and Kernel Machines, pages 203–215, 2003.
[15] D. McAllester, T. Hazan, and J. Keshet. Direct loss minimization for structured prediction. Advances in Neural Information Processing Systems, 23:1594–1602, 2010.
[16] Francesco Orabona, Tamir Hazan, Anand D. Sarwate, and Tommi Jaakkola. On measure concentration of random maximum a-posteriori perturbations. arXiv:1310.4227, 2013.
[17] G. Papandreou and A. Yuille. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In ICCV, Barcelona, Spain, November 2011.
[18] A.M. Rush and M. Collins. A tutorial on dual decomposition and Lagrangian relaxation for inference in natural language processing.
[19] Matthias Seeger. PAC-Bayesian generalisation error bounds for Gaussian process classification. The Journal of Machine Learning Research, 3:233–269, 2003.
[20] Yevgeny Seldin. A PAC-Bayesian Approach to Structure Learning. PhD thesis, 2009.
[21] D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. Tightening LP relaxations for MAP using message passing. In Conf. Uncertainty in Artificial Intelligence (UAI), 2008.
[22] Martin Szummer, Pushmeet Kohli, and Derek Hoiem. Learning CRFs using graph cuts. In Computer Vision–ECCV 2008, pages 582–595. Springer, 2008.
[23] D. Tarlow, R.P. Adams, and R.S. Zemel. Randomized optimum models for structured prediction. In AISTATS, pages 21–23, 2012.
[24] Daniel Tarlow and Richard S. Zemel. Structured output learning with high order loss functions. In International Conference on Artificial Intelligence and Statistics, pages 1212–1220, 2012.
[25] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. Advances in Neural Information Processing Systems, 16:51, 2004.
[26] Max Welling. Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1121–1128. ACM, 2009.
Variational Planning for Graph-based MDPs
Qiang Cheng†, Qiang Liu‡, Feng Chen†, Alexander Ihler‡
† Department of Automation, Tsinghua University
‡ Department of Computer Science, University of California, Irvine
† {cheng-q09@mails., chenfeng@mail.}tsinghua.edu.cn
‡ {qliu1@, ihler@ics.}uci.edu
Abstract
Markov Decision Processes (MDPs) are extremely useful for modeling and solving sequential decision making problems. Graph-based MDPs provide a compact
representation for MDPs with large numbers of random variables. However, the
complexity of exactly solving a graph-based MDP usually grows exponentially in
the number of variables, which limits their application. We present a new variational framework to describe and solve the planning problem of MDPs, and derive
both exact and approximate planning algorithms. In particular, by exploiting the
graph structure of graph-based MDPs, we propose a factored variational value iteration algorithm in which the value function is first approximated by the multiplication of local-scope value functions, then solved by minimizing a Kullback-Leibler
(KL) divergence. The KL divergence is optimized using the belief propagation
algorithm, with complexity exponential in only the cluster size of the graph. Experimental comparison on different models shows that our algorithm outperforms
existing approximation algorithms at finding good policies.
1
Introduction
Markov Decision Processes (MDPs) have been widely used to model and solve sequential decision
making problems under uncertainty, in fields including artificial intelligence, control, finance and
management (Puterman, 2009, Barber, 2011). However, standard MDPs are described by explicitly
enumerating all possible states of variables, and are thus not well suited to solve large problems.
Graph-based MDPs (Guestrin et al., 2003, Forsell and Sabbadin, 2006) provide a compact representation for large and structured MDPs, where the transition model is explicitly represented by a
dynamic Bayesian network. In graph-based MDPs, the state is described by a collection of random
variables, and the transition and reward functions are represented by a set of smaller (local-scope)
functions. This is particularly useful for spatial systems or networks with many "local" decisions,
each affecting small sub-systems that are coupled together and interdependent (Nath and Domingos,
2010, Sabbadin et al., 2012).
The graph-based MDP representation gives a compact way to describe a structured MDP, but the
complexity of exactly solving such MDPs typically still grows exponentially in the number of state
variables. Consequently, graph-based MDPs are often approximately solved by enforcing contextspecific independence or function-specific independence constraints (Sigaud et al., 2010). To take
advantage of context-specific independence, a graph-based MDP can be represented using decision
trees or algebraic decision diagrams (Bahar et al., 1993), and then solved by applying structured
value iteration (Hoey et al., 1999) or structured policy iteration (Boutilier et al., 2000). However,
in the worst case, the size of the diagram still increases exponentially with the number of variables.
Alternatively, methods based on function-specific independence approximate the value function by
a linear combination of basis functions (Koller and Parr, 2000, Guestrin et al., 2003). Exploiting function-specific independence, a graph-based MDP can be solved using approximate linear
programming (Guestrin et al., 2003, 2001, Forsell and Sabbadin, 2006), approximate policy itera1
tion (Sabbadin et al., 2012, Peyrard and Sabbadin, 2006) and approximate value iteration (Guestrin
et al., 2003). Among these, the approximate linear programming algorithm in Guestrin et al. (2003,
2001) has an exponential number of constraints (in the treewidth), and thus cannot be applied to
general MDPs with many variables. The approximate policy iteration algorithm in Sabbadin et al.
(2012), Peyrard and Sabbadin (2006) exploits a mean field approximation to compute and update
the local policies; unfortunately this can give loose approximations.
In this paper, we propose a variational framework for the MDP planning problem. This framework
provides a new perspective to describe and solve graph-based MDPs where both the state and decision spaces are structured. We first derive a variational value iteration algorithm as an exact planning
algorithm, which is equivalent to the classical value iteration algorithm. We then design an approximate version of this algorithm by taking advantage of the factored representation of the reward and
transition functions, and propose a factored variational value iteration algorithm. This algorithm
treats the value function as a unnormalized distribution and approximates it using a product of localscope value functions. At each step, this algorithm computes the value function by minimizing a
Kullback-Leibler divergence, which can be done using a belief propagation algorithm for influence
diagram problems (Liu and Ihler, 2012) . In comparison with the approximate linear programming
algorithm (Guestrin et al., 2003) and the approximate policy iteration algorithm (Sabbadin et al.,
2012) on various graph-based MDPs, we show that our factored variational value iteration algorithm generates better policies.
The remainder of this paper is organized as follows. The background and some notation for graphbased MDPs are introduced in Section 2. Section 3 describes a variational view of planning for finite
horizon MDPs, followed by a framework for infinite MDPs in Section 4. In Section 5, we derive
an approximate algorithm for solving infinite MDPs based on the variational perspective. We show
experiments to demonstrate the effectiveness of our algorithm in Section 6.
2
Markov Decision Processes and Graph-based MDPs
2.1 Markov Decision Processes
A Markov Decision Process (MDP) is a discrete time stochastic control process, where the system chooses the decisions at each step to maximize the overall reward. An MDP can be characterized by a four-tuple (X, D, R, T), where X represents the set of all possible states; D is the set of all possible decisions; R : X × D → R is the reward function of the system, and R(x, d) is the reward of the system after choosing decision d in state x; T : X × D × X → [0, 1] is the transition function, and T(y|x, d) is the probability that the system arrives at state y, given that it starts from x upon executing decision d. A policy of the system is a mapping from the states to the decisions π(x) : X → D so that π(x) tells the decision chosen by the system in state x. The graphical representation of an MDP is shown in Figure 1(a).
We consider the case of an MDP with infinite horizon, in which the future rewards are discounted exponentially with a discount factor γ ∈ [0, 1]. The task of the MDP is to choose the best stationary policy π*(x) that maximizes the expected discounted reward on the infinite horizon. The value function v*(x) of the best policy π*(x) then satisfies the following Bellman equation:

  v*(x) = max_{π(x)} Σ_{y∈X} T(y|x, π(x)) (R(x, π(x)) + γ v*(y)),   (1)

where v*(x) = v*(y), ∀x = y. The Bellman equation can be solved using stochastic dynamic programming algorithms such as value iteration and policy iteration, or linear programming algorithms (Puterman, 2009).
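The Bellman fixed point in Eq. (1) is typically solved by value iteration; a minimal sketch on a toy two-state, two-decision MDP (the transition and reward numbers below are hypothetical):

```python
import numpy as np

gamma = 0.9
# T[d, x, y] = probability of moving from state x to state y under decision d.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
# R[x, d] = immediate reward for taking decision d in state x.
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

v = np.zeros(2)
for _ in range(500):
    # Q[x, d] = R(x, d) + gamma * sum_y T(y | x, d) v(y)
    Q = R + gamma * np.einsum('dxy,y->xd', T, v)
    v_new = Q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
residual = np.max(np.abs(v - (R + gamma * np.einsum('dxy,y->xd', T, v)).max(axis=1)))
assert residual < 1e-8  # v satisfies the Bellman equation
```

Since the Bellman operator is a γ-contraction, the iterates converge geometrically to the unique fixed point v*.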
2.2 Graph-based MDPs
We assume that the full state x can be represented as a collection of state variables x_i, so that X is a Cartesian product of the domains of the x_i: X = X_1 × X_2 × · · · × X_N, and similarly for d: D = D_1 × D_2 × · · · × D_N. We consider the following particular factored form for MDPs: for each variable i, there exist neighborhood sets Γ_i (including i) such that the value of x_i^{t+1} depends only on the variable i's neighborhood, x^t[Γ_i], and the ith decision d_i^t. Then, we can write the transition function in a factored form:

  T(y|x, d) = Π_{i=1}^N T_i(y_i | x[Γ_i], d_i),   (2)
Figure 1: (a) A Markov decision process; (b) A graph-based Markov decision process.
where each factor is a local-scope function T_i : X[Γ_i] × D_i × X_i → [0, 1], ∀i ∈ {1, 2, . . . , N}. We also assume that the reward function is the sum of N local-scope rewards:

  R(x, d) = Σ_{i=1}^N R_i(x_i, d_i),   (3)

with local-scope functions R_i : X_i × D_i → R, ∀i ∈ {1, 2, . . . , N}.
To summarize, a graph-based Markov decision process is characterized by the following parameters: ({X_i : 1 ≤ i ≤ N}; {D_i : 1 ≤ i ≤ N}; {R_i : 1 ≤ i ≤ N}; {Γ_i : 1 ≤ i ≤ N}; {T_i : 1 ≤ i ≤ N}). Figure 1(b) gives an example of a graph-based MDP. These assumptions for graph-based MDPs can be easily generalized, for example to include T_i and R_i that depend on arbitrary sets of variables and decisions, using some additional notation.
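The factored transition in Eq. (2) remains a properly normalized conditional distribution, because each local factor is itself normalized; a quick numeric check on a toy three-variable chain (all tables are random and the neighborhood shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conditional(n_parent_states, n_child_states, rng):
    """A random conditional table p(child | parents), rows normalized."""
    t = rng.uniform(size=(n_parent_states, n_child_states))
    return t / t.sum(axis=1, keepdims=True)

# Binary chain x1 - x2 - x3 with binary decisions, with neighborhoods
# Gamma_1 = {1,2}, Gamma_2 = {1,2,3}, Gamma_3 = {2,3}.
T1 = random_conditional(2 * 2 * 2, 2, rng)      # p(y1 | x1, x2, d1)
T2 = random_conditional(2 * 2 * 2 * 2, 2, rng)  # p(y2 | x1, x2, x3, d2)
T3 = random_conditional(2 * 2 * 2, 2, rng)      # p(y3 | x2, x3, d3)

# For one fixed (x, d), the joint T(y | x, d) is the product of the
# three local factors, and must sum to one over y.
x = (0, 1, 0)
d = (1, 0, 1)
total = 0.0
for y1 in range(2):
    for y2 in range(2):
        for y3 in range(2):
            p1 = T1[4 * x[0] + 2 * x[1] + d[0], y1]
            p2 = T2[8 * x[0] + 4 * x[1] + 2 * x[2] + d[1], y2]
            p3 = T3[4 * x[1] + 2 * x[2] + d[2], y3]
            total += p1 * p2 * p3
assert abs(total - 1.0) < 1e-12
```

The sum factorizes as (Σ_{y1} p1)(Σ_{y2} p2)(Σ_{y3} p3) = 1, which is why the product form in Eq. (2) stays normalized.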
The optimal policy π(x) cannot be explicitly represented for large graph-based MDPs, since the number of states grows exponentially with the number of variables. To reduce complexity, we consider a particular class of local policies: a policy π(x) : X → D is said to be local if decision d_i is made using only the neighborhood Γ_i, so that π(x) = (π_1(x[Γ_1]), π_2(x[Γ_2]), . . . , π_N(x[Γ_N])), where π_i(x[Γ_i]) : X[Γ_i] → D_i. The main advantage of local policies is that they can be concisely expressed when the neighborhood sizes |Γ_i| are small.
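The storage advantage is easy to quantify: a tabular global policy needs one entry per joint state, while local policies need only one table per neighborhood (the sizes below are illustrative assumptions, not taken from the paper):

```python
# N binary state variables, binary decisions, neighborhoods of size k.
N, k = 40, 3

# A global policy pi(x) stores one decision per variable for every joint
# state, i.e. N entries for each of the 2**N states.
global_entries = N * 2 ** N

# A local policy pi_i(x[Gamma_i]) stores one decision per neighborhood
# configuration, for each of the N variables.
local_entries = N * 2 ** k

assert local_entries == 320
assert global_entries == 40 * 2 ** 40
assert local_entries < global_entries
```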
3
Variational Planning for Finite Horizon MDPs
In this section, we introduce a variational planning viewpoint of finite MDPs. A finite MDP can
be viewed as an influence diagram; we can then directly relate planning to the variational decision-making framework of Liu and Ihler (2012).
Influence diagrams (Shachter, 2007) make use of Bayesian networks to represent structured decision
problems under uncertainty. The shaded part in Figure 1(a) shows a simple example influence
diagram, with random variables {x, y}, decision variable d and reward functions {R (x, d) , v (y)}.
The goal is then to choose a policy that maximizes the expected reward.
The best policy π^t(x) for a finite MDP can be computed using backward induction (Barber, 2011):

  v^{t−1}(x) = max_{π(x)} Σ_{y∈X} T(y|x, π(x)) (R(x, π(x)) + γ v^t(y)),   (4)
Let p^t(x, y, d) = T(y|x, π(x)) (R(x, π(x)) + γ v^t(y)) be an augmented distribution (see, e.g., Liu and Ihler (2012)). Applying a variational framework for influence diagrams (Liu and Ihler, 2012, Theorem 3.1), the optimal policy can be equivalently solved from the dual form of Eq. (4):

  Φ^t = max_{τ∈M} ⟨θ^{*;t}, τ⟩ + H(x, y, d; τ) − H(d|x; τ),   (5)

where θ^{*;t}(x, y, d) = log p^t(x, y, d) = log T(y|x, d) + log(R(x, d) + γ v^t(y)), and τ is a vector of moments in the marginal polytope M (Wainwright and Jordan, 2008). In a mild abuse of notation, we will use τ to refer both to the vector of moments and to the maximum entropy distribution τ(x, y, d) consistent with those moments; H(·; τ) refers to the entropy or conditional entropy of this distribution. See also Wainwright and Jordan (2008), Liu and Ihler (2012) for details.
Let τ^t(x, y, d) be the optimal solution of Eq. (5); then from Liu and Ihler (2012), the optimal policy π^t(x) is simply arg max_d τ^t(d|x). Moreover, the optimal value function v^{t−1}(x) can be obtained from Eq. (5). This result is summarized in the following lemma.
Lemma 1. For finite MDPs with non-stationary policy, the best policy π^t(x) and the value function v^{t−1}(x) can be obtained by solving Eq. (5). Let τ^t(x, y, d) be the optimal solution of Eq. (5).
(a) The optimal policy can be obtained from τ^t(x, y, d), as π^t(x) = arg max_d τ^t(d|x).
(b) The value function w.r.t. π^t(x) can be obtained as v^{t−1}(x) = exp(Φ(τ^t)) τ^t(x).
Proof. (a) follows directly from Theorem 3.1 of Liu and Ihler (2012). (b) Note that T(y|x, π^t(x)) (R(x, π^t(x)) + γ v^t(y)) = exp(Φ(τ^t)) τ^t(x, y, d). Making use of Eq. (4), summing over y and maximizing over d on exp(Φ(τ^t)) τ^t(x, y, d), we obtain v^{t−1}(x) = exp(Φ(τ^t)) τ^t(x).
4
Variational Planning for Infinite Horizon MDPs
Given the variational form of finite MDPs, we now construct a variational framework for infinite
MDPs. Compared to the primal form (i.e., Eq. (4)) of finite MDPs, the Bellman equation of an
infinite MDP, Eq. (1), has the additional constraint that v^{t−1}(x) = v^t(y) when x = y. For an
infinite MDP, we can simply consider a two-stage finite MDP with the variational form in Eq. (5),
but with this additional constraint. The main result is given by the following theorem.
Theorem 2. Assume τ and Φ are the solution of the following optimization problem,

  max_{τ∈M, Φ∈R} Φ,  subject to Φ = ⟨θ^Φ, τ⟩ + H(x, y, d; τ) − H(d|x; τ),   (6)
  θ^Φ = log T(y|x, d) + log(R(x, d) + γ exp(Φ) τ_x(y)),   (7)

where τ_x denotes the marginal distribution on x. With τ* being the optimal solution, we have
(a) The optimal policy of the infinite MDP can be decoded as π*(x) = arg max_d τ*(d|x).
(b) The value function w.r.t. π*(x) is v*(x) = exp(Φ) τ*(x).
Proof. The Bellman equation is equivalent to the backward induction in Eq. (4), subject to an extra constraint that v^t = v^{t−1}. The result follows by replacing Eq. (4) with its variational dual (5).
Like the Bellman equation (4), its dual form (6) also has no closed-form solution. Analogously to the value iteration algorithm for the Bellman equation, Eq. (6) can be solved by alternately fixing τ_x(x), Φ in θ^Φ and solving Eq. (6) with only the first constraint using some convex optimization technique. However, each step of solving for τ and Φ is equivalent to one step of value iteration; if τ(x, y, d) is represented explicitly, it seems to offer no advantage over simply applying the elimination operators as in (4). The usefulness of this form is mainly in opening the door to design new approximations.
5
Approximate Variational Algorithms for Graph-based MDPs
The framework in the previous section gives a new perspective on the MDP planning problem, but
does not by itself simplify the problem or provide new solution methods. For graph-based MDPs,
the sizes of the full state and decision spaces are exponential in the number of variables. Thus, the
complexity of exact algorithms is exponentially large. In this section, we present an approximate
algorithm for solving Eq. (6), by exploiting the factorization structure of the transition function (2),
the reward function (3) and the value function v (x).
Standard variational approximations take advantage of the multiplicative factorization of a distribution to define their approximations. While our (unnormalized) distribution p(x, y, d) = exp[θ*(x, y, d)] is structured, some of its important structure comes from additive factors, such as the local-scope reward functions R_i(x_i, d_i) in Eq. (3), and the discounted value function γ v(x) in Eq. (1). Computing the sum of these additive factors directly would create a large factor over an
unmanageably large variable domain, and destroy most of the useful structure of p (x, y, d).
To avoid this effect, we convert the presence of additive factors into multiplicative factors by augmenting the model with a latent "selector" variable, which is similar to that used for the "complete likelihood" in mixture models (Liu and Ihler, 2012). For example, consider the sum of two factors:

  f(x) = f_{12}(x_1, x_2) + f_{23}(x_2, x_3) = Σ_{a∈{0,1}} (f_{12})^a · (f_{23})^{(1−a)} = Σ_{a∈{0,1}} f̃_{12}(a, x_1, x_2) · f̃_{23}(a, x_2, x_3).

Introducing the auxiliary variable a converts f into a product of factors, where marginalizing over a yields the original function f.
Using this augmenting approach, the additive elements of the graph-based MDP are converted to multiplicative factors, that is, R_i(x_i, d_i) → R̃_i(x_i, d_i, a), and γ v(x) → ṽ_γ(x, a). In this way, the parameter θ* of a graph-based MDP can be represented as

  θ*(x, y, d, a) = Σ_{i=1}^N log T_i(y_i | x[Γ_i], d_i) + Σ_{i=1}^N log R̃_i(x_i, d_i, a) + log ṽ_γ(y, a).

Now, p(x, y, d, a) = exp[θ*(x, y, d, a)] has a representation in terms of a product of factors. Let

  θ(x, y, d, a) = Σ_{i=1}^N log T_i(y_i | x[Γ_i], d_i) + Σ_{i=1}^N log R̃_i(x_i, d_i, a).
Before designing the algorithms, we first construct a cluster graph (G; C; S) for the distribution exp[θ(x, y, d, a)], where C denotes the set of clusters and S is the set of separators. (See Liu and Ihler (2012, 2011), Wainwright and Jordan (2008) for more details on cluster graphs.) We assign each decision node d_i to one cluster that contains d_i and its parents pa(i); clusters so assigned are called decision clusters A, while other clusters are called normal clusters R, so that C = {R, A}. Using the structure of the cluster graph, θ can be decomposed into

  θ(x, y, d, a) = Σ_{k∈C} θ_{c_k}(x_{c_k}, y_{c_k}, d_{c_k}, a),   (8)

and the distribution τ is approximated as

  τ(x, y, d, a) = Π_{k∈C} τ_{c_k}(z_{c_k}) / Π_{(kl)∈S} τ_{s_{kl}}(z_{s_{kl}}),   (9)

where z_{c_k} = {x_{c_k}, y_{c_k}, d_{c_k}, a}. Therefore, instead of optimizing the full distribution τ, we can optimize the collection of marginal distributions τ = {τ_{c_k}, τ_{s_k}}, with far lower computational cost. These marginals should belong to the local consistency polytope L, which enforces that marginals are consistent on their overlapping sets of variables (Wainwright and Jordan, 2008).
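On a tree-structured cluster graph the factorization in Eq. (9) is exact; for a three-variable chain the joint factors as p(x1, x2) p(x2, x3) / p(x2), which is easy to verify numerically (toy distribution, all numbers random):

```python
import numpy as np

rng = np.random.default_rng(0)
# A random chain-structured joint p(x1, x2, x3) = p(x1) p(x2|x1) p(x3|x2).
p1 = rng.dirichlet(np.ones(2))
p21 = rng.dirichlet(np.ones(2), size=2)   # p(x2 | x1), rows indexed by x1
p32 = rng.dirichlet(np.ones(2), size=2)   # p(x3 | x2), rows indexed by x2
joint = p1[:, None, None] * p21[:, :, None] * p32[None, :, :]

# Cluster marginals over {x1,x2} and {x2,x3}, and the separator marginal on x2.
m12 = joint.sum(axis=2)
m23 = joint.sum(axis=0)
m2 = joint.sum(axis=(0, 2))

# Product of cluster marginals divided by the separator marginal.
recon = m12[:, :, None] * m23[None, :, :] / m2[None, :, None]
assert np.allclose(recon, joint)
```

This is the junction-tree identity that makes the ratio form in Eq. (9) exact on trees; on loopy cluster graphs it becomes an approximation.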
We now construct a reduced cluster graph over x from the full cluster graph, to serve as the approximating structure of the marginal τ(x). We assume a factored representation for τ(x):

  τ(x) = Π_{k∈C} τ_{c_k}(x_{c_k}) / Π_{(kl)∈S} τ_{s_{kl}}(x_{s_{kl}}),   (10)

where τ_{c_k}(x_{c_k}) is the marginal distribution of τ_{c_k}(z_{c_k}) on x_{c_k}. Note that Eq. (10) also dictates a factored approximation of the value function v(x), because v(x) ∝ exp(Φ) τ(x). Assume ṽ(x) factors into ṽ(x) = Π_k v_{c_k}(x_{c_k}). Then, the constraint (7) reduces to a set of simpler constraints on the cliques of the cluster graph,

  θ^Φ_{c_k}(x_{c_k}, y_{c_k}, d_{c_k}, a) = θ_{c_k}(x_{c_k}, y_{c_k}, d_{c_k}, a) + log v_{c_k,x}(y_{c_k}, a), k ∈ C.   (11)

Correspondingly, the constraint (6) can be approximated by

  Φ = Σ_{k∈C} ⟨θ^Φ_{c_k}, τ_{c_k}⟩ + Σ_{k∈R} H_{c_k} + Σ_{k∈D} H'_{c_k} − Σ_{(kl)∈S} H_{s_{kl}},   (12)

where H_{c_k} is the entropy of variables in cluster c_k, H_{c_k} = H(x_{c_k}, y_{c_k}, d_{c_k}, a; τ), and H'_{c_k} = H(x_{c_k}, y_{c_k}, d_{c_k}, a; τ) − H(d_{c_k}|x_{c_k}; τ). With these approximations, we solve the optimization in Theorem 2 using "mixed" belief propagation (Liu and Ihler, 2012) for fixed {θ^Φ_{c_k}}; we then update {θ^Φ_{c_k}} using the fixed point condition (11). This gives the double loop algorithm in Algorithm 1.
Algorithm 1 Factored Variational Value Iteration Algorithm
Input: A graph-based MDP with ({X_i}; {D_i}; {R_i}; {Γ_i}; {T_i}), the cluster graph (G; C; S), and the initial τ^{t=0}_{c_k}(x_{c_k}), ∀c_k ∈ C.
Iterate until convergence (for both the outer loop and the inner loop).
1: Outer loop: Update θ^{Φ;t}_{c_k} using Eq. (11).
2: Inner loop: Maximize the right side of Eq. (12) with fixed θ^{Φ;t}_{c_k} and compute τ^{t+1}_{c_k}(x_{c_k}) using the belief propagation algorithm proposed in Liu and Ihler (2012):

  m_{k→l}(z_{c_k}) ∝ τ_{s_{kl}}(z_{s_{kl}}) Σ_{z_{c_k}\s_{kl}} σ[ψ_{c_k}(z_{c_k}) m_{∼k}(z_{c_k})] / m_{l→k}(z_{c_k}),

where ψ_{c_k}(z_{c_k}) = exp[θ^Φ_{c_k}(z_{c_k})], and σ[b_{c_k}(z_{c_k})] = b_{c_k}(z_{c_k}) if c_k ∈ R, and σ[b_{c_k}(z_{c_k})] = b_{c_k}(z_{c_k}) τ_{c_k}(d_{c_k}|x_{c_k}) if c_k ∈ A, with b_{c_k}(z_{c_k}) = ψ_{c_k}(z_{c_k}) m_{∼k}(z_{c_k}) and τ_{c_k}(x_{c_k}) = max_{d_{c_k}} Σ_{y_{c_k},a} b_{c_k}(z_{c_k}).
Output: The local policies {π(d_i|x(Γ_i))}, and the value function ṽ(x) = exp(Φ) τ(x).
6
Experiments
We perform experiments in two domains, disease management in crop fields and viral marketing, to
evaluate the performance of our factored variational value iteration algorithm (FVI). For comparison,
we use the approximate policy iteration algorithm (API) (Sabbadin et al., 2012) (a mean-field based policy iteration approach), and the approximate linear programming algorithm (ALP) (Guestrin et al., 2001). To evaluate each algorithm's performance, we obtain its approximate local policy, then compute the expected value of the policy using either exact evaluation (if feasible) or a sample-based estimate (if not). We then compare the expected reward U^alg = (1/|X|) Σ_x v^alg(x) of each algorithm's policy.
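A sample-based estimate of a policy's expected discounted reward can be computed by truncated Monte Carlo rollouts; a minimal sketch (the one-state sanity case below has the known closed-form value 1/(1 − γ)):

```python
import numpy as np

def rollout_value(T, R, policy, x0, gamma, horizon, rng):
    """Truncated Monte Carlo estimate of the discounted reward of a policy.

    T[d, x, :] is the next-state distribution, R[x, d] the reward, and
    policy[x] the decision taken in state x.
    """
    x, total, discount = x0, 0.0, 1.0
    for _ in range(horizon):
        d = policy[x]
        total += discount * R[x, d]
        x = rng.choice(len(T[d, x]), p=T[d, x])
        discount *= gamma
    return total

gamma = 0.9
# Degenerate sanity case: one state, reward 1 => value = sum_t gamma**t.
T = np.ones((1, 1, 1))
R = np.ones((1, 1))
policy = np.array([0])
v = rollout_value(T, R, policy, 0, gamma, horizon=200, rng=np.random.default_rng(0))
assert abs(v - 1.0 / (1.0 - gamma)) < 1e-6
```

In practice one averages many such rollouts from states sampled uniformly to estimate the average reward over initial states.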
6.1 Disease Management in Crop Fields
A graph-based MDP for disease management in crop fields was introduced in (Sabbadin et al.,
2012). Suppose we have a set of crop fields in an agricultural area, where each field is susceptible to
contamination by a pathogen. When a field is contaminated, it can infect its neighbors and the yield
will decrease. However, if a field is left fallow, it has a probability (denoted by q) of recovering from
infection. The decisions of each year include two options (Di = {1, 2}) for each field: cultivate
normally (di = 1) or leave fallow (di = 2). The problem is then to choose the optimal stationary
policy to maximize the expected discounted yield. The topology of the fields is represented by an
undirected graph, where each node represents one crop field. An edge is drawn between two nodes
if the fields share a common border (and can thus pass an infection). Each crop field can be in
one of three states: xi = 1 if it is uninfected and xi = 2 to xi = 3 for increasing degrees of
infection. The probability that a field moves from state xi to state xi + 1 with di = 1 is set to be
P = P(ε, p, n_i) = ε + (1 − ε)(1 − (1 − p)^{n_i}), where ε and p are parameters and n_i is the number
of the neighbors of i that are infected. The transition function is summarized in Table 1. The reward
function depends on each field?s state and local decision. The maximal yield r > 1 is achieved by an
uninfected, cultivated field; otherwise, the yield decreases linearly with the level of infection, from
maximal reward r to minimal reward 1 + r/10. A field left fallow produces reward 1.
Table 1: Local transition probabilities p(x'_i | x_{N(i)}, a_i) for the disease management problem.

             di = 1                     di = 2
           xi = 1   xi = 2   xi = 3   xi = 1   xi = 2   xi = 3
x'_i = 1   1 − P    0        0        1        q        q/2
x'_i = 2   P        1 − P    0        0        1 − q    q/2
x'_i = 3   0        P        1        0        0        1 − q
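The infection dynamics and Table 1 can be sketched in code (the function names and matrix layout are our own illustration, not from the paper):

```python
import numpy as np

def infection_prob(eps, p, n_infected):
    """P(eps, p, n_i) = eps + (1 - eps) * (1 - (1 - p)**n_i)."""
    return eps + (1.0 - eps) * (1.0 - (1.0 - p) ** n_infected)

def local_transition(eps, p, q, n_infected):
    """Return (T_cultivate, T_fallow) with T[x_next, x] following Table 1;
    states 1..3 are mapped to indices 0..2."""
    P = infection_prob(eps, p, n_infected)
    T_cultivate = np.array([[1 - P, 0.0,   0.0],
                            [P,     1 - P, 0.0],
                            [0.0,   P,     1.0]])
    T_fallow = np.array([[1.0, q,     q / 2],
                         [0.0, 1 - q, q / 2],
                         [0.0, 0.0,   1 - q]])
    return T_cultivate, T_fallow

T1, T2 = local_transition(eps=0.1, p=0.4, q=0.5, n_infected=2)
# Each column is a conditional distribution over x'_i and must sum to 1.
assert np.allclose(T1.sum(axis=0), 1.0) and np.allclose(T2.sum(axis=0), 1.0)
```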
6.2 Viral Marketing
Viral marketing (Nath and Domingos, 2010, Richardson and Domingos, 2002) uses the natural premise that members of a social network influence each other's purchasing decisions or comments; then, the goal is to select the best set of people to target for marketing such that overall profit is
maximized. Viral marketing has been previously framed as a one-shot influence diagram problem
(Nath and Domingos, 2010). Here, we frame the viral marketing task as an MDP planning problem,
where we optimize the stationary policy to maximize long-term reward.
The topology of the social network is represented by a directed graph, capturing directional social
influence. We assume there are three states for each person in the social network: xi = 1 if i is
making positive comments, xi = 2 if not commenting, and xi = 3 for negative comments. There is
a binary decision corresponding to each person i: market to this person (di = 1) or not (di = 2). We
also define a local reward function: if a person gives good comments when di = 2, then the reward is r; otherwise, the reward is less, decreasing linearly to the minimum value 1 + r/10. For marketed individuals (di = 1), the reward is 1. The local transition p(x'_i | x_{N(i)}, di) is set as in Table 1.
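A minimal sketch of the two local reward functions as we read the descriptions (the linear interpolation for the middle state is our own reading of "decreases linearly"; names are illustrative):

```python
def state_reward(x, r=10.0, n_states=3):
    """Linear interpolation from r (best state x = 1) down to 1 + r/10
    (worst state x = n_states)."""
    lo = 1.0 + r / 10.0
    return r - (r - lo) * (x - 1) / (n_states - 1)

def crop_reward(x, d, r=10.0):
    """Disease management: a fallow field (d = 2) always yields 1;
    cultivation (d = 1) yields the state-dependent amount."""
    return 1.0 if d == 2 else state_reward(x, r)

def marketing_reward(x, d, r=10.0):
    """Viral marketing: marketed individuals (d = 1) yield 1; otherwise
    the state-dependent amount."""
    return 1.0 if d == 1 else state_reward(x, r)
```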
6.3 Experimental Results
We evaluate both problems above on two topologies of model, each of three sizes (6, 10, and 20 nodes). Our first topology type consists of random regular graphs with three neighbors per node. Our second consists of "chain-like" graphs, in which we order the nodes, then connect each node at random to four of its six nearest nodes in the ordering. This ensures that the resulting graph has low tree-width (≤ 6), enabling comparison of the ALP algorithm. We set parameters r = 10 and ε = 0.1, and test the results on different choices of p and q.
Tables 2–4 show the expected rewards found by each algorithm for several settings. The best performance (highest rewards) is labeled in bold. For models with 6 nodes, we also compute the expected reward under the optimal global policy π*(x) for comparison. Note that this value overestimates the best possible local policy {π_i*(Ω_i(x))} being sought by the algorithms; the best local policy is usually much more difficult to compute due to imperfect recall. Since the complexity of the approximate linear programming (ALP) algorithm is exponential in the treewidth of the graph defined by the neighborhoods Ω_i, we were unable to compute results for models beyond treewidth 6.
The tables show that our factored variational value iteration (FVI) algorithm gives policies with
higher expected rewards than those of approximate policy iteration (API) on the majority of models
(156/196), over all sets of models and different p and q. Compared to approximate linear programming, in addition to being far more scalable, our algorithm performed comparably, giving better
policies on just over half of the models (53/96) that ALP could be run on. However, when we
restrict to low-treewidth "chain" models, we find that the ALP algorithm appears to perform better
on larger models; it outperforms our FVI algorithm on only 4/32 models of size 6, but this increases
to 14/32 at size 10, and 25/32 at size 20. It may be that ALP better takes advantage of the structure
of x on these cases, and more careful choice of the cluster graph could similarly improve FVI.
The average results across all settings are shown in Table 5, along with the relative improvements of
our factored variational value iteration algorithm to approximate policy iteration and approximate
linear programming. Table 5 shows that our FVI algorithm, compared to approximate policy iteration, gives the best policies on regular models across sizes, and gives better policies than those of
the approximate linear programming on chain-like models with small size (6 and 10 nodes). Although on average the approximate linear programming algorithm may provide better policies for
"chain" models with large size, its exponential number of constraints makes it infeasible for general
large-scale graph-based MDPs.
7 Conclusions
In this paper, we have proposed a variational planning framework for Markov decision processes.
We used this framework to develop a factored variational value iteration algorithm that exploits the
structure of the graph-based MDP to give efficient and accurate approximations, scales easily to large
systems, and produces better policies than existing approaches. Potential future directions include
studying methods for the choice of cluster graphs, and improved solutions for the dual approximation (12), such as developing single-loop message passing algorithms to directly optimize (12).
Acknowledgments
This work was supported in part by National Science Foundation grants IIS-1065618 and IIS1254071, a Microsoft Research Fellowship, National Natural Science Foundation of China
(#61071131 and #61271388), Beijing Natural Science Foundation (#4122040), Research Project
of Tsinghua University (#2012Z01011), and Doctoral Fund of the Ministry of Education of China
(#20120002110036).
Table 2: The expected rewards of different algorithms on regular models with 6 nodes.

                  Disease Management                 Viral Marketing
(p, q)       Exact   FVI     API     ALP       Exact   FVI     API     ALP
(0.2, 0.2)   202.4   202.4   164.7   148.3     259.3   258.2   250.0   237.7
(0.4, 0.2)   169.2   169.2   139.0   123.3     212.2   195.3   192.6   183.4
(0.6, 0.2)   158.1   155.2   157.4   115.4     209.6   167.8   174.0   156.4
(0.8, 0.2)   154.1   152.7   153.2   106.0     209.5   152.7   172.2   144.7
(0.2, 0.4)   262.5   259.2   254.7   236.7     361.6   361.6   355.8   355.0
(0.4, 0.4)   220.1   219.1   177.0   181.3     300.2   285.8   285.1   267.3
(0.6, 0.4)   212.1   203.8   203.8   162.7     297.3   244.6   249.6   244.8
(0.8, 0.4)   211.7   198.2   198.2   136.1     297.3   225.2   296.8   273.5
(0.2, 0.6)   349.3   349.3   333.6   307.3     428.1   428.1   428.1   427.7
(0.4, 0.6)   290.7   276.7   276.7   200.0     361.8   351.7   303.3   350.0
(0.6, 0.6)   284.7   242.7   243.7   212.8     355.5   304.7   152.5   306.5
(0.8, 0.6)   284.0   236.1   236.1   194.7     355.5   282.9   355.0   271.3
(0.2, 0.8)   423.6   423.6   423.6   274.7     470.0   469.8   469.8   469.8
(0.4, 0.8)   362.2   351.0   344.3   264.5     411.6   402.0   402.0   403.7
(0.6, 0.8)   351.6   304.8   302.7   242.5     398.2   347.8   351.8   336.6
(0.8, 0.8)   350.5   284.2   284.9   207.9     398.0   320.8   398.0   294.0
Table 3: The expected rewards of different algorithms on "chain-like" models with 10 nodes.

                 Disease Management          Viral Marketing
(p, q)       FVI     API     ALP         FVI     API     ALP
(0.3, 0.3)   304.8   258.4   288.9       355.5   324.1   335.5
(0.5, 0.3)   273.4   228.7   292.7       308.1   291.5   323.8
(0.7, 0.3)   262.2   261.6   329.6       298.5   298.1   269.7
(0.3, 0.5)   420.2   395.4   456.5       550.1   523.9   543.9
(0.5, 0.5)   358.5   317.7   302.6       453.3   450.9   410.0
(0.7, 0.5)   343.8   344.9   394.3       386.1   418.6   436.9
(0.3, 0.7)   612.9   613.6   531.2       659.9   634.8   664.7
(0.5, 0.7)   498.2   491.8   538.6       542.7   523.9   518.2
(0.7, 0.7)   430.0   411.8   427.3       496.9   495.7   451.2
Table 4: The expected rewards (×10²) of different algorithms on models with 20 nodes.

             Disease Manag.   Viral Marketing                Disease Manag.   Viral Marketing
(p, q)       FVI     API      FVI     API      (p, q)       FVI     API      FVI     API
(0.2, 0.2)   7.17    6.33     7.87    7.88     (0.4, 0.2)   5.93    5.19     6.53    5.65
(0.6, 0.2)   5.33    4.94     5.99    5.28     (0.8, 0.2)   5.12    5.20     5.76    5.62
(0.4, 0.4)   9.10    8.82     11.56   11.52    (0.4, 0.4)   7.70    6.23     9.23    8.83
(0.4, 0.4)   7.04    6.17     7.95    7.65     (0.4, 0.4)   6.72    6.72     7.45    7.14
(0.6, 0.6)   12.29   12.11    13.85   13.85    (0.6, 0.6)   9.97    10.06    11.74   11.72
(0.6, 0.6)   8.50    8.72     10.22   10.02    (0.6, 0.6)   8.01    7.69     9.23    8.88
(0.8, 0.8)   14.53   14.57    15.25   15.27    (0.8, 0.8)   12.57   12.43    13.47   13.22
(0.8, 0.8)   10.90   10.78    11.82   11.50    (0.8, 0.8)   9.92    9.56     10.77   10.64
Table 5: Comparison of average expected rewards on regular and "chain-like" models.

Type      n = 6                                 n = 10                                n = 20
Regular   FVI: 275.8  API: 271.4                FVI: 458.7  API: 452.3                FVI: 935.6  API: 905.1
          Rel. Imprv.: 1.6%                     Rel. Imprv.: 1.4%                     Rel. Imprv.: 3.37%
Chain     FVI: 275.8  API: 271.6  ALP: 244.9    FVI: 415.7  API: 399.4  ALP: 414.7    FVI: 821.9  API: 749.6  ALP: 872.2
          Rel. Imprv.: 1.6% / 12.6%             Rel. Imprv.: 4.1% / 0.7%              Rel. Imprv.: 9.7% / −5.8%
References
R. Iris Bahar, Erica A. Frohm, Charles M. Gaona, Gary D. Hachtel, Enrico Macii, Abelardo Pardo, and Fabio Somenzi. Algebraic decision diagrams and their applications. In IEEE/ACM International Conference on Computer-Aided Design, pages 188–191, 1993.
David Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2011.
Craig Boutilier, Richard Dearden, and Moisés Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 121(1):49–107, 2000.
Nicklas Forsell and Régis Sabbadin. Approximate linear-programming algorithms for graph-based Markov decision processes. Frontiers in Artificial Intelligence and Applications, 141:590, 2006.
Carlos Guestrin, Daphne Koller, and Ronald Parr. Multiagent planning with factored MDPs. Advances in Neural Information Processing Systems, 14:1523–1530, 2001.
Carlos Guestrin, Daphne Koller, Ronald Parr, and Shobha Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research, 19:399–468, 2003.
Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic planning using decision diagrams. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 279–288, 1999.
Daphne Koller and Ronald Parr. Policy iteration for factored MDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 326–334, 2000.
Qiang Liu and Alexander Ihler. Variational algorithms for marginal MAP. In Uncertainty in Artificial Intelligence (UAI), 2011.
Qiang Liu and Alexander Ihler. Belief propagation for structured decision making. In Uncertainty in Artificial Intelligence (UAI), pages 523–532, August 2012.
A. Nath and P. Domingos. Efficient belief propagation for utility maximization and repeated inference. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
Nathalie Peyrard and Régis Sabbadin. Mean field approximation of the policy iteration algorithm for graph-based Markov decision processes. Frontiers in Artificial Intelligence and Applications, 141:595, 2006.
Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience, 2009.
Matthew Richardson and Pedro Domingos. Mining knowledge-sharing sites for viral marketing. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 61–70, 2002.
R. Sabbadin, N. Peyrard, and N. Forsell. A framework and a mean-field algorithm for the local control of spatial processes. International Journal of Approximate Reasoning, 53(1):66–86, 2012.
Ross D. Shachter. Model building with belief networks and influence diagrams. Advances in Decision Analysis: From Foundations to Applications, pages 177–201, 2007.
Olivier Sigaud, Olivier Buffet, et al. Markov Decision Processes in Artificial Intelligence. ISTE-John Wiley & Sons, 2010.
M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
Integrated Non-Factorized Variational Inference
Shaobo Han
Duke University
Durham, NC 27708
[email protected]
Xuejun Liao
Duke University
Durham, NC 27708
[email protected]
Lawrence Carin
Duke University
Durham, NC 27708
[email protected]
Abstract
We present a non-factorized variational method for full posterior inference in
Bayesian hierarchical models, with the goal of capturing the posterior variable dependencies via efficient and possibly parallel computation. Our approach unifies
the integrated nested Laplace approximation (INLA) under the variational framework. The proposed method is applicable in more challenging scenarios than typically assumed by INLA, such as Bayesian Lasso, which is characterized by the
non-differentiability of the ℓ1 norm arising from independent Laplace priors. We
derive an upper bound for the Kullback-Leibler divergence, which yields a fast
closed-form solution via decoupled optimization. Our method is a reliable analytic alternative to Markov chain Monte Carlo (MCMC), and it results in a tighter
evidence lower bound than that of mean-field variational Bayes (VB) method.
1 Introduction
Markov chain Monte Carlo (MCMC) methods [1] have been dominant tools for posterior analysis in
Bayesian inference. Although MCMC can provide numerical representations of the exact posterior,
they usually require intensive runs and are therefore time consuming. Moreover, assessment of
a chain's convergence is a well-known challenge [2]. There have been many efforts dedicated to
developing deterministic alternatives, including the Laplace approximation [3], variational methods
[4], and expectation propagation (EP) [5]. These methods each have their merits and drawbacks [6].
More recently, the integrated nested Laplace approximation (INLA) [7] has emerged as an encouraging method for full posterior inference, which achieves computational accuracy and speed by taking
advantage of a (typically) low-dimensional hyper-parameter space, to perform efficient numerical
integration and parallel computation on a discrete grid. However, the Gaussian assumption for the
latent process prevents INLA from being applied to more general models outside of the family of
latent Gaussian models (LGMs).
In the machine learning community, variational inference has received significant use as an efficient
alternative to MCMC. It is also attractive because it provides a closed-form lower bound to the
model evidence. An active area of research has been focused on developing more efficient and
accurate variational inference algorithms, for example, collapsed inference [8, 9], non-conjugate
models [10, 11], multimodal posteriors [12], and fast convergent methods [13, 14].
The goal of this paper is to develop a reliable and efficient deterministic inference method, to both
achieve the accuracy of MCMC and retain its inferential flexibility. We present a promising variational inference method without requiring the widely used factorized approximation to the posterior.
Inspired by INLA, we propose a hybrid continuous-discrete variational approximation, which enables us to preserve full posterior dependencies and is therefore more accurate than the mean-field
variational Bayes (VB) method [15]. The continuous variational approximation is flexible enough
for various kinds of latent fields, which makes our method applicable to more general settings than
assumed by INLA. The discretization of the low-dimensional hyper-parameter space can overcome
the potential non-conjugacy and multimodal posterior problems in variational inference.
2 Integrated Non-Factorized Variational Bayesian Inference
Consider a general Bayesian hierarchical model with observation y, latent variables x, and hyperparameters θ. The exact joint posterior p(x, θ|y) = p(y, x, θ)/p(y) can be difficult to evaluate, since usually the normalization p(y) = ∫∫ p(y, x, θ) dx dθ is intractable and numerical integration of x is too expensive.
To address this problem, we find a variational approximation to the exact posterior by minimizing the Kullback-Leibler (KL) divergence KL(q(x, θ|y) || p(x, θ|y)). Applying Jensen's inequality to the log-marginal data likelihood, one obtains

    ln p(y) = ln ∫∫ q(x, θ|y) [p(y, x, θ)/q(x, θ|y)] dx dθ ≥ ∫∫ q(x, θ|y) ln [p(y, x, θ)/q(x, θ|y)] dx dθ := L,    (1)
which holds for any proposed approximating distribution q(x, θ|y). L is termed the evidence lower bound (ELBO) [4]. The gap in Jensen's inequality is exactly the KL divergence. Therefore, minimizing the Kullback-Leibler (KL) divergence is equivalent to maximizing the ELBO.
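As a quick numerical sanity check of (1), on a toy discrete model of our own construction (not from the paper), the ELBO never exceeds the log-evidence, with equality at the exact posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint p(y, x, theta) on a small discrete grid: 4 values of x, 3 of theta,
# for a single fixed observation y.
p_joint = rng.random((4, 3))          # unnormalized p(y, x, theta)
p_y = p_joint.sum()                   # model evidence p(y)

# Any normalized variational distribution q(x, theta | y):
q = rng.random((4, 3))
q /= q.sum()

# ELBO L = sum q * log(p_joint / q) <= log p(y), with equality iff q is the
# exact posterior p(x, theta | y) = p_joint / p(y)  (Jensen's inequality).
elbo = np.sum(q * np.log(p_joint / q))
assert elbo <= np.log(p_y) + 1e-12

q_exact = p_joint / p_y
elbo_exact = np.sum(q_exact * np.log(p_joint / q_exact))
assert np.isclose(elbo_exact, np.log(p_y))
```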
To make the variational problem tractable, the variational distribution q(x, θ|y) is commonly required to take a restricted form. For example, the mean-field variational Bayes (VB) method assumes that the distribution factorizes into a product of marginals [15], q(x, θ|y) = q(x)q(θ), which ignores the posterior dependencies among different latent variables (including hyperparameters) and therefore impairs the accuracy of the approximate posterior distribution.
2.1 Hybrid Continuous and Discrete Variational Approximations
We consider a non-factorized approximation to the posterior, q(x, θ|y) = q(x|y, θ)q(θ|y), to preserve the posterior dependency structure. Unfortunately, this generally leads to a nontrivial optimization problem,

    q*(x, θ|y) = argmin_{q(x,θ|y)} KL(q(x, θ|y) || p(x, θ|y))
               = argmin_{q(x,θ|y)} ∫∫ q(x, θ|y) ln [q(x, θ|y)/p(x, θ|y)] dx dθ
               = argmin_{q(x|y,θ), q(θ|y)} ∫ q(θ|y) [ ∫ q(x|θ, y) ln (q(x|θ, y)/p(x, θ|y)) dx + ln q(θ|y) ] dθ.    (2)
We propose a hybrid continuous-discrete variational distribution q(x|y, θ) q_d(θ|y), where q_d(θ|y) is a finite mixture of Dirac-delta distributions, q_d(θ|y) = Σ_k ω_k δ_{θ_k}(θ) with ω_k = q_d(θ_k|y) and Σ_k ω_k = 1. Clearly, q_d(θ|y) is an approximation of q(θ|y) obtained by discretizing the continuous (typically) low-dimensional parameter space of θ using a grid G with finitely many grid points[1]. One can always reduce the discretization error by increasing the number of points in G. To obtain a useful discretization at a manageable number of grid points, the dimension of θ cannot be too large; this is also the same assumption in INLA [7], but we remove here the Gaussian prior assumption of INLA on the latent effects x.
The hybrid variational approximation is found by minimizing the KL divergence, i.e.,

    KL(q(x, θ|y) || p(x, θ|y)) = Σ_k q_d(θ_k|y) [ ∫ q(x|θ_k, y) ln (q(x|y, θ_k)/p(x, θ_k|y)) dx + ln q_d(θ_k|y) ],    (3)

which leads to the approximate marginal posterior

    q(x|y) = Σ_k q(x|y, θ_k) q_d(θ_k|y).    (4)

As will be clearer shortly, the problem in (3) can be much easier to solve than that in (2).
We give the name integrated non-factorized variational Bayes (INF-VB) to the method of approximating p(x, θ|y) with q(x|y, θ) q_d(θ|y) by solving the optimization problem in (3). The use of q_d(θ) is equivalent to numerical integration, which is a key idea of INLA [7]; see Section 2.3 for details. It has also been used in sampling methods when samples are not easy to obtain directly [16]. Here we use this idea in variational inference to overcome the potential non-conjugacy and multimodal posterior problems in θ.
2.2 Variational Optimization
The proposed INF-VB method consists of two algorithmic steps:
[1] The grid points need not be uniformly spaced; one may put more grid points in potentially high-mass regions if credible prior information is available.
• Step 1: Solving multiple independent optimization problems, each for a grid point in G, to obtain the optimal q(x|y, θ_k), ∀θ_k ∈ G, i.e.,

    q*(x|y, θ_k) = argmin_{q(x|y,θ_k)} Σ_k q_d(θ_k|y) [ ∫ q(x|θ_k, y) ln (q(x|y, θ_k)/p(x, θ_k|y)) dx + ln q_d(θ_k|y) ]
                 = argmin_{q(x|y,θ_k)} ∫ q(x|θ_k, y) ln (q(x|y, θ_k)/p(x|y, θ_k)) dx
                 = argmin_{q(x|y,θ_k)} KL(q(x|y, θ_k) || p(x|y, θ_k)).    (5)
The optimal variational distribution q*(x|y, θ_k) is the exact posterior p(x|y, θ_k). In case it is not available, we may further constrain q(x|y, θ_k) to a parametric form, examples including: (i)
multivariate Gaussian [17], if the posterior asymptotic normality holds; (ii) skew-normal densities
[6, 18]; or (iii) an inducing factorization assumption (see Ch.10.2.5 in [19]), if the latent variables
x are conditionally independent or their dependencies are negligible.
• Step 2: Given {q*(x|y, θ_k) : θ_k ∈ G} obtained in Step 1, one solves

    {q_d*(θ_k|y)} = argmin_{q_d(θ_k|y)} Σ_k q_d(θ_k|y) [ ∫ q*(x|θ_k, y) ln (q*(x|y, θ_k)/p(x, θ_k|y)) dx + ln q_d(θ_k|y) ],

where the bracketed term is denoted l(q_d(θ_k|y)) = l(ω_k). Setting ∂l(ω_k)/∂ω_k = 0 (also ∂²l(ω_k)/∂ω_k² > 0), which is solved to give

    q_d*(θ_k|y) ∝ exp( ∫ q*(x|y, θ_k) ln [p(x, θ_k|y)/q*(x|y, θ_k)] dx ).    (6)
Note that q_d(θ|y) is evaluated at a grid of points θ_k ∈ G; it needs to be known only up to a multiplicative constant, which can be identified from the normalization constraint Σ_k q_d*(θ_k|y) = 1. The integral in (6) can be analytically evaluated in the application considered in Section 3.
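A minimal sketch of the two-step procedure on a toy conjugate model (our own example, not from the paper; with the exact conditional posterior and a flat prior over the grid, the weights in (6) reduce to normalized marginal likelihoods on the grid):

```python
import numpy as np

# Toy conjugate model: theta > 0 on a grid, x | theta ~ N(0, theta),
# y | x ~ N(x, 1); q(x | y, theta_k) is the exact Gaussian conditional posterior.
y = 1.3
thetas = np.linspace(0.1, 5.0, 50)          # grid G (equally spaced here)

# Step 1 (exact here): conditional posterior x | y, theta_k is N(m_k, v_k).
v = thetas / (thetas + 1.0)
m = v * y

# log p(y | theta_k) for the marginal y ~ N(0, theta + 1).
logZ = -0.5 * np.log(2 * np.pi * (thetas + 1.0)) - 0.5 * y**2 / (thetas + 1.0)

# Step 2, Eq. (6): with the exact conditional, the weights are proportional to
# p(y | theta_k) (times the prior, flat here), identified by normalization.
w = np.exp(logZ - logZ.max())
w /= w.sum()

# Eq. (4): the marginal posterior q(x | y) is the finite Gaussian mixture
# sum_k N(x; m_k, v_k) w_k; for example, its mean is:
post_mean = np.sum(w * m)
assert np.isclose(w.sum(), 1.0)
```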
2.3 Links between INF-VB and INLA
The INF-VB is a variational extension of the integrated nested Laplace approximations (INLA) [7], a deterministic Bayesian inference method for latent Gaussian models (LGMs), to the case when p(x|θ) exhibits strong non-Gaussianity and hence p(θ|y) may not be approximated accurately by Laplace's method of integration [20]. To see the connection, we briefly review the three computation steps of INLA and compare them with INF-VB below:
1. Based on the Laplace approximation [3], INLA seeks a Gaussian distribution q_G(x|y, θ_k) = N(x; x*(θ_k), H(x*(θ_k))⁻¹), ∀θ_k ∈ G, that captures most of the probabilistic mass locally, where x*(θ_k) = argmax_x p(x|y, θ_k) is the posterior mode, and H(x*(θ_k)) is the Hessian matrix of the log posterior evaluated at the mode. By contrast, INF-VB with the Gaussian parametric constraint on q*(x|y, θ_k) provides a global variational Gaussian approximation q_VG(x|y, θ_k) in the sense that the conditions of the Laplace approximation hold on average [17]. As we will see next, the averaging operator plays a crucial role in handling the non-differentiable ℓ1 norm arising from the double-exponential priors.
2. INLA computes the marginal posteriors of θ based on Laplace's method of integration [20],

    q_LA(θ|y) ∝ [ p(x, θ|y)/q(x|y, θ) ] |_{x = x*(θ)}.    (7)

The quality of this approximation depends on the accuracy of q(x|y, θ). When q(x|y, θ) = p(x|y, θ), one has q_LA(θ|y) equal to p(θ|y), according to the Bayes rule. It has been shown in [7] that (7) is accurate enough for latent Gaussian models with q_G(x|y, θ). Alternatively, the variational optimal posterior q_d*(θ|y) from INF-VB (6) can be derived as a lower bound of the true posterior p(θ|y) by Jensen's inequality:

    ln p(θ|y) = ln ∫ [ p(x, θ|y)/q(x|y, θ) ] q(x|y, θ) dx ≥ ∫ ln [ p(x, θ|y)/q(x|y, θ) ] q(x|y, θ) dx = ln q_d*(θ|y).    (8)

Its optimality justifications in Section 2.2 also explain the often observed empirical successes of hyperparameter selection based on the ELBO of ln p(y|θ) [13], when the first level of Bayesian inference is performed, i.e., only the conditional posterior q(x|y, θ) with fixed θ is of interest. In Section 4 we compare the accuracies of both (6) and (7) for hyperparameter learning.
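The claim that the Laplace-style ratio recovers p(θ|y) exactly whenever q(x|y, θ) = p(x|y, θ) can be checked numerically on a toy conjugate model (our own construction):

```python
import math

# Toy conjugate model: x | theta ~ N(0, theta), y | x ~ N(x, 1).
def log_norm(z, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - 0.5 * (z - mean) ** 2 / var

y = 0.7
ratios = []
for theta in [0.5, 1.0, 2.0, 4.0]:
    v = theta / (theta + 1.0)          # conditional posterior variance
    x_star = v * y                     # conditional posterior mode
    # log p(x, y | theta) (flat prior over the theta values considered)
    log_joint = log_norm(x_star, 0.0, theta) + log_norm(y, x_star, 1.0)
    # log p(x | y, theta), exact for this conjugate model
    log_cond = log_norm(x_star, v * y, v)
    # Laplace-style ratio: with the exact conditional it equals p(y | theta)
    ratios.append((log_joint - log_cond) - log_norm(y, 0.0, theta + 1.0))

# The ratio matches the exact marginal likelihood for every theta, so the
# resulting approximation recovers p(theta | y) exactly in this case.
assert max(ratios) - min(ratios) < 1e-9
```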
3. INLA obtains the marginal distributions of interest, e.g., q(x|y), via numerically integrating out θ: q(x|y) = Σ_k q(x|y, θ_k) q(θ_k|y) Δ_k with area weights Δ_k. In INF-VB, we have q_d(θ|y) = Σ_k ω_k δ_{θ_k}(θ). Letting ω_k = q(θ_k|y) Δ_k, we immediately have

    q(x|y) = ∫ q(x|y, θ) q_d(θ|y) dθ = Σ_k q(x|y, θ_k) q_d(θ_k|y) = Σ_k q(x|y, θ_k) q(θ_k|y) Δ_k.    (9)

This Dirac-delta mixture interpretation of numerical integration also enables us to quantify the accuracy of the INLA approximation q_G(x|y, θ) q_LA(θ|y) using the KL divergence to p(x, θ|y) under the variational framework.
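A small sketch of this Riemann-sum view (the density and conditional mean below are illustrative choices of our own, not from the paper):

```python
import numpy as np

# With omega_k = q(theta_k | y) * Delta_k, the Dirac-delta mixture reproduces
# numerical integration over theta. Toy choice: q(theta | y) = N(theta; 1, 0.2^2)
# and conditional mean E[x | y, theta] = theta / (1 + theta).
def q_theta(t):
    return np.exp(-0.5 * ((t - 1.0) / 0.2) ** 2) / (0.2 * np.sqrt(2 * np.pi))

def cond_mean(t):
    return t / (1.0 + t)

for n_grid in (50, 500):
    t = np.linspace(0.0, 2.0, n_grid)
    delta = t[1] - t[0]                  # area weights Delta_k
    w = q_theta(t) * delta               # omega_k
    assert abs(w.sum() - 1.0) < 1e-3     # the weights integrate q(theta|y) to ~1
    mix_mean = np.sum(w * cond_mean(t))  # E[x | y] from the Dirac-delta mixture

# Even a modest grid integrates the (low-dimensional) hyperparameter density
# essentially exactly, which is what makes the discretization cheap.
```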
In contrast to INLA, INF-VB provides q(x|y, θ) and q_d(θ|y), both optimal in the sense of minimum Kullback-Leibler divergence within the proposed hybrid distribution family. In this paper we focus on the full posterior inference of Bayesian Lasso [21], where the local Laplace approximation in INLA cannot be applied, as the non-differentiability of the ℓ1 norm prevents one from computing the Hessian matrix. Besides, if we do not exploit the scale mixture of normals representation [22] of Laplace priors (i.e., no data augmentation), we are actually dealing with a non-conjugate variational inference problem in Bayesian Lasso.
3 Application to Bayesian Lasso

Consider the Bayesian Lasso regression model [21], y = Φx + e, where Φ ∈ R^{n×p} is the design matrix containing predictors, y ∈ R^n are responses[2], and e ∈ R^n contains independent zero-mean Gaussian noise e ~ N(e; 0, σ²I_n). Following [21] we assume[3]

    x_j | σ², λ ~ (λ/(2√σ²)) exp(−(λ/√σ²)|x_j|),  σ² ~ InvGamma(σ²; a, b),  λ² ~ Gamma(λ²; r, s).
While the Lasso estimates [23] provide only the posterior modes of the regression parameters x ∈ R^p, Bayesian Lasso [21] provides the complete posterior distribution p(x, θ|y), from which one may obtain whatever statistical properties are desired of x and θ, including the posterior mode, mean, median, and credible intervals.
Since in our approach the variational Gaussian approximation is performed separately (see Section 3.1) for each hyperparameter combination {λ, σ²} considered, the efficiency of approximating p(x|y, θ) is particularly important. The upper bound of the KL divergence derived in Section 3.2 provides an approximate closed-form solution that is often accurate enough, or requires a small number of gradient iterations to converge to optimality. The tightness of the upper bound is analyzed using spectral-norm bounds (see Section 3.3), which also provide insights on the connection between the deterministic Lasso [23] and the Bayesian Lasso [21].
3.1 Variational Gaussian Approximation
The conditional distribution of y and x given θ is

    p(y, x|θ) = (λ^p/(2σ)^p) (2πσ²)^{−n/2} exp{ −‖y − Φx‖²/(2σ²) − (λ/σ)‖x‖₁ }.    (10)
The postulated approximation, $q(x \mid \lambda, y) = \mathcal{N}(x; \mu, D)$, is a multivariate Gaussian density (dropping dependencies of the variational parameters $(\mu, D)$ on $(\lambda, y)$ for brevity), whose parameters $(\mu, D)$
are found by minimizing the KL divergence to $p(x \mid \lambda, y)$,
$$
\begin{aligned}
g(\mu, D) &\stackrel{\mathrm{Def.}}{=} \mathrm{KL}(q(x; \mu, D)\,\|\,p(x \mid y, \lambda)) = \int q(x; \mu, D) \ln \frac{q(x; \mu, D)}{p(x \mid y, \lambda)}\, dx \\
&= \int q(x; \mu, D) \ln \frac{q(x; \mu, D)}{p(y, x \mid \lambda)}\, dx + \ln p(y \mid \lambda) \\
&= -\frac{1}{2}\ln|D| + \frac{\|y - \Phi\mu\|^2 + \mathrm{tr}(\Phi^\top\Phi D)}{2\sigma^2} + \frac{\lambda}{\sigma}\,\mathbb{E}_q(\|x\|_1) + \ln \frac{p(y \mid \lambda)}{\varphi(\sigma^2, \lambda)}, \qquad (11) \\
\mathbb{E}_q(\|x\|_1) &= \sum_{j=1}^p \big[\mu_j - 2\mu_j \Phi(h_j) + 2 d_j \phi(h_j)\big], \qquad h_j = -\mu_j/d_j, \qquad d_j = D_{jj}^{1/2},
\end{aligned}
$$
where $\varphi(\sigma^2, \lambda) = (4\pi e \sigma^2 \lambda^{-2})^{p/2} (2\pi\sigma^2)^{-n/2}$, and $\Phi(\cdot)$ and $\phi(\cdot)$ correspond to the standard normal
cumulative distribution function and probability density function, respectively. The expectation is taken
with respect to $q(x; \mu, D)$. Define $D = CC^\top$, where $C$ is the Cholesky factorization of the covariance matrix $D$. Since $g(\mu, D)$ is convex in the parameter space $(\mu, C)$, a globally optimal variational
Gaussian approximation $q^\star(x \mid y, \lambda)$ is guaranteed, which achieves the minimum KL divergence to
$p(x \mid \lambda, y)$ within the family of multivariate Gaussian densities specified [13]$^4$.
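The expectation $\mathbb{E}_q(\|x\|_1)$ above has a simple closed form; a small numerical sketch (illustrative code, assuming only the formula with $h_j = -\mu_j/d_j$) and a sanity check against the known value $\mathbb{E}|x| = \sqrt{2/\pi}$ for a standard normal variable:

```python
import numpy as np
from scipy.stats import norm

def expected_l1_norm(mu, d):
    """E_q ||x||_1 for a Gaussian q with means mu_j and d_j = sqrt(D_jj),
    using E|x_j| = mu_j - 2*mu_j*Phi(h_j) + 2*d_j*phi(h_j), h_j = -mu_j/d_j."""
    mu, d = np.asarray(mu, float), np.asarray(d, float)
    h = -mu / d
    return np.sum(mu - 2 * mu * norm.cdf(h) + 2 * d * norm.pdf(h))
```

As $d_j \to 0$ the expression collapses to $|\mu_j|$, which is the Jensen lower bound used later.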
$^2$ We assume that both $y$ and the columns of $\Phi$ have been mean-centered to remove the intercept term.
$^3$ [21] suggested using scaled double-exponential priors under which they showed that $p(x, \sigma^2 \mid y, \lambda)$ is unimodal; further, the unimodality helps to accelerate convergence of the data-augmentation Gibbs sampler and
makes the posterior mode more meaningful. A Gamma prior is put on $\lambda^2$ for conjugacy.
$^4$ Code for variational Gaussian approximation is available at mloss.org/software/view/308
As a first step, one finds $q^\star(x \mid y, \lambda)$ using gradient-based procedures independently for each hyperparameter combination $\{\lambda, \sigma^2\}$. Second, $q^\star(\lambda \mid y)$ can be evaluated analytically using either (6) or
(7); both will yield a finite mixture of Gaussian distributions for the marginal posterior $q(x \mid y)$ via numerical integration, which is highly efficient since we only have two hyperparameters in Bayesian
Lasso. Finally, the evidence lower bound (ELBO) in (1) can also be evaluated analytically after
simple algebra. We will show in Section 4.3 a comparison with the mean-field variational Bayesian
(VB) approach, derived based on a scale mixture of normals representation [22] of the Laplace prior.
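To make the second step concrete: once $q^\star(x \mid y, \lambda_k) = \mathcal{N}(\mu_k, D_k)$ and grid weights $w_k \propto q^\star(\lambda_k \mid y)$ are available, moments of the resulting Gaussian-mixture marginal follow from standard mixture identities. A hypothetical sketch (names and shapes are illustrative, not from the paper):

```python
import numpy as np

def mixture_moments(weights, means, covs):
    """Mean and covariance of a finite Gaussian mixture
    q(x|y) = sum_k w_k N(x; mu_k, D_k), with the weights normalized to one."""
    w = np.asarray(weights, float)
    w = w / w.sum()                      # normalize grid weights
    means = np.asarray(means, float)     # shape (K, p)
    covs = np.asarray(covs, float)       # shape (K, p, p)
    mean = w @ means
    centered = means - mean
    # law of total covariance: within-component plus between-component parts
    cov = np.einsum('k,kij->ij', w, covs) \
        + np.einsum('k,ki,kj->ij', w, centered, centered)
    return mean, cov
```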
3.2 Upper Bounds of the KL Divergence
We provide an approximate solution $(\hat\mu, \hat D)$ via minimizing an upper bound of the KL divergence (11).
This solution solves a Lasso problem in $\mu$, and has a closed-form expression for $D$, making it
computationally efficient. In practice, it can serve as an initialization for gradient procedures.
Lemma 3.1. (Triangle Inequality) $\mathbb{E}_q\|x\|_1 \le \mathbb{E}_q\|x - \mu\|_1 + \|\mu\|_1$, where $\mathbb{E}_q\|x - \mu\|_1 = \sqrt{2/\pi}\,\sum_{j=1}^p d_j$, with the expectation taken with respect to $q(x; \mu, D)$.

Lemma 3.2. For any $\{d_j \ge 0\}_{j=1}^p$, it holds that $\sum_{j=1}^p d_j \le \sqrt{p \sum_{j=1}^p d_j^2}$.

Lemma 3.3. [24] For any $A \in \mathbb{S}^p_{++}$, $\sqrt{\mathrm{tr}(A^2)} \le \mathrm{tr}(A) \le \sqrt{p\,\mathrm{tr}(A^2)}$.

Theorem 3.1. (Upper and Lower bound) For any $A, D \in \mathbb{S}^p_{++}$ with $A = \sqrt{D}$ and $d_j = D_{jj}^{1/2}$, it holds that
$$\frac{1}{\sqrt{p}}\,\mathrm{tr}(A) \le \sum_{j=1}^p d_j \le \sqrt{p}\,\mathrm{tr}(A).$$
Applying Lemma 3.1 and Theorem 3.1 in (11), one obtains an upper bound for the KL divergence,
$$
f(\mu, D) = \underbrace{\frac{\|y - \Phi\mu\|_2^2}{2\sigma^2} + \frac{\lambda}{\sigma}\|\mu\|_1}_{f_1(\mu)}
+ \underbrace{-\frac{1}{2}\ln|D| + \frac{\mathrm{tr}(\Phi^\top\Phi D)}{2\sigma^2} + \frac{\lambda}{\sigma}\sqrt{\frac{2p}{\pi}}\,\mathrm{tr}(\sqrt{D})}_{f_2(D)}
+ \ln \frac{p(y \mid \lambda)}{\varphi(\sigma^2, \lambda)}
\;\ge\; g(\mu, D) = \mathrm{KL}(q(x; \mu, D)\,\|\,p(x \mid y, \lambda)). \qquad (12)
$$
In the problem of minimizing the KL divergence $g(\mu, CC^\top)$, one needs to iteratively update $\mu$ and
$C$, since they are coupled. However, the upper bound $f(\mu, D)$ decouples into two additive terms:
$f_1$ is a function of $\mu$ while $f_2$ is a function of $D$, which greatly simplifies the minimization.

• The minimization of $f_1(\mu)$ is a convex Lasso problem. Using path-following algorithms (e.g., a
modified least angle regression algorithm (LARS) [25]), one can efficiently compute the entire
solution path of Lasso estimates as a function of $\lambda_0 = 2\sigma\lambda$ in one shot. Globally optimal solutions
$\hat\mu(\lambda_k)$ on each grid point $\lambda_k \in \mathcal{G}$ can be recovered using the piece-wise linear property.
• The function $f_2(D)$ is convex in the parameter space $A = \sqrt{D}$, whose minimizer is in closed form and can be found by setting the gradient to zero and solving the resulting equation,
$$
\nabla_A f_2 = -A^{-1} + \frac{\Phi^\top\Phi A}{\sigma^2} + \frac{\lambda}{\sigma}\sqrt{\frac{2p}{\pi}}\, I = 0,
\qquad
A = \left[\sqrt{\frac{\lambda^2 p}{2\pi\sigma^2}}\, I + \sqrt{\frac{\lambda^2 p}{2\pi\sigma^2}\, I + \frac{\Phi^\top\Phi}{\sigma^2}}\,\right]^{-1}. \qquad (13)
$$
We have $\hat D = \hat A^2$, which is guaranteed to be a positive definite matrix. Note that the global
optima $\hat D(\lambda_k)$ for each grid point $\lambda_k \in \mathcal{G}$ have the same eigenvectors as the Gram matrix $\Phi^\top\Phi$
and differ only in eigenvalues. For $j = 1, \ldots, p$, denote the eigenvalues of $D$ and $\Phi^\top\Phi$ as $\theta_j$ and
$\gamma_j$, respectively. By (13), we have $\theta_j^{-1/2} = \sqrt{\lambda^2 p/(2\pi\sigma^2)} + \sqrt{\lambda^2 p/(2\pi\sigma^2) + \gamma_j/\sigma^2}$. Therefore,
one can pre-compute the eigenvectors once, and only update the eigenvalues as a function of $\lambda_k$.
This will make the computation efficient both in time and memory.
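A hypothetical sketch of this eigenvalue-only computation, assuming the closed form in (13) reduces per eigenvalue of $A$ to $a_j = \big[\sqrt{\lambda^2 p/(2\pi\sigma^2)} + \sqrt{\lambda^2 p/(2\pi\sigma^2) + \gamma_j/\sigma^2}\big]^{-1}$ (the function name and interface are our own):

```python
import numpy as np

def d_hat(Phi, lam, sigma2):
    """Closed-form minimizer of f2: D = A^2, where A shares the eigenvectors V
    of Phi' Phi (eigenvalues gamma_j, precomputable once) and
    1/a_j = sqrt(lam^2 p/(2 pi sigma^2)) + sqrt(lam^2 p/(2 pi sigma^2) + gamma_j/sigma^2)."""
    p = Phi.shape[1]
    gamma, V = np.linalg.eigh(Phi.T @ Phi)
    c = np.sqrt(lam**2 * p / (2 * np.pi * sigma2))
    a = 1.0 / (c + np.sqrt(c**2 + gamma / sigma2))   # eigenvalues of A = sqrt(D)
    return V @ np.diag(a**2) @ V.T
```

The test below confirms that this $D$ makes the stated gradient $\nabla_A f_2$ vanish.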
The solutions $(\hat\mu, \hat D)$ which minimize the KL upper bound $f(\mu, D)$ in (12) achieve its global
optimum. Meanwhile, they are also accurate in the sense of the KL divergence $g(\hat\mu, \hat D)$ in (11), as we
will show next. A tightness analysis of the upper bound is also provided, using trace-norm bounds.
$^5$ Since $D$ is positive definite, it has a unique symmetric square root $A = \sqrt{D}$, which can be obtained from $D$ by taking square roots of the eigenvalues.
3.3 Theoretical Analysis
Theorem 3.2. (KL Divergence Upper Bound) Let $(\hat\mu, \hat D)$ be the minimizer of the KL upper
bound (12), i.e., $\hat\mu$ solves the Lasso and $\hat D$ is given in (13). Then
$$
g(\hat\mu, \hat D) \le \min_{\mu, D} f(\mu, D) = f_1(\hat\mu) + f_2(\hat D) + \ln \frac{p(y \mid \lambda)}{\varphi(\sigma^2, \lambda)}, \qquad (14)
$$
where $f_1(\hat\mu) = \min_\mu \frac{\|y - \Phi\mu\|^2}{2\sigma^2} + \frac{\lambda}{\sigma}\|\mu\|_1$ and $f_2(\hat D) = \sum_j \ln \theta_j^{-1/2} + \sum_j \frac{\gamma_j \theta_j}{2\sigma^2} + \sum_j \sqrt{\frac{2\lambda^2 p}{\pi\sigma^2}}\,\theta_j^{1/2}$.

Thus the KL divergence for $(\hat\mu, \hat D)$ is upper bounded by the minimum achievable $\ell_1$-penalized least
squares error $\epsilon_1 = f_1(\hat\mu)$ and the terms in $f_2(\hat D)$, which are ultimately related to the eigenvalues $\{\gamma_j\}$
($j = 1, \ldots, p$) of the Gram matrix $\Phi^\top\Phi$.
Let $(\mu^\star, D^\star)$ be the minimizer of the original KL divergence $g(\mu, D)$, and let $g_1(\mu \mid D)$ collect the
terms of $g(\mu, D)$ that are related to $\mu$. Then the Bayesian posterior mean obtained via VG, i.e.,
$$\mu^\star = \mathop{\mathrm{argmin}}_\mu\, g_1(\mu \mid D^\star) = \mathop{\mathrm{argmin}}_\mu\, \mathbb{E}_{q(x \mid y, \lambda)}\big[\|y - \Phi x\|_2^2 + 2\sigma\lambda\|x\|_1\big], \qquad (15)$$
is a counterpart of the deterministic Lasso [23], which appears naturally in the upper bound,
$$\hat\mu = \mathop{\mathrm{argmin}}_\mu\, f_1(\mu) = \mathop{\mathrm{argmin}}_\mu\, \|y - \Phi\mu\|_2^2 + 2\sigma\lambda\|\mu\|_1. \qquad (16)$$
Note that the Lasso solution cannot be found by gradient methods due to non-differentiability. By
taking the expectation, the objective function is smoothed around $0$ and thus differentiable. This
connection indicates that in VG for Bayesian Lasso, the conditions of the deterministic Lasso hold on
average, with respect to the variational distribution $q(x \mid y, \lambda)$, in the parameter space of $\mu$.
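Any Lasso routine can be used for (16); as a minimal self-contained sketch (ISTA proximal gradient, not the LARS path algorithm of [25] used in the paper):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(Phi, y, lam, sigma, n_iter=2000):
    """Minimize ||y - Phi mu||_2^2 + 2*sigma*lam*||mu||_1 by ISTA
    (equivalent to (1/2)||y - Phi mu||^2 + sigma*lam*||mu||_1 up to scaling)."""
    p = Phi.shape[1]
    L = np.linalg.norm(Phi.T @ Phi, 2)       # Lipschitz constant of the gradient
    mu = np.zeros(p)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ mu - y)        # gradient of (1/2)||y - Phi mu||^2
        mu = soft_threshold(mu - grad / L, sigma * lam / L)
    return mu
```

With an orthonormal design the solution reduces to soft-thresholding of $y$ at level $\sigma\lambda$, which gives a quick correctness check.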
The following theorem (proof sketches are in the Supplementary Material) provides quantitative
measures of the closeness of the upper bounds, $f_1(\mu)$ and $f(\mu, D)$, to their respective true counterparts.
Theorem 3.3. The tightness of $f_1(\mu)$ and $f(\mu, D)$ is given by
$$
g_1(\mu \mid D) - f_1(\mu) \le \frac{\mathrm{tr}(\Phi^\top\Phi D)}{2\sigma^2} + \frac{\lambda}{\sigma}\sqrt{\frac{2p}{\pi}}\,\mathrm{tr}(\sqrt{D}),
\qquad
f(\mu, D) - g(\mu, D) \le \frac{2\lambda}{\sigma}\sqrt{\frac{2p}{\pi}}\,\mathrm{tr}(\sqrt{D}), \qquad (17)
$$
which holds for any $(\mu, D) \in \mathbb{R}^p \times \mathbb{S}^p_{++}$. Further assuming $g(\mu^\star, D^\star) = \epsilon_2$ (the minimum achievable
KL divergence, or information gap), we have
$$
f_1(\mu^\star) \le g_1(\mu^\star) \le g_1(\hat\mu) \le \epsilon_1 + \frac{\mathrm{tr}(\Phi^\top\Phi D^\star)}{2\sigma^2} + \frac{\lambda}{\sigma}\sqrt{\frac{2p}{\pi}}\,\mathrm{tr}(\sqrt{D^\star}), \qquad (18a)
$$
$$
g(\hat\mu, \hat D) \le f(\hat\mu, \hat D) \le f(\mu^\star, D^\star) \le \epsilon_2 + \frac{2\lambda}{\sigma}\sqrt{\frac{2p}{\pi}}\,\mathrm{tr}(\sqrt{D^\star}). \qquad (18b)
$$
4 Experiments
We consider long runs of MCMC$^6$ as reference solutions, and consider two types of INF-VB: INF-VB-1 calculates hyperparameter posteriors using (6), while INF-VB-2 uses (7) and evaluates it at
the posterior mode of $p(x \mid y, \lambda)$. We also compare INF-VB-1 and INF-VB-2 to VB, a mean-field
variational Bayes (VB) solution (see the Supplementary Material for update equations). The results
show that the INF-VB method is more accurate than VB, and is a promising alternative to MCMC
for Bayesian Lasso.
4.1 Synthetic Dataset
We compare the proposed INF-VB methods with VB and intensive MCMC runs, in terms of the joint
posterior $q(\lambda^2, \sigma^2 \mid y)$, the marginal posteriors of hyperparameters $q(\sigma^2 \mid y)$ and $q(\lambda^2 \mid y)$, and the
marginal posteriors of regression coefficients $q(x_j \mid y)$ (see Figure 1). The observations are generated
from $y_i = \phi_i^\top x + \epsilon_i$, $i = 1, \ldots, 600$, where $\phi_{ij}$ are drawn from an i.i.d. normal distribution$^7$
in which the pairwise correlation between the $j$th and the $k$th columns of $\Phi$ is $0.5^{|j-k|}$; we draw
$\epsilon_i \sim \mathcal{N}(0, \sigma^2)$, $x_j \mid \lambda, \sigma \sim \mathrm{Laplace}(\lambda/\sigma)$, $j = 1, \ldots, 300$, and set $\sigma^2 = 0.5$, $\lambda = 0.5$.
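A hypothetical generator for this synthetic setup (reading $\mathrm{Laplace}(\lambda/\sigma)$ as a rate, i.e., scale $\sigma/\lambda$, which is our assumption; function name and defaults are our own):

```python
import numpy as np

def make_synthetic(n=600, p=300, sigma2=0.5, lam=0.5, seed=0):
    """Correlated normal design with corr(col_j, col_k) = 0.5**|j-k|,
    Laplace-distributed coefficients, and Gaussian noise."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    C = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # AR(1)-type correlation
    Phi = rng.multivariate_normal(np.zeros(p), C, size=n)
    sigma = np.sqrt(sigma2)
    x = rng.laplace(scale=sigma / lam, size=p)       # rate lam/sigma => scale sigma/lam
    y = Phi @ x + rng.normal(0.0, sigma, size=n)
    return Phi, x, y
```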
$^6$ In all experiments shown here, we take intensive MCMC runs as the gold standard (with $5 \times 10^3$ burn-in iterations
and $5 \times 10^5$ samples collected). We use the data-augmentation Gibbs sampler introduced in [21]. Ground truth for
the latent variables and hyperparameters is also compared to whenever possible. The hyperparameters for the Gamma
distributions are set to $a = b = r = s = 0.001$ throughout all these experiments. If not mentioned, the grid size
is $50 \times 50$, which is uniformly created around the ordinary least squares (OLS) estimates of the hyperparameters.
$^7$ The responses $y$ and the columns of $\Phi$ are centered; the columns of $\Phi$ are also scaled to have unit variance.
[Figure 1 appears here; each panel compares MCMC, INF-VB-1, INF-VB-2, VB, and the ground truth.]
Figure 1: Contour plots for joint posteriors of hyperparameters $q(\sigma^2, \lambda^2 \mid y)$: (a)-(d); marginal posteriors of hyperparameters and coefficients: (e) $q(\sigma^2 \mid y)$, (f) $q(\lambda^2 \mid y)$; (g) $q(x_1 \mid y)$, (h) $q(x_2 \mid y)$.
As seen in Figure 1(a)-(d), both MCMC and INF-VB preserve the strong posterior dependence among
hyperparameters, while mean-field VB does not. While mean-field VB approximates the posterior
mode well, the posterior variance can be (sometimes severely) underestimated; see Figure 1(e), (f).
Since we have analytically approximated $p(x \mid y)$ by a finite mixture of normal distributions $q(x \mid y, \lambda)$
with mixing weights $q(\lambda \mid y)$, the posterior marginals for the latent variables, $q(x_j \mid y)$, are easily
accessible from this analytical representation. Perhaps surprisingly, both INF-VB and mean-field
VB provide quite accurate marginal distributions $q(x_j \mid y)$; see Figure 1(g)-(h) for examples. The
differences in the tails of $q(\lambda \mid y)$ between INF-VB and mean-field VB yield negligible differences
in the marginal distributions $q(x_j \mid y)$, when $\lambda$ is integrated out.
4.2 Diabetes Dataset
We consider the benchmark diabetes dataset [25] frequently used in previous studies of Bayesian
Lasso; see [21, 26], for example. The goal of this diagnostic study, as suggested in [25], is to
construct a linear regression model ($n = 442$, $p = 10$) to reveal the important determinants of
the response, and to provide interpretable results to guide disease progression. In Figure 2, we
show accurate marginal posteriors of hyperparameters $q(\sigma^2 \mid y)$ and $q(\lambda^2 \mid y)$ as well as marginals
of coefficients $q(x_j \mid y)$, $j = 1, \ldots, 10$, which indicate the relevance of each predictor. We also
compare them to the ordinary least squares (OLS) estimates.
[Figure 2 appears here; each panel compares MCMC, INF-VB-1, INF-VB-2, VB, and OLS. The ten predictors are age, sex, bmi, bp, tc, ldl, hdl, tch, ltg, and glu.]
Figure 2: Posterior marginals of hyperparameters: (a) $q(\sigma^2 \mid y)$ and (b) $q(\lambda^2 \mid y)$; posterior marginals
of coefficients: (c)-(l) $q(x_j \mid y)$ ($j = 1, \ldots, 10$).
4.3 Comparison: Accuracy and Speed
We quantitatively measure the quality of the approximate joint probability $q(x, \lambda \mid y)$ provided by our
non-factorized variational methods, and compare them to VB under factorization assumptions. The
KL divergence $\mathrm{KL}(q(x, \lambda \mid y)\,\|\,p(x, \lambda \mid y))$ is not directly available; instead, we compare the negative
evidence lower bound (1), which can be evaluated analytically in our case and differs from the KL
divergence only up to a constant. We also measure the computational time of the different algorithms
by elapsed time (seconds). In INF-VB, different grids of size $m \times m$ are considered, where
$m = 1, 5, 10, 30, 50$. We consider two real-world datasets: the above Diabetes dataset, and the
Prostate cancer dataset [27]. Here, INF-VB-3 and INF-VB-4 refer to the methods that use the
approximate solution in Section 3.2 with no gradient steps for $q(x \mid y, \lambda)$, and use (6) or (7) for
$q(\lambda \mid y)$.
[Figure 3 appears here; each panel compares MCMC, INF-VB-1 through INF-VB-4, and VB.]
Figure 3: Negative evidence lower bound (ELBO) and elapsed time (seconds) vs. grid size $m$; (a), (b) for the
Diabetes dataset ($n = 442$, $p = 10$); (c), (d) for the Prostate cancer dataset ($n = 97$, $p = 8$).
The quality of variational methods depends on the flexibility of the variational distributions. In INF-VB
for Bayesian Lasso, we constrain $q(x \mid y, \lambda)$ to be parametric while $q(\lambda \mid y)$ remains in free form. As seen
from Figure 3, the accuracy of the INF-VB method with a $1 \times 1$ grid is worse than mean-field VB, as it
corresponds to partial Bayesian learning of $q(x \mid y, \lambda)$ with a fixed $\lambda$. As the grid size increases,
the accuracies of INF-VB (even those without gradient steps) also increase and are in general of
better quality than mean-field VB, in the sense of the negative ELBO (KL divergence up to a constant).
The computational complexities of the INF-VB, mean-field VB, and MCMC methods are proportional
to the grid size, the number of iterations toward a local optimum, and the number of runs, respectively.
Since the computations on the grid are independent, INF-VB is highly parallelizable, which is an
important feature as more multiprocessor computational power becomes available. Besides, one
may further reduce its computational load by choosing grid points more economically, which will
be pursued in our next step. Even the small datasets we show here for illustration enjoy good speed-ups. A significant speed-up for INF-VB can be achieved via parallel computing.
5 Discussion
We have provided a flexible framework for approximate inference of the full posterior $p(x, \lambda \mid y)$
based on a hybrid continuous-discrete variational distribution, which is optimal in the sense of the
KL divergence. As a reliable and efficient alternative to MCMC, our method generalizes INLA to
non-Gaussian priors and VB to non-factorization settings. While we have used Bayesian Lasso as
an example, our inference method is generically applicable. One can also approximate $p(x \mid y, \lambda)$
using other methods, such as scalable variational methods [28], or improved EP [29].

The posterior $p(\lambda \mid y)$, which is analyzed based on a grid approximation, enables users to do both
model averaging and model selection, depending on specific purposes. The discretized approximation of $p(\lambda \mid y)$ overcomes the potential non-conjugacy or multimodality issues in the $\lambda$ space in
variational inference, and it also allows parallel implementation of the hybrid continuous-discrete
variational approximation, with the dominant computational load (approximating the continuous
high-dimensional $q(x \mid y, \lambda)$) distributed over the grid points, which is particularly important when
applying INF-VB to large-scale Bayesian inference. INF-VB has limitations: the number of hyperparameters $\lambda$ should be no more than 5 to 6, which is the same fundamental limitation as INLA.
Acknowledgments
The work reported here was supported in part by grants from ARO, DARPA, DOE, NGA and ONR.
References
[1] D. Gamerman and H. F. Lopes. Markov chain Monte Carlo: stochastic simulation for Bayesian inference. Chapman & Hall Texts in Statistical Science Series. Taylor & Francis, 2006.
[2] C. P. Robert and G. Casella. Monte Carlo Statistical Methods (Springer Texts in Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005.
[3] R. E. Kass and D. Steffey. Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). J. Am. Statist. Assoc., 84(407):717-726, 1989.
[4] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In Learning in graphical models, pages 105-161, Cambridge, MA, 1999. MIT Press.
[5] T. P. Minka. Expectation propagation for approximate Bayesian inference. In J. S. Breese and D. Koller, editors, Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362-369, 2001.
[6] J. T. Ormerod. Skew-normal variational approximations for Bayesian inference. Technical Report CRG-TR-93-1, School of Mathematics and Statistics, University of Sydney, 2011.
[7] H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. Journal of the Royal Statistical Society: Series B, 71(2):319-392, 2009.
[8] J. Hensman, M. Rattray, and N. D. Lawrence. Fast variational inference in the conjugate exponential family. In Advances in Neural Information Processing Systems, 2012.
[9] J. Foulds, L. Boyles, C. Dubois, P. Smyth, and M. Welling. Stochastic collapsed variational Bayesian inference for latent Dirichlet allocation. In 19th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2013.
[10] J. W. Paisley, D. M. Blei, and M. I. Jordan. Variational Bayesian inference with stochastic search. In International Conference on Machine Learning, 2012.
[11] C. Wang and D. M. Blei. Truncation-free online variational inference for Bayesian nonparametric models. In Advances in Neural Information Processing Systems, 2012.
[12] S. J. Gershman, M. D. Hoffman, and D. M. Blei. Nonparametric variational inference. In International Conference on Machine Learning, 2012.
[13] E. Challis and D. Barber. Concave Gaussian variational approximations for inference in large-scale Bayesian linear models. Journal of Machine Learning Research - Proceedings Track, 15:199-207, 2011.
[14] M. E. Khan, S. Mohamed, and K. P. Murphy. Fast Bayesian inference for non-conjugate Gaussian process regression. In Advances in Neural Information Processing Systems, 2012.
[15] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[16] C. Ritter and M. A. Tanner. Facilitating the Gibbs sampler: The Gibbs stopper and the griddy-Gibbs sampler. J. Am. Statist. Assoc., 87(419):861-868, 1992.
[17] M. Opper and C. Archambeau. The variational Gaussian approximation revisited. Neural Comput., 21(3):786-792, 2009.
[18] E. Challis and D. Barber. Affine independence variational inference. In Advances in Neural Information Processing Systems, 2012.
[19] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[20] L. Tierney and J. B. Kadane. Accurate approximations for posterior moments and marginal densities. J. Am. Statist. Assoc., 81:82-86, 1986.
[21] T. Park and G. Casella. The Bayesian Lasso. J. Am. Statist. Assoc., 103(482):681-686, 2008.
[22] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society, Series B, 36(1):99-102, 1974.
[23] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1996.
[24] G. H. Golub and C. F. Van Loan. Matrix Computations (Third Edition). Johns Hopkins University Press, 1996.
[25] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407-499, 2004.
[26] C. Hans. Bayesian Lasso regression. Biometrika, 96(4):835-845, 2009.
[27] T. Stamey, J. Kabalin, J. McNeal, I. Johnstone, F. Freha, E. Redwine, and N. Yang. Prostate specific antigen in the diagnosis and treatment of adenocarcinoma of the prostate. II. Radical prostatectomy treated patients. Journal of Urology, 16:1076-1083, 1989.
[28] M. W. Seeger and H. Nickisch. Large scale Bayesian inference and experimental design for sparse linear models. SIAM J. Imaging Sciences, 4(1):166-199, 2011.
[29] B. Cseke and T. Heskes. Approximate marginals in latent Gaussian models. J. Mach. Learn. Res., 12:417-454, 2011.
Global Solver and Its Efficient Approximation for
Variational Bayesian Low-rank Subspace Clustering
Shinichi Nakajima
Nikon Corporation
Tokyo, 140-8601 Japan
[email protected]
Akiko Takeda
The University of Tokyo
Tokyo, 113-8685 Japan
[email protected]
S. Derin Babacan
Google Inc.
Mountain View, CA 94043 USA
[email protected]
Masashi Sugiyama
Tokyo Institute of Technology
Tokyo 152-8552, Japan
[email protected]
Ichiro Takeuchi
Nagoya Institute of Technology
Aichi, 466-8555, Japan
[email protected]
Abstract
When a probabilistic model and its prior are given, Bayesian learning offers inference with automatic parameter tuning. However, Bayesian learning is often obstructed by computational difficulty: the rigorous Bayesian learning is intractable
in many models, and its variational Bayesian (VB) approximation is prone to suffer from local minima. In this paper, we overcome this difficulty for low-rank
subspace clustering (LRSC) by providing an exact global solver and its efficient
approximation. LRSC extracts a low-dimensional structure of data by embedding
samples into the union of low-dimensional subspaces, and its variational Bayesian
variant has shown good performance. We first prove a key property that the VB-LRSC model is highly redundant.
only a small number of unknown variables. Our exact global solver relies on another key property that the stationary condition of each subproblem consists of
a set of polynomial equations, which is solvable with the homotopy method. For
further computational efficiency, we also propose an efficient approximate variant,
of which the stationary condition can be written as a polynomial equation with a
single variable. Experimental results show the usefulness of our approach.
1 Introduction
Principal component analysis (PCA) is a widely-used classical technique for dimensionality reduction. This amounts to globally embedding the data points into a low-dimensional subspace. As more
flexible models, the sparse subspace clustering (SSC) [7, 20] and the low-rank subspace clustering
(LRSC) [8, 13, 15, 14] were proposed. By inducing sparsity and low-rankness, respectively, SSC
and LRSC locally embed the data into the union of subspaces. This paper discusses a probabilistic
model for LRSC.
As the classical PCA requires users to pre-determine the dimensionality of the subspace, LRSC requires manual parameter tuning for adjusting the low-rankness of the solution. On the other hand,
Bayesian formulations enable us to estimate all unknown parameters without manual parameter
tuning [5, 4, 17]. However, in many problems, the rigorous application of Bayesian inference is
computationally intractable. To overcome this difficulty, the variational Bayesian (VB) approximation was proposed [1]. A Bayesian formulation and its variational inference have been proposed for
LRSC [2]. There, to avoid computing the inverse of a prohibitively large matrix, the posterior is
approximated with the matrix-variate Gaussian (MVG) [11].
Typically, the VB solution is computed by the iterated conditional modes (ICM) algorithm [3, 5],
which is derived through the standard procedure for the VB approximation. Since the objective
function for the VB approximation is generally non-convex, the ICM algorithm is prone to suffer
from local minima. So far, the global solution for the VB approximation is not attainable except for PCA (or the fully-observed matrix factorization), for which the global VB solution has been analytically obtained [17]. This paper makes LRSC another exception with the proposed global VB solvers.
Two common factors make the global VB solution attainable in PCA and LRSC: first, a large portion
of the degrees of freedom that the VB approximation learns are irrelevant, and the optimization
problem can be decomposed into subproblems, each of which has only a small number of unknown
variables; second, the stationary condition of each subproblem is written as a polynomial system (a
set of polynomial equations).
Based on these facts, we propose an exact global VB solver (EGVBS) and an approximate global
VB solver (AGVBS). EGVBS finds all stationary points by solving the polynomial system with the
homotopy method [12, 10], and outputs the one giving the lowest free energy. Although EGVBS
solves subproblems with much less complexity than the original VB problem, it is still not efficient
enough for handling large-scale data. For further computational efficiency, we propose AGVBS, of
which the stationary condition is written as a polynomial equation with a single variable. Our experiments on artificial and benchmark datasets show that AGVBS provides a more accurate solution
than the MVG approximation [2] with much less computation time.
2 Background
In this section, we introduce the low-rank subspace clustering and its variational Bayesian formulation.
2.1 Subspace Clustering Methods
Let Y ∈ R^{L×M} = (y_1, …, y_M) be L-dimensional observed samples of size M. We generally denote a column vector of a matrix by a bold-faced small letter. We assume that each y_m is approximately expressed as a linear combination of M′ words in a dictionary, D = (d_1, …, d_{M′}), i.e.,
Y = DX + E,
where X ∈ R^{M′×M} is the unknown coefficient matrix, and E ∈ R^{L×M} is noise. In subspace clustering, the observed matrix Y itself is often used as the dictionary D. The convex formulation of sparse subspace clustering (SSC) [7, 20] is given by
min_X ‖Y − YX‖²_Fro + λ‖X‖₁   s.t.   diag(X) = 0,   (1)
where X ∈ R^{M×M} is the parameter to be estimated, and λ > 0 is a regularization coefficient to be manually tuned. ‖·‖_Fro and ‖·‖₁ are the Frobenius norm and the (element-wise) ℓ₁-norm of a matrix, respectively. The first term in Eq.(1) requires that each data point y_m be expressed as a linear combination of a small set of other data points {d_{m′}} for m′ ≠ m. The smallness of this set is enforced by the second (ℓ₁-regularization) term, and leads to the low-dimensionality of each obtained subspace. After the minimizer X̂ is obtained, abs(X̂) + abs(X̂ᵀ), where abs(·) takes the absolute value element-wise, is regarded as an affinity matrix, and a spectral clustering algorithm, such as normalized cuts [19], is applied to obtain clusters.
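As an illustration of the affinity-plus-spectral-clustering step above, the following numpy sketch partitions samples from a toy coefficient matrix. The helper name and the two-cluster Fiedler-vector split are our own simplifications, not the normalized-cuts algorithm of [19]:

```python
import numpy as np

def spectral_partition(X_hat):
    """Two-way spectral split from an SSC/LRSC coefficient matrix.

    Forms the symmetric affinity abs(X) + abs(X^T) described above, then
    splits the samples by the sign of the Fiedler vector of the graph
    Laplacian. A simplified stand-in for normalized cuts, valid for two
    clusters only.
    """
    W = np.abs(X_hat) + np.abs(X_hat.T)      # affinity matrix
    L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)         # cluster labels in {0, 1}

# Toy coefficients: two groups of mutually-expressing samples, plus a
# weak spurious cross-connection so the graph is connected.
X = np.zeros((6, 6))
X[:3, :3] = 0.5
X[3:, 3:] = 0.5
np.fill_diagonal(X, 0.0)                     # diag(X) = 0, as in Eq.(1)
X[0, 3] = X[3, 0] = 0.01
labels = spectral_partition(X)
```

With the weak cross-link, the Fiedler vector takes opposite signs on the two groups, so the first three samples and the last three samples land in different clusters.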
In low-rank subspace clustering (LRSC) or low-rank representation [8, 13, 15, 14], low-dimensional subspaces are sought by enforcing the low-rankness of X:
min_X ‖Y − YX‖²_Fro + λ‖X‖_tr.   (2)
Thanks to its simplicity, the global solution of Eq.(2) has been obtained analytically [8].
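For intuition, the closed-form solution of Eq.(2) can be sketched as a shrinkage of the right singular vectors of Y, following the form reported by Favaro et al. [8]. The exact threshold and scaling convention below are our assumption and may differ from [8] by a constant factor, so treat this as an illustrative sketch rather than the paper's solver:

```python
import numpy as np

def lrsc_closed_form(Y, lam):
    """Sketch of the closed-form LRSC solution (after Favaro et al. [8]).

    For min_X ||Y - YX||_Fro^2 + lam*||X||_tr, the minimizer is assumed
    to take the shrinkage form
        X = V1 (I - lam * S1^{-2}) V1^T,
    keeping right singular vectors whose singular values satisfy
    s^2 > lam (threshold/scaling convention is our assumption).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s ** 2 > lam
    V1 = Vt[keep].T                      # kept right singular vectors
    shrink = 1.0 - lam / s[keep] ** 2    # per-component shrinkage factors
    return (V1 * shrink) @ V1.T
```

For an exactly low-rank Y and a tiny λ, the shrinkage factors approach 1, so YX̂ ≈ Y and X̂ is symmetric, which is easy to check numerically.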
2.2 Variational Bayesian Low-rank Subspace Clustering
We formulate the probabilistic model of LRSC so that the maximum a posteriori (MAP) estimator coincides with the solution of the problem (2) under a certain hyperparameter setting:
p(Y | A′, B′) ∝ exp( −(1/(2σ²)) ‖Y − D B′ A′ᵀ‖²_Fro ),   (3)
p(A′) ∝ exp( −(1/2) tr( A′ C_A^{−1} A′ᵀ ) ),   p(B′) ∝ exp( −(1/2) tr( B′ C_B^{−1} B′ᵀ ) ).   (4)
Here, we factorized X as X = B′A′ᵀ, as in [2], to induce low-rankness through the model-induced regularization mechanism [17]. In this formulation, A′ ∈ R^{M×H} and B′ ∈ R^{M×H} for H ≤ min(L, M) are the parameters to be estimated. We assume that the hyperparameters
C_A = diag(c²_{a_1}, …, c²_{a_H}),   C_B = diag(c²_{b_1}, …, c²_{b_H})
are diagonal and positive definite. The dictionary D is treated as a constant, and set to D = Y once Y is observed.¹
The Bayes posterior is written as
p(A′, B′ | Y) = p(Y | A′, B′) p(A′) p(B′) / p(Y),   (5)
where p(Y) = ⟨p(Y | A′, B′)⟩_{p(A′)p(B′)} is the marginal likelihood. Here, ⟨·⟩_p denotes the expectation over the distribution p. Since the Bayes posterior (5) is computationally intractable, we adopt the variational Bayesian (VB) approximation [1, 5].
Let r(A′, B′), or r for short, be a trial distribution. The following functional with respect to r is called the free energy:
F(r) = ⟨ log( r(A′, B′) / ( p(Y | A′, B′) p(A′) p(B′) ) ) ⟩_{r(A′,B′)} = ⟨ log( r(A′, B′) / p(A′, B′ | Y) ) ⟩_{r(A′,B′)} − log p(Y).   (6)
In the last equation of Eq.(6), the first term is the Kullback–Leibler (KL) distance from the trial distribution to the Bayes posterior, and the second term is a constant. Therefore, minimizing the free energy (6) amounts to finding a distribution closest to the Bayes posterior in the sense of the KL distance. In the VB approximation, the free energy (6) is minimized over some restricted function space.
2.2.1 Standard VB (SVB) Iteration
The standard procedure for the VB approximation imposes the following constraint on the posterior:
r(A′, B′) = r(A′) r(B′).
By using the variational method, we can show that the VB posterior is Gaussian, and has the following form:
r(A′) ∝ exp( −(1/2) tr( (A′ − Â′) Σ_{A′}^{−1} (A′ − Â′)ᵀ ) ),   r(B′) ∝ exp( −(1/2) (b′ − b̂′)ᵀ Σ_{B′}^{−1} (b′ − b̂′) ),   (7)
where b′ = vec(B′) ∈ R^{MH}. The means and the covariances satisfy the following equations:
Â′ = (1/σ²) YᵀY B̂′ Σ_{A′},   Σ_{A′} = σ² ( ⟨B′ᵀ YᵀY B′⟩_{r(B′)} + σ² C_A^{−1} )^{−1},   (8)
b̂′ = (1/σ²) Σ_{B′} vec( YᵀY Â′ ),   Σ_{B′} = σ² ( (Â′ᵀÂ′ + M Σ_{A′}) ⊗ YᵀY + σ² (C_B^{−1} ⊗ I_M) )^{−1},   (9)
where ⊗ denotes the Kronecker product of matrices, and I_M is the M-dimensional identity matrix.
¹ Our formulation is slightly different from the one proposed in [2], where a clean version of Y is introduced as an additional parameter to cope with outliers. Since we focus on the LRSC model without outliers in this paper, we simplified the model. In our formulation, the clean dictionary corresponds to Y B′A′ᵀ (B′A′ᵀ)†, where † denotes the pseudo-inverse of a matrix.
For empirical VB learning, where the hyperparameters are also estimated from observation, the following are obtained from the derivatives of the free energy (6):
c²_{a_h} = ‖â′_h‖²/M + σ²_{a′_h},   c²_{b_h} = ( ‖b̂′_h‖² + Σ_{m=1}^M σ²_{B′_{m,h}} ) / M,   (10)
σ² = tr( YᵀY ( I_M − 2 B̂′Â′ᵀ + ⟨B′ (Â′ᵀÂ′ + M Σ_{A′}) B′ᵀ⟩_{r(B′)} ) ) / (LM),   (11)
where (σ²_{a′_1}, …, σ²_{a′_H}) and ((σ²_{B′_{1,1}}, …, σ²_{B′_{M,1}}), …, (σ²_{B′_{1,H}}, …, σ²_{B′_{M,H}})) are the diagonal entries of Σ_{A′} and Σ_{B′}, respectively. Eqs.(8)–(11) form an ICM algorithm, which we call the standard VB (SVB) iteration.
2.2.2 Matrix-Variate Gaussian Approximate (MVGA) Iteration
Actually, the SVB iteration cannot be applied to a large-scale problem, because Eq.(9) requires the inversion of a huge MH × MH matrix. This difficulty can be avoided by restricting r(B′) to be the matrix-variate Gaussian (MVG) [11], i.e.,
r(B′) ∝ exp( −(1/2) tr( Σ_{B′}^{−1} (B′ − B̂′)ᵀ Ω_{B′}^{−1} (B′ − B̂′) ) ),   (12)
where Σ_{B′} and Ω_{B′} are column- and row-covariance parameters. Under this additional constraint, a gradient-based computationally tractable algorithm can be derived [2], which we call the MVG approximate (MVGA) iteration.
3 Global Variational Bayesian Solvers
In this section, we first show that a large portion of the degrees of freedom in the expression (7)
are irrelevant, which significantly reduces the complexity of the optimization problem without the
MVG approximation. Then, we propose an exact global VB solver and its approximation.
3.1 Irrelevant Degrees of Freedom of VB-LRSC
Consider the following transforms:
A = Ω_Y^{right ᵀ} A′,   B = Ω_Y^{right ᵀ} B′,   where   Y = Ω_Y^{left} Γ_Y Ω_Y^{right ᵀ}   (13)
is the singular value decomposition (SVD) of Y. Then, we obtain the following theorem:
Theorem 1  The global minimum of the VB free energy (6) is achieved with a solution such that Â, B̂, Σ_A, Σ_B are diagonal.
(Sketch of proof) After the transform (13), we can regard the observed matrix as a diagonal matrix, i.e., Y → Γ_Y. Since we assume Gaussian priors with no correlation, the solution B̂Âᵀ is naturally expected to be diagonal. To prove this intuition, we apply a similar approach to [17], where the diagonalities of the VB posterior covariances were proved in fully-observed matrix factorization by investigating perturbations around any solution. We first show that Â′ᵀÂ′ + M Σ_{A′} is diagonal, with which Eq.(9) implies the diagonality of Σ_B. Other diagonalities can be shown similarly.  □
Theorem 1 not only reduces the complexity of the optimization problem greatly, but also makes the problem separable, as shown in the following.
3.2 Exact Global VB Solver (EGVBS)
Thanks to Theorem 1, the free energy minimization problem can be decomposed as follows:
Lemma 1  Let J (≤ min(L, M)) be the rank of Y, let γ_m be the m-th largest singular value of Y, and let
(â_1, …, â_H), (σ²_{a_1}, …, σ²_{a_H}), (b̂_1, …, b̂_H), ((σ²_{B_{1,1}}, …, σ²_{B_{M,1}}), …, (σ²_{B_{1,H}}, …, σ²_{B_{M,H}}))
be the diagonal entries of Â, Σ_A, B̂, Σ_B, respectively. Then, the free energy (6) is written as
F = (1/2) ( LM log(2πσ²) + Σ_{h=1}^J γ_h²/σ² + Σ_{h=1}^H 2F_h ),   (14)
where
Algorithm 1  Exact Global VB Solver (EGVBS) for LRSC.
1: Calculate the SVD of Y = Ω_Y^{left} Γ_Y Ω_Y^{right ᵀ}.
2: for h = 1 to H do
3:   Find all the solutions of the polynomial system (16)–(18) by the homotopy method.
4:   Discard prohibitive solutions with complex numbers or with negative variances.
5:   Select the stationary point giving the smallest F_h (defined by Eq.(15)).
6:   The global solution for h is the selected stationary point if it satisfies F_h < 0, otherwise the null solution (19).
7: end for
8: Calculate X̂ = Ω_Y^{right} B̂Âᵀ Ω_Y^{right ᵀ}.
9: Apply spectral clustering with the affinity matrix equal to abs(X̂) + abs(X̂ᵀ).
2F_h = M log( c²_{a_h} / σ²_{a_h} ) − (M + J) + ( â_h² + M σ²_{a_h} ) / c²_{a_h} + Σ_{m=1}^J log( c²_{b_h} / σ²_{B_{m,h}} ) + ( b̂_h² + Σ_{m=1}^J σ²_{B_{m,h}} ) / c²_{b_h}
      + (1/σ²) { γ_h² ( −2 â_h b̂_h + b̂_h² ( â_h² + M σ²_{a_h} ) ) + Σ_{m=1}^J γ_m² σ²_{B_{m,h}} ( â_h² + M σ²_{a_h} ) },   (15)
and its stationary condition is given as follows: for each h = 1, …, H,
â_h = (γ_h²/σ²) b̂_h σ²_{a_h},   σ²_{a_h} = σ² ( γ_h² b̂_h² + Σ_{m=1}^J γ_m² σ²_{B_{m,h}} + σ² c_{a_h}^{−2} )^{−1},   (16)
b̂_h = (γ_h²/σ²) â_h σ²_{B_{h,h}},   σ²_{B_{m,h}} = { σ² ( γ_m² ( â_h² + M σ²_{a_h} ) + σ² c_{b_h}^{−2} )^{−1}  (m ≤ J);   c²_{b_h}  (m > J) },   (17)
c²_{a_h} = â_h² / M + σ²_{a_h},   c²_{b_h} = ( b̂_h² + Σ_{m=1}^J σ²_{B_{m,h}} ) / J.   (18)
If no stationary point gives F_h < 0, the global solution is given by
â_h = b̂_h = 0,   σ²_{a_h}, σ²_{B_{m,h}}, c²_{a_h}, c²_{b_h} → 0   for m = 1, …, M.   (19)
Taking account of the trivial relations c²_{b_h} = σ²_{B_{m,h}} for m > J, Eqs.(16)–(18) for each h can be seen as a polynomial system with 5 + J unknown variables, i.e., ( â_h, σ²_{a_h}, c²_{a_h}, b̂_h, {σ²_{B_{m,h}}}_{m=1}^J, c²_{b_h} ). Thus, Lemma 1 has decomposed the original problem (8)–(10) with O(MH) unknown variables into H subproblems with O(J) variables each.
Given the noise variance σ², our exact global VB solver (EGVBS) finds all stationary points that satisfy the polynomial system (16)–(18) by using the homotopy method [12, 10].² After that, it discards the prohibitive solutions with complex numbers or with negative variances, and then selects the one giving the smallest F_h, defined by Eq.(15). The global solution is the selected stationary point if it satisfies F_h < 0, or the null solution (19) otherwise. Algorithm 1 summarizes the procedure of EGVBS. If σ² is unknown, we conduct a naive 1-D search by iteratively applying EGVBS, as for VB matrix factorization [17].
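The naive 1-D search over σ² mentioned above can be sketched as a simple grid search. Here `free_energy_fn` is a hypothetical callable standing in for one full EGVBS run at a fixed noise variance (the paper only states that a naive 1-D search is conducted):

```python
import numpy as np

def naive_sigma2_search(free_energy_fn, grid):
    """Naive 1-D search over the noise variance.

    `free_energy_fn(sigma2)` is assumed to run the per-h global solver at
    a fixed noise variance and return the resulting free energy F; the
    search returns the grid point giving the smallest F.
    """
    values = [free_energy_fn(s2) for s2 in grid]
    best = int(np.argmin(values))
    return grid[best], values[best]

# Toy check with a convex surrogate objective minimized at sigma2 = 2.0.
grid = np.linspace(0.5, 4.0, 36)
s2, F = naive_sigma2_search(lambda v: (v - 2.0) ** 2 + 1.0, grid)
```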
3.3 Approximate Global VB Solver (AGVBS)
Although Lemma 1 significantly reduced the complexity of the optimization problem, EGVBS is not applicable to large-scale data, since the homotopy method is not guaranteed to find all the solutions in polynomial time in J when the polynomial system involves O(J) unknown variables. For large-scale data, we propose a scalable approximation by introducing an additional constraint that γ_m² σ²_{B_{m,h}} be constant over m = 1, …, J, i.e.,
γ_m² σ²_{B_{m,h}} = σ̆²_{b_h}   for all m ≤ J.   (20)
² The homotopy method is a reliable and efficient numerical method to solve a polynomial system [6, 9]. It provides all the isolated solutions to a system of n polynomials f(x) ≡ (f_1(x), …, f_n(x)) = 0 by defining a smooth set of homotopy systems with a parameter t ∈ [0, 1]: g(x, t) ≡ (g_1(x, t), g_2(x, t), …, g_n(x, t)) = 0, such that one can continuously trace the solution path from the easiest (t = 0) to the target (t = 1). We use HOM4PS-2.0 [12], which is one of the most successful polynomial system solvers.
Under this constraint, we obtain the following theorem (the proof is omitted):
Theorem 2  Under the constraint (20), any stationary point of the free energy (15) for each h satisfies the following polynomial equation with the single variable σ̆²_{b_h}:
ξ_6 (σ̆²_{b_h})⁶ + ξ_5 (σ̆²_{b_h})⁵ + ξ_4 (σ̆²_{b_h})⁴ + ξ_3 (σ̆²_{b_h})³ + ξ_2 (σ̆²_{b_h})² + ξ_1 σ̆²_{b_h} + ξ_0 = 0,   (21)
where the coefficients ξ_0, …, ξ_6 (given explicitly in Eqs.(22)–(25)) are functions of γ_h, the remaining singular values {γ_m}_{m=1}^J, σ², M, and J; in particular, ξ_0 = M J σ⁴. For each real solution σ̆²_{b_h} for which the auxiliary quantities defined through Eqs.(26)–(27) are real and positive, the corresponding stationary point candidate
( â_h, σ²_{a_h}, c²_{a_h}, b̂_h, σ̆²_{b_h}, c²_{b_h} )
is given by Eq.(28).
Given the noise variance σ², obtaining the coefficients (22)–(25) is straightforward. Our approximate global VB solver (AGVBS) solves the sixth-order polynomial equation (21), e.g., by the 'roots' function in MATLAB, and obtains all candidate stationary points by using Eqs.(26)–(28). Then, it selects the one giving the smallest F_h, and the global solution is the selected stationary point if it satisfies F_h < 0, otherwise the null solution (19). Note that, although a solution of Eq.(21) is not necessarily a stationary point, selection based on the free energy discards all non-stationary points and local maxima. As in EGVBS, a naive 1-D search is conducted for estimating σ².
In Section 4, we show that AGVBS is practically a good alternative to the MVGA iteration in terms of accuracy and computation time.
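The root-finding-and-filtering step of AGVBS can be mimicked with `numpy.roots` in place of MATLAB's `roots`. Discarding complex and non-positive candidates corresponds to dropping the prohibitive solutions for a variance-like unknown:

```python
import numpy as np

def real_positive_roots(coeffs, tol=1e-9):
    """Find the real, positive roots of a polynomial.

    `coeffs` lists polynomial coefficients from highest to lowest degree,
    as expected by numpy.roots. Candidates with non-negligible imaginary
    part or non-positive real part are discarded, mirroring the AGVBS
    step of keeping only admissible variance candidates.
    """
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    return np.sort(real[real > tol])

# Example: x^3 - 2x^2 - x + 2 = (x - 2)(x - 1)(x + 1); positive roots are 1 and 2.
cands = real_positive_roots([1.0, -2.0, -1.0, 2.0])
```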
4 Experiments
In this section, we experimentally evaluate the proposed EGVBS and AGVBS. We assume that the hyperparameters (C_A, C_B) and the noise variance σ² are unknown and estimated from observations. We use the full-rank model (i.e., H = min(L, M)), and expect VB-LRSC to automatically find the true rank without any parameter tuning.
We first conducted an experiment with a small artificial dataset ('artificial small'), on which the exact algorithms, i.e., the SVB iteration (Section 2.2.1) and EGVBS (Section 3.2), are computationally tractable. Through this experiment, we can measure the accuracy of the efficient approximate variants, i.e., the MVGA iteration (Section 2.2.2) and AGVBS (Section 3.3). We randomly created M = 75 samples in an L = 10 dimensional space. We assumed K = 2 clusters: M^(1)* = 50 samples lie in an H^(1)* = 3 dimensional subspace, and the other M^(2)* = 25 samples lie in an H^(2)* = 1 dimensional subspace. For each cluster k, we independently drew M^(k)* samples from N_{H^(k)*}(0, 10 I_{H^(k)*}), where N_d(μ, Σ) denotes the d-dimensional Gaussian, and projected them into the observed L-dimensional space by R^(k) ∈ R^{L×H^(k)*}, each entry of which follows N_1(0, 1). Thus, we obtained a noiseless matrix Y^(k)* ∈ R^{L×M^(k)*} for the k-th cluster. Concatenating all clusters, Y* = (Y^(1)*, …, Y^(K)*), and adding random noise subject to N_1(0, 1) to each entry gave an artificial observed matrix Y ∈ R^{L×M}, where M = Σ_{k=1}^K M^(k)* = 75. The true rank of Y*
Figure 1: Results on the 'artificial small' dataset (L = 10, M = 75, H* = 4): (a) free energy, (b) computation time, and (c) estimated rank over iterations, for EGVBS, AGVBS, the SVB iteration, and the MVGA iteration. The clustering errors were 1.3% for EGVBS, AGVBS, and the SVB iteration, and 2.4% for the MVGA iteration.
Figure 2: Results on the 'artificial large' dataset (L = 50, M = 225, H* = 5): (a) free energy, (b) computation time, and (c) estimated rank over iterations, for AGVBS and the MVGA iteration. The clustering errors were 4.0% for AGVBS and 11.2% for the MVGA iteration.
Figure 3: Results on the '1R2RC' sequence (L = 59, M = 459) of the Hopkins 155 motion database: (a) free energy, (b) computation time, and (c) estimated rank. The clustering errors are shown in Figure 4.
is given by H* = min( Σ_{k=1}^K H^(k)*, L, M ) = 4. Note that H* is different from the rank J of the observed matrix Y, which is almost surely equal to min(L, M) = 10 under the Gaussian noise.
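The data-generation procedure for the 'artificial small' dataset can be sketched as follows (the function name and defaults are ours):

```python
import numpy as np

def make_union_of_subspaces(L=10, Ms=(50, 25), Hs=(3, 1), noise=1.0, seed=0):
    """Generate an 'artificial small'-style dataset as described above.

    Each cluster k draws M_k coordinate vectors from N(0, 10*I) in an
    H_k-dimensional subspace, projects them to R^L with a random Gaussian
    map R^(k), and i.i.d. N(0, noise^2) noise is added entry-wise.
    """
    rng = np.random.default_rng(seed)
    blocks, labels = [], []
    for k, (Mk, Hk) in enumerate(zip(Ms, Hs)):
        coords = rng.normal(0.0, np.sqrt(10.0), size=(Hk, Mk))
        R = rng.normal(0.0, 1.0, size=(L, Hk))   # random projection R^(k)
        blocks.append(R @ coords)
        labels += [k] * Mk
    Y_clean = np.concatenate(blocks, axis=1)     # noiseless Y*
    Y = Y_clean + rng.normal(0.0, noise, size=Y_clean.shape)
    return Y, Y_clean, np.array(labels)

Y, Y_clean, labels = make_union_of_subspaces()
```

With the defaults, Y is 10 × 75 and the noiseless matrix has (generic) rank 3 + 1 = 4, matching H* = 4 above, while the noisy Y is almost surely full rank.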
Figure 1 shows the free energy, the computation time, and the estimated rank of X̂ = B̂′Â′ᵀ over iterations. For the iterative methods, we show the results of 10 trials starting from different random initializations. We can see that AGVBS gives almost the same free energy as the exact methods (EGVBS and the SVB iteration). The exact methods require a large computation cost: EGVBS took 621 sec to obtain the global solution, and the SVB iteration took ≈ 100 sec to achieve almost the same free energy. The approximate methods are much faster: AGVBS took less than 1 sec, and the MVGA iteration took ≈ 10 sec. Since the MVGA iteration had not converged after 250 iterations, we continued the MVGA iteration until 2500 iterations, and found that the MVGA iteration sometimes converges to a local solution with significantly higher free energy than the other methods. EGVBS, AGVBS, and the SVB iteration successfully found the true rank H* = 4, while the MVGA iteration sometimes failed. This difference is actually reflected in the clustering error, i.e., the misclassification rate with all possible cluster correspondences taken into account, after spectral clustering [19] is performed: 1.3% for EGVBS, AGVBS, and the SVB iteration, and 2.4% for the MVGA iteration.
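The clustering error used here, i.e., the misclassification rate minimized over all cluster correspondences, can be computed as follows (a brute-force sketch over label permutations; assumes labels are 0, …, K−1, which is fine for the small K used in these experiments):

```python
import itertools
import numpy as np

def clustering_error(pred, true):
    """Misclassification rate minimized over all cluster label permutations."""
    pred, true = np.asarray(pred), np.asarray(true)
    ks = np.unique(true)
    best = pred.size
    for perm in itertools.permutations(ks):
        mapped = np.array([perm[k] for k in pred])   # relabel predictions
        best = min(best, int((mapped != true).sum()))
    return best / pred.size

# Swapped labels count as a perfect match up to correspondence; here one
# sample is genuinely misassigned, giving an error of 1/5 = 0.2.
err = clustering_error([0, 0, 1, 1, 1], [1, 1, 0, 0, 1])
```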
Next we conducted the same experiment with a larger artificial dataset ('artificial large') (L = 50, K = 4, (M^(1)*, …, M^(K)*) = (100, 50, 50, 25), (H^(1)*, …, H^(K)*) = (2, 1, 1, 1)), on which EGVBS and the SVB iteration are computationally intractable. Figure 2 shows results with AGVBS and the MVGA iteration. An advantage in computation time is clear: AGVBS took ≈ 0.1 sec, while the MVGA iteration took more than 100 sec. The clustering errors were 4.0% for AGVBS and 11.2% for the MVGA iteration.
Finally, we applied AGVBS and the MVGA iteration to the Hopkins 155 motion database [21]. In this dataset, each sample corresponds to the trajectory of a point in a video, and clustering the trajectories amounts to finding a set of rigid bodies. Figure 3 shows the results on the '1R2RC'
Figure 4: Clustering errors on the first 20 sequences of the Hopkins 155 dataset, for MAP (with optimized λ), AGVBS, and the MVGA iteration.
(L = 59, M = 459) sequence.³ We see that AGVBS gave a lower free energy with much less computation time than the MVGA iteration. Figure 4 shows the clustering errors over the first 20 sequences. We find that AGVBS generally outperforms the MVGA iteration. Figure 4 also shows the results of MAP estimation (2) with the tuning parameter λ optimized over the 20 sequences (we performed MAP with different values of λ, and selected the one giving the lowest average clustering error). We see that AGVBS performs comparably to MAP with optimized λ, which implies that VB estimates the hyperparameters and the noise variance reasonably well.
5 Conclusion
In this paper, we proposed a global variational Bayesian (VB) solver for low-rank subspace clustering (LRSC), and its approximate variant. The key property that enabled us to obtain a global
solver is that we can significantly reduce the degrees of freedom of the VB-LRSC model, and the
optimization problem is separable.
Our exact global VB solver (EGVBS) provides the global solution of a non-convex minimization
problem by using the homotopy method, which solves the stationary condition written as a polynomial system. On the other hand, our approximate global VB solver (AGVBS) finds the roots of
a polynomial equation with a single unknown variable, and provides the global solution of an approximate problem. We experimentally showed advantages of AGVBS over the previous scalable
method, called the matrix-variate Gaussian approximate (MVGA) iteration, in terms of accuracy and
computational efficiency. In AGVBS, SVD dominates the computation time. Accordingly, applying
additional tricks, e.g., parallel computation and approximation based on random projection, to the
SVD calculation would be a vital option for further computational efficiency.
LRSC can be equipped with an outlier term, which enhances robustness [7, 8, 2]. With the outlier
term, a much better clustering error on the Hopkins 155 dataset was reported [2]. Our future work is to
extend our approach to such robust variants. Theorem 2 enables us to construct the mean update
(MU) algorithm [16], which finds the global solution with respect to a large number of unknown
variables in each step. We expect that the MU algorithm tends to converge to a better solution than
the standard VB iteration, as in robust PCA and its extensions. EGVBS and AGVBS cannot be
applied to the applications where Y has missing entries. Also in such cases, Theorem 2 might be
used to derive a better algorithm, as the VB global solution of fully-observed matrix factorization
(MF) was used as a subroutine for partially-observed MF [18].
In many probabilistic models, Bayesian learning is often intractable, and its VB approximation has to rely on a local search algorithm. Exceptions are the fully-observed MF, for which an analytic form of the global solution has been derived [17], and LRSC, for which this paper provided global VB solvers. As in EGVBS, the homotopy method can solve a stationary condition if it can be written as a polynomial system. We expect that such a tool will extend the attainability of global solutions of non-convex problems, which machine learners often face.
Acknowledgments
The authors thank the reviewers for helpful comments. SN, MS, and IT thank the support from
MEXT Kakenhi 23120004, the FIRST program, and MEXT KAKENHI 23700165, respectively.
³ Peaks in the free energy curves are due to pruning, which is necessary for the gradient-based MVGA iteration. The free energy can jump just after pruning, but immediately gets lower than the value before pruning.
References
[1] H. Attias. Inferring parameters and structure of latent variable models by variational Bayes. In Proc. of UAI, pages 21–30, 1999.
[2] S. D. Babacan, S. Nakajima, and M. N. Do. Probabilistic low-rank subspace clustering. In Advances in Neural Information Processing Systems 25, pages 2753–2761, 2012.
[3] J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society B, 48:259–302, 1986.
[4] C. M. Bishop. Variational principal components. In Proc. of International Conference on Artificial Neural Networks, volume 1, pages 509–514, 1999.
[5] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, NY, USA, 2006.
[6] F. J. Drexler. A homotopy method for the calculation of all zeros of zero-dimensional polynomial ideals. In H. J. Wacker, editor, Continuation Methods, pages 69–93, New York, 1978. Academic Press.
[7] E. Elhamifar and R. Vidal. Sparse subspace clustering. In Proc. of CVPR, pages 2790–2797, 2009.
[8] P. Favaro, R. Vidal, and A. Ravichandran. A closed form solution to robust subspace estimation and clustering. In Proceedings of CVPR, pages 1801–1807, 2011.
[9] C. B. Garcia and W. I. Zangwill. Determining all solutions to certain systems of nonlinear equations. Mathematics of Operations Research, 4:1–14, 1979.
[10] T. Gunji, S. Kim, M. Kojima, A. Takeda, K. Fujisawa, and T. Mizutani. PHoM: a polyhedral homotopy continuation method. Computing, 73:57–77, 2004.
[11] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman and Hall/CRC, 1999.
[12] T. L. Lee, T. Y. Li, and C. H. Tsai. HOM4PS-2.0: a software package for solving polynomial systems by the polyhedral homotopy continuation method. Computing, 83:109–133, 2008.
[13] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In Proc. of ICML, pages 663–670, 2010.
[14] G. Liu, H. Xu, and S. Yan. Exact subspace segmentation and outlier detection by low-rank representation. In Proc. of AISTATS, 2012.
[15] G. Liu and S. Yan. Latent low-rank representation for subspace segmentation and feature extraction. In Proc. of ICCV, 2011.
[16] S. Nakajima, M. Sugiyama, and S. D. Babacan. Variational Bayesian sparse additive matrix factorization. Machine Learning, 92:319–347, 2013.
[17] S. Nakajima, M. Sugiyama, S. D. Babacan, and R. Tomioka. Global analytic solution of fully-observed variational Bayesian matrix factorization. Journal of Machine Learning Research, 14:1–37, 2013.
[18] M. Seeger and G. Bouchard. Fast variational Bayesian inference for non-conjugate matrix factorization models. In Proceedings of International Conference on Artificial Intelligence and Statistics, La Palma, Spain, 2012.
[19] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Machine Intell., 22(8):888–905, 2000.
[20] M. Soltanolkotabi and E. J. Candès. A geometric analysis of subspace clustering with outliers. CoRR, 2011.
[21] R. Tron and R. Vidal. A benchmark for the comparison of 3-D motion segmentation algorithms. In Proc. of CVPR, 2007.
Dynamically-Adaptive Winner-Take-All Networks
Trent E. Lange
Artificial Intelligence Laboratory
Computer Science Department
University of California, Los Angeles, CA 90024
Abstract
Winner-Take-All (WTA) networks, in which inhibitory interconnections are used to determine the most highly-activated of a pool of units, are an important part of many neural network models. Unfortunately, convergence of normal WTA networks is extremely sensitive to the magnitudes of their weights, which must be hand-tuned and which generally only provide the right amount of inhibition across a relatively small range of initial conditions. This paper presents Dynamically-Adaptive Winner-Take-All (DAWTA) networks, which use a regulatory unit to provide the competitive inhibition to the units in the network. The DAWTA regulatory unit dynamically adjusts its level of activation during competition to provide the right amount of inhibition to differentiate between competitors and drive a single winner. This dynamic adaptation allows DAWTA networks to perform the winner-take-all function for nearly any network size or initial condition, using O(N) connections. In addition, the DAWTA regulatory unit can be biased to find the level of inhibition necessary to settle upon the K most highly-activated units, and therefore serve as a K-Winners-Take-All network.
1. INTRODUCTION
Winner-Take-All networks are a fixed group of units which compete by mutual inhibition until the unit with the highest initial activation or input level suppresses the activation of all the others. Winner-take-all selection of the most highly-activated unit is an important part of many neural network models (e.g. McClelland and Rumelhart, 1981; Feldman and Ballard, 1982; Kohonen, 1984; Touretzky, 1989; Lange and Dyer, 1989a,b).
Unfortunately, successful convergence in winner-take-all networks is extremely sensitive to the magnitudes of the inhibitory weights between units and other network parameters. For example, a weight value for the mutually-inhibitory connections allowing the most highly-activated unit to suppress the other units in one initial condition (e.g. Figure 1a) may not provide enough inhibition to select a single winner if the initial input activation levels are closer together and/or higher (e.g. Figure 1b). On the other hand, if the compe-
Lange
Figure 1. Several plots of activation versus time for different initial conditions in a winner-take-all network in which there is a bidirectional inhibitory connection of weight -0.2 between every pair of units. Unit activation function is that from the interactive activation model of McClelland and Rumelhart (1981). (a) Network in which five units are given an input self bias ranging from 0.10 to 0.14. (b) Network in which five units are given an input self bias ranging from 0.50 to 0.54. Note that the network ended up with three winners because the inhibitory connections of weight -0.2 did not provide enough inhibition to suppress the second and third most-active nodes. (c) Network in which 100 units are given an input self bias ranging from 0.01 to 0.14. The combined activation of all 100 nodes through the inhibitory weight of -0.2 provides far too much inhibition, causing the network to overreact and oscillate wildly.
tition involves a larger number of active units, then the same inhibitory weights may provide too much inhibition and either suppress the activations of all units or lead to oscillations (e.g. Figure 1c).
Because of these problems, it is generally necessary to hand-tune network parameters to allow for successful winner-take-all performance in a given neural network architecture having certain expected levels of incoming activations. For complex networks, this can require a detailed mathematical analysis of the model (cf. Touretzky & Hinton, 1988) or a heuristic, computer-assisted trial-and-error search process (cf. Reggia, 1989) to find the values of inhibitory weights, unit thresholds, and other network parameters necessary for clear-cut winner-take-all performance in a given model's input space. In some cases, however, no set of constant network parameters can be found to handle the range of possible initial conditions a model may be faced with (Barnden, Kankanahalli, and Dharmavaratha, 1990), such as when the numbers of units actually competing in a given network may be two at one time and thousands at another (e.g. Barnden, 1990; Lange, in press).
This paper presents a new variant of winner-take-all networks, the Dynamically-Adaptive Winner-Take-All (DAWTA) network. DAWTA networks, using O(N) connections, are able to robustly act as winner-take-all networks for nearly any network initial condition without any hand-tuning of network parameters. In essence, the DAWTA network dynamically "tunes" itself by adjusting the level of inhibition sent to each unit in the network depending upon feedback from the current conditions of the competition. In addition, a biasing activation can be added to the network to allow it to act as a K-Winners-Take-All network (cf. Majani, Erlanson, and Abu-Mostafa, 1989), in which the K most highly-activated units end up active.
2. DYNAMICALLY-ADAPTIVE WTA NETWORKS
The basic idea behind the Dynamically-Adaptive Winner-Take-All mechanism can be described by looking at a version of a winner-take-all network that is functionally equivalent to a normal winner-take-all network but which uses only O(N) connections. Several researchers have pointed out that the (N^2 - N)/2 bidirectional inhibitory connections (each of weight -w_I) normally needed in a winner-take-all network can be replaced by an excitatory self-connection of weight w_I for each unit and a single regulatory unit that sums up the activations of all N units and inhibits them each by -w_I times that amount (Touretzky & Hinton, 1988; Majani et al., 1989) (see Figure 2).
When viewed in this fashion, the mutually inhibitory connections of winner-take-all networks can be seen as a regulator (i.e. the regulatory unit) that is attempting to provide the right amount of inhibition to the network to allow the winner-to-be unit's activation to grow while suppressing the activations of all others. This is exactly what happens when w_I has been chosen correctly for the activations of the network (as in Figure 1a). However, because the amount of this regulatory inhibition is fixed precisely by that inhibitory weight (i.e. always equal to that weight times the sum of the network activations), there is no way for it to increase when it is not enough (as in Figure 1b) or decrease when it is too much (as in Figure 1c).
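The equivalence of the two wirings is easy to check numerically: giving each unit a self-connection of weight w_I and subtracting w_I times the summed activation leaves it with exactly the pairwise inhibition it would receive from the other N-1 units. A quick sketch (the weight and activation values here are arbitrary, chosen only for the check):

```python
# Numeric check: pairwise O(N^2) inhibition vs. the O(N) regulatory-unit form.
import random

random.seed(0)
w_I = 0.2
a = [random.random() for _ in range(100)]  # current unit activations

# O(N^2) version: each unit is inhibited by every *other* unit.
pairwise = [-w_I * sum(a[j] for j in range(len(a)) if j != i)
            for i in range(len(a))]

# O(N) version: a regulatory unit sums all activations and inhibits everyone;
# a self-connection of weight +w_I cancels each unit's own contribution.
total = sum(a)
regulatory = [w_I * a[i] - w_I * total for i in range(len(a))]

for p, r in zip(pairwise, regulatory):
    assert abs(p - r) < 1e-9
print("pairwise and regulatory-unit inhibition agree")
```

The same sum reaches every unit, which is what makes the single regulatory unit a drop-in replacement for the full set of mutual connections.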
2.1. THE DAWTA REGULATORY UNIT
From the point of view of the competing units' inputs, the Dynamically-Adaptive Winner-Take-All network is equivalent to the regulatory-unit simplification of a normal winner-take-all network. Each unit has an excitatory connection to itself and an inhibitory connection from a regulatory unit whose function is to suppress the activations
Figure 2. Simplification of a standard WTA network using O(N) connections by introduction of a regulatory unit (top node) that sums up the activations of all network units. Each unit has an excitatory connection to itself and an inhibitory connection of weight -w_I from the regulatory unit. Shading of units (darker = higher) represents their levels of activation at a hypothetical time in the middle of network cycling.
of all but the winning unit.¹ However, the regulatory unit itself, and how it calculates the inhibition it provides to the network, is different.
Whereas the connections to the regulatory unit in a normal winner-take-all network cause it to produce an inhibitory activation (i.e. the sum of the units' activations) that happens to work if its inhibitory weights were set correctly, the structure of connections to the regulatory unit in a dynamically-adaptive winner-take-all network causes it to continually adjust its level of activation until the right amount of inhibition is found, regardless of the network's initial conditions. As the network cycles and the winner-take-all is being performed, the DAWTA regulatory unit's activation inhibits the network's units, which results in feedback to the regulatory unit that causes it to increase its activation if more inhibition is required to induce a single winner, or decrease its activation if less is required. Accordingly, the DAWTA regulatory unit's activation a_R(t) now includes its previous activation, and is the following:
a_R(t+1) = a_R(t) +  -θ             if net_R(t+1) ≤ -θ
                     net_R(t+1)     if -θ < net_R(t+1) < θ
                     θ              if net_R(t+1) ≥ θ

where net_R(t+1) is the total net input to the regulator at time t+1, and θ is a small constant (typically 0.05) whose purpose is to stop the regulatory unit's activation from
rising or falling too rapidly on any given cycle. Figure 3 shows the actual Dynamically-Adaptive Winner-Take-All network. As in Figure 2, the regulatory unit is the unit at the top and the competing units are the circular units at the bottom that are inhibited by it and which have connections (of weight w_s) to themselves. However, there are now two
¹As in all winner-take-all networks, the competing units may also have inputs from outside the network that provide the initial activations driving the competition.
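The clipped update rule for the regulatory unit's activation transcribes directly into code; a minimal sketch (the function and variable names are ours, not the paper's):

```python
# The regulator accumulates its net input, but the change on any single cycle
# is clipped to [-theta, theta] (theta ~ 0.05), so its activation cannot rise
# or fall too quickly.
def update_regulator(a_R, net_R, theta=0.05):
    """Return a_R(t+1) given a_R(t) and the net input net_R(t+1)."""
    return a_R + max(-theta, min(theta, net_R))

assert abs(update_regulator(0.3, 0.8) - 0.35) < 1e-12   # clipped to +theta
assert abs(update_regulator(0.3, -0.8) - 0.25) < 1e-12  # clipped to -theta
assert abs(update_regulator(0.3, 0.01) - 0.31) < 1e-12  # small input passes through
```

Note that the clip is applied to the change in activation, not to the activation itself, which is what lets a_R grow to whatever level the competition requires.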
Figure 3. Dynamically-Adaptive Winner-Take-All network at a hypothetical time in the middle of network cycling. The topmost unit is the DAWTA regulatory unit, whose outgoing connections to all of the competing units at the bottom all have weight -1. The input -k·w_t is a constant self-biasing activation to the regulatory unit whose value determines how many winners it will try to drive. The two middle units are simple linear summation units each having inputs of unit weight that calculate the total activation of the competing units at time t and time t-1, respectively.
intermediate units that calculate the net inputs that increase or decrease the regulatory unit's inhibitory activation depending on the state of the competition. These inputs cause the regulatory unit to receive a net input net_R(t+1) of:

net_R(t+1) = (w_t + w_d)·o_t(t-1) - w_d·o_t(t-2) - k·w_t

which simplifies to:

net_R(t+1) = w_t(o_t(t-1) - k) + w_d(o_t(t-1) - o_t(t-2))
where o_t(t) is the total summed output of all of the competing units (calculated by the intermediate units shown), w_t and w_d are constant weights, and k is the number of winners the network is attempting to seek (1 to perform a normal winner-take-all).
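The simplification is plain algebra, and the expanded form matches the wiring of the two summation units; a quick numeric spot-check (the expanded form is our reading of the figure, and the sample totals are arbitrary):

```python
# Check that w_t*(o1 - k) + w_d*(o1 - o2) equals the expanded weighted sum
# (w_t + w_d)*o1 - w_d*o2 - k*w_t, where o1 = o_t(t-1) and o2 = o_t(t-2).
w_t, w_d, k = 0.025, 0.5, 1
for o1, o2 in [(1.5, 1.2), (0.4, 0.9), (2.0, 2.0)]:
    expanded = (w_t + w_d) * o1 - w_d * o2 - k * w_t  # raw weighted inputs
    grouped = w_t * (o1 - k) + w_d * (o1 - o2)        # form given in the text
    assert abs(expanded - grouped) < 1e-12
```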
The effect of the above activation function and the connections shown in Figure 3 is to apply two different activation pressures on the regulatory unit, each of which combined over time drives the DAWTA regulatory unit's activation to find the right level of inhibition to suppress all but the winning unit. The most important pressure, and the key to the DAWTA regulatory unit's success, is that the regulatory unit's activation increases by a factor of w_t if there is too much activation in the network, and decreases by a corresponding factor if there is not enough activation in the network. This is the result of the term w_t(o_t(t-1) - k) in its net input function, which simplifies to w_t(o_t(t-1) - 1) when k equals 1. The "right amount" of total activation in the network is simply the total summed activation of the goal state, i.e. the winner-take-all network state in which there is one active unit (having activation 1) and in which all other competing units have
been driven down to an activation of 0, leaving the total network activation o_t(t) equal to 1. The term w_t(o_t(t-1) - 1) of the regulatory unit's net input will therefore tend to increase the regulatory unit's activation if there are too many units active in the network (e.g. if there are three units with activity 0.7, 0.5, and 0.3, since the total output o_t(t) will be 1.5), to decrease its activation if there is not enough total activation in the network (e.g. one unit with activation 0.2 and the rest with activation 0.0), and to leave its activation unchanged if the total activation is the same as the final goal activation. Note that any temporary coincidences in which the total network activation sums to 1 but which is not the final winner-take-all state (e.g. when one unit has activation 0.6 and another has activation 0.4) will be broken by the competing units themselves, since the winning unit's activation will always rise more quickly than the loser's just by its own activation function (e.g. that of McClelland and Rumelhart, 1981).
The other pressure on the DAWTA regulatory unit, from the w_d(o_t(t-1) - o_t(t-2)) term of net_R(t+1), is to tend to decrease the regulator's activation if the overall network activation is falling too rapidly, or to increase it if the overall network activation is rising too rapidly. This is essentially a dampening term to avoid oscillations in the network in the early stages of the winner-take-all, in which there may be many active units whose activations are falling rapidly (due to inhibition from the regulatory unit), but in which the total network activation is still above the final goal activation. As can be seen, this second term of the regulatory unit's net input will also sum to 0 and therefore leave the regulatory unit's activation unchanged when the goal state of the network has been reached, since the total activation of the network in the winner-take-all state will remain constant.
All of the weights and connections of the DAWTA network are constant parameters that are the same for any size network or set of initial network conditions. Typically we have used w_t = 0.025 and w_d = 0.5. The actual values are not critical, as long as w_d ≫ w_t, which assures that w_d is high enough to dampen the rapid rise or fall in total network activation sometimes caused by the direct pressure of w_t. The value of the regulatory unit's self bias term -k·w_t that sets the goal total network activation that the regulatory unit attempts to reach is determined simply by k, the number of winners desired (1 for a normal winner-take-all network), and w_t.
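The pieces of this section can be assembled into a minimal simulation. The competing units below use a simple linear update clipped to [0, 1] rather than the McClelland-Rumelhart interactive-activation function behind the paper's figures, and the step size, iteration count, and default self-weight are our own choices, so trajectories will differ from Figure 4; the point is the qualitative behavior, with the regulator settling at an inhibition level that leaves k winners:

```python
# Sketch of a DAWTA competition: a regulatory unit with a clipped net input
# inhibits all competing units, which otherwise follow bias + self-excitation.
def dawta(biases, k=1, w_s=0.2, w_t=0.025, w_d=0.5, theta=0.05,
          eps=0.1, steps=5000):
    """Run a DAWTA competition; return final unit activations and a_R."""
    n = len(biases)
    a = [0.0] * n            # competing-unit activations
    a_R = 0.0                # regulatory-unit activation
    o_prev = o_prev2 = 0.0   # total activation at t-1 and t-2
    for _ in range(steps):
        # regulator: w_t steers total activation toward k, w_d damps swings;
        # the per-cycle change is clipped to [-theta, theta]
        net_R = w_t * (o_prev - k) + w_d * (o_prev - o_prev2)
        a_R += max(-theta, min(theta, net_R))
        # units: input bias + self-excitation - shared inhibition from a_R
        a = [min(1.0, max(0.0, a[i] + eps * (biases[i] + w_s * a[i] - a_R)))
             for i in range(n)]
        o_prev2, o_prev = o_prev, sum(a)
    return a, a_R

acts, a_R = dawta([0.10, 0.11, 0.12, 0.13, 0.14])   # cf. the Figure 1a setup
winner = max(range(len(acts)), key=lambda i: acts[i])
print(winner, [round(x, 2) for x in acts], round(a_R, 2))
```

Because every unit receives the same inhibition -a_R, the unit with the highest bias keeps its lead throughout, and the regulator's activation drifts until only that unit remains active.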
3. RESULTS
Dynamically-adaptive winner-take-all networks have been tested in the DESCARTES connectionist simulator (Lange, 1990) and used in our connectionist model of short-term sequential memory (Lange, in press). Figures 4a-c show the plots of activation versus time in networks given the same initial conditions as those of the normal winner-take-all network shown in Figures 1a-c. Note that in each case the regulatory unit's activation starts off at zero and increases until it reaches a level that provides sufficient inhibition to start driving the winner-take-all. So whereas the inhibitory weights of -0.2 that worked for inputs ranging from 0.10 to 0.14 in the winner-take-all network in Figure 1a could not provide enough inhibition to drive a single winner when the inputs were over 0.5 (Figure 1b), the DAWTA regulatory unit simply increases its activation level until the inhibition it provides is sufficient to start suppressing the eventual losers (Figures 4a and 4b). As can also be seen in the figures, the activation of the regulatory unit tends to vary over time with different feedback from the network in a process that maximizes differentiation between units while assuring that the group of remaining potential winners stays active and is not over-inhibited.
Figure 4. Plots of activation versus time in a dynamically-adaptive winner-take-all network given the same activation functions and initial conditions as the winner-take-all plots in Figure 1. The grey background plot shows the activation level of the regulatory unit. (a) With five units activated with self-biases from 0.10 to 0.14. (b) With five units activated with self-biases from 0.50 to 0.54. (c) With 100 units activated with self-biases from 0.01 to 0.14.
Finally, though there is not space to show the graphic results here, the same DAWTA networks have been simulated to drive a successful winner-take-all within 200 cycles on networks ranging in size from 2 to 10,000 units and on initial conditions where the winning unit has an input of 0.000001 to initial conditions where the winning unit has an input of 0.999, without tuning the network in any way. The same networks have also been successfully simulated to act as K-winner-take-all networks (i.e. to select the K most active units) by simply setting the desired value for k in the DAWTA's self bias term -k·w_t.
4. CONCLUSIONS
We have presented Dynamically-Adaptive Winner-Take-All networks, which use O(N) connections to perform the winner-take-all function. Unlike normal winner-take-all networks, DAWTA networks are able to select the most highly-activated unit out of a group of units for nearly any network size and initial condition without tuning any network parameters. They are able to do so because the inhibition that drives the winner-take-all network is provided by a regulatory unit that is constantly getting feedback from the state of the network and dynamically adjusting its level to provide the right amount of inhibition to differentiate the winning unit from the losers. An important side-feature of this dynamically-adaptive inhibition approach is that it can be biased to select the K most highly-activated units, and therefore serve as a K-winners-take-all network.
References
Barnden, J. (1990). The power of some unusual connectionist data-structuring techniques. In J. A. Barnden and J. B. Pollack (Eds.), Advances in connectionist and neural computation theory, Norwood, NJ: Ablex.
Barnden, J., Kankanahalli, S., & Dharmavaratha, D. (1990). Winner-take-all networks: Time-based versus activation-based mechanisms for various selection tasks. Proceedings of the IEEE International Symposium on Circuits and Systems, New Orleans, LA.
Feldman, J. A. & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254.
Kohonen, T. (1984). Self-organization and associative memory. Berlin: Springer-Verlag.
Lange, T. (1990). Simulation of heterogeneous neural networks on serial and parallel machines. Parallel Computing, 14, 287-303.
Lange, T. (in press). Hybrid connectionist models: Temporary bridges over the gap between the symbolic and the subsymbolic. To appear in J. Dinsmore (ed.), Closing the Gap: Symbolic vs. Subsymbolic Processing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lange, T. & Dyer, M. G. (1989a). Dynamic, non-local role-bindings and inferencing in a localist network for natural language understanding. In David S. Touretzky, editor, Advances in Neural Information Processing Systems 1, p. 545-552, Morgan Kaufmann, San Mateo, CA.
Lange, T. & Dyer, M. G. (1989b). High-level inferencing in a connectionist network. Connection Science, 1(2), 181-217.
Majani, E., Erlanson, R. & Abu-Mostafa, Y. (1989). On the k-winners-take-all network. In David S. Touretzky, editor, Advances in Neural Information Processing Systems 1, p. 634-642, Morgan Kaufmann, San Mateo, CA.
McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375-407.
Reggia, J. A. (1989). Methods for deriving competitive activation mechanisms. Proceedings of the First Annual International Joint Conference on Neural Networks.
Touretzky, D. (1989). Analyzing the energy landscapes of distributed winner-take-all networks. In David S. Touretzky, editor, Advances in Neural Information Processing Systems 1, p. 626-633, Morgan Kaufmann, San Mateo, CA.
Touretzky, D., & Hinton, G. (1988). A distributed connectionist production system. Cognitive Science, 12, 423-466.
Learning to Pass Expectation Propagation Messages
Nicolas Heess*
Gatsby Unit, UCL
Daniel Tarlow
Microsoft Research
John Winn
Microsoft Research
Abstract
Expectation Propagation (EP) is a popular approximate posterior inference algorithm that often provides a fast and accurate alternative to sampling-based
methods. However, while the EP framework in theory allows for complex non-Gaussian factors, there is still a significant practical barrier to using them within
EP, because doing so requires the implementation of message update operators,
which can be difficult and require hand-crafted approximations. In this work, we
study the question of whether it is possible to automatically derive fast and accurate EP updates by learning a discriminative model (e.g., a neural network or
random forest) to map EP message inputs to EP message outputs. We address the
practical concerns that arise in the process, and we provide empirical analysis on
several challenging and diverse factors, indicating that there is a space of factors
where this approach appears promising.
1 Introduction
Model-based machine learning and probabilistic programming offer the promise of a world where a
probabilistic model can be specified independently of the inference routine that will operate on the
model. The vision is to automatically perform fast and accurate approximate inference to compute
a range of quantities of interest (e.g., marginal probabilities of query variables). Approaches to
the general inference challenge can roughly be divided into two categories. We refer to the first
category as the "uninformed" case, which is exemplified by e.g. Church [4], where the modeler has
great freedom in the model specification. The cost of this flexibility is that inference routines have
a more superficial understanding of the model structure, being unaware of symmetries and other
idiosyncrasies of its components, which makes the already challenging inference task even harder.
The second category is what we refer to as the "informed" case, which is exemplified by toolbox-based systems (e.g. BUGS [14], Stan [12], Infer.NET [8]). Here, models must be constructed out of a toolbox of
building blocks, and a building block can only be used if a set of associated computational operations have been implemented by the toolbox designers. This gives inference routines a deeper
understanding of the structure of the model and can lead to significantly faster inference, but the
tradeoff is that efficient and accurate implementation of the building blocks can be a significant
challenge. For example, EP message update operations, which are used by Infer.NET, often require
the computation of integrals that do not have analytic expressions, so methods must be devised that
are robust, accurate and efficient, which is generally quite nontrivial.
In this work, we aim to bridge the gap between the informed and the uninformed cases and achieve
the best of both worlds by automatically implementing the computational operations required for
the informed case from a specification such as would be given in the uninformed case. We train
high-capacity discriminative models that learn to map EP message inputs to EP message outputs
for each message operation needed for EP inference. Importantly, the training is done so that the
learned modules implement the same EP communication protocol as hand-crafted modules, so after
the training phase is complete, we get a factor that behaves like a fast hand-crafted approximation
that exploits factor structure, but which was generated using only the specification that would be
*The majority of this work was done while NH was visiting Microsoft Research, Cambridge.
given in the uninformed case. Models may then be constructed from any combination of these
learned modules and previously implemented modules.
2 Background and Notation
2.1 Factor graphs, directed graphical models, and probabilistic programming
As is common for message passing algorithms, we assume that models of interest are represented as factor graphs: the joint distribution over a set of random variables x = {x_1, ..., x_D} is specified in terms of non-negative factors φ_1, ..., φ_J, which capture the relation between variables, and it decomposes as p(x) = (1/Z) ∏_{j=1}^{J} φ_j(x_{φ_j}). Here x_{φ_j} is used to mean the set of variables that factor φ_j is defined over and whose index set we will denote by Scope(φ_j). We further use x_{φ_j \ i} to mean the set of variables x_{φ_j} excluding x_i. The set x may have a mix of discrete and continuous random variables, and factors can operate over variables of mixed types. We are interested in computing marginal probabilities p_i(x_i) = ∫ p(x) dx_{-i}, where x_{-i} is all variables except for i, and where integrals should be replaced by sums when the variable being integrated out is discrete. Note that this formulation allows for conditioning on variables by attaching factors with no inputs to variables which constrain the variable to be equal to a particular value, but we suppress this detail for simplicity of presentation.
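As a concrete (toy) instance of this notation, consider two binary variables with a prior factor φ1(x1) and a directed factor φ2(x2 | x1); the marginal p(x2) can then be computed by brute-force enumeration over the joint. All numbers here are arbitrary illustration values:

```python
# Tiny factor graph over two binary variables, with the marginal p(x2)
# computed by summing the (unnormalized) joint over x1.
from itertools import product

phi1 = {0: 0.3, 1: 0.7}                      # prior factor over x1
phi2 = {(0, 0): 0.9, (0, 1): 0.1,            # phi2[(x1, x2)] = p(x2 | x1)
        (1, 0): 0.2, (1, 1): 0.8}

joint = {(x1, x2): phi1[x1] * phi2[(x1, x2)]
         for x1, x2 in product([0, 1], repeat=2)}
Z = sum(joint.values())                      # partition function (1 here,
                                             # since both factors are normalized)

# p(x2 = v) = sum_{x1} joint(x1, v) / Z; e.g. p(x2=0) = 0.3*0.9 + 0.7*0.2 = 0.41
p_x2 = {v: sum(joint[(x1, v)] for x1 in [0, 1]) / Z for v in [0, 1]}
print(p_x2)
```

Enumeration like this is exact but exponential in the number of variables, which is exactly the cost that message passing schemes such as EP are designed to avoid.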
Although our approach can be extended to factors of arbitrary form, for the purpose of this paper we will focus on directed factors, i.e. factors of the form φ_j(x_out(j) | x_in(j)) which directly specify the (conditional) distribution (or density) over x_out(j) as a function of the vector of inputs x_in(j) (here x_{φ_j} is the set of variables {x_out(j)} ∪ x_in(j)). In an (unconditioned) directed graphical model all factors will have this form, and we allow x_in(j) to be empty, for example, to allow for prior distributions over the variables.
Probabilistic programming is an umbrella term for the specification of probabilistic models via a programming-like syntax. In its most general form, an arbitrary program is specified, which can include calls to a random number generator (e.g. [4]). This can be related to the factor graph notation by introducing forward-sampling functions f_1, ..., f_J. If we associate each directed factor φ_j(x_out(j) | x_in(j)) with a stochastic forward-sample function f_j mapping x_in(j) to x_out(j) and then define the probabilistic program as the sequential sampling of x_out(j) = f_j(x_in(j)) following a topological ordering of the variables, then there is a clear association between directed graphical models and forward-sampling procedures. Specifically, f_j is a stochastic function that draws a sample from φ_j(x_out(j) | x_in(j)). The key difference is that the factor graph specification usually assumes that an analytic expression will be given to define φ_j(x_out(j) | x_in(j)), while the forward-sampling formulation allows for f_j to execute an arbitrary piece of computer code. The extra flexibility afforded by the forward-sampling formulation has led to the popularity of methods like Approximate Bayesian Computation (ABC) [11], although the cost of this flexibility is that inference becomes less informed.
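The factor/forward-sampler correspondence can be made concrete with a toy model: a biased-coin prior and a noisy-copy conditional, each written as a sampling function f_j. Running the program forward reproduces the marginal that the factor tables imply. The specific probabilities here are arbitrary illustration values:

```python
# Forward-sampling view of a two-variable directed model:
#   x_in  ~ phi_prior        (P(x_in = 1) = 0.7)
#   x_out ~ phi(. | x_in)    (copies x_in, flipping the bit with prob. 0.1)
import random

random.seed(0)

def f_prior():
    return 1 if random.random() < 0.7 else 0

def f_copy(x_in):
    flip = random.random() < 0.1
    return x_in ^ 1 if flip else x_in

n = 100_000
freq = sum(f_copy(f_prior()) for _ in range(n)) / n

# analytic marginal from the factors: p(x_out=1) = 0.7*0.9 + 0.3*0.1 = 0.66
assert abs(freq - 0.66) < 0.01
print(round(freq, 3))
```

The analytic marginal needed the factor tables; the empirical one needed only the ability to run `f_prior` and `f_copy`, which is the sense in which an arbitrary program can stand in for an analytic factor.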
2.2 Expectation Propagation
Expectation Propagation (EP) is a message passing algorithm that is a generalization of sum-product
belief propagation. It can be used for approximate marginal inference in models that have a mixed
set of types. EP has been used successfully in a number of large-scale applications [5, 13], can be
used with a wide range of factors and can support some programming language constructs like for
loops and if statements [7]. For a detailed review of EP, we recommend [6].
For the purposes of this paper there are two important aspects of EP. First, we use the common
variant where the posterior is approximated as a fully factorized distribution (except for some homogeneous variables which we treat as a single vector-valued variable) and each variable then has
an associated type, type(x), which determines the distribution family used in its approximation. The
second aspect is the form of the message from a factor ψ to a variable i. It is defined as follows:

$$ m_{\psi \to i}(x_i) = \frac{\operatorname{proj}\!\left[\int \psi(x_{out} \mid x_{in}) \prod_{i' \in \operatorname{Scope}(\psi)} m_{i' \to \psi}(x_{i'})\, dx_{\neg i}\right]}{m_{i \to \psi}(x_i)}. \qquad (1) $$
The update has an intuitive form. The proj operator ensures that the message being passed is a distribution of type type(xi); it only has an effect if its argument is outside the approximating family used for the target message. If the projection operation (proj[·]) is ignored, then the m_{i→ψ}(xi) term in the denominator cancels with the corresponding term in the numerator, and standard belief propagation updates are recovered. The projection is implemented as finding the distribution q in the approximating family that minimizes the KL-divergence between the argument and q: proj[p] = argmin_q KL(p||q), where q is constrained to be a distribution of type(xi). Multiplying the reverse message m_{i→ψ}(xi) into the numerator before performing the projection effectively defines a "context", which can be seen as reweighting the approximation to the standard BP update, placing more importance in the region where other parts of the model have placed high probability mass.
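For exponential-family targets, this projection reduces to moment matching: argmin_q KL(p||q) is the member of the family whose expected sufficient statistics equal those of p. A minimal sketch for a Gaussian approximating family applied to a weighted point-mass mixture (the case that arises below) might look like:

```python
def proj_gaussian(xs, ws):
    """KL projection of a weighted point-mass mixture onto a Gaussian.

    For an exponential family, argmin_q KL(p||q) matches the expected
    sufficient statistics of p; for a Gaussian these are the mean and
    variance of the weighted samples.
    """
    total = sum(ws)
    mean = sum(w * x for x, w in zip(xs, ws)) / total
    var = sum(w * (x - mean) ** 2 for x, w in zip(xs, ws)) / total
    return mean, var
```

For other families (e.g. Beta or Gamma messages), the same idea applies but the matched statistics are the family's own expected sufficient statistics rather than raw moments.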
3 Formulation
We now present the method that is the focus of this paper. The goal is to allow a user to specify a
factor to be used in EP solely via specifying a forward sampling procedure; that is, we assume that
the user provides an executable stochastic function f (xin ), which, given xin returns a sample of
xout . The user further specifies the families of distributions with which to represent the messages
associated with the variables of the factor (e.g. Discrete, Gamma, Gaussian, Beta). Below we show
how to learn fast EP message operators so that the new factor can be used alongside existing factors
in a variety of models.
Computing Targets with Importance Sampling Our goal is to compute EP messages from the
factor ψ that is associated with f, as if we had access to an analytic expression for ψ(xout | xin). The
only way a factor interacts with the rest of the model is via the incoming and outgoing messages, so
we can focus on this mapping and the resulting operator can be used in any model. Given incoming
messages {m_{i→ψ}(xi)}_{i∈Scope(ψ)}, the simplest approach to computing m_{ψ→i}(xi) is to use importance
sampling. A proposal distribution q(xin ) is specified, and then the approach is based on the fact that
$$ \int \psi(x_{out} \mid x_{in}) \prod_{i' \in \operatorname{Scope}(\psi)} m_{i' \to \psi}(x_{i'})\, dx = \mathbb{E}_{r}\!\left[\frac{\prod_{i' \in \operatorname{Scope}(\psi)} m_{i' \to \psi}(x_{i'})}{q(x_{in})}\right], \qquad (2) $$

where r(x) = q(x_in) ψ(x_out | x_in) can be sampled from by first drawing values of xin from q,
then passing those values through the forward-sampling procedure f to get a value for xout . To
use this procedure for computing messages m_{ψ→i}(xi), we use importance sampling with proposal distribution r. Roughly, samples are drawn from r and weighted by ∏_{i'∈Scope(ψ)} m_{i'→ψ}(x_{i'}) / q(x_in), then all variables other than xi are summed out to yield a mixture of point mass distributions over xi. The proj[·] operator is then applied to this distribution. Note that a simple choice for q(x_in) is ∏_{i'∈in} m_{i'→ψ}(x_{i'}), in which case the weighting term simplifies to just be m_{out→ψ}(x_out). Despite its
simplicity, however, we found this choice to often be suboptimal. We elaborate on this issue and
give concrete suggestions for improving over the naive approach in the experiments section.
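To make the naive scheme concrete, the following sketch (our own simplification: a single scalar input and output, Gaussian messages, and q taken equal to the incoming x_in message, so the importance weight reduces to the reverse message m_{out→ψ}(x_out)) estimates the moment-matched projection of the numerator of eq. (1) for the message to x_out; dividing out the reverse message would then complete the EP update:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def message_to_output(f, m_in, m_out, n_samples=20000, seed=0):
    """Moment-matched estimate of the numerator of eq. (1) for psi -> x_out.

    m_in and m_out are (mean, sigma) pairs of Gaussian messages. With the
    naive proposal q = m_in, each sample's weight reduces to m_out(x_out^k).
    Returns the (mean, variance) of the weighted samples of x_out.
    """
    random.seed(seed)
    mu_in, s_in = m_in
    mu_out, s_out = m_out
    xs, ws = [], []
    for _ in range(n_samples):
        x_in = random.gauss(mu_in, s_in)             # x_in ~ q
        x_out = f(x_in)                              # forward pass through the factor
        xs.append(x_out)
        ws.append(normal_pdf(x_out, mu_out, s_out))  # context reweighting
    total = sum(ws)
    mean = sum(w * x for w, x in zip(ws, xs)) / total
    var = sum(w * (x - mean) ** 2 for w, x in zip(ws, xs)) / total
    return mean, var
```

For example, with f(x) = 2x, m_in = N(0, 1) and a weak context m_out with standard deviation 10, the weighted samples concentrate on a Gaussian with variance (1/4 + 1/100)^(-1) ≈ 3.85.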
Generation of Training Data For a given set of incoming messages {m_{i→ψ}(xi)}_{i∈Scope(ψ)}, we can produce a target outgoing message using the technique from the previous section. To train a model to automatically compute these messages, we need many example incoming and target outgoing message pairs. We can generate such a data set by drawing sets of incoming messages from some specified distribution, then computing the target outgoing message as above.

Algorithm 1 Generate training data
1: Input: ψ, i, specifying we are learning to send message m_{ψ→i}(xi).
2: Input: D^m training distribution over messages {m_{i'→ψ}(x_{i'})}_{i'∈Scope(ψ)}
3: Input: q(x_in) importance sampling distribution
4: for n = 1 : N do
5:   Draw m^n_0(x_0), . . . , m^n_D(x_D) ∼ D^m
6:   for k = 1 : K do
7:     Draw x^{nk}_in ∼ q(x_in), then compute x^{nk}_out = f(x^{nk}_in)
8:     Compute importance weight w^{nk} = ∏_{i'∈Scope(ψ)} m^n_{i'→ψ}(x^{nk}_{i'}) / q(x^{nk}_in).
9:   end for
10:  Compute ψ̃^n(xi) = proj[ Σ_k w^{nk} δ_{x^{nk}_i}(xi) / Σ_k w^{nk} ]
11:  Add pair (⟨m^n_0(x_0), . . . , m^n_D(x_D)⟩, ψ̃^n(xi)) to training set.
12: end for
13: Return training set.
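A sketch of this data-generation loop under simplifying assumptions of our own (a scalar factor x_out = f(x_in), Gaussian messages on both variables, and q taken equal to the incoming x_in message, so the weight reduces to the reverse message):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def make_training_set(f, n_examples=50, k_samples=500, seed=0):
    """Generate (incoming messages, target message) pairs for one factor.

    Incoming messages are drawn from a simple distribution D^m over
    (mean, sigma) pairs; targets are the moment-matched projections of the
    weighted forward samples.
    """
    random.seed(seed)
    data = []
    for _ in range(n_examples):
        m_in = (random.uniform(-2.0, 2.0), random.uniform(0.5, 2.0))
        m_out = (random.uniform(-2.0, 2.0), random.uniform(0.5, 2.0))
        xs, ws = [], []
        for _ in range(k_samples):
            x_in = random.gauss(*m_in)
            x_out = f(x_in)
            xs.append(x_out)
            ws.append(normal_pdf(x_out, *m_out))  # importance weight w^{nk}
        total = sum(ws)
        mean = sum(w * x for w, x in zip(ws, xs)) / total
        var = sum(w * (x - mean) ** 2 for w, x in zip(ws, xs)) / total
        data.append(((m_in, m_out), (mean, var)))  # (inputs, target tilde-psi)
    return data
```

Each pair maps the sufficient statistics of the incoming messages to the sufficient statistics of the target outgoing message, which is exactly the input-output format the regression model below consumes.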
Learning Given the training data, we learn a neural network model that takes as input the sufficient statistics defining m^n = {m^n_{i'→ψ}(x_{i'})}_{i'∈Scope(ψ)} and outputs sufficient statistics defining the approximation g(m^n) to the target ψ̃^n(xi). For each output message that the factor needs to send, we train a separate network. The error measure that we optimize is the average KL divergence (1/N) Σ^N_{n=1} KL(ψ̃^n || g(m^n)). We differentiate this objective analytically for the appropriate output distribution type and compute gradients via back-propagation.
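When both the target and the network output are Gaussian, each term of this objective has the standard closed form below (stated here for completeness), which can be differentiated directly with respect to the output mean and variance:

```python
import math

def kl_gaussians(mu1, var1, mu2, var2):
    """KL(N(mu1, var1) || N(mu2, var2)).

    Per-example loss when the target message (mu1, var1) and the network
    output g(m^n) = (mu2, var2) are both Gaussian:
      0.5 * ( log(var2/var1) + (var1 + (mu1 - mu2)^2) / var2 - 1 )
    """
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)
```

Analogous closed forms (or one-dimensional numerical integrals) exist for the other exponential-family message types used here, such as Beta and Gamma.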
Choice of Decomposition Structure So far, we have shown how to incorporate factors into a
model when the definition of the factor is via the forward-sample procedure f rather than as an
analytic expression ?. When specifying a model, there is some flexibility in how this capability
is used. The natural use case is when a model can mostly be expressed using factors that have
analytic expressions and corresponding hand-constructed operator implementations, but when a few
of the interactions that we would like to use are more easily specified in terms of a forward-sampling
procedure or would be difficult to implement hand-crafted approximations for.
There is an alternative use-case, which is that even if we have analytic expressions and hand-crafted
implementations for all the factors that we wish to use in a model, it might be that the approximations
which arise due to the nature of message passing (that is, passing messages that factorize fully over
variables) leads to a poor approximation in some block of the model. In this case, it may be desirable
to collapse the problematic block of several factors into a single factor, then to use the approach we
present here. If the new collapsed factor is sufficiently structured in a statistical sense, then this may
lead to improved accuracy. In this view, the goal should be to find groups of modeling components
that go together logically, which are reusable, and which define interactions that have input-output
structure that is amenable to the learned approximation strategy.
4 Related Work
Perhaps the most superficially similar line of work to the approach we present here is that of inference machines and truncated belief propagation [2, 3, 10, 9], where inference is done via an algorithm that is structurally similar to belief propagation, but where some parameters of the updates
are learned. The fundamental difference between those approaches and ours is how the learning is
performed. In inference machine training, learning is done jointly over parameters for all updates
that will be used in the model. This means that the process of learning couples together all factors
in the model; if part of the model changes, the parameters of the updates must be re-learned. A key
property of our approach is that a factor may be learned once, then used in a variety of different
models without need for re-training.
The most closely related work is ABC-EP [1]. This approach employs a very similar importance
sampling strategy but performs inference simply by sending the messages that we use as training
data. The advantage is that no function approximator needs to be chosen, and if enough samples are
drawn for each message update, the accuracy should be good. There is also no up-front cost of learning as in our case. The downside is that generation and weighting of a sufficient number of samples
can be very expensive, and it is usually not practical to generate enough samples every time a message needs to be sent. Our formulation allows for a very large number of samples to be generated
once as an up-front cost then, as long as the learning is done effectively, each message computation
is much faster while still effectively drawing on a large number of samples. Our approach also opens
up the possibility of using more accurate but slower methods to generate the training samples, which
we believe will be important as we look ahead to applying the method to even more complex factors.
Empirically we have found that using importance sampling but reducing the number of samples so
as to make runtime computation times close to our method can lead to unreliable inference.
Finally, at a high level, our goal in this work is to start from an informed general inference scheme
and to extend the range of model specifications that can be used within the framework. There is work
that aims for a similar goal but comes from the opposite direction of starting with a general specification language and aiming to build more informed inference routines. For example, [15] attempts
to infer basic interaction structure from general probabilistic program specifications. Also of note
is [16], which applies mean field-like variational inference to general program specifications. We
believe both these directions and the direction we explore here to be promising and worth exploring.
5 Experimental Analyses
We now turn our attention to experimental evaluation. The primary question of interest is whether
given f it is feasible to learn the mapping from EP message inputs to outputs in such a way that the
learned factors can be used within nontrivial models. This obviously depends on the specifics of f
and the model in which the learned factor is used. We attempt to explore these issues thoroughly.
Choice of Factors We made specific choices about which functions f to apply our framework
to. First, we wanted a simple factor to prove the concept and give an indication of the performance
that we might expect. For this, we chose the sigmoid factor, which deterministically computes x_out = f(x_in) = 1/(1 + exp(−x_in)). For this factor, sensible choices for the messages to x_out and x_in are
Beta and Gaussian distributions respectively. Second, we wanted factors that stressed the framework
in different ways. For the first of these, we chose a compound Gamma factor, which is sampled by
first drawing a random Gamma variable r2 with rate r1 and shape s1 , then drawing another random
Gamma variable xout with rate r2 and shape s2 . This defines xout = f (r1 , s1 , s2 ), which is a
challenging factor because depending on the choice of inputs, this can produce a very heavy-tailed distribution over xout. Another challenging factor we experiment with is the product factor, which
uses xout = f(xin,1, xin,2) = xin,1 · xin,2. While this is a conceptually simple function, it is
highly challenging to use within EP for several reasons, including symmetries due to signs, and the
fact that message outputs can change very quickly as functions of message inputs (see Fig. 3).
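For reference, forward-sampling definitions of these three factors can be written in a few lines each (a sketch of our own; note that Python's `random.gammavariate` takes shape and scale, so the rates described above are inverted):

```python
import math
import random

def f_sigmoid(x_in):
    # deterministic sigmoid factor: x_out = 1 / (1 + exp(-x_in))
    return 1.0 / (1.0 + math.exp(-x_in))

def f_compound_gamma(r1, s1, s2):
    # draw r2 ~ Gamma(shape=s1, rate=r1), then x_out ~ Gamma(shape=s2, rate=r2)
    r2 = random.gammavariate(s1, 1.0 / r1)
    return random.gammavariate(s2, 1.0 / r2)

def f_product(x1, x2):
    # deterministic product factor: z = x * y
    return x1 * x2
```

Each function is all the framework requires of the user; the corresponding messages (Beta/Gaussian for the sigmoid, Gamma for the compound Gamma output, Gaussian for the product) are specified separately.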
One main reason for the above factor choices is that there are existing hand-crafted implementations in Infer.NET, which we can use to evaluate our learned message operators. It would have
been straightforward to experiment with more example factors that could not be implemented with
existing hand-crafted factors, but it would have been much harder to evaluate our proposed method.
Finally, we developed a factor that models the throwing of a ball, which is representative of the type
of factors that we believe our framework to be well-suited for, and which is not easily implemented
with hand-crafted approximations.
For all factors, we use the extensible factor interface in Infer.NET to create factors that compute
messages by running a forward pass of the learned neural network. We then studied these factors
in a variety of models, using the default Infer.NET settings for all other implementation details,
e.g. message schedules and other factor implementations. Additional details of the models used in
the experiments can be found in the supplemental material.
Sigmoid Factor For the sigmoid factor, we ran two main sets of experiments. First, we learned a
factor using the methodology described in Section 3 and evaluated how well the network was able
to reconstruct the training data. In Fig. 1 we show histograms of KL errors for the network trained
to send forward messages (Fig. 1a) and the network trained to send backwards messages (Fig. 1b).
To aid the interpretation of these results, we also show the best, median, and worst approximations
for each. There are a small number of moderate-sized errors, but average performance is very good.
We then used the learned factor within a Bayesian logistic regression model where the output nonlinearity is implemented using either the default Infer.NET sigmoid factor or our learned sigmoid factor.
The number of training points is given in the table. There were always 2000 data points for testing.
Data points for training and testing were generated according to p(y = 1|x) = sigmoid(wT x).
Entries of x were drawn from N (0, 1). Entries of w were drawn from N (0, 1) for all relevant
dimensions, and the others were set to 0. Results are shown in Table 1, which appears in the Supplementary materials. Predictive performance is very similar across the board, and although there are
moderately large KL divergences between the learned posteriors in some cases, when we compared
the distance between the true generating weights and the learned posteriors means for the EP and
NN case, we found them to be similar.
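The data-generating process for this experiment can be sketched directly (taking the first d_relevant dimensions as the relevant ones, an arbitrary choice for illustration):

```python
import math
import random

def generate_logistic_data(n, d, d_relevant, seed=0):
    """Synthetic data as described in the text: entries of x drawn from
    N(0, 1); entries of w drawn from N(0, 1) for the relevant dimensions
    and set to 0 otherwise; labels y ~ Bernoulli(sigmoid(w^T x))."""
    random.seed(seed)
    w = [random.gauss(0.0, 1.0) if j < d_relevant else 0.0 for j in range(d)]
    data = []
    for _ in range(n):
        x = [random.gauss(0.0, 1.0) for _ in range(d)]
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, x))))
        data.append((x, 1 if random.random() < p else 0))
    return w, data
```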
(a) Backward Message (to Gaussian)    (b) Forward Message (to Beta)
Figure 1: Sigmoid factor: Histogram of training KL divergences between target and predicted distributions for the two messages outgoing from the learned sigmoid factor (left: backward message;
right: forward message). Also illustrated are best(1), median (2,3), and worst (4) examples. The red
curve is the density of the target message, and the green is of the predicted message. In the inset are
message parameters (left: Gaussian mean and precision; right: Beta α and β) for the true (top line)
and predicted (middle line) message, along with the KL (bottom line).
Compound Gamma Factor The compound Gamma factor is useful as a heavy-tailed prior over
precisions of Gaussian random variables. Accordingly, we evaluate performance in the context of
models where the factor provides a prior precision for learning a Gaussian or mixture of Gaussians
model. As before, we trained a network using the methodology from Section 3. For this factor, we
fixed the value of the inputs xin , which is a standard way that the compound Gamma construction is
used as a prior. We experimented with values of (3, 3, 1) and (1, 1, 1) for the inputs. In both cases,
these settings induce a heavy-tailed distribution over the precision.
We begin by evaluating the importance sampler. We first evaluate the naive choice for proposal
distribution q as described in Section 3. As can be seen in the bottom left plot of Fig. 2, there is a
relatively large region of possible input-message space (white region) where almost no samples are
drawn, and thus the importance sampling estimates will be unreliable. Here shapein and ratein denote the parameters of the message being sent from the precision variable to the compound Gamma
factor. By instead using a mixture distribution over q, which has one component equivalent to the
naive sampler and one broader component, we achieve the result in the top left of Fig. 2, which
has better coverage of the space of possible messages. The plots in the second column show the
importance sampling estimates of factor-to-variable messages (one plot per message parameter) as a
function of the variable-to-factor message coming from the precision variable, which are unreliable
in the regions that would be expected based on the previous plot. The third column shows the same
function but for the learned neural network model. Surprisingly, we see that the neural network has
smoothed out some of the noise of the importance sampler, and that it has extrapolated in a smooth,
reasonable manner. Overlaid on these plots are the message values that were actually encountered
when running the experiments in Fig. 8, which are described next.
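The mixture proposal can be sketched as follows (a hypothetical, generic Gaussian version for illustration; the actual proposal here is over Gamma-distributed precision messages, and the mixture weight 0.5 and broadening factor are our own choices):

```python
import math
import random

def normal_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def mixture_proposal_sample(mu, sigma, broad_scale=10.0, w_naive=0.5):
    """Draw from a two-component proposal: the naive component N(mu, sigma^2)
    with probability w_naive, otherwise a much broader Gaussian at the same mean."""
    if random.random() < w_naive:
        return random.gauss(mu, sigma)
    return random.gauss(mu, broad_scale * sigma)

def mixture_proposal_pdf(x, mu, sigma, broad_scale=10.0, w_naive=0.5):
    """Density of the mixture proposal, needed for the importance weights."""
    return (w_naive * normal_pdf(x, mu, sigma)
            + (1.0 - w_naive) * normal_pdf(x, mu, broad_scale * sigma))
```

The broad component guarantees a floor on the proposal density far from the naive component's mode, which is what fills in the poorly-covered white region in the bottom-left plot of Fig. 2.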
[Figure 2 plot area; rightmost plot legend: CG111/CG331 at 20 and 100 data points, axes τ (learned) vs. τ (default) on log scale.]
Figure 2: Compound Gamma plots. First column: Log sum of importance weights arising from improved importance sampler (top) and naive sampler (bottom) as a function of the incoming context
message. Second column: Improved importance sampler estimate of outgoing message shape parameter (top) and rate parameter (bottom) as a function of the incoming context message. We show
the sufficient statistics of the numerator of eq. 1. Third column: Learned neural network estimates
for the same messages. Parameters of the variable-to-factor messages encountered when running
the experiments in Fig. 8 are super-imposed as black dots. Rightmost plot: Precisions learned for
mixture of Gaussians model with "learned" / standard Infer.NET ("default") factor for 20 and 100 datapoints respectively, and true precisions τ1 = 0.01, τ2 = 1000. Best viewed in color.
In the next experiments, we generate data from Gaussians with a wide range of variances, and we
evaluate how well we are able to learn the precision as a function of the number of data points (x-axis). We compare to the same construction but using two hand-crafted Gamma factors to implement
the compound Gamma prior. The plots in Fig. 8 in the supplementary material show the means of
the learned precisions for two choices of compound Gamma parameters (top is (3, 3, 1), bottom is
(1, 1, 1)). Even though some messages were passed in regions with little representation under the
importance sampling, the factor was still able to perform reliably.
We next evaluate performance of the compound Gamma factors when learning a mixture of Gaussians. We generated data from a mixture of two Gaussians with fixed means but widely different
variances, using the compound Gamma prior on the precisions of both Gaussians in the mixture. Results are shown in the right-most plot of Fig. 2. We see that both factors sometimes under-estimate
the true variance, but the learned factor is equally as reliable as the hand-crafted version. We also observed in these experiments that the learned factor was an order of magnitude faster than the built-in
factor (total runtime was 11s for the learned factor vs. 110s for the standard Infer.NET construction).
Figure 4: Learned posteriors from the multiplicative noise regression model. We compare the built-in factor's result (green) to our learned factor (red) and an importance sampler that is given the same
runtime budget as the learned model (black). Top row: Representative posteriors over weights w.
Bottom row: Representative posteriors over the noise variables λn. Inset gives KL between built-in factor
and learned factor (red) and IS factor (black).
Product Factor The product factor is a surprisingly difficult factor to work with. To illustrate
some of the difficulty, we provide plots of output message parameters along slices in input message
space (Fig. 3). In our first experiment with the product factor, we build a Bayesian linear regression
model with multiplicative output noise. Given a vector of inputs xn , we take an inner product of
xn with multivariate Gaussian variables w, then for each instance n multiply the result by a random
noise variable λn that is drawn from a Gaussian with mean 1 and standard deviation 0.1. Additive noise is then added to the output to produce a noisy observation yn. The goal is to infer w and λ values given x's and y's. We compare using the default Infer.NET product factor to using our learned
product factor for the multiplication of ? and the output of the inner products. Results are shown in
Fig. 4, where we also compare to importance sampling, which was given a runtime budget similar
to that of the neural network.
In the second experiment with the product factor, we implemented an ideal point model, which is essentially a 1 latent-dimensional binary matrix-factorization model, using our learned product factor for the multiplications. This is the most challenging model we have considered yet, because (a) EP is known to be unreliable in matrix factorization models [13], and (b) there is an additional level of approximation due to the loopiness of the graph, which pushes the factor into more extreme ranges, which it might not have been trained as reliably for and/or where importance sampling estimates used for generating training data are unreliable.

Figure 3: Message surfaces and failure case plot for the product factor (computing z = xy). Left: Mean of the factor-to-z message as a function of the mean parameters of the incoming messages from x and y. Top row shows ground truth, the bottom row shows the learned NN approximation. Right: Posterior over the ideal-point variables for all senators (inferred std.-dev. is shown as error bars). Senators are ordered according to ideal-point means inferred with the factor of [13] (SHG). Red/blue dots indicate true party affiliation.
We ran the model on a subset of US senate vote records from the 112th congress.1 We evaluated
the model based on how well the learned factor version recovered the posteriors over senator latent
factors that were found by the built-in product factor and the approximate product factor of [13]. The
result of this experiment was that midway through inference, the learned factor version produced
posteriors with means that were consistent with the built-in factors, although the variances were
slightly larger, and the means were noisier. After this, we observed gradual degradation of the
estimates for a subset of about 5-10% of the senators. By the end of inference, results had degraded
significantly. Investigating the cause of this result, we found that a large number of zero-precision
messages were being sent, which happens when the projected distribution has larger variance than
¹ Data obtained from http://www.govtrack.us/
the context message. We believe that the cause of this is that as the messages in this model begin to
converge, the messages being passed take on a distribution that is difficult to approximate (leading
the neural network to underfit), that is different from the training distribution, or is in a regime where
importance sampling estimates are noisy. In these cases, our KL-objective factors are overestimating
the variance.
In some cases, these errors can propagate and lead to complete failure of inference, and we have
observed this in our experiments. This leads to perhaps an obvious point, which is that our approach
will fail when messages required by inference are significantly different from those that were in
the training distribution. This can happen via the choice of too extreme priors, too many observations driving precisions to be extreme, and due to complicated effects arising from the dynamics of
message passing on loopy graphs. We will discuss some possibly mitigating strategies in Section 6.
Throwing a Ball Factor With this factor, we model the distance that a ball travels as a function of
the angle, velocity, and initial height that it was thrown from. While this is also a relatively simple
interaction conceptually, it would be highly challenging to implement it as a hand-crafted factor. In
our framework, it suffices to provide a function f that, given the angle, velocity, and initial height,
computes and returns the distance that the ball travels. We do so by constructing and solving the
appropriate quadratic equation. Note that this requires multiplication and trigonometric functions.
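A hypothetical version of such an f (our own reconstruction under ideal projectile physics, with the stochastic noise on the observed distance omitted) solves the quadratic for the flight time and returns the horizontal distance:

```python
import math

def throw_distance(angle, velocity, height, g=9.81):
    """Distance travelled by a ball thrown from `height` (m) at `angle`
    (radians) with `velocity` (m/s), by solving the quadratic
    height + vy*t - 0.5*g*t^2 = 0 for the positive flight time t."""
    vx = velocity * math.cos(angle)
    vy = velocity * math.sin(angle)
    t = (vy + math.sqrt(vy * vy + 2.0 * g * height)) / g
    return vx * t
```

For a throw from ground level, this reduces to the familiar range formula v² sin(2θ)/g; a positive initial height strictly increases the distance.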
Figure 5: Throwing a ball factor experiments. True distributions over individual throwing velocities (black) and predictive distribution based on the learned posterior over velocity rates.

We learn the factor as before and evaluate it in the context of two models. In the first model, we have person-specific distributions over height (Gaussian), log slope (Gaussian) and the rate parameter (Gamma) of a Gamma distribution that determines velocity. We then observe several samples (generated from the model) of noisy distances that
the ball traveled for each person. We then use our learned factor to infer posteriors over the person-specific parameters. The inferred posteriors for several representative people are shown in Fig. 5.
Second, we extended the above model to have the person-specific rate parameter be produced by a
linear regression model (with exponential link function) with observed person-specific features and
unknown weights. We again generated data from the model, observed several sample throws per
person, and inferred the regression weights. We found that we were able to recover the generating
weights with reasonable accuracy, although the posterior was a bit overconfident: true (−.5, .5, 3) vs. posterior mean (−.43, .55, 3.1) and standard deviations (.04, .03, .02).
6 Discussion
We have shown that it is possible to learn to pass EP messages in several challenging cases. The
techniques that we use build upon a number of tools well-known in the field, but the combination
in this application is novel, and we believe it to have great practical potential. Although we have
established viability of the idea, in its current form it works better for some factors than others. Its
success depends on (a) the ability of the function approximator to represent the required message
updates (which may be highly discontinuous) and (b) the availability of reliable samples of these
mappings (some factors may be very hard to invert). Here, we expect that great improvements can
be made taking advantage of recent progress in uninformed sampling, and high capacity regression
models. We tested factors with multiple models and/or datasets but this does not mean that they will
work with all models, hyper-parameter settings, or datasets (we found varying degrees of robustness
to such variations). A critical ingredient here is an appropriate choice of the distribution of training messages, which, at the current stage, can require some manual tuning and experimentation. This
leads to an interesting extension, which would be to maintain an estimate of the quality of the
approximation over the domain of the factor, and to re-train the factor on the fly when a message
is encountered that lies in a low-confidence region. A second direction for future study, which
is enabled by our work, is to add additional constraints during learning in order to guarantee that
updates have certain desirable properties. For example, we may be able to ask the network to learn
the best message updates subject to a constraint that guarantees convergence.
Acknowledgements: NH acknowledges funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 270327, and from the Gatsby Charitable Foundation.
References
[1] S. Barthelmé and N. Chopin. ABC-EP: Expectation Propagation for likelihood-free Bayesian
computation. In Proceedings of the 28th International Conference on Machine Learning, 2011.
[2] J. Domke. Parameter learning with truncated message-passing. In Computer Vision and Pattern
Recognition (CVPR). IEEE, 2011.
[3] J. Domke. Learning graphical model parameters with approximate marginal inference. Pattern
Analysis and Machine Intelligence (PAMI), 2013.
[4] N.D. Goodman, V.K. Mansinghka, D.M. Roy, K. Bonawitz, and J.B. Tenenbaum. Church: A
language for generative models. In Proc. of Uncertainty in Artificial Intelligence (UAI), 2008.
[5] R. Herbrich, T.P. Minka, and T. Graepel. Trueskill: A Bayesian skill rating system. Advances
in Neural Information Processing Systems, 19:569, 2007.
[6] T.P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[7] T.P. Minka and J. Winn. Gates: A graphical notation for mixture models. In Advances in
Neural Information Processing Systems, 2008.
[8] T.P. Minka, J.M. Winn, J.P. Guiver, and D.A. Knowles. Infer.NET 2.5, 2012. Microsoft
Research. http://research.microsoft.com/infernet.
[9] P. Kohli R. Shapovalov, D. Vetrov. Spatial inference machines. In Computer Vision and Pattern
Recognition (CVPR). IEEE, 2013.
[10] S. Ross, D. Munoz, M. Hebert, and J.A. Bagnell. Learning message-passing inference machines for structured prediction. In Computer Vision and Pattern Recognition (CVPR). IEEE,
2011.
[11] D.B. Rubin. Bayesianly justifiable and relevant frequency calculations for the applied statistician. The Annals of Statistics, pages 1151–1172, 1984.
[12] Stan Development Team. Stan: A C++ library for probability and sampling, version 1.3, 2013.
[13] D.H. Stern, R. Herbrich, and T. Graepel. Matchbox: Large scale online Bayesian recommendations. In Proceedings of the 18th international conference on World Wide Web, pages 111–120.
ACM, 2009.
[14] A. Thomas. BUGS: A statistical modelling package. RTA/BCS Modular Languages Newsletter, 1994.
[15] D. Wingate, N.D. Goodman, A. Stuhlmueller, and J. Siskind. Nonstandard interpretations of
probabilistic programs for efficient inference. In Advances in Neural Information Processing
Systems, 2011.
[16] D. Wingate and T. Weber. Automated variational inference in probabilistic programming. In
arXiv:1301.1299, 2013.
performing:1 relatively:2 structured:2 according:2 overconfident:1 combination:2 poor:1 ball:6 across:1 slightly:1 s1:2 happens:1 equation:1 previously:1 turn:1 discus:1 fail:1 needed:1 fp7:1 end:3 sending:1 operation:5 gaussians:6 experimentation:1 apply:1 observe:1 appropriate:3 alternative:2 robustness:1 slower:1 gate:1 thomas:1 assumes:1 running:3 include:1 top:7 graphical:5 exploit:1 build:3 wnk:2 approximating:2 objective:2 question:2 quantity:1 already:1 added:1 strategy:3 primary:1 md:1 interacts:1 bagnell:1 visiting:1 gradient:1 distance:4 separate:1 link:1 capacity:2 majority:1 sensible:1 reason:2 code:1 index:2 difficult:4 mostly:1 statement:1 negative:1 suppress:1 implementation:7 reliably:2 stern:1 unknown:1 perform:2 observation:2 datasets:2 truncated:2 defining:2 extended:2 communication:1 excluding:1 team:1 smoothed:1 arbitrary:3 community:1 inferred:4 rating:1 pair:2 required:3 specified:6 toolbox:2 kl:10 learned:36 established:1 address:1 able:5 bar:1 alongside:1 usually:2 exemplified:2 below:1 pattern:4 regime:1 challenge:2 program:5 built:4 including:1 green:2 reliable:2 belief:4 critical:1 natural:1 difficulty:1 senate:1 mn:4 scheme:1 technology:1 library:1 stan:3 church:2 acknowledges:1 naive:4 torization:1 traveled:1 prior:7 understanding:2 review:1 acknowledgement:1 multiplication:3 fully:2 expect:2 mixed:2 suggestion:1 generation:2 interesting:1 approximator:2 ingredient:1 generator:1 foundation:1 degree:1 sufficient:3 consistent:1 rubin:1 charitable:1 pi:1 heavy:3 row:4 extrapolated:1 placed:1 surprisingly:2 free:1 hebert:1 allow:3 deeper:1 institute:1 wide:3 taking:1 barrier:1 attaching:1 slice:1 curve:1 default:5 dimension:1 world:3 superficially:1 unaware:1 computes:2 evaluating:1 forward:13 made:2 xn:2 projected:1 programme:1 far:1 party:1 approximate:8 skill:1 unreliable:5 incoming:8 investigating:1 uai:1 discriminative:2 xi:22 factorize:1 continuous:1 latent:2 decomposes:1 tailed:3 table:2 bonawitz:1 promising:2 nature:1 learn:7 
robust:1 superficial:1 nicolas:1 symmetry:2 forest:1 rta:1 improving:1 complex:2 european:1 constructing:1 protocol:1 domain:1 main:2 s2:2 noise:5 arise:2 underfit:1 x1:1 crafted:11 fig:13 representative:4 elaborate:1 board:1 gatsby:2 aid:1 precision:13 structurally:1 wish:1 deterministically:1 exponential:1 lie:1 weighting:2 third:2 specific:5 inset:2 er:1 r2:2 experimented:1 concern:1 sequential:1 effectively:3 importance:20 phd:1 magnitude:1 budget:2 push:1 nk:5 gap:1 suited:1 led:1 simply:1 explore:2 expressed:1 ordered:1 recommendation:1 applies:2 truth:1 determines:2 abc:3 acm:1 conditional:1 goal:6 presentation:1 sized:1 viewed:1 feasible:1 change:2 hard:1 specifically:1 except:2 reducing:1 wt:1 sampler:7 domke:2 degradation:1 total:1 pas:3 experimental:2 xin:31 vote:1 indicating:1 support:1 people:1 stressed:1 noisier:1 incorporate:1 evaluate:7 outgoing:5 tested:1 |
Translating Embeddings for Modeling
Multi-relational Data
Antoine Bordes, Nicolas Usunier, Alberto García-Durán
Université de Technologie de Compiègne - CNRS
Heudiasyc UMR 7253
Compiègne, France
{bordesan, nusunier, agarciad}@utc.fr
Jason Weston, Oksana Yakhnenko
Google
111 8th avenue
New York, NY, USA
{jweston, oksana}@google.com
Abstract
We consider the problem of embedding entities and relationships of multirelational data in low-dimensional vector spaces. Our objective is to propose a
canonical model which is easy to train, contains a reduced number of parameters
and can scale up to very large databases. Hence, we propose TransE, a method
which models relationships by interpreting them as translations operating on the
low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge
bases. Besides, it can be successfully trained on a large scale data set with 1M
entities, 25k relationships and more than 17M training samples.
1 Introduction
Multi-relational data refers to directed graphs whose nodes correspond to entities and edges of the
form (head, label, tail) (denoted (h, ℓ, t)), each of which indicates that there exists a relationship of
name label between the entities head and tail. Models of multi-relational data play a pivotal role in
many areas. Examples are social network analysis, where entities are members and edges (relationships) are friendship/social relationship links, recommender systems where entities are users and
products and relationships are buying, rating, reviewing or searching for a product, or knowledge
bases (KBs) such as Freebase1 , Google Knowledge Graph2 or GeneOntology3 , where each entity
of the KB represents an abstract concept or concrete entity of the world and relationships are predicates that represent facts involving two of them. Our work focuses on modeling multi-relational
data from KBs (Wordnet [9] and Freebase [1] in this paper), with the goal of providing an efficient
tool to complete them by automatically adding new facts, without requiring extra knowledge.
Modeling multi-relational data In general, the modeling process boils down to extracting local or
global connectivity patterns between entities, and prediction is performed by using these patterns to
generalize the observed relationship between a specific entity and all others. The notion of locality
for a single relationship may be purely structural, such as the friend of my friend is my friend in
1 freebase.com
2 google.com/insidesearch/features/search/knowledge.html
3 geneontology.org
social networks, but can also depend on the entities, such as those who liked Star Wars IV also
liked Star Wars V, but they may or may not like Titanic. In contrast to single-relational data where
ad-hoc but simple modeling assumptions can be made after some descriptive analysis of the data,
the difficulty of relational data is that the notion of locality may involve relationships and entities
of different types at the same time, so that modeling multi-relational data requires more generic
approaches that can choose the appropriate patterns considering all heterogeneous relationships at
the same time.
Following the success of user/item clustering or matrix factorization techniques in collaborative
filtering to represent non-trivial similarities between the connectivity patterns of entities in singlerelational data, most existing methods for multi-relational data have been designed within the framework of relational learning from latent attributes, as pointed out by [6]; that is, by learning and
operating on latent representations (or embeddings) of the constituents (entities and relationships).
Starting from natural extensions of these approaches to the multi-relational domain such as nonparametric Bayesian extensions of the stochastic blockmodel [7, 10, 17] and models based on tensor
factorization [5] or collective matrix factorization [13, 11, 12], many of the most recent approaches
have focused on increasing the expressivity and the universality of the model in either Bayesian
clustering frameworks [15] or energy-based frameworks for learning embeddings of entities in low-dimensional spaces [3, 15, 2, 14]. The greater expressivity of these models comes at the expense of
substantial increases in model complexity which results in modeling assumptions that are hard to interpret, and in higher computational costs. Besides, such approaches are potentially subject to either
overfitting since proper regularization of such high-capacity models is hard to design, or underfitting due to the non-convex optimization problems with many local minima that need to be solved to
train them. As a matter of fact, it was shown in [2] that a simpler model (linear instead of bilinear)
achieves almost as good performance as the most expressive models on several multi-relational data
sets with a relatively large number of different relationships. This suggests that even in complex
and heterogeneous multi-relational domains simple yet appropriate modeling assumptions can lead
to better trade-offs between accuracy and scalability.
Relationships as translations in the embedding space In this paper, we introduce TransE, an
energy-based model for learning low-dimensional embeddings of entities. In TransE, relationships
are represented as translations in the embedding space: if (h, ℓ, t) holds, then the embedding of the
tail entity t should be close to the embedding of the head entity h plus some vector that depends
on the relationship ℓ. Our approach relies on a reduced set of parameters as it learns only one
low-dimensional vector for each entity and each relationship.
The main motivation behind our translation-based parameterization is that hierarchical relationships
are extremely common in KBs and translations are the natural transformations for representing them.
Indeed, considering the natural representation of trees (i.e. embeddings of the nodes in dimension
2), the siblings are close to each other and nodes at a given height are organized on the x-axis,
the parent-child relationship corresponds to a translation on the y-axis. Since a null translation
vector corresponds to an equivalence relationship between entities, the model can then represent
the sibling relationship as well. Hence, we chose to use our parameter budget per relationship
(one low-dimensional vector) to represent what we considered to be the key relationships in KBs.
Another, secondary, motivation comes from the recent work of [8], in which the authors learn word
embeddings from free text, and some 1-to-1 relationships between entities of different types, such
"capital of" between countries and cities, are (coincidentally rather than willingly) represented by
the model as translations in the embedding space. This suggests that there may exist embedding
spaces in which 1-to-1 relationships between entities of different types may, as well, be represented
by translations. The intention of our model is to enforce such a structure of the embedding space.
Our experiments in Section 4 demonstrate that this new model, despite its simplicity and its architecture primarily designed for modeling hierarchies, ends up being powerful on most kinds of
relationships, and can significantly outperform state-of-the-art methods in link prediction on realworld KBs. Besides, its light parameterization allows it to be successfully trained on a large scale
split of Freebase containing 1M entities, 25k relationships and more than 17M training samples.
In the remainder of the paper, we describe our model in Section 2 and discuss its connections with
related methods in Section 3. We detail an extensive experimental study on Wordnet and Freebase
in Section 4, comparing TransE with many methods from the literature. We finally conclude by
sketching some future work directions in Section 5.
Algorithm 1 Learning TransE
input Training set S = {(h, ℓ, t)}, entities and rel. sets E and L, margin γ, embeddings dim. k.
 1: initialize ℓ ← uniform(−6/√k, 6/√k) for each ℓ ∈ L
 2:   ℓ ← ℓ / ‖ℓ‖ for each ℓ ∈ L
 3:   e ← uniform(−6/√k, 6/√k) for each entity e ∈ E
 4: loop
 5:   e ← e / ‖e‖ for each entity e ∈ E
 6:   S_batch ← sample(S, b) // sample a minibatch of size b
 7:   T_batch ← ∅ // initialize the set of pairs of triplets
 8:   for (h, ℓ, t) ∈ S_batch do
 9:     (h′, ℓ, t′) ← sample(S′_(h,ℓ,t)) // sample a corrupted triplet
10:     T_batch ← T_batch ∪ {((h, ℓ, t), (h′, ℓ, t′))}
11:   end for
12:   Update embeddings w.r.t. Σ_((h,ℓ,t),(h′,ℓ,t′))∈T_batch ∇[γ + d(h + ℓ, t) − d(h′ + ℓ, t′)]₊
13: end loop
2 Translation-based model
Given a training set S of triplets (h, ℓ, t) composed of two entities h, t ∈ E (the set of entities) and a
relationship ℓ ∈ L (the set of relationships), our model learns vector embeddings of the entities and
the relationships. The embeddings take values in Rᵏ (k is a model hyperparameter) and are denoted
with the same letters, in boldface characters. The basic idea behind our model is that the functional
relation induced by the ℓ-labeled edges corresponds to a translation of the embeddings, i.e. we want
that h + ℓ ≈ t when (h, ℓ, t) holds (t should be a nearest neighbor of h + ℓ), while h + ℓ should be
far away from t otherwise. Following an energy-based framework, the energy of a triplet is equal to
d(h + ℓ, t) for some dissimilarity measure d, which we take to be either the L1 or the L2-norm.
To learn such embeddings, we minimize a margin-based ranking criterion over the training set:
L = Σ_(h,ℓ,t)∈S Σ_(h′,ℓ,t′)∈S′_(h,ℓ,t) [γ + d(h + ℓ, t) − d(h′ + ℓ, t′)]₊    (1)

where [x]₊ denotes the positive part of x, γ > 0 is a margin hyperparameter, and

S′_(h,ℓ,t) = {(h′, ℓ, t) | h′ ∈ E} ∪ {(h, ℓ, t′) | t′ ∈ E}.    (2)
The set of corrupted triplets, constructed according to Equation 2, is composed of training triplets
with either the head or tail replaced by a random entity (but not both at the same time). The loss
function (1) favors lower values of the energy for training triplets than for corrupted triplets, and is
thus a natural implementation of the intended criterion. Note that for a given entity, its embedding
vector is the same when the entity appears as the head or as the tail of a triplet.
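The corruption scheme of Equation (2) can be sketched in a few lines. The toy entities and the 50/50 head/tail choice below are illustrative assumptions, not details fixed by the paper:

```python
import random

def corrupt(triplet, entities):
    """Sample a corrupted triplet per Eq. (2): replace either the head
    or the tail with a random entity, but never both at the same time."""
    h, l, t = triplet
    if random.random() < 0.5:
        return (random.choice(entities), l, t)  # corrupted head, same tail
    return (h, l, random.choice(entities))      # same head, corrupted tail

# Hypothetical toy data, purely for illustration.
entities = ["paris", "france", "rome", "italy"]
h2, l2, t2 = corrupt(("paris", "capital_of", "france"), entities)
```

Note that the relation label ℓ is never corrupted, and the original head or tail is always kept on the side that is not replaced.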
The optimization is carried out by stochastic gradient descent (in minibatch mode), over the possible
h, ℓ and t, with the additional constraints that the L2-norm of the embeddings of the entities is 1 (no
regularization or norm constraints are given to the label embeddings ℓ). This constraint is important
for our model, as it is for previous embedding-based methods [3, 6, 2], because it prevents the
training process to trivially minimize L by artificially increasing entity embeddings norms.
The detailed optimization procedure is described in Algorithm 1. All embeddings for entities and
relationships are first initialized following the random procedure proposed in [4]. At each main
iteration of the algorithm, the embedding vectors of the entities are first normalized. Then, a small
set of triplets is sampled from the training set, and will serve as the training triplets of the minibatch.
For each such triplet, we then sample a single corrupted triplet. The parameters are then updated by
taking a gradient step with constant learning rate. The algorithm is stopped based on its performance
on a validation set.
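As a concrete sketch of Algorithm 1 (not the authors' implementation), the following NumPy fragment performs one SGD step on the margin loss with the L1 dissimilarity for a single (true, corrupted) pair; the sizes, seed, learning rate and tiny graph are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, k, gamma, lr = 5, 2, 4, 1.0, 0.01

# Initialization as in Algorithm 1: uniform(-6/sqrt(k), 6/sqrt(k)).
bound = 6.0 / np.sqrt(k)
E = rng.uniform(-bound, bound, (n_ent, k))  # entity embeddings
R = rng.uniform(-bound, bound, (n_rel, k))  # relation embeddings
R /= np.linalg.norm(R, axis=1, keepdims=True)

def step(h, l, t, h2, t2):
    """One SGD step on [gamma + d(h + l, t) - d(h2 + l, t2)]_+ with d = L1."""
    E[:] = E / np.linalg.norm(E, axis=1, keepdims=True)  # renormalize entities
    pos = E[h] + R[l] - E[t]    # residual of the true triplet
    neg = E[h2] + R[l] - E[t2]  # residual of the corrupted triplet
    loss = gamma + np.abs(pos).sum() - np.abs(neg).sum()
    if loss > 0:  # hinge active: take a subgradient step
        gp, gn = np.sign(pos), np.sign(neg)
        E[h] -= lr * gp; E[t] += lr * gp
        E[h2] += lr * gn; E[t2] -= lr * gn
        R[l] -= lr * (gp - gn)
    return max(loss, 0.0)

# Repeatedly optimize one pair: true (0, rel 1, 2) vs. corrupted head 3.
losses = [step(0, 1, 2, 3, 2) for _ in range(200)]
```

A real run would loop over sampled minibatches and a corrupted triplet per training triplet, and stop on validation performance, as the algorithm specifies.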
3 Related work
Section 1 described a large body of work on embedding KBs. We detail here the links between our
model and those of [3] (Structured Embeddings or SE) and [14].
Table 1: Numbers of parameters and their values for FB15k (in millions). n_e and n_r are the numbers of entities and relationships; k the embeddings dimension.

METHOD              NB. OF PARAMETERS            ON FB15K
Unstructured [2]    O(n_e k)                     0.75
RESCAL [11]         O(n_e k + n_r k^2)           87.80
SE [3]              O(n_e k + 2 n_r k^2)         7.47
SME(linear) [2]     O(n_e k + n_r k + 4 k^2)     0.82
SME(bilinear) [2]   O(n_e k + n_r k + 2 k^3)     1.06
LFM [6]             O(n_e k + n_r k + 10 k^2)    0.84
TransE              O(n_e k + n_r k)             0.81
Table 2: Statistics of the data sets used in this paper and extracted from the two knowledge bases, Wordnet and Freebase.

DATA SET        WN        FB15K     FB1M
ENTITIES        40,943    14,951    1×10^6
RELATIONSHIPS   18        1,345     23,382
TRAIN. EX.      141,442   483,142   17.5×10^6
VALID EX.       5,000     50,000    50,000
TEST EX.        5,000     59,071    177,404
SE [3] embeds entities into Rᵏ, and relationships into two matrices L1 ∈ R^(k×k) and L2 ∈ R^(k×k)
such that d(L1 h, L2 t) is large for corrupted triplets (h, ℓ, t) (and small otherwise). The basic idea
is that when two entities belong to the same triplet, their embeddings should be close to each other
in some subspace that depends on the relationship. Using two different projection matrices for the
head and for the tail is intended to account for the possible asymmetry of relationship ℓ. When the
dissimilarity function takes the form of d(x, y) = g(x − y) for some g : Rᵏ → R (e.g. g is a
norm), then SE with an embedding of size k + 1 is strictly more expressive than our model with an
embedding of size k, since linear operators in dimension k + 1 can reproduce affine transformations
in a subspace of dimension k (by constraining the (k+1)-th dimension of all embeddings to be equal to
1). SE, with L2 as the identity matrix and L1 taken so as to reproduce a translation, is then equivalent
to TransE. Despite the lower expressiveness of our model, we still reach better performance than
SE in our experiments. We believe this is because (1) our model is a more direct way to represent
the true properties of the relationship, and (2) optimization is difficult in embedding models. For
SE, greater expressiveness seems to be more synonymous to underfitting than to better performance.
Training errors (in Section 4.3) tend to confirm this point.
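The claim that SE in dimension k + 1 can reproduce a TransE translation (via homogeneous coordinates) can be verified directly. The construction below is one way to realize it, assumed for illustration rather than taken from either paper's code:

```python
import numpy as np

k = 4
rng = np.random.default_rng(2)
h, l, t = rng.normal(size=(3, k))  # random stand-ins for embeddings

# Homogeneous coordinates: append a constant 1 to every entity embedding.
h1, t1 = np.append(h, 1.0), np.append(t, 1.0)

# SE head matrix whose last column encodes the translation by l;
# SE tail matrix set to the identity.
L1 = np.eye(k + 1); L1[:k, k] = l
L2 = np.eye(k + 1)

se = np.linalg.norm(L1 @ h1 - L2 @ t1)  # SE dissimilarity d(L1 h, L2 t)
te = np.linalg.norm(h + l - t)          # TransE dissimilarity d(h + l, t)
assert np.isclose(se, te)
```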
Another related approach is the Neural Tensor Model [14]. A special case of this model corresponds
to learning scores s(h, ℓ, t) (lower scores for corrupted triplets) of the form:

s(h, ℓ, t) = hᵀLt + ℓ1ᵀh + ℓ2ᵀt    (3)

where L ∈ R^(k×k), ℓ1 ∈ Rᵏ and ℓ2 ∈ Rᵏ, all of them depending on ℓ.
If we consider TransE with the squared euclidean distance as dissimilarity function, we have:

d(h + ℓ, t) = ‖h‖²₂ + ‖ℓ‖²₂ + ‖t‖²₂ − 2(hᵀt + ℓᵀ(t − h)).

Considering our norm constraints (‖h‖²₂ = ‖t‖²₂ = 1) and the ranking criterion (1), in which ‖ℓ‖²₂
does not play any role in comparing corrupted triplets, our model thus involves scoring the triplets
with hᵀt + ℓᵀ(t − h), and hence corresponds to the model of [14] (Equation (3)) where L is the
identity matrix, and ℓ = ℓ1 = −ℓ2. We could not run experiments with this model (since it has been
published simultaneously as ours), but once again TransE has much fewer parameters: this could
simplify the training and prevent underfitting, and may compensate for a lower expressiveness.
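The expansion of the squared distance is easy to check numerically; this is only a sanity check of the algebra, with random unit-norm vectors standing in for trained embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)
h, l, t = rng.normal(size=(3, 4))
h /= np.linalg.norm(h)  # entities are kept unit-norm during training
t /= np.linalg.norm(t)

lhs = np.sum((h + l - t) ** 2)  # squared Euclidean dissimilarity
rhs = (np.sum(h ** 2) + np.sum(l ** 2) + np.sum(t ** 2)
       - 2 * (h @ t + l @ (t - h)))
assert np.isclose(lhs, rhs)
# With ||h|| = ||t|| = 1 fixed and ||l|| shared by all candidates for a given
# relation, ranking by lhs is equivalent to ranking by -(h @ t + l @ (t - h)).
```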
Nevertheless, the simple formulation of TransE, which can be seen as encoding a series of 2-way
interactions (e.g. by developing the L2 version), involves drawbacks. For modeling data where
3-way dependencies between h, ℓ and t are crucial, our model can fail. For instance, on the small-scale Kinships data set [7], TransE does not achieve performance in cross-validation (measured
with the area under the precision-recall curve) competitive with the state-of-the-art [11, 6], because
such ternary interactions are crucial in this case (see discussion in [2]). Still, our experiments of
Section 4 demonstrate that, for handling generic large-scale KBs like Freebase, one should first
model properly the most frequent connectivity patterns, as TransE does.
4 Experiments
Our approach, TransE, is evaluated on data extracted from Wordnet and Freebase (their statistics are
given in Table 2), against several recent methods from the literature which were shown to achieve
the best current performance on various benchmarks and to scale to relatively large data sets.
4.1 Data sets
Wordnet This KB is designed to produce an intuitively usable dictionary and thesaurus, and support automatic text analysis. Its entities (termed synsets) correspond to word senses, and relationships define lexical relations between them. We considered the data version used in [2], which we
denote WN in the following. Examples of triplets are (score_NN_1, hypernym, evaluation_NN_1)
or (score_NN_2, has_part, musical_notation_NN_1).4
Freebase Freebase is a huge and growing KB of general facts; there are currently around 1.2
billion triplets and more than 80 million entities. We created two data sets with Freebase. First, to
make a small data set to experiment on we selected the subset of entities that are also present in
the Wikilinks database5 and that also have at least 100 mentions in Freebase (for both entities and
relationships). We also removed relationships like "!/people/person/nationality" which just reverses
the head and tail compared to the relationship "/people/person/nationality". This resulted in 592,213
triplets with 14,951 entities and 1,345 relationships which were randomly split as shown in Table 2.
This data set is denoted FB15k in the rest of this section. We also wanted to have large-scale data
in order to test TransE at scale. Hence, we created another data set from Freebase, by selecting the
most frequently occurring 1 million entities. This led to a split with around 25k relationships and
more than 17 millions training triplets, which we refer to as FB1M.
4.2 Experimental setup
Evaluation protocol For evaluation, we use the same ranking procedure as in [3]. For each test
triplet, the head is removed and replaced by each of the entities of the dictionary in turn. Dissimilarities (or energies) of those corrupted triplets are first computed by the models and then sorted by
ascending order; the rank of the correct entity is finally stored. This whole procedure is repeated
while removing the tail instead of the head. We report the mean of those predicted ranks and the
hits@10, i.e. the proportion of correct entities ranked in the top 10.
These metrics are indicative but can be flawed when some corrupted triplets end up being valid
ones, from the training set for instance. In this case, those may be ranked above the test triplet, but
this should not be counted as an error because both triplets are true. To avoid such a misleading
behavior, we propose to remove from the list of corrupted triplets all the triplets that appear either in
the training, validation or test set (except the test triplet of interest). This ensures that all corrupted
triplets do not belong to the data set. In the following, we report mean ranks and hits@10 according
to both settings: the original (possibly flawed) one is termed raw, while we refer to the newer as
filtered (or filt.). We only provide raw results for experiments on FB1M.
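The raw and filtered ranking protocols can be made concrete with a small helper; the candidate energies below are invented purely for illustration:

```python
import numpy as np

def rank_of(correct, energies, known=frozenset()):
    """Rank (1-based) of `correct` among candidates sorted by ascending
    energy. In the filtered setting, candidates in `known` (corrupted
    triplets that are actually true) are removed before ranking."""
    order = [c for c in np.argsort(energies) if c == correct or c not in known]
    return order.index(correct) + 1

# Five candidate entities; entity 2 is the test triplet's true answer,
# and entities 1 and 3 happen to form other true triplets.
energies = np.array([0.9, 0.1, 0.3, 0.2, 0.8])
raw_rank = rank_of(2, energies)                 # 1 and 3 outrank it -> rank 3
filt_rank = rank_of(2, energies, known={1, 3})  # both filtered out -> rank 1
hits_at_10 = float(raw_rank <= 10)              # contributes 1 to raw hits@10
```

The reported numbers average such ranks (and hits@10 indicators) over all test triplets, corrupting the head and the tail in turn.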
Baselines The first method is Unstructured, a version of TransE which considers the data as
mono-relational and sets all translations to 0 (it was already used as baseline in [2]). We also
compare with RESCAL, the collective matrix factorization model presented in [11, 12], and the
energy-based models SE [3], SME(linear)/SME(bilinear) [2] and LFM [6]. RESCAL is trained via
an alternating least-square method, whereas the others are trained by stochastic gradient descent,
like TransE. Table 1 compares the theoretical number of parameters of the baselines to our model,
and gives the order of magnitude on FB15k. While SME(linear), SME(bilinear), LFM and TransE
have about the same number of parameters as Unstructured for low dimensional embeddings, the
other algorithms SE and RESCAL, which learn at least one k ? k matrix for each relationship
rapidly need to learn many parameters. RESCAL needs about 87 times more parameters on FB15k
because it requires a much larger embedding space than other models to achieve good performance.
We did not experiment on FB1M with RESCAL, SME(bilinear) and LFM for scalability reasons in
terms of numbers of parameters or training duration.
We trained all baseline methods using the code provided by the authors. For RESCAL, we had to set
the regularization parameter to 0 for scalability reasons, as it is indicated in [11], and chose the latent
dimension k among {50, 250, 500, 1000, 2000} that led to the lowest mean predicted ranks on the
validation sets (using the raw setting). For Unstructured, SE, SME(linear) and SME(bilinear), we
4 WN is composed of senses, its entities are denoted by the concatenation of a word, its part-of-speech tag and a digit indicating which sense it refers to, i.e. score_NN_1 encodes the first meaning of the noun "score".
5 code.google.com/p/wiki-links
Table 3: Link prediction results. Test performance of the different methods.

WN
METHOD              MEAN RANK (Raw / Filt.)   HITS@10 % (Raw / Filt.)
Unstructured [2]    315 / 304                 35.3 / 38.2
RESCAL [11]         1,180 / 1,163             37.2 / 52.8
SE [3]              1,011 / 985               68.5 / 80.5
SME(linear) [2]     545 / 533                 65.1 / 74.1
SME(bilinear) [2]   526 / 509                 54.7 / 61.3
LFM [6]             469 / 456                 71.4 / 81.6
TransE              263 / 251                 75.4 / 89.2

FB15K
METHOD              MEAN RANK (Raw / Filt.)   HITS@10 % (Raw / Filt.)
Unstructured [2]    1,074 / 979               4.5 / 6.3
RESCAL [11]         828 / 683                 28.4 / 44.1
SE [3]              273 / 162                 28.8 / 39.8
SME(linear) [2]     274 / 154                 30.7 / 40.8
SME(bilinear) [2]   284 / 158                 31.3 / 41.3
LFM [6]             283 / 164                 26.0 / 33.1
TransE              243 / 125                 34.9 / 47.1

FB1M (Raw only)
METHOD              MEAN RANK   HITS@10 %
Unstructured [2]    15,139      2.9
SE [3]              22,044      17.5
TransE              14,615      34.0
selected the learning rate among {0.001, 0.01, 0.1}, k among {20, 50}, and selected the best model
by early stopping using the mean rank on the validation sets (with a total of at most 1,000 epochs
over the training data). For LFM, we also used the mean validation ranks to select the model and to
choose the latent dimension among {25, 50, 75}, the number of factors among {50, 100, 200, 500}
and the learning rate among {0.01, 0.1, 0.5}.
Implementation For experiments with TransE, we selected the learning rate λ for the stochastic
gradient descent among {0.001, 0.01, 0.1}, the margin γ among {1, 2, 10} and the latent dimension
k among {20, 50} on the validation set of each data set. The dissimilarity measure d was set either
to the L1 or L2 distance according to validation performance as well. Optimal configurations were:
k = 20, λ = 0.01, γ = 2, and d = L1 on Wordnet; k = 50, λ = 0.01, γ = 1, and d = L1 on
FB15k; k = 50, λ = 0.01, γ = 1, and d = L2 on FB1M. For all data sets, training time was limited
to at most 1,000 epochs over the training set. The best models were selected by early stopping using
the mean predicted ranks on the validation sets (raw setting). An open-source implementation of
TransE is available from the project webpage6.
4.3 Link prediction
Overall results Table 3 displays the results on all data sets for all compared methods. As expected, the filtered setting provides lower mean ranks and higher hits@10, which we believe are
a clearer evaluation of the performance of the methods in link prediction. However, generally the
trends between raw and filtered are the same.
Our method, TransE, outperforms all counterparts on all metrics, usually with a wide margin, and
reaches some promising absolute performance scores such as 89% of hits@10 on WN (over more
than 40k entities) and 34% on FB1M (over 1M entities). All differences between TransE and the
best runner-up methods are important.
We believe that the good performance of TransE is due to an appropriate design of the model
according to the data, but also to its relative simplicity. This means that it can be optimized efficiently
with stochastic gradient. We showed in Section 3 that SE is more expressive than our proposal.
However, its complexity may make it quite hard to learn, resulting in worse performance. On FB15k,
SE achieves a mean rank of 165 and hits@10 of 35.5% on a subset of 50k triplets of the training set,
whereas TransE reaches 127 and 42.7%, indicating that TransE is indeed less subject to underfitting
and that this could explain its better performances. SME(bilinear) and LFM suffer from the same
training issue: we never managed to train them well enough so that they could exploit their full
capabilities. The poor results of LFM might also be explained by our evaluation setting, based
on ranking entities, whereas LFM was originally proposed to predict relationships. RESCAL can
achieve quite good hits@10 on FB15k but yields poor mean ranks, especially on WN, even when
we used large latent dimensions (2,000 on Wordnet).
The impact of the translation term is huge. When one compares performance of TransE and Unstructured (i.e. TransE without translation), mean ranks of Unstructured appear to be rather good
(best runner-up on WN), but hits@10 are very poor. Unstructured simply clusters all entities co-occurring together, independent of the relationships involved, and hence can only make guesses
of which entities are related. On FB1M, the mean ranks of TransE and Unstructured are almost
similar, but TransE places 10 times more predictions in the top 10.
6 Available at http://goo.gl/0PpKQe.
Table 4: Detailed results by category of relationship. We compare Hits@10 (in %) on FB15k in the filtered evaluation setting for our model, TransE, and baselines. (M. stands for MANY.)

                    PREDICTING head                       PREDICTING tail
REL. CATEGORY       1-TO-1  1-TO-M.  M.-TO-1  M.-TO-M.    1-TO-1  1-TO-M.  M.-TO-1  M.-TO-M.
Unstructured [2]    34.5    2.5      6.1      6.6         34.3    4.2      1.9      6.6
SE [3]              35.6    62.6     17.2     37.5        34.9    14.6     68.3     41.3
SME(linear) [2]     35.1    53.7     19.0     40.3        32.7    14.9     61.6     43.3
SME(bilinear) [2]   30.9    69.6     19.9     38.6        28.2    13.1     76.0     41.8
TransE              43.7    65.7     18.2     47.2        43.7    19.7     66.7     50.0
Table 5: Example predictions on the FB15k test set using TransE. Bold indicates the test triplet's
true tail and italics other true tails present in the training set.

J. K. Rowling influenced by → G. K. Chesterton, J. R. R. Tolkien, C. S. Lewis, Lloyd Alexander,
Terry Pratchett, Roald Dahl, Jorge Luis Borges, Stephen King, Ian Fleming

Anthony LaPaglia performed in → Lantana, Summer of Sam, Happy Feet, The House of Mirth,
Unfaithful, Legend of the Guardians, Naked Lunch, X-Men, The Namesake

Camden County adjoins → Burlington County, Atlantic County, Gloucester County, Union County,
Essex County, New Jersey, Passaic County, Ocean County, Bucks County

The 40-Year-Old Virgin nominated for → MTV Movie Award for Best Comedic Performance,
BFCA Critics' Choice Award for Best Comedy, MTV Movie Award for Best On-Screen Duo,
MTV Movie Award for Best Breakthrough Performance, MTV Movie Award for Best Movie,
MTV Movie Award for Best Kiss, D. F. Zanuck Producer of the Year Award in Theatrical Motion
Pictures, Screen Actors Guild Award for Best Actor - Motion Picture

Costa Rica football team has position → Forward, Defender, Midfielder, Goalkeepers,
Pitchers, Infielder, Outfielder, Center, Defenseman

Lil Wayne born in → New Orleans, Atlanta, Austin, St. Louis,
Toronto, New York City, Wellington, Dallas, Puerto Rico

WALL-E has the genre → Animations, Computer Animation, Comedy film,
Adventure film, Science Fiction, Fantasy, Stop motion, Satire, Drama
Detailed results. Table 4 breaks down the results (in hits@10) on FB15k by category of relationship and by the argument to predict, for several of the methods. We categorized the relationships according to the cardinalities of their head and tail arguments into four classes: 1-to-1, 1-to-Many, Many-to-1, Many-to-Many. A given relationship is 1-to-1 if a head can appear with at most one tail, 1-to-Many if a head can appear with many tails, Many-to-1 if many heads can appear with the same tail, and Many-to-Many if multiple heads can appear with multiple tails. We classified the relationships into these four classes by computing, for each relationship ℓ, the average number of heads h (resp. tails t) appearing in the FB15k data set given a pair (ℓ, t) (resp. a pair (h, ℓ)). If this average number was below 1.5, the argument was labeled as 1, and as Many otherwise. For example, a relationship with an average of 1.2 heads per tail and of 3.2 tails per head was classified as 1-to-Many. We obtained that FB15k has 26.2% of 1-to-1 relationships, 22.7% of 1-to-Many, 28.3% of Many-to-1, and 22.8% of Many-to-Many.
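The categorization rule above can be sketched directly. This is our own minimal reimplementation of the described procedure (function and variable names are ours, not the authors' code):

```python
from collections import defaultdict
from statistics import mean

def categorize_relationships(triplets, threshold=1.5):
    """Label each relationship 1-to-1, 1-to-Many, Many-to-1 or Many-to-Many
    from the average number of heads per (label, tail) pair and of tails
    per (head, label) pair, using the 1.5 cutoff described in the text.
    `triplets` is an iterable of (head, label, tail) tuples."""
    heads = defaultdict(lambda: defaultdict(set))  # label -> tail -> {heads}
    tails = defaultdict(lambda: defaultdict(set))  # label -> head -> {tails}
    for h, l, t in triplets:
        heads[l][t].add(h)
        tails[l][h].add(t)
    side = lambda avg: "1" if avg < threshold else "Many"
    return {
        l: f"{side(mean(len(s) for s in heads[l].values()))}-to-"
           f"{side(mean(len(s) for s in tails[l].values()))}"
        for l in heads
    }

# Toy example: many people born in one city -> Many-to-1.
cats = categorize_relationships([
    ("lil_wayne", "born_in", "new_orleans"),
    ("louis_armstrong", "born_in", "new_orleans"),
    ("drake", "born_in", "toronto"),
])
print(cats["born_in"])  # Many-to-1
```

Here `born_in` averages 1.5 heads per tail (so the head side is Many) and exactly one tail per head (so the tail side is 1), matching the rule's cutoff.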
The detailed results in Table 4 allow for a precise evaluation and understanding of the behavior of the methods. First, as one would expect, it is easier to predict entities on the "side 1" of triplets (i.e., predicting the head in 1-to-Many and the tail in Many-to-1), that is, when multiple entities point to it. These are the well-posed cases. SME(bilinear) proves to be very accurate in such cases because they are those with the most training examples. Unstructured performs well on 1-to-1 relationships: this shows that arguments of such relationships must share common hidden types that Unstructured is able to somewhat uncover by clustering entities linked together in the embedding space. But this strategy fails for any other category of relationship. Adding the translation term (i.e., upgrading Unstructured into TransE) brings the ability to move in the embedding space from one entity cluster to another by following relationships. This is particularly spectacular for the well-posed cases.
Illustration. Table 5 gives examples of link prediction results of TransE on the FB15k test set (predicting tails). This illustrates the capabilities of our model. Given a head and a label, the top predicted tails (and the true one) are depicted. Even if the correct answer is not always top-ranked, the predictions reflect common sense.

Figure 1: Learning new relationships with few examples. Comparative experiments on FB15k
data evaluated in mean rank (left) and hits@10 (right). More details in the text.
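Predictions like those in Table 5 come from a straightforward ranking: score every entity as a candidate tail and keep the closest ones. A hedged numpy sketch (our own toy setup, not the released code):

```python
import numpy as np

def predict_tails(head_idx, ell, E, k=10):
    """Rank all entities t by the TransE dissimilarity ||E[head] + ell - E[t]||
    and return the indices of the k best (smallest-distance) candidates."""
    d = np.linalg.norm(E[head_idx] + ell - E, axis=1)
    return np.argsort(d)[:k]

# Toy check: plant an entity that exactly matches head + translation,
# and verify it is ranked first.
rng = np.random.default_rng(1)
E = rng.normal(size=(20, 8))
ell = rng.normal(size=8)
E[3] = E[0] + ell
print(predict_tails(0, ell, E, k=5)[0])  # 3
```

The same full ranking over all entities is what the mean-rank and hits@10 metrics are computed from (in the filtered setting, other known true tails are removed from the candidate list before ranking).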
4.4 Learning to predict new relationships with few examples
Using FB15k, we wanted to test how well methods could generalize to new facts by checking how
fast they were learning new relationships. To that end, we randomly selected 40 relationships and
split the data into two sets: a set (named FB15k-40rel) containing all triplets with these 40 relationships and another set (FB15k-rest) containing the rest. We made sure that both sets contained
all entities. FB15k-rest has then been split into a training set of 353,788 triplets and a validation
set of 53,266, and FB15k-40rel into a training set of 40,000 triplets (1,000 for each relationship)
and a test set of 45,159. Using these data sets, we conducted the following experiment: (1) models
were trained and selected using FB15k-rest training and validation sets, (2) they were subsequently
trained on the training set FB15k-40rel but only to learn the parameters related to the fresh 40 relationships, (3) they were evaluated in link prediction on the test set of FB15k-40rel (containing only
relationships unseen during phase (1)). We repeated this procedure while using 0, 10, 100 and 1000
examples of each relationship in phase (2).
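Phase (2) above, freezing the entity embeddings and fitting only the parameters of the fresh relationships, can be illustrated with a simplified variant: here we minimize a squared residual by full-batch gradient descent instead of the paper's margin loss with negative sampling, so this is only an assumed, stripped-down stand-in for the actual training procedure.

```python
import numpy as np

def fit_new_relation(E, pairs, lr=0.1, epochs=200):
    """Learn only the translation vector ell of a fresh relationship,
    keeping the already-trained entity embeddings E frozen.
    `pairs` is a list of (head_idx, tail_idx) training examples."""
    ell = np.zeros(E.shape[1])
    for _ in range(epochs):
        # Gradient of the mean of ||E[h] + ell - E[t]||^2 over the pairs.
        grad = sum(2.0 * (E[h] + ell - E[t]) for h, t in pairs) / len(pairs)
        ell -= lr * grad
    return ell

rng = np.random.default_rng(2)
E = rng.normal(size=(10, 4))
ell = fit_new_relation(E, [(0, 1), (2, 3)])
# For this squared loss, ell converges to the mean offset E[t] - E[h].
target = ((E[1] - E[0]) + (E[3] - E[2])) / 2
print(np.allclose(ell, target, atol=1e-6))  # True
```

Since only `ell` is updated, nothing about the already-trained embeddings changes, which is why TransE can pick up a new relationship from as few as 10 examples without disturbing the rest of the model.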
Results for Unstructured, SE, SME(linear), SME(bilinear) and TransE are presented in Figure 1.
The performance of Unstructured is the best when no example of the unknown relationship is
provided, because it does not use this information to predict. But, of course, this performance does
not improve as labeled examples are provided. TransE is the fastest method to learn: with only
10 examples of a new relationship, the hits@10 is already 18% and it improves monotonically with
the number of provided samples. We believe the simplicity of the TransE model makes it able to
generalize well, without having to modify any of the already trained embeddings.
5 Conclusion and future work
We proposed a new approach to learn embeddings of KBs, focusing on the minimal parametrization
of the model to primarily represent hierarchical relationships. We showed that it works very well
compared to competing methods on two different knowledge bases, and it is also a highly scalable model, which we applied to a very large-scale chunk of Freebase data. Although it remains unclear to us whether all relationship types can be modeled adequately by our approach, breaking the evaluation down into categories (1-to-1, 1-to-Many, ...) shows that it performs well compared to other approaches across all settings.
Future work could analyze this model further, and could also concentrate on exploiting it in more tasks,
in particular, applications such as learning word representations inspired by [8]. Combining KBs
with text as in [2] is another important direction where our approach could prove useful. Hence, we
recently fruitfully inserted TransE into a framework for relation extraction from text [16].
Acknowledgments
This work was carried out in the framework of the Labex MS2T (ANR-11-IDEX-0004-02), and
funded by the French National Agency for Research (EVEREST-12-JS02-005-01). We thank X.
Glorot for providing the code infrastructure, T. Strohmann and K. Murphy for useful discussions.
References
[1] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. Freebase: a collaboratively created
graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD
international conference on Management of data, 2008.
[2] A. Bordes, X. Glorot, J. Weston, and Y. Bengio. A semantic matching energy function for
learning with multi-relational data. Machine Learning, 2013.
[3] A. Bordes, J. Weston, R. Collobert, and Y. Bengio. Learning structured embeddings of knowledge bases. In Proceedings of the 25th Annual Conference on Artificial Intelligence (AAAI),
2011.
[4] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics
(AISTATS)., 2010.
[5] R. A. Harshman and M. E. Lundy. Parafac: parallel factor analysis. Computational Statistics
& Data Analysis, 18(1):39-72, Aug. 1994.
[6] R. Jenatton, N. Le Roux, A. Bordes, G. Obozinski, et al. A latent factor model for highly
multi-relational data. In Advances in Neural Information Processing Systems (NIPS 25), 2012.
[7] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of
concepts with an infinite relational model. In Proceedings of the 21st Annual Conference on
Artificial Intelligence (AAAI), 2006.
[8] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of
words and phrases and their compositionality. In Advances in Neural Information Processing
Systems (NIPS 26), 2013.
[9] G. Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41, 1995.
[10] K. Miller, T. Griffiths, and M. Jordan. Nonparametric latent feature models for link prediction.
In Advances in Neural Information Processing Systems (NIPS 22), 2009.
[11] M. Nickel, V. Tresp, and H.-P. Kriegel. A three-way model for collective learning on multirelational data. In Proceedings of the 28th International Conference on Machine Learning
(ICML), 2011.
[12] M. Nickel, V. Tresp, and H.-P. Kriegel. Factorizing YAGO: scalable machine learning for
linked data. In Proceedings of the 21st international conference on World Wide Web (WWW),
2012.
[13] A. P. Singh and G. J. Gordon. Relational learning via collective matrix factorization. In
Proceedings of the 14th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
(KDD), 2008.
[14] R. Socher, D. Chen, C. D. Manning, and A. Y. Ng. Learning new facts from knowledge bases
with neural tensor networks and semantic word vectors. In Advances in Neural Information
Processing Systems (NIPS 26), 2013.
[15] I. Sutskever, R. Salakhutdinov, and J. Tenenbaum. Modelling relational data using bayesian
clustered tensor factorization. In Advances in Neural Information Processing Systems (NIPS
22), 2009.
[16] J. Weston, A. Bordes, O. Yakhnenko, and N. Usunier. Connecting language and knowledge
bases with embedding models for relation extraction. In Proceedings of the Conference on
Empirical Methods in Natural Language Processing (EMNLP), 2013.
[17] J. Zhu. Max-margin nonparametric latent feature models for link prediction. In Proceedings
of the 29th International Conference on Machine Learning (ICML), 2012.